Calculus problem that doesn't seem like calculus
June 7th 2008, 10:05 AM #1

I can't seem to come up with a formula for the area of this triangle. This should be a calculus problem, but any formula that I can come up with does not require calculus to get the answer. The best formula that I have come up with so far is $\frac12\sqrt{\left(\frac12+x\right)^2+\left(\frac12-x\right)^2}\cdot\frac12$.

Consider a square piece of paper with sides equal to 1 unit. Label the four vertices A, B, C, D. Now inscribe a quarter circle with radius of 1 unit such that its center is at vertex A (see figure 1). Next, fold the corner labeled vertex A so that it touches the circumference of the quarter circle, and so that the folded part forms a triangle (see figure 2). Note that the end points of the crease have to lie on side AB and side AD in order to do this. Our task is to find the exact area of the smallest and the largest of the triangles created in this way.

I've attached two drawings. In the left sketch you can see how the triangle changes its shape as the vertex runs along the quarter circle. In the right sketch you find all necessary variables. Since the radius of the quarter circle is 1, the length of the arc x is measured in radians.
$A_{triangle} = \frac12 (s+t) \cdot h$, where $h = \frac12$, $s = \frac12 \cdot \tan(x)$, and $t = \frac12 \cdot \tan\left(\frac{\pi}2 - x\right) = \frac12 \cdot \cot(x)$.

Plug all variables into the equation of $A_{triangle}$:

$A_{triangle}(x) = \frac12 \left(\frac12 \cdot \tan(x) + \frac12 \cdot \cot(x) \right) \cdot \frac12 = \frac18 \left(\tan(x) + \cot(x) \right)~,~ 0 \leq x \leq \frac{\pi}4$

(Personal remark: The domain of this function is wrong! Probably - but I haven't a proof yet - the domain is $\frac{\pi}6 \leq x \leq \frac{\pi}3$.)

To get the extreme values of A(x), calculate the first derivative of A:

$A'(x)=\frac18 \left(\frac1{\cos^2(x)} - \frac1{\sin^2(x)} \right)$

Now solve A'(x) = 0 for x. I've got $x = \frac{\pi}4~\vee~x=\frac{3\pi}4 \notin \text{domain}$. With $x = \frac{\pi}4$ you get an isosceles right triangle with $A = \frac14$. By trial and error I got $A\left(\frac{\pi}6 \right) = A\left(\frac{\pi}3 \right) \approx 0.288...$

So there you have the smallest triangle at $x = \frac{\pi}4$ and the largest triangles as x approaches the bounds of the domain.
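The claimed extremes are easy to sanity-check numerically. A minimal sketch (my own check, not part of the original thread; the function name is invented) that evaluates earboth's area formula at the interior critical point and at the conjectured domain endpoints:

```python
import math

def area(x):
    """Area of the folded triangle, A(x) = (tan x + cot x)/8, from the post above."""
    return (math.tan(x) + 1.0 / math.tan(x)) / 8.0

# Interior critical point found by setting A'(x) = 0:
a_min = area(math.pi / 4)      # smallest triangle, exactly 1/4

# Values at the conjectured domain endpoints pi/6 and pi/3:
a_left = area(math.pi / 6)
a_right = area(math.pi / 3)

print(a_min, a_left, a_right)
```

At $x=\pi/4$ the area is exactly $1/4$, and the values at $\pi/6$ and $\pi/3$ agree because of the symmetry $A(x)=A(\pi/2-x)$.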
Math 7 - Frisbie Middle School
Student Page | Teacher Lesson Plan

Simulating the activity: (Suggestion: Use 2 measuring cups with tape marking the 6 oz. line on one cup and the 10 oz. line on the other cup.)

1. Use the following materials:
□ one 6 unit container
□ one 10 unit container
□ dried beans or small cubes (something to represent the diamonds)
□ paper to record method

2. You are given the following scenario: The pirate ship has just landed, loaded with diamonds. You've been sent to buy 8 lbs. of diamonds but you only have 10-lb. and 6-lb. measuring containers.

3. How can you make the purchase? Remember that there are various problem-solving techniques which you can choose from, including: Choose the Operation, Evaluate Information, Find the Pattern, Guess and Check, Make a Table, Plan and Reason, Work Backward, Write an Equation.

Specific instructions for using the ESCOT Runner software to interact with the Pirate Diamond Activity can be viewed here. The Pirate Diamond Activity can be downloaded from: ESCOT Problem of the Week hosted by the Math Forum.

Part I: The setting is given: The pirate ship has just landed, loaded with diamonds. You've been sent to buy 8 lbs. of diamonds but you only have one 10-lb. and one 6-lb. measuring container. You will be asked to respond to this question: How can you make the purchase?

Part II: Can you measure 1 lb. of diamonds using only one 10-lb. and one 6-lb. measuring container? How about 2 lbs.? Consider the amounts between 1 lb. and 16 lbs. What conclusion can you make about the amounts of diamonds that can be purchased using only one 10-lb. and one 6-lb. measuring container?

Part III: You now have a choice of five pairs of containers to use for measuring. For each pair of containers, can you make the purchase of 8 lbs. of diamonds? For those pairs that work, how can you make the purchase?

Part IV: There is a counter which keeps track of the number of times you have emptied a container.
You may have noticed that you end up emptying the containers quite often, which seems like a waste of time. Also, the pirate is liable to make you walk the plank if you empty the containers too often, because you are handling his diamonds too much. Which pair of containers should you choose to minimize the number of steps (Fill, Empty, Pour) it takes you to measure 8 lbs. of diamonds?

Synthesizing the Activity

At this point, you have investigated the problem using manipulatives (containers) and technology (the ESCOT Runner simulation). Select one of the container problems (6 and 10 to 8; 6 and 10 to 2; 6 and 10 to 4; 6 and 10 to 12, etc.) and explain in writing how to reach the desired amount using the two given containers. Draw diagrams to accompany your explanation.
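For readers who want to check their pouring sequences, the container puzzle maps onto a small state-space search. The sketch below is my own illustration, not part of the ESCOT materials (the function name and move labels are invented); it finds a shortest Fill/Empty/Pour sequence by breadth-first search:

```python
from collections import deque

def pour_steps(cap_a, cap_b, target):
    """Breadth-first search over container states (a, b).

    Moves: fill a container, empty a container, or pour one into the
    other until the source is empty or the destination is full.
    Returns a shortest list of moves that leaves `target` in a container,
    or None if the target cannot be measured."""
    start = (0, 0)
    prev = {start: None}          # state -> (previous state, move name)
    queue = deque([start])
    while queue:
        a, b = queue.popleft()
        if a == target or b == target:
            # Walk backwards through `prev` to recover the move list.
            moves, s = [], (a, b)
            while prev[s] is not None:
                s, move = prev[s]
                moves.append(move)
            return list(reversed(moves))
        pour_ab = min(a, cap_b - b)
        pour_ba = min(b, cap_a - a)
        nexts = [
            ((cap_a, b), "fill A"),
            ((a, cap_b), "fill B"),
            ((0, b), "empty A"),
            ((a, 0), "empty B"),
            ((a - pour_ab, b + pour_ab), "pour A into B"),
            ((a + pour_ba, b - pour_ba), "pour B into A"),
        ]
        for state, move in nexts:
            if state not in prev:
                prev[state] = ((a, b), move)
                queue.append(state)
    return None

print(pour_steps(6, 10, 8))   # a shortest sequence of Fill/Empty/Pour moves
```

Running it on the (6, 10) pair shows that 8 lbs. is reachable while odd amounts such as 1 lb. are not: every reachable amount is a multiple of gcd(6, 10) = 2, which is the conclusion Part II is driving at.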
Ellipse: extract "minor axis" (b) when given "arc length" and "major axis" (a)

April 24th 2011, 05:21 AM
One quarter of the ellipse's arc length is known ($3$ in this case) and so is the semi-major axis $a$. So, for each $a$ that would be plugged in, how would you get $b$? Here is the equation (attached as a picture):

April 24th 2011, 06:44 AM
You don't get any "analytic" form for that. That is an example of a general type of integral called (surprise, surprise!) elliptic integrals. They cannot be done "analytically". You would need to do a numerical integration. (I remember, many years ago, seeing 20 large volumes at the University of Florida library giving values for elliptic integrals with different parameters.)

April 24th 2011, 07:01 AM
I was looking at (among other methods; I don't really know what I'm doing) Newton's method... but I'm at a loss as to how to use it in practice (as in the above equation). How would you go about it, and which algorithm would you use? Just a rough outline will (I hope) do.
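Since the quarter-arc length $L(a,b)=\int_0^{\pi/2}\sqrt{a^2\sin^2 t+b^2\cos^2 t}\,dt$ is an elliptic integral with no closed form, one practical route is numerical integration plus root bracketing. This is a sketch of my own, not from the thread; I use bisection rather than Newton's method because $L$ is monotone increasing in $b$ and no derivative is needed:

```python
import math

def quarter_arc(a, b, n=2000):
    """Quarter-ellipse arc length by composite Simpson's rule on
    L = integral_0^{pi/2} sqrt(a^2 sin^2 t + b^2 cos^2 t) dt."""
    h = (math.pi / 2) / n
    total = 0.0
    for i in range(n + 1):
        t = i * h
        f = math.sqrt((a * math.sin(t)) ** 2 + (b * math.cos(t)) ** 2)
        w = 1 if i in (0, n) else (4 if i % 2 else 2)   # Simpson weights
        total += w * f
    return total * h / 3

def solve_b(a, arc, tol=1e-10):
    """Bisection: quarter_arc(a, b) increases with b, equals a at b = 0,
    and is at least `arc` at b = arc, so a root is bracketed when arc >= a."""
    if arc < a:
        raise ValueError("quarter-arc length cannot be less than the semi-major axis")
    lo, hi = 0.0, arc
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if quarter_arc(a, mid) < arc:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Example: quarter arc of 3 units with semi-major axis a = 2.5
b = solve_b(2.5, 3.0)
```

Newton's method would also work on $f(b)=L(a,b)-3$, differentiating under the integral sign, but bisection is more robust when no good starting guess is available.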
Fields Institute Colloquium Series on Mathematics Outside Mathematics

November 1, 2002
Stu Whittington, Department of Chemistry, University of Toronto
Randomly coloured self-avoiding walks: A model of random copolymers

Copolymers are polymer molecules made up of at least two types of monomers. If the sequence of monomers is determined stochastically they are called random copolymers, and can be thought of as primitive models of important biopolymers like DNA and proteins. One possible model of a random copolymer is a randomly coloured self-avoiding walk on a lattice. A sequence of colours is determined by some random process and then the vertices of the self-avoiding walk inherit these colours. The set of walks models the possible conformations of the polymer, and the relative weights of different walks (and hence different polymer conformations) depend on the sequence of colours. The talk will describe these models and their application to several physical situations. Although some rigorous results are known, there are many open questions, and these will be introduced and discussed during the talk.

February 7, 2003
John Sipe, Department of Physics, University of Toronto
Effective field theories for nonlinear optics in artificially structured materials

The study of nonlinear optical pulse propagation using effective field equations, such as the nonlinear Schroedinger equation and the nonlinear coupled mode equations, has been an active area of research in 1D photonic crystals, or "gratings," for the past fifteen years. These techniques are now being extended to higher dimensional photonic crystals and more general artificially structured materials. Unfortunately, while reasonably rigorous derivations of such effective field equations for structures with large variations in their linear optical properties have been performed, they are complicated and do not allow a simple understanding of the conservation laws that the final equations exhibit.
After a review of experimental and theoretical work on 1D structures, we consider a new approach to the derivation of a general class of effective field equations based directly on a canonical formulation of the underlying Maxwell fields. This makes some progress towards easier and clearer derivations, results in effective theories that can then be immediately quantized, and allows for an identification of the physical significance of conserved quantities. This research field is full of challenges, both with respect to identifying the correct effective field equations and in characterizing their solutions. Some of these will be highlighted.

March 21, 2003
Ray Kapral, Chemistry, University of Toronto
Twisting Filaments in Oscillatory Media

Scroll waves are one of the most commonly observed patterns in three-dimensional oscillatory and excitable systems. They play a role in physical systems like the heart, where they are believed to be responsible for flutter and fibrillation. The locus of the core of a scroll wave is a vortex filament, and it organizes the structure of the pattern. If twist is applied to such a vortex filament it may undergo a series of bifurcations as the twist density is increased. Some of the bifurcations are akin to those seen in elastic rods. The filament bifurcates via supercritical Hopf bifurcations to a helix, and subsequently to a super-coiled helix. Further increases in the twist density lead to more complex structures. These features are analyzed using results from the topology of ribbons.

March 28, 2003
Eugene Fiume, Computer Science, University of Toronto
Signal theoretic characterisation of three dimensional polygonal geometry

Computer graphics abounds in the shameless theft of techniques from the mathematical sciences. In some cases, various aspects of the field can strongly benefit from more systematic mathematical treatment.
In the past ten years, collections of polygons have become the "normal form" of geometric representations for computer graphics. Operations such as smoothing, enhancement, compression and decimation that are performed on such meshes very strongly suggest the desirability of representing polygonal meshes as three-dimensional signals so that, for example, frequency domain representations might be realisable. In this talk, I will speak of my collaboration with Richard Zhang on the signal theoretic representation of polygonal meshes, why it is important to find one, and our progress in the development of the equivalent of a Discrete Fourier Transform for such 3D geometry.
Radó Tibor
Birthplace: Budapest, Budapest, Hungary
Death: Died in New Smyrna Beach, Volusia, Florida, United States

About Tibor Radó

Dr. Rado was one of the galaxy of Hungarian mathematicians who came to the United States after World War I and imparted a significant impulse to the development of mathematical studies. Professor Rado's contributions to mathematical theory ranged from geometry to abstract formulas, including such subjects as calculus of variations, analysis in general, conformal mapping, minimal surfaces, complex functions, geometry of area, Riemann surfaces, and the Plateau problem.

Professor Rado was born in Budapest on June 2, 1895. From 1913 to 1915 he attended the Polytechnic Institute in the Hungarian capital. He then joined the Hungarian armed forces as a first lieutenant and was captured on the Russian front and sent to Siberia. He escaped from the prisoners' camp, and his subsequent odyssey took him to the Arctic regions of Russia, where he lived with Eskimos while moving slowly westward, seeking final escape to his homeland. After thousands of miles across the Arctic wastelands, Dr. Rado returned to Hungary and resumed his education. In 1923, he received a Doctor of Philosophy degree from the University of Szeged.

Dr. Rado taught for a brief period at the University of Szeged and then went to Germany as a research fellow for the Rockefeller Foundation. In 1929, he came to the United States. He lectured at Harvard University and the Rice Institute and in 1930 joined the faculty of The Ohio State University. In 1933, Dr. Rado published his first original contribution to mathematical thought, "On the Problem of Plateau," which was translated into every Western language and brought him instant fame. In 1935, he published his second work, "Subharmonic Functions." As World War II entered its final phase, he interrupted his academic career to render a special service to the United States Government.
As a science consultant to the armed forces, he was sent to Germany to find German scientists needed by the United States as it approached the nuclear and missile age. Dr. Rado then returned to his research at the Institute for Advanced Study at Princeton, New Jersey. In 1946, he became Chairman of the Department of Mathematics at The Ohio State University, a position he held through 1948. The following year he was named Research Professor. Professor Rado served as a Visiting Professor at a number of universities, including the University of Chicago, the University of Puerto Rico, and Kansas State. In the last six years, Dr. Rado's work with computers was concentrated on the design of automatic systems, appropriate mathematical tools, the method called Turing machines, named after the English mathematician, Alan M. Turing, which he preferred to call "Turing programs," and the limitations of what computers can do. Tibor Rado (June 2, 1895 – December 29, 1965) was a Hungarian mathematician who moved to the USA after World War I. He was born in Budapest and between 1913 and 1915 attended the Polytechnic Institute. In World War I, he became a First Lieutenant in the Hungarian Army and was captured on the Russian Front. He escaped from a Siberian prisoner camp and, traveling thousands of miles across Arctic wasteland, managed to return to Hungary. He received a doctorate from the Franz Joseph University in 1923. He taught briefly at the university and then became a research fellow in Germany for the Rockefeller Foundation. In 1929, he moved to the United States and lectured at Harvard University and the Rice Institute before obtaining a faculty position in the Department of Mathematics at Ohio State University in 1930. In 1935 he was granted American citizenship. In the 1920s, he proved that surfaces have an essentially unique triangulation. 
In 1933, Radó published "On the Problem of Plateau" in which he gave a solution to Plateau's problem, and in 1935, "Subharmonic Functions". In World War II he was science consultant to the United States government, interrupting his academic career. He became Chairman of the Department of Mathematics at Ohio State University in 1948. His work focused on computer science in the last decade of his life and in May 1962 he published one of his most famous results in the Bell System Technical Journal: the Busy Beaver function and its non-computability ("On Non-Computable Functions"). He died in New Smyrna Beach, Florida. • Über den Begriff der Riemannschen Fläche, Acta Scientarum Mathematicarum Universitatis Szegediensis, 1925 • The problem of least area and the problem of Plateau, Mathematische Zeitschrift Vol. 32, 1930, p.763 • On the problem of Plateau, Springer-Verlag, Berlin, Ergebnisse der Mathematik und ihrer Grenzgebiete, 1933, 1951, 1971 • Subharmonic Functions, Springer, Ergebnisse der Mathematik und ihrer Grenzgebiete, 1937[1] • Length and Area, AMS Colloquium Lectures, 1948[2] • with Paul V. Reichelderfer Continuous transformations in analysis - with an introduction to algebraic topology, Springer 1955 • On Non-Computable Functions, Bell System Technical Journal 41/1962 • Computer studies of Turing machine problems, Journal of the ACM 12/1965 • Radó's theorem (Riemann surfaces) • Radó's theorem (harmonic functions) • ^ Tamarkin, J. D. (1937). "Review: T. Radó, Subharmonic Functions". Bull. Amer. Math. Soc. 43 (11): 758-759. • ^ McShane, E. J. (1948). "Review: Tibor Radó, Length and area". Bull. Amer. Math. Soc. 54 (9): 861-863. • Tibor Radó at the Mathematics Genealogy Project • O'Connor, John J.; Robertson, Edmund F., "Tibor Radó", MacTutor History of Mathematics archive, University of St Andrews. 
Topic: shuffle without enough memory
Replies: 8 | Last Post: Nov 12, 2012 12:34 PM

Posted: Dec 22, 2011 9:48 AM

Dear All,

I have used this shuffle function for my problem: I need to form 10^9 combinations by randomly selecting three from 1000 integers (1...1000 refers to the firm identity). After finding the combinations, I refer to the single firms' returns and calculate the product of the three to get one value, which I call "Prod". So I should have 10^9 "Prod"s. Finally, I need to calculate the median value of all these Prods.

Thanks to Jan Simon, using shuffle, I managed to do this. However, the memory in my PC only allows me to have 10^7 combinations, and the result is not good given the memory limit. Can anyone suggest a good way to find the median value from the 10^7 Prods, instead of telling me to change the computer for larger space?

Thanks a lot in advance, and wish you all a nice holiday.

Thread replies:
12/22/11 shuffle without enough memory (Skirt Zhang)
12/24/11 Re: shuffle without enough memory (Roger Stafford)
12/25/11 Re: shuffle without enough memory (Q Zhang)
12/26/11 Re: shuffle without enough memory (Skirt Zhang)
11/9/12 Re: shuffle without enough memory (Skirt Zhang)
11/11/12 Re: shuffle without enough memory (Roger Stafford)
11/12/12 Re: shuffle without enough memory (Q Zhang)
11/12/12 Re: shuffle without enough memory (Roger Stafford)
11/12/12 Re: shuffle without enough memory (Q Zhang)
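One pragmatic workaround (a sketch of my own, not from the thread, and not Jan Simon's shuffle code): the median of 10^9 products is statistically very close to the median of a large random subsample, so one can stream a manageable number of random triples and take the median of those, without ever materializing all combinations:

```python
import random
import statistics

def sampled_median(returns, n_samples, seed=0):
    """Estimate the median of products of 3 randomly chosen firm returns
    by streaming a random subsample instead of all ~10^9 triples."""
    rng = random.Random(seed)
    n = len(returns)
    prods = []
    for _ in range(n_samples):
        i, j, k = rng.sample(range(n), 3)      # three distinct firms
        prods.append(returns[i] * returns[j] * returns[k])
    return statistics.median(prods)

# Synthetic returns for 1000 firms, purely for illustration:
rng = random.Random(42)
returns = [rng.uniform(0.5, 1.5) for _ in range(1000)]
est_small = sampled_median(returns, 10_000, seed=1)
est_big = sampled_median(returns, 100_000, seed=2)
```

If an exact median is required, a two-pass histogram refinement works in bounded memory: stream all products once to build a coarse histogram, locate the bucket containing the median, then stream again to refine within that bucket.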
et al., the Type 2 capillaries seen above the edge of the jump before turbulent collapse can be interpreted as shear instabilities, or "vortex waves".

8. Breaking in modulated wave trains (wave groups) occurs generally at lower wave steepnesses than in a uniform wave train. In a periodic group containing two waves (Figure 31) the onset of breaking occurred in a wave for which ak=0.32. Final overturning was at ak=0.38. There is no unique relation between the group velocity and the horizontal particle velocity at the onset of breaking.

9. Contrary to earlier conclusions, the initial crest bulge in the experiments of Duncan et al. is not a crest instability, since it occurs at a lower wave steepness, but resembles more the instability that occurs in a wave group. However the final form of a breaking wave is remarkably invariant, whether it arises from a crest instability or from a wave group. To find a rational theory for the onset of breaking waves in a group is a main unsolved problem.

The author's work has been supported by the Office of Naval Research under Contracts N00014–91–1582 and N00014–91–1–0008.

1. Duncan, J.H., Philomin, V., Qiao, H. and Kimmel, J. 1994 The formation of a spilling breaker. Phys. Fluids 6, S2.
2. Longuet-Higgins, M.S. 1974 Breaking waves—in deep and shallow water. Proc. 10th Symp. on Naval Hydrodynamics (Cambridge, Mass.), U.S. Govt. Printing Off., pp. 597–605.
3. Cleaver, R.P. 1992 Instabilities of surface gravity waves. Ph.D. thesis, University of Cambridge, 224 pp.
4. Jenkins, A.D. 1994 A stationary potential-flow approximation for a breaking-wave crest. J. Fluid Mech. 280, 335–347.
5. Shrira, V.I., Badulin, S.I. and Kharif, C. 1996 A model of water-wave "horse-shoe" patterns. J. Fluid Mech. 318, 375–405.
6. Michell, J.H. 1893 On the highest gravity waves on deep water. Phil. Mag. (5) 36, 430.
7. Williams, J.M. 1981 Limiting gravity waves in water of finite depth. Phil. Trans. R. Soc. Lond. A 302, 139–188.
8. Longuet-Higgins, M.S. and Fox, M.J.H. 1977 Theory of the almost-highest wave: The inner solution. J. Fluid Mech. 80, 721–741.
9. Longuet-Higgins, M.S. and Fox, M.J.H. 1978 Theory of the almost-highest wave. II. Matching and analytic extension. J. Fluid Mech. 85, 769–786.
10. Longuet-Higgins, M.S. and Cleaver, R.P. 1994 Crest instabilities of gravity waves. Part 1. The almost-highest wave. J. Fluid Mech. 258, 115–129.
11. Longuet-Higgins, M.S., Cleaver, R.P. and Fox, M.J.H. 1994 Crest instabilities of gravity waves. Part 2. Matching and asymptotic analysis. J. Fluid Mech. 259, 333–344.
12. Longuet-Higgins, M.S. and Dommermuth, D.G. 1996 Crest instabilities of gravity waves. Part 3. Nonlinear development and breaking. J. Fluid Mech. (in press).
13. Tanaka, M. 1983 The stability of steep gravity waves. J. Phys. Soc. Japan 52, 3047–3055.
14. Saffman, P.G. 1985 The superharmonic instability of finite-amplitude water waves. J. Fluid Mech. 159, 169–174.
15. Longuet-Higgins, M.S. and Tanaka, M. 1996 On the crest instabilities of steep surface waves. J. Fluid Mech. (in press).
16. Miles, J.W. 1980 Solitary waves. Ann. Rev. Fluid Mech. 13, 11–43.
17. Evans, W.A.B. and Dörr, U. 1991 New minima in solitary water wave properties close to the maximum wave. Rep. Phys. Lab., Univ. of Kent, Canterbury, U.K., 22 pp.
18. Longuet-Higgins, M.S. and Fenton, J.D. 1974 On the mass, momentum, energy and circulation of a solitary wave. II. Proc. R. Soc. Lond. A 340, 471–493.
19. Byatt-Smith, J.G.B. and Longuet-Higgins, M.S. 1976 On the speed and profile of steep solitary waves. Proc. R. Soc. Lond. A 305, 175–189.
Riemannian geometry
August 21st 2006, 09:25 AM #1

Let M, N be connected smooth Riemannian manifolds. I define the metric as usual, the infimum of lengths of curves between the two points (the length is defined by the integral of the norm of the velocity vector of the curve). Suppose $\phi$ is a homeomorphism which is a metric isometry. I wish to prove $\phi$ is a diffeomorphism. Please, anyone who can help. Thanks in advance.

Pick a point $q\in N$. There is an $\epsilon>0$ such that $\exp_q$ is a diffeomorphism of the ball $B(0,\epsilon)=B_{\epsilon}^N\subset T_qN$ (the tangent space) into $N$. Since $\phi$ is a homeomorphism, $\phi^{-1}(\exp_q(B_{\epsilon}^N))$ is an open set in $M$ and $\exists \ p\in\phi^{-1}(\{q\})\cap\phi^{-1}(\exp_q(B_{\epsilon}^N))$.

Now, since $\phi$ is an isometry, for all geodesics $\gamma:(-\delta,\delta)\rightarrow N$ with $\gamma(0)=\phi(p)$, and for the geodesic $\beta:(-\eta,\eta)\rightarrow M$ with $\beta(0)=p$ and ${\rm d}\phi_p(\beta'(0))=\gamma'(0)$, we have that $\gamma=\phi\circ\beta$. This means $({\rm d}\phi_p)^{-1}$ exists (on all of the tangent space, as $N$ is complete); and since $M$ is complete, we can consider the map $\psi:\exp_q(B_{\epsilon}^N)\rightarrow\exp_p(B_{\epsilon}^M)$ defined by $\psi=\exp_p\circ({\rm d}\phi_p)^{-1}\circ(\exp_q|_{B_{\epsilon}^N})^{-1}$. This is a diffeomorphism, and inverse to the (restricted) map $\phi:\exp_p(B_{\epsilon}^M)\rightarrow\exp_q(B_{\epsilon}^N)$. So $\phi$ is a diffeomorphism.

Note. I do not see why this would not work if $\phi$ was only a local isometry.
Note 2. Thin Lizzy - Whiskey in the jar

Last edited by Rebesques; August 26th 2006 at 06:53 PM.

Sorry, stupid me - use the argument about $({\rm d}\phi)^{-1}$ to get just $\phi$ to be a diffeo. No need to make our lives harder by computing inverses. Sorry again, I was... checking my signature out and got carried away.

Last edited by Rebesques; August 26th 2006 at 10:35 PM.
M, N are not complete.

I can see how the hypotheses on M can be relaxed, by following the same argument. But no completeness for N, well... There is no way I can see it happen, and my guess is there is no simple way to do this.
maths 08 question #4 doubt

ggmatt wrote: If 4 women and 6 men work in the accounting department, in how many ways can a committee of 3 be formed if it has to include at least one woman?

The official solution to the above problem is: Consider an unconstrained version of the question: in how many ways can a committee of 3 be formed? The answer is 10C3. From this figure we have to subtract the number of committees that consist entirely of men: 6C3. The final answer is 10C3 - 6C3 = 120 - 20 = 100.

However, my doubt is: why can't we solve it using the below?

number of ways = 4C1*6C2 + 4C2*6C1 + 4C3 (sum of 3 different cases)
1st case: select 1 woman out of 4 and 2 men out of 6 to make a team of 3
2nd case: select 2 women and 1 man to make a team of 3
3rd case: select all 3 members from the 4 available women

You can do it that way too, as both are the same:
1. 10C3 - 6C3 = 100
2. (4C1 x 6C2) + (4C2 x 6C1) + 4C3 = 4x15 + 6x6 + 4 = 60 + 36 + 4 = 100
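Both counts can also be verified by brute-force enumeration in a few lines (a quick check of my own, not from the thread):

```python
from itertools import combinations
from math import comb

# Label people 0-3 as the women and 4-9 as the men.
people = range(10)
committees = list(combinations(people, 3))
with_woman = [c for c in committees if any(p < 4 for p in c)]

complement = comb(10, 3) - comb(6, 3)                               # 120 - 20
case_sum = comb(4, 1) * comb(6, 2) + comb(4, 2) * comb(6, 1) + comb(4, 3)  # 60 + 36 + 4

print(len(with_woman), complement, case_sum)   # all three equal 100
```

The two methods agree because the three cases (exactly 1, 2, or 3 women) partition the "at least one woman" committees, while the official solution counts the same set by complement.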
ECON E370 ALL Statistical Analysis for Business and Economics Economics | Statistical Analysis for Business and Economics E370 | ALL | -- P: MATH M118 or a similar course emphasizing probability concepts. P or C: E201 or E202, and MATH M119. Lectures emphasize the use of basic probability concepts and statistical theory in the estimation and testing of single-parameter and multivariate relationships. In computer labs, using Microsoft Excel, each student calculates descriptive statistics, probabilities, and least squares regression coefficients in situations based on current business and economic events. E270, S270, I Sem., II Sem., SS. Credit given for only one of the following: ECON E270, S270, E370, S370, CJUS K300, MATH/PSY K300, K310; SOC S371; or SPEA K300.
Baylor University || Department of Physics || Dr. Gregory A. Benesh

Dr. Gregory A. Benesh
Chair of the Physics Department

Ph.D. Physics, Northwestern University, 1980
M.S. Mathematics, Baylor University, 1992
M.S. Physics, Northwestern University, 1977
B.A. Physics, Rice University, 1975

Dr. Benesh is a native Texan, but came to Baylor in 1982 from a postdoctoral research position at the University of Cambridge, England. He earned his B.A. degree from Rice University, and M.S. and Ph.D. degrees from Northwestern University. Since January 2006, Dr. Benesh has served as Chairman of the Physics Department, having previously served as Director of Graduate Studies and as University Ombudsman. In his free time he enjoys playing racquetball, watching baseball games, and doing chores around his ranch.

Academic Interests and Research

Dr. Benesh's research interests include the electronic structure of solids, interactions at surfaces, and various types of embedding problems in which disruptions in otherwise perfectly-ordered systems are treated through the imposition of appropriate boundary conditions.

Recent Publications

"Homothetic Self-Similar Solutions of Three-Dimensional Brans-Dicke Gravity," joint with Anzhong Wang, Gen. Relativ. Grav. 39, 277-289 (2007).

"Asymptotics of Solutions of a Perfect Fluid Coupled with a Cosmological Constant in Four-Dimensional Spacetime with Toroidal Symmetry," joint with Anzhong Wang, Gen. Relativ. Grav. 38, 345-364.

"Approximating Infinite-k Representations: Surface Relaxations and Work Functions of Al(001) and Be(0001)," joint with Daniel Gebreselasie, Journal of Physics: Condensed Matter 9, 8359-8368 (1997).

"Surface Embedded Green Function Calculation of Total Energy and Force: Application to Al(001) and Al(110)," joint with Daniel Gebreselasie, Physical Review B 54, 5940-5945 (1996).
Current Graduate Students
David Katz

Former Graduate Students
Mark Mastin, M.S., 2007
Roger Dooley, Ph.D., 2007
Xiaojiang He, M.S., 1998
Daniel Gebreselasie, Ph.D., 1995
Lalantha Liyanage, Ph.D., 1993
William Bridgman, M.S., 1992
Derick Wristers, M.S., 1989
John Pingel, M.S., 1989
Joseph Sams, M.S., 1987
John Hester, M.S., 1985
Rex Godby, Ph.D., 1984

Courses Taught
PHY 1405 - General Physics for BA Students
PHY 1408 - Physics for Natural and Behavioral Sciences I
PHY 1409 - Physics for Natural and Behavioral Sciences II
PHY 1420 - General Physics I
PHY 1430 - General Physics II
PHY 2340 - General Physics III
PHY 3330 - Intermediate Electricity and Magnetism
PHY 3372 - Introductory Quantum Mechanics I
PHY 3373 - Introductory Quantum Mechanics II
PHY 4372 - Introductory Solid State Physics
PHY 5340 - Statistical Mechanics
PHY 5342 - Solid State Physics
PHY 5360 - Mathematical Physics I
PHY 5361 - Mathematical Physics II
PHY 5370 - Quantum Mechanics I
PHY 5371 - Quantum Mechanics II
PHY 5V95 - Graduate Research
PHY 5V99 - Thesis
PHY 6V99 - Dissertation
{"url":"http://www.baylor.edu/physics/index.php?id=68483","timestamp":"2014-04-16T20:03:56Z","content_type":null,"content_length":"22230","record_id":"<urn:uuid:5defba35-1a61-4dfb-bde7-d5a31e64b185>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00424-ip-10-147-4-33.ec2.internal.warc.gz"}
Signal to Noise and Subexposure Calculations

The signal to noise ratio per pixel, for a single sub, is expressed as follows (ignoring the contribution from dark noise):

1. SNR (for single sub) = (Obj)*t[sub] / sqrt[(Sky+Obj)*t[sub] + R^2]

where SNR = signal to noise ratio per pixel; Obj = object flux in electrons/pixel/minute; Sky = sky flux in electrons/pixel/minute; t[sub] = subexposure time in minutes; R = read noise in e RMS.

2. By combining N exposures, the noise per pixel becomes:

Noise (Tot) = sqrt[N*{(Sky+Obj)*t[sub] + R^2}]

Therefore, by combining N exposures, the SNR (Tot) per pixel becomes:

SNR (Tot) = N*(Obj)*t[sub] / sqrt[N*{(Sky+Obj)*t[sub] + R^2}]

where SNR(Tot) refers to SNR per pixel in the total exposure (assumes an average, median, or sigma reject combine for the signal component); N = number of subexposures.

3. Simplifying, this becomes

SNR (Tot) = sqrt[N]*(Obj)*t[sub] / sqrt[(Sky+Obj)*t[sub] + R^2]

4. Let K = total exposure time, which equals N*t[sub]. So N = K / t[sub].

5. Substituting K / t[sub] for N in equation 3:

SNR (Tot) = sqrt[K / t[sub]]*(Obj)*t[sub] / sqrt[(Sky+Obj)*t[sub] + R^2]

6. Simplifying:

SNR (Tot) = sqrt[K*t[sub]]*(Obj) / sqrt[(Sky+Obj)*t[sub] + R^2]

7. This yields the following representative graph of SNR versus subexposure duration for a fixed total exposure K (4 hours in this example). The values are chosen to represent a faint Ha emitting object at my imaging site, with the Maxcam CM10 (R = 8.24 e RMS). Note that the curve rises rapidly over a short span of subexposure times, and then flattens out. In this example, the gains in SNR for subexposure times beyond 10 minutes are minimal. As indicated below, it can be shown that the curve starts to plateau when the photon noise contribution dominates the read noise.

8. The curve predicted from equation 6 has an asymptote that can be easily calculated as subexposure time approaches infinity.
Specifically, as subexposure time approaches infinity, the contribution of R becomes negligible, and this factor can be ignored in determining the asymptote. The equation becomes:

SNR asymptote = sqrt[K]*(Obj) / sqrt[(Sky+Obj)]

Thus, in photon noise limited conditions, the SNR is minimally affected by the read noise or subexposure duration (i.e., assuming that the subexposure duration is sufficiently long).

9. Here is a graph that shows the asymptote for the previous example, along with a new metric that I refer to as "F". F is the fraction of the maximum SNR (at asymptote) achieved at a given subexposure time. In this example, a value of 0.9 is chosen, along with its corresponding subexposure time (x-axis). The choice of 0.9 is arbitrary - as long as the F value is a sizable fraction of the maximum achievable SNR (e.g., F greater than or equal to 0.9, for instance), the subexposure time should be fine.

10. It is easy to calculate the subexposure time for a desired value of F. F = SNR(Tot) / SNR(asymptote), which is derived further in equation 11 below.

11. F = (sqrt[K*t[sub]]*(Obj) / sqrt[(Sky+Obj)*t[sub] + R^2]) / (sqrt[K]*(Obj) / sqrt[(Sky+Obj)])

12. Solving for t[sub]: t[sub] = F^2*R^2 / [(Sky+Obj)*(1-F^2)]. For Sky >> Obj, the equation becomes t[sub] = F^2*R^2 / [Sky*(1-F^2)], where F is the desired fraction of maximum SNR you wish to achieve, R is the read noise (e RMS), and Sky is the sky flux (e/minute). This equation has similarities to the one derived by John, but it is not the same.

13. Note that the subexposure time is related to the square of the read noise, which makes having a low read noise camera ideal. Here is a representative graph showing this relationship. Even a small increase in read noise can significantly increase subexposure time.

14. As mentioned in point #7 above, the SNR curve rises quickly, and there is little to be gained beyond a certain subexposure duration.
However, the shape of the curve (specifically, how fast it flattens out) is dependent upon the sky noise level, as shown in the following graph, which demonstrates the relationship between SNR and subexposure time as a function of sky flux. Notice that the curve rapidly approaches the asymptote at a relatively light polluted site (sky flux through an Ha 6nm filter of 43 e/min), but takes longer to do so at a dark site (sky flux 2 e/min, used for illustrative purposes only). Because of this effect, the dark site requires a longer subexposure to achieve an F value of 0.9 (in this example).

15. Feel free to download my Excel spreadsheet that will allow you to calculate your subexposure time based upon this analysis. I developed this spreadsheet, and then Neil Fleming added some nice frills, including a camera drop-down button and a blue background (thanks Neil!). A few things to note when using the spreadsheet:
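The derivation in steps 1-12 can be checked numerically. Below is a minimal Python sketch (not the author's spreadsheet; the function names are mine, and the example numbers R = 8.24 e RMS and sky flux 43 e/min are taken from the text):

```python
import math

def snr_total(K, t_sub, obj, sky, read_noise):
    """Equation 6: SNR per pixel for a total exposure K split into subs of t_sub minutes."""
    return math.sqrt(K * t_sub) * obj / math.sqrt((sky + obj) * t_sub + read_noise ** 2)

def snr_asymptote(K, obj, sky):
    """Step 8: limiting SNR as the subexposure time grows (read noise negligible)."""
    return math.sqrt(K) * obj / math.sqrt(sky + obj)

def t_sub_for_fraction(F, read_noise, sky, obj=0.0):
    """Step 12: subexposure time that achieves fraction F of the asymptotic SNR."""
    return (F ** 2) * (read_noise ** 2) / ((sky + obj) * (1.0 - F ** 2))

# Sky-limited example from the text: F = 0.9, R = 8.24 e RMS, sky flux 43 e/min.
t90 = t_sub_for_fraction(0.9, 8.24, 43.0)  # roughly 6.7 minutes
```

By construction, plugging `t_sub_for_fraction` back into equation 6 recovers exactly the requested fraction of the asymptote, which is a quick consistency check on step 12.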
{"url":"http://www.starrywonders.com/snr.html","timestamp":"2014-04-21T04:32:22Z","content_type":null,"content_length":"28932","record_id":"<urn:uuid:c2204277-876a-4c3b-a886-2d722b9c1939>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00127-ip-10-147-4-33.ec2.internal.warc.gz"}
Posts by Total # Posts: 29

- Find the volume of a right circular cone of height h and semi-vertical angle a (Hint: Rotate about the x-axis).
- Will marbles move slower in Dove or Pantene shampoo?
- Relaxing under a shady tree is very pleasant.
- Determine if the given relation is a function or not. Give its domain and range. 16. (-1, 2), (1, 3), (-1, 4), (2, 5) 17. (1, 1), (2, 2), (3, 3), (4, 4)
- Check? Find the center and radius of the given circle. 1. x^2+y^2+10y+21=0 = 5
- Give the domain of the function. 1. g(x) = 5/(x^2 - 25) = all real numbers 2. f(t) = 3t^2+5t+2 = Negative 3. h(x) = square root of 5x-2 = All real numbers. Check?
- Write the standard form of the equation of the circle with the given center and radius. 14. center (-2, 0) and r = 3 = 18. Check?
- Find the slope of the line that is a) parallel and b) perpendicular to the given line. 1. x = -8 = 1/2x ; 3/7y 2. y = 10 = 2/3x ; 2/5y
- The poem is about the treatment of elderly people at accident and emergency departments of hospitals and how they seem to have to wait endlessly because the system prioritises other people over them. It was inspired by a series of visits to the A&E in our local hospital with m...
- I want to make sure this is right. The inverse of f(x) = (-3x+2)/(2x-1). I got f^-1(x) = (x-2)/(2x-3). If it's right, how do I check the answer to see if it is right? Also, would the domain of f be D = all real numbers except 1/2? And I do not know how to get the range for f^-1.
- Find S11 for 1 + 2 + 4 + 8 + ... I got 2047. Other choices are 1023, B. -1023, C. -2047.
- It was false. The following sequence is arithmetic: 2, -2, 2, -2, 2, ... True or False? I believe it's false.
- During a contest, a radio station gave away 40 free tickets to a concert. Only 25 of the free tickets were used. What percent of the tickets was used? How do we figure out the percentage?
- but how would you say the 6M or the 2M? then how do you say 6M H2SO4
- how to say 2M K2C2O4 in English - is it 2 molarity of potassium oxalate?
- The average cost for a vacation is $1,050. If a family borrows money for the vacation at an interest rate of 11.9% for 6 months, what is the total cost of the vacation including the interest on the ...
- Critical Thinking: Which of the following statements is not a claim? A) Life exists on planets other than Earth. B) Dare to stay off drugs! C) Something's force equals its mass multiplied by its acceleration. D) Joe owns a pet dog.
- A kite has diagonals 9.2 and 8 ft. What is the area of the kite?
- Automotive engine parts and operation: plz help me w/ the answers i have n tell me if i'm wrong. 1. An automotive engine's camshaft rotates at A. the same speed as the crankshaft B. one-quarter the speed of the crankshaft C. one-half the speed of the crankshaft D. twice the speed of the crankshaft. my answer...
- American Revolution: good job, but what grade are you in because i think you need better vocabulary.
- How do hurricanes affect marsh land ecosystems?
- Does anyone know where I can find information on the genealogy of basketball? So, it came from blah, which was in turn derived from this or that. Please, any help would be great. Thanks.
- A physics question: I was researching amusement park rides for a recent project (from this site I have noticed that many others are in the same boat as me). Anywho, I am doing the Ferris wheel or the Giant Wheel, and I was wondering if this was right, with respect to potential energy: this is due...
- I'm having a lot of problems with the problem. I don't want the answer but I need to know how to figure it out to get the answer. To put it in another way, I don't know where to start. A farmer takes to market in her cart, but she hit a pothole, which knocked over all ...
- THANK YOU MANNYYY. <33
- Here's an algebra question I need help on. I just need someone to explain how to get started...you don't have to solve it for me. I just don't know how to get the answer! Greta has a vegetable garden. She sells her extra produce at the local Farmer's Market. On...
- Cramer's rule: Which method is most efficient for solving large systems, Cramer's Rule or Gaussian Elimination, and why?
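The Cramer's Rule question above has a concrete answer: Gaussian elimination runs in O(n^3), while Cramer's Rule with cofactor-expansion determinants costs O(n * n!), so elimination is far more efficient for large systems. A small illustrative Python sketch (the example system is my own) shows both methods agreeing on a 3x3 system:

```python
def det(m):
    """Determinant by cofactor expansion along the first row (O(n!) - fine only for tiny n)."""
    n = len(m)
    if n == 1:
        return m[0][0]
    total = 0.0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        total += ((-1) ** j) * m[0][j] * det(minor)
    return total

def cramer(a, b):
    """Cramer's Rule: x_i = det(A with column i replaced by b) / det(A)."""
    d = det(a)
    xs = []
    for i in range(len(b)):
        ai = [row[:i] + [b[k]] + row[i + 1:] for k, row in enumerate(a)]
        xs.append(det(ai) / d)
    return xs

def gauss(a, b):
    """Gaussian elimination with partial pivoting, then back-substitution (O(n^3))."""
    n = len(b)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]  # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][c] * x[c] for c in range(r + 1, n))) / m[r][r]
    return x

A = [[2.0, 1.0, -1.0], [-3.0, -1.0, 2.0], [-2.0, 1.0, 2.0]]
b = [8.0, -11.0, -3.0]  # solution is x = 2, y = 3, z = -1
```

Both routines return the same solution on small systems; the difference only shows in how the cost scales with n.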
{"url":"http://www.jiskha.com/members/profile/posts.cgi?name=Nessa","timestamp":"2014-04-21T10:53:02Z","content_type":null,"content_length":"11679","record_id":"<urn:uuid:7f952433-f61e-44b5-b317-d66d4724f817>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00149-ip-10-147-4-33.ec2.internal.warc.gz"}
A maximal theorem for holomorphic semigroups.

Blower, Gordon and Doust, Ian (2005) A maximal theorem for holomorphic semigroups. The Quarterly Journal of Mathematics, 56 (1). pp. 21-30. ISSN 0033-5606

Let X be a closed linear subspace of the Lebesgue space L^p(Omega; mu); let -A be an invertible linear operator that is the generator of a bounded holomorphic semigroup T_t on X. Then for each 0 < a < 1 the maximal operator sup_t |T_t f(x)| belongs to L^p for each f in the domain of A^a. If moreover iA generates a bounded C_0 group and A has spectrum contained in the positive real semi-axis, then A has a bounded H^infinity functional calculus.

Item Type: Article
Journal or Publication Title: The Quarterly Journal of Mathematics
Additional Information: The definitive publisher-authenticated version: Blower, Gordon and Doust, Ian. A maximal theorem for holomorphic semigroups. Quarterly Journal of Mathematics (Oxford) 2005 56 (1): 21-30 is available online at: http://qjmath.oxfordjournals.org/cgi/reprint/56/1/21
Uncontrolled Keywords: UMD Banach spaces; transference; functional calculus
Subjects: Q Science > QA Mathematics
Departments: Faculty of Science and Technology > Mathematics and Statistics
ID Code: 1695
Deposited By: Professor Gordon Blower
Deposited On: 18 Feb 2008 09:53
Refereed?: Yes
Published?: Published
Last Modified: 19 Nov 2013 09:52
URI: http://eprints.lancs.ac.uk/id/eprint/1695
{"url":"http://eprints.lancs.ac.uk/1695/","timestamp":"2014-04-17T21:32:30Z","content_type":null,"content_length":"16631","record_id":"<urn:uuid:11a5ff49-17b4-422f-8a6f-fa2e7ef9ee0b>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00232-ip-10-147-4-33.ec2.internal.warc.gz"}
Verga, NJ Geometry Tutor

Find a Verga, NJ Geometry Tutor

...I taught Precalculus with a national tutoring chain for five years. I have taught Precalculus as a private tutor since 2001. I completed math classes at the university level through advanced...
12 Subjects: including geometry, calculus, algebra 1, writing

...As a tutor with multiple years of experience tutoring people in precalculus- and calculus-level courses, tutoring calculus is one of my main focuses. With a physics and engineering background, I encounter math at and above this level every day. With my experience, I walk the student through wha...
9 Subjects: including geometry, calculus, physics, algebra 1

...Praxis math for the most part is below or at high school level with an emphasis on practicality. As a former financial advisor I worked to pass the Series 7 and 63 exams and then spent 4 years putting these principles into place. I currently still advise a few clients so I have kept up to date on the rules that govern the securities industry.
23 Subjects: including geometry, reading, calculus, statistics

...Teaching is my passion. Currently, I teach 10th grade English at a charter school in Philadelphia. I am devoted to helping students reach beyond what they think is possible for themselves.
24 Subjects: including geometry, reading, English, writing

...I have been tutoring elementary school students since I was in 8th grade and high school students beginning in my sophomore year through my school's National Honors Society. My methods for tutoring vary from subject to subject. For matters such as math, music theory, physics, and speech, I focus on the SDT standby: See, Do, Teach.
42 Subjects: including geometry, reading, writing, English
{"url":"http://www.purplemath.com/Verga_NJ_Geometry_tutors.php","timestamp":"2014-04-19T19:54:26Z","content_type":null,"content_length":"24011","record_id":"<urn:uuid:a80ac995-c3f9-46c9-b1e6-42dc09613bc7>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00659-ip-10-147-4-33.ec2.internal.warc.gz"}
Carla Cederbaum, Assistant Research Professor

Please note: Carla has left the Mathematics department at Duke University; some info here might not be up to date.

Office Location: 218 Physics Building
Office Hours: By appointment only, please email me.

PhD, Freie Universität Berlin, 2011
Dipl, Universität Freiburg, 2007
MASt, University of Cambridge, 2003

Research Interests: Mathematical Relativity; Differential Geometry; Geometric Analysis; Calculus of Variations

Mathematical Relativity, (Differential) Geometry, Geometric Analysis, and Calculus of Variations are my main mathematical interests. I particularly enjoy working on problems that are related to physics.

In my thesis, I began working on static metrics in General Relativity. My aim was and still is to obtain a deeper understanding of their geometry and to gain more insight into their physical interpretation (mass, center of mass, behaviour of test bodies, etc.). I have coined the name "geometrostatics" for this endeavor. Static metrics appear in many physical and geometric settings; they are relevant for the static n-body problem as well as for Bartnik's concept of mass and his related conjecture about static metric extensions. Moreover, together with Jörg Hennig and Marcus Ansorg, I have studied a geometric inequality between horizon area and angular momentum for stationary and axisymmetric black holes. Our work has interesting applications in proving non-existence of multiple black hole horizons (Hennig, Neugebauer). It has been extended to general axisymmetric spacetimes containing (marginally) stable marginally outer trapped surfaces (Gabach-Clément, Jaramillo). Geometric inequalities of this type are attracting more and more attention and many different techniques have been introduced to the field (e.g. by Dain).
I work on understanding how the different approaches are related and am curious about what their interrelations might reveal. Finally, I am studying the Newtonian limit of General Relativity using Jürgen Ehlers' frame theory. I am particularly interested in proving consistency results showing that certain physical properties like relativistic mass converge to their Newtonian counterparts. In my thesis, I proved such consistency results for mass and center of mass in the geometrostatic setting. I am planning to extend my techniques and results to more general metrics in the future.

Representative Publications (More Publications)

Selected Invited Lectures
1. From Newton to Einstein: A guided tour through space and time, November 06, 2012, CUNY-CSI
2. From Newton to Einstein: a guided tour through space and time, April 27, 2012, Duke Physics Building 128

Conferences Organized
- Annual East Coast Geometry Festival, April 2012
{"url":"http://fds.duke.edu/db/aas/math/faculty/carla","timestamp":"2014-04-18T03:02:18Z","content_type":null,"content_length":"12902","record_id":"<urn:uuid:8484de28-1d2e-4e9c-b841-39535ec9d6a6>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00018-ip-10-147-4-33.ec2.internal.warc.gz"}
On complexity as bounded rationality

Results 1 - 10 of 41

- IN PROCEEDINGS OF THE 16TH ANNUAL SYMPOSIUM ON THEORETICAL ASPECTS OF COMPUTER SCIENCE, 1999. Cited by 631 (19 self).
"In a system in which noncooperative agents share a common resource, we propose the ratio between the worst possible Nash equilibrium and the social optimum as a measure of the effectiveness of the system. Deriving upper and lower bounds for this ratio in a model in which several agents share a very simple network leads to some interesting mathematics, results, and open problems."

- Artificial Intelligence, 1997. Cited by 167 (24 self).
"This paper analyzes coalitions among self-interested agents that need to solve combinatorial optimization problems to operate efficiently in the world. By colluding (coordinating their actions by solving a joint optimization problem) the agents can sometimes save costs compared to operating individually. A model of bounded rationality is adopted where computation resources are costly. It is not worthwhile solving the problems optimally: solution quality is decision-theoretically traded off against computation cost. A normative, application- and protocol-independent theory of coalitions among bounded-rational agents is devised. The optimal coalition structure and its stability are significantly affected by the agents' algorithms' performance profiles and the cost of computation. This relationship is first analyzed theoretically. Then a domain classification including rational and bounded-rational agents is introduced. Experimental results are presented in vehicle routing with real data from five dispatch centers. This problem is NP-complete and the instances are so large that, with current technology, any agent's rationality is bounded by computational complexity."

- Proc. 13th SODA, 2002. Cited by 160 (6 self).
"We study the problem of traffic routing in non-cooperative networks. In such networks, users may follow selfish strategies to optimize their own performance measure and therefore their behavior does not have to lead to optimal performance of the entire network. In this paper we investigate the worst-case coordination ratio, which is a game theoretic measure aiming to reflect the price of selfish routing. Following a line of previous work, we focus on the most basic networks consisting of parallel links with linear latency functions. Our main result is that the worst-case coordination ratio on m parallel links of possibly different speeds is Θ(log m / log log log m). In fact, we are able to give an exact description of the worst-case coordination ratio depending on the number of links and the ratio of the speed of the fastest link over the speed of the slowest link. For example, for the special case in which all m parallel links have the same speed, we can prove that the worst-case coordination ratio is Γ^(-1)(m) + Θ(1), with Γ denoting the Gamma (factorial) function. Our bounds entirely resolve an open problem posed recently by Koutsoupias and Papadimitriou [KP99]."

- In STOC, 2001. Cited by 135 (0 self).
"If the Internet is the next great subject for Theoretical Computer Science to model and illuminate mathematically, then Game Theory, and Mathematical Economics more generally, are likely to prove useful tools. In this talk I survey some opportunities and challenges in this important frontier."

- In Proceedings of the Sixteenth Conference on Uncertainty in Artificial Intelligence, 2000. Cited by 91 (0 self).
"Multi-agent games are becoming an increasingly prevalent formalism for the study of electronic commerce and auctions. The speed at which transactions can take place and the growing complexity of electronic marketplaces makes the study of computationally simple agents an appealing direction. In this work, we analyze the behavior of agents that incrementally adapt their strategy through gradient ascent on expected payoff, in the simple setting of two-player, two-action, iterated general-sum games, and present a surprising result. We show that either the agents will converge to a Nash equilibrium, or if the strategies themselves do not converge, then their average payoffs will nevertheless converge to the payoffs of a Nash equilibrium. 1 Introduction: It is widely expected that in the near future, software agents will act on behalf of humans in many electronic marketplaces based on auctions, barter, and other forms of trading. This makes multi-agent game theory (Owen, ..."

- Journal of Artificial Intelligence Research, 1995. Cited by 79 (1 self).
"Since its inception, artificial intelligence has relied upon a theoretical foundation centred around perfect rationality as the desired property of intelligent systems. We argue, as others have done, that this foundation is inadequate because it imposes fundamentally unsatisfiable requirements. As a result, there has arisen a wide gap between theory and practice in AI, hindering progress in the field. We propose instead a property called bounded optimality. Roughly speaking, an agent is bounded-optimal if its program is a solution to the constrained optimization problem presented by its architecture and the task environment. We show how to construct agents with this property for a simple class of machine architectures in a broad class of real-time environments. We illustrate these results using a simple model of an automated mail sorting facility. We also define a weaker property, asymptotic bounded optimality (ABO), that generalizes the notion of optimality in classical complexity th..."

- Artificial Intelligence, 1997. Cited by 79 (1 self).
"The long-term goal of our field is the creation and understanding of intelligence. Productive research in AI, both practical and theoretical, benefits from a notion of intelligence that is precise enough to allow the cumulative development of robust systems and general results. This paper outlines a gradual evolution in our formal conception of intelligence that brings it closer to our informal conception and simultaneously reduces the gap between theory and practice. 1 Artificial Intelligence: AI is a field in which the ultimate goal has often been somewhat ill-defined and subject to dispute. Some researchers aim to emulate human cognition, others aim at the creation of ..."

- 2003.
"We study the price of selfish routing in non-cooperative networks like the Internet. In particular, we investigate the price ..."

- Complexity, 1999. Cited by 64 (0 self).
"This article argues that the agent-based computational model permits a distinctive approach to social science for which the term "generative" is suitable. In defending this terminology, features distinguishing the approach from both "inductive" and "deductive" science are given. Then, the following specific contributions to social science are discussed: The agent-based computational model is a new tool for empirical research. It offers a natural environment for the study of connectionist phenomena in social science. Agent-based modeling provides a powerful way to address certain enduring, and especially interdisciplinary, questions. It allows one to subject certain core theories, such as neoclassical microeconomics, to important types of stress (e.g., the effect of evolving preferences). It permits one to study how rules of individual behavior give rise, or "map up", to macroscopic regularities and organizations. In turn, one can employ laboratory behavioral research findings to select among competing agent-based ("bottom up") models. The agent-based approach may well have the important effect of decoupling individual rationality from macroscopic equilibrium and of separating decision science from social science more generally. Agent-based modeling offers powerful new forms of hybrid theoretical-computational work; these are particularly relevant to the study of non-equilibrium systems. The agent-based approach invites the interpretation of society as a distributed computational device, and in turn the interpretation of social dynamics as a type of computation. This interpretation raises important foundational issues in social science, some related to intractability, and some to undecidability proper. Finally, since "emergence" figures prominently in this literature, I take up the connection between agent-based modeling and classical emergentism, criticizing the latter and arguing that the two are incompatible. © 1999 John Wiley & ..."
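The gradient-ascent abstract above claims that even when strategies fail to converge, average payoffs still converge to a Nash equilibrium payoff. The toy Python sketch below illustrates this in matching pennies with projected gradient dynamics; the step size, starting point, and function names are mine, not the cited paper's:

```python
def iga_matching_pennies(eta=0.001, steps=200000, p=0.8, q=0.6):
    """Projected gradient dynamics in matching pennies.

    With mixed strategies p = P(row plays heads) and q = P(col plays heads),
    the row player's expected payoff is V(p, q) = (2p - 1)(2q - 1) and the
    column player's is -V. Each player follows the gradient of its own
    expected payoff, clipped back to [0, 1]. Returns the row player's
    average payoff over the run.
    """
    total = 0.0
    for _ in range(steps):
        total += (2 * p - 1) * (2 * q - 1)
        dp = 2 * (2 * q - 1)   # dV/dp for the row player
        dq = -2 * (2 * p - 1)  # d(-V)/dq for the column player
        p = min(1.0, max(0.0, p + eta * dp))
        q = min(1.0, max(0.0, q + eta * dq))
    return total / steps

avg = iga_matching_pennies()
```

The unique Nash equilibrium of matching pennies is (1/2, 1/2) with value 0; under these dynamics the strategies orbit that point rather than converging, yet the running average payoff stays near the equilibrium value, matching the abstract's claim.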
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=194655","timestamp":"2014-04-19T20:53:39Z","content_type":null,"content_length":"37611","record_id":"<urn:uuid:360d048c-f6ba-4752-8db6-555f3d295462>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00489-ip-10-147-4-33.ec2.internal.warc.gz"}
Efficient Exact Maximum a Posteriori Computation for Bayesian SNP Genotyping in Polyploids

PLoS One. 2012; 7(2): e30906.

Fabio Rapallo, Editor

The problem of genotyping polyploids is extremely important for the creation of genetic maps and assembly of complex plant genomes. Despite its significance, polyploid genotyping still remains largely unsolved and suffers from a lack of statistical formality. In this paper a graphical Bayesian model for SNP genotyping data is introduced. This model can infer genotypes even when the ploidy of the population is unknown. We also introduce an algorithm for finding the exact maximum a posteriori genotype configuration with this model. This algorithm is implemented in a freely available web-based software package SuperMASSA. We demonstrate the utility, efficiency, and flexibility of the model and algorithm by applying them to two different platforms, each of which is applied to a polyploid data set: Illumina GoldenGate data from potato and Sequenom MassARRAY data from sugarcane. Our method achieves state-of-the-art performance on both data sets and can be trivially adapted to use models that utilize prior information about any platform or species.

Most agriculturally important plant species, such as potato, sugarcane, coffee, cotton and alfalfa, are polyploids. In fact, about half of the natural flowering plant species are polyploids [1]. Despite their importance, our understanding of these species does not fully benefit from marker technology.
Molecular markers are widely used for diploid species and can be very useful for building linkage maps [2], finding genomic regions associated with variation in quantitative traits (or QTL) [3], studying the genetic architecture of quantitative traits [4], and assembling genome sequences.

Accurate genotyping of polyploids (even for largely uncharacterized species or in cases when the ploidy is unknown) is a missing keystone in genetics that must be solved in order to utilize the approaches that have marked a revolution in biology over the past hundred years. Accurate genotypes are necessary to understand the genetic mechanisms and specific loci that determine phenotypes via QTL mapping and association studies. These genotypes are also necessary for the creation of linkage maps, which are exceedingly useful in developing a greater understanding of genome evolution. These linkage maps will be essential for the assembly of complex polyploid genomes.

The current approach used for several genetic studies on polyploids, especially for linkage mapping, is based on marker loci with only a single copy (simplex) in one of the parents and a nulliplex in the other. Markers (i.e. microsatellites) are then scored as presence or absence of bands [6]-[8] and behave like dominant markers. For sugarcane, most available linkage maps are based on markers segregating in this way [9]. Even if complex statistical methods are applied to obtain integrated maps that combine information from markers with both patterns simultaneously [10], [11], the available maps are based on a small sample of the genome, since markers with higher doses are normally not included; therefore, they are not well saturated and informative for genome assembly [12]. For QTL studies in sugarcane, the situation is similar. Statistical models developed for backcrosses are used for simplex markers [13].
Since the ploidy level could be related with gene expression [14], these approaches need to be modified to incorporate allele dosage using more efficient marker systems. Nowadays, new technologies such as Illumina GoldenGate™ [15] and Sequenom iPLEX MassARRAY® [16] allow researchers to generate high-throughput genotyping data from SNPs. These data usually contain two signals for each SNP locus, each one corresponding to an intensity recorded for one of the two possible alleles. The expected value of each signal intensity is proportional to the corresponding allele dosage [16], [17], and therefore SNPs are the marker of choice for genetic studies in polyploids. They are more informative than presence/absence markers, and should allow a better coverage of the genome and the development of more realistic models for linkage studies, QTL and association mapping, among other applications. In order to explore the full potential of such technologies, a first required step is the development of statistical methods for SNP genotype calling, i.e. inferring the (discrete) genotype of each individual for each locus, identifying the number of copies of each allele. For diploids, including humans, a number of methods are already available [18]. This is not the case for polyploids. Methods for polyploid genotyping need to be able to deal not only with multiple copies of the alleles, but also with some complex problems such as aneuploidy and unknown ploidy, which can be present for some species. Voorrips et al. [19] presented an approach based on mixture models for genotype calling in autotetraploids, in a similar way as done by [20] in diploids. Based on the (transformed) allele signal ratio (ratio of one signal peak to the total), they fitted a mixture of five normal distributions, each one corresponding to one genotype class (from zero to four copies of the allele). 
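Because each channel's expected intensity is proportional to the corresponding allele dosage, the observed allele-signal ratio for an individual carrying d copies out of ploidy m clusters around d/m. The sketch below only illustrates this proportionality assumption with a naive nearest-cluster caller; it is not the paper's inference method, and the function names are hypothetical.

```python
def expected_ratios(ploidy):
    """Expected allele-signal ratios for each dosage class 0..ploidy,
    assuming both channels share the same proportionality constant."""
    return [d / ploidy for d in range(ploidy + 1)]

def call_genotype(x, y, ploidy):
    """Naive per-individual dosage call: map the observed ratio
    y/(x+y) to the nearest expected class. Toy illustration only."""
    r = y / (x + y)
    return min(range(ploidy + 1), key=lambda d: abs(r - d / ploidy))
```

For a tetraploid, intensities of (100, 105) give a ratio near 0.5 and are called as dosage 2; strongly asymmetric signal strengths per allele ("skew," as in PotSNP192) would violate the assumption this sketch relies on.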
They compared several models and were also able to test for Hardy-Weinberg equilibrium in a potato panel [21]. Here we present a graphical Bayesian model for SNP genotype calling. Our graphical Bayesian method can infer genotypes even when the ploidy of the population is unknown. At the core of Bayesian thinking is the notion of modeling processes forwards rather than trying to model their inverse. Generally, a great deal of prior knowledge is available regarding the way any process behaves running forwards; when the process is modeled generatively ( i.e. running forwards), this prior knowledge can be exploited to improve the fidelity with which it describes the process. In graphical models prior knowledge regarding independence and conditional independence of variables can be visualized in the structure of the graph. The highly connected subunits of the graph can be considered with modularity; that is, a subunit can be easily interchanged with another. This modularity is what allows our model to work with populations in Hardy-Weinberg equilibrium or with the progeny of an F1 cross. We also introduce an algorithm for finding the exact maximum a posteriori (MAP) genotype configuration with this model. This algorithm is implemented in a freely available software package named SuperMASSA. We demonstrate the utility, efficiency, and flexibility of the model and algorithm by applying them to data from two polyploids processed with two different platforms: potato [19] using the Illumina GoldenGate™ assay [15] and sugarcane using Sequenom iPLEX MassARRAY® [16]. Materials and Methods An autotetraploid potato collection was used, comprising 384 SNPs scored in a panel of 224 individuals using the Illumina GoldenGate™ assay, as described in [22] and [19]. This data set is distributed along with the free R package fitTetra [23], under the GNU General Public License.
To exemplify the results obtained using the mixture model, [19] chose three loci, PotSNP016, PotSNP034 ( Figure 1) and PotSNP192. However, for locus PotSNP192, they noted that the Illumina GoldenGate assay produced significantly different signal strengths for the alleles, resulting in skewed clusters. Thus, the intensity ratio between those alleles cannot be easily used to infer genotypes. Since our model assumes the signal strength of each allele is proportional to the dosage (and that the proportionality constant for both alleles is similar), we used only PotSNP016 and PotSNP034 to exemplify our method. For this data set, we use the same model of the genotype distribution as [19] ( i.e. Hardy-Weinberg). Moreover, since we know the ploidy for both the diploid and tetraploid potatoes, we can check if the ploidy estimated by our model matches the actual one. These two SNPs were also scored in 64 diploid potato varieties that were used for a visual check of the goodness of fit. We also analyze the diploid individuals using PotSNP016 and PotSNP034. A sugarcane mapping population derived from a cross between two commercial varieties (IACSP 95-3018[16]. This assay is based on allele-specific primer extension with a mass-modified terminator [24]. The DNA products of this reaction are analyzed by a MALDI-TOF mass spectrometer and each polymorphic region of interest is detected by the mass of the allele-specific primer [25]. Both parents were also scored 12 times for each SNP. If the ionization efficiency is similar for both alleles, the intensities produced by mass spectrometry are proportional to abundance (with a very similar proportionality constant if run in the same sample prep); therefore, if the amplification of both alleles is similar, the skew is minimal. We observe much less skew in the sugarcane data set compared to the potato data set. Modern sugarcane varieties have highly polyploid and aneuploid genomes, with ploidy levels ranging from 5 to 16 [26], [27].
Therefore, unless there is strong cytological information for a marker, it is important to also estimate the ploidy. Since we want to test our model and do not have a reference point for sugarcane (such as the known diploid or tetraploid potato varieties), and also because sugarcane meiosis frequently results in deviations from the expected Mendelian segregation ratios [26]–[28], we used a blind method to curate the data and evaluate SuperMASSA. First, all sugarcane loci were curated by eye using several criteria. For each locus, an expert looked at raw scatter plots as shown in Figure 1 and assessed the following: i) the overall quality; ii) the number of clusters; and iii) the expected ploidy level based on parental data. This resulted in 27 SNPs that were easily classified by eye. SuperMASSA was used to predict the ploidy and number of clusters for each of these 27 loci and three of them (the three judged to be of the highest quality) are used to show the results of our model. It is important to note that in this blind validation experiment, SuperMASSA was not used to curate the data and the model behind SuperMASSA was not changed after observing and curating the data. Probabilistic Graphical Model We use a Bayesian approach to model the probability of the observed data given the ploidy and all genotypes. By modeling the generative process ( i.e. the process by which the data is produced assuming we know the ploidy and genotypes of all individuals), we can build the model from realistic assumptions for the data. Using the model, we then perform inference (described in the Probabilistic Inference section) to effectively enumerate all possible ploidies and genotypes for individuals in the population, and choose the configuration that maximizes the posterior probability of the model. This configuration is known as the maximum a posteriori (MAP) and is guaranteed to result in the highest possible probability.
In Figure 2 we present two probabilistic directed graphical models of the SNP genotyping process for a single locus: a Hardy-Weinberg model and an A Graphical View of SNP Genotyping. Hardy-Weinberg and For both models, the “genotype configuration” The observed data For some where the operator For any genotype configuration Both the Hardy-Weinberg and Supplement S1). Hardy-Weinberg Model Figure 2A depicts the dependencies of the Hardy-Weinberg model. In the Hardy-Weinberg model, the theoretical distribution of genotypes is modeled using a binomial distribution. Given Figure 2B depicts the dependencies of the Therefore, the probability of observing offspring In the In Figure 2B dashed nodes and arrows represent variables and dependencies that exist only when data from the parents is included. The probability of these parameters can be modeled as conditionally independent, just like When parental data is used, the parents are distinct and so the number of unique parental combinations becomes Generalized Population Model The inference procedure described does not make any special use of the type of parameters that determine We define the “generalized population model” as the model defined using Before inference is performed, it is necessary to demonstrate that the parameters i.e. they are “identifiable”). By the law of large numbers, the densities of the genotypes and allele intensities converge to the density expected from the parameters i.e. that By assumption, our model considers data which is a weighted sum of Gaussians (one for each genotype), each with a mean If this set of However, both models considered (Hardy-Weinberg and Probabilistic Inference In order to perform inference on the generalized population model described in the Probabilistic Graphical Model section, we introduce three approaches: a greedy approach (maximum likelihood), an exact approach (MAP) via dynamic programming, and a substantially more efficient exact approach (also MAP). 
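The binomial genotype distribution of the Hardy-Weinberg model can be written down directly: for ploidy m and allele frequency p, the dosage of one allele follows a Binomial(m, p) distribution. A minimal sketch (the function name is ours, not SuperMASSA's):

```python
from math import comb

def hwe_genotype_prior(ploidy, p):
    """Hardy-Weinberg dosage distribution for an autopolyploid:
    P(dosage = d) = C(ploidy, d) * p**d * (1 - p)**(ploidy - d)."""
    return [comb(ploidy, d) * p**d * (1 - p)**(ploidy - d)
            for d in range(ploidy + 1)]
```

For a tetraploid with p = 0.5 this gives the familiar 1:4:6:4:1 proportions over the five dosage classes.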
For all inference methods, assume Graphically, it is trivial to demonstrate why MAP inference is difficult. Consider [29] of a graph containing an e.g. naive enumeration or junction tree inference [30], [31]) will require number of steps exponential in Greedy Inference Rather than jointly consider all genotype assignments, the greedy approach approximates The greedy estimate can independently compute the most likely genotype of each For each Using the equation 3, the configuration with the highest joint posterior can be found by enumerating outcomes of Exact Inference The combinatorial dependencies between genotypes in different individuals must be recognized in order to compute the MAP genotype configuration. It is tempting to approximate these dependencies with a mixture model. A mixture model approach treats all In the simplest approach, all possible genotype configurations can be enumerated naively in exponential time, resulting in the tree shown in Figure 3A. Although it is infeasible to think of enumerating the entire tree, it may be possible to ignore subtrees that cannot lead to an optimum, substantially reducing the search space. Illustration of Exact Inference. Consider individuals in an arbitrary order with some genotypes assigned: Let Given a genotype and parameter configuration The prefixes correspond to paths from the top of the tree in Figure 3A; prefixes that are shown to be suboptimal can be “bound,” meaning that they are not branched and searched further down. The second product may be cached for all A more sophisticated dynamic programming approach (shown in Figure 3B) merges nodes of equal depth that produce identical distribution prefixes and the number of individuals with each genotype in the genotype prefix. Because Efficient Exact Inference There are a number of reasons that the naive and dynamic programming branch and bound methods are inefficient. 
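The branch-and-bound idea described above — enumerate genotype configurations as a tree, but prune any prefix whose best possible completion cannot beat the incumbent — can be illustrated in a few lines. This toy version bounds a prefix by the product of each remaining individual's best per-genotype likelihood; it omits the population-level genotype-distribution term that the paper's method also bounds, so it is a sketch of the pruning mechanics only, with hypothetical function names.

```python
def map_configuration(likelihoods):
    """likelihoods[i][g] = likelihood that individual i has genotype g.
    Depth-first branch and bound over joint genotype configurations."""
    n = len(likelihoods)
    # suffix_best[i]: upper bound on the likelihood of any completion
    # for individuals i..n-1 (product of per-individual maxima).
    suffix_best = [1.0] * (n + 1)
    for i in range(n - 1, -1, -1):
        suffix_best[i] = suffix_best[i + 1] * max(likelihoods[i])
    best = [0.0, None]  # [incumbent score, incumbent configuration]

    def branch(i, score, assigned):
        if score * suffix_best[i] <= best[0]:
            return  # bound: no completion of this prefix can win
        if i == n:
            best[:] = [score, assigned]
            return
        for g, lik in enumerate(likelihoods[i]):
            branch(i + 1, score * lik, assigned + (g,))

    branch(0, 1.0, ())
    return best[1]
```

Once a good incumbent is found, entire subtrees are discarded without enumeration; the paper's exact method achieves far stronger pruning by also bounding the multinomial term over genotype counts.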
First, the number of nodes visited in these trees may be as much as For these reasons, we introduce a novel geometric branch and bound method; this method has several advantages. First, when the number of individuals is substantially larger than the ploidy ( To present our branch and bound method, we first rephrase the problem in a geometric context and then derive a geometric property of optimal configurations (Figure 4). In the likelihood e.g. normalizing on a unit circle) will also enable ordering the points in this way and are compatible with this method. Illustration of a Suboptimal Genotype Configuration. Fix the genotype distribution then Supplement S1) that genotype configurations that do not form contiguous genotype blocks along the line This approach lets us find the optimal genotype configuration for a given Given a prefix distribution [32]; our approach generalizes this for the multinomial distribution, rather than a single count. Furthermore, the joint probability of the best genotype configuration consistent with the prefix distribution is bounded above by the product of the multinomial bound, the prefix likelihood, and the best remaining suffix likelihood (more thorough proof shown in Supplement S1): Using this formula, branch and bound can be performed on the tree composed of the search space for the distribution e.g. using the multinomial and restricting the suffix genotype configurations) to establish a much tighter bound. 
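The geometric property above — that with individuals sorted by their allele-signal ratio, some optimal configuration assigns each genotype to a contiguous block along the line — already shrinks the search dramatically. A toy version under that assumption (our naming; it enumerates block boundaries with a uniform prior over genotype distributions, rather than performing the paper's bound-based search):

```python
from itertools import combinations_with_replacement
from math import prod

def best_contiguous_config(likelihoods, n_genotypes):
    """likelihoods[i][g] for individuals i already sorted by signal ratio.
    Try every placement of block boundaries (non-decreasing cut points)
    and return the best contiguous genotype assignment."""
    n = len(likelihoods)
    best_score, best_cfg = -1.0, None
    for cuts in combinations_with_replacement(range(n + 1), n_genotypes - 1):
        bounds = (0,) + cuts + (n,)
        cfg = []
        for g in range(n_genotypes):
            cfg += [g] * (bounds[g + 1] - bounds[g])  # block g of individuals
        score = prod(likelihoods[i][g] for i, g in enumerate(cfg))
        if score > best_score:
            best_score, best_cfg = score, tuple(cfg)
    return best_cfg
```

The search space is now over genotype-count distributions rather than per-individual assignments, which is what makes the exact computation feasible when the number of individuals greatly exceeds the ploidy.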
This method lets us efficiently find the exact MAP Approximating the Posterior Probability of the MAP Configuration Given an initial guess at the MAP configuration where the constant of proportionality is similar for Therefore, the posterior of a configuration can be approximated: Denote the greedy genotype configuration for Rather than bound any distribution prefix for which all joint probabilities provably inferior to Supplement S1) that the greatest absolute posterior error Approximating Posterior Probabilities for Each Genotype Assignment It is important to distinguish the configuration posterior (which we approximate above) from posterior estimates that each individual is assigned the correct genotype. SuperMASSA, our implementation of the proposed efficient geometric inference method, also approximates the posteriors for each individual by using the relative likelihood between the MAP genotype and the other possible genotypes for that individual. The user is allowed to set a threshold for this value, and only the individuals with a likelihood ratio exceeding this posterior will be reported (in both figures and output genotype assignments). This approach formalizes heuristics that filter out data points with a total intensity Furthermore, it is possible to extend our approach to compute exact posteriors for each genotype assignment. The space searched by branch and bound would be much more complex; however, the MAP genotype configuration computed above would provide the most efficient possible bound. When the MAP has a substantial portion of the probability mass, nearly every subtree will be bounded, resulting in a very efficient runtime. Runtime Improvement with Geometric Branch and Bound The improved runtime of our geometric branch and bound method relative to the dynamic programming method is a nontrivial change; it makes exact MAP computation feasible where it was not before. 
In Figure 5, we demonstrate the relationship between the ploidy Runtime of Exact MAP Computation with Dynamic Programming and Geometric Branch and Bound. Inference Results from Potato and Sugarcane Data For all loci investigated, Table 1 shows the ploidy and number of clusters predicted by both the expert and SuperMASSA. The application of our method provided very good results for the SNPs evaluated, both for potato (diploid and tetraploid) and sugarcane. For potato, SuperMASSA was able to find the correct ploidy level and number of clusters in all cases. For sugarcane the ploidy level was the same for 21 SNPs. For the remaining loci, SuperMASSA predicted similar ploidies for four (differences from 10 to 8 in SugSNP004, 12 to 14 in SugSNP013, and 8 to 6 in SugSNP186 and SugSNP204) and incorrect ploidies (10 to 14 in SugSNP060 and 6 to 14 in SugSNP114). It is important to note that the curated result is not sacrosanct; the exact answer is not known, since the ploidy level is unknown for sugarcane. The number of clusters for sugarcane was the same for 24 SNPs, with only small differences in the remaining. Interestingly, this happened only for loci with different results for ploidy level as well. SuperMASSA Results on Potato and Sugarcane Loci. Further investigation into the loci where the expert and SuperMASSA disagree revealed that the distributions resulting from the ploidies set by the expert were quite divergent from the theoretical distributions expected for any possible sets of parents. The expert did not analyze these distributions when curating the data, because it was prohibitively time-consuming: the number of possible parents for the considered ploidy range (two to 16) totals 444; enumerating all sets of parents for the 241 considered sugarcane loci would have resulted in 107,004 figures requiring manual analysis. 
SuperMASSA Output from Selected Potato and Sugarcane Loci SuperMASSA was run on two potato loci (from both the diploid and tetraploid individuals) and on sugarcane loci using the same parameters. The ploidy range searched was 2 to 16 (only even ploidies were searched) and the Figure 6 shows the output from SuperMASSA on potato loci from the diploids and tetraploids. For the diploid potato used as reference by [19], it is easy to see that the results strongly agree with what is expected. First, the observed and estimated proportion of individuals on each class of the distribution are very close to each other. Second, there are 3 clusters corresponding to alleles with 0, 1 or 2 copies. It is also possible to see that there is no skew on the clusters around the expected angles for each cluster ( SuperMASSA Output on Potato Loci. Figure 7 shows the output from SuperMASSA on three sugarcane loci. For each of these loci, there is a strong agreement between the expected and observed number of individuals in each cluster for an SuperMASSA Output on Sugarcane Loci. These results presented were possible only because our novel approach to inference substantially reduced the search space and permitted much greater utilization of available information ( e.g. prior knowledge about rare genotype frequencies) in the branch and bound. We present a geometric interpretation of how our procedure reparameterizes and decreases the size of the search space; however, the key mathematical concept that allowed us to discover the geometric property of optima was due to an exploitation of symmetry. In general, it is possible to condition on outcomes of nodes in a graphical model that perform associative operations (in this instance counting), even though these nodes depend jointly on the state of all predecessor nodes. This is possible by effectively collapsing predecessor configurations that lead to the same outcome. 
In state-of-the-art software packages for graphical models [33], this type of symmetry may not be exploited to its full potential, and so for our problem, the best runtime for an exact result would have had a worst-case time exponential in the number of individuals. In the future, these special types of dependencies could be identified automatically; it is possible that this type of symmetry is hidden in myriad other problems and could be exploited. One such straightforward generalization that could be made to our model would use a latent variable to represent the skew of each locus. A prior probability on the skew with a unique mode at zero (no skew) would choose a skewed solution only if it was inferior to all solutions with a skew of zero. Performing inference using a discretization of this latent variable would simply multiply by a constant the runtime of our method. This improvement, though simple, would be quite useful for fluorescence-based genotyping assays, which are sometimes prone to distortion in the relative intensities of each allele. It is important to note that the method that we present is not exclusively for polyploids; instead, it is a generalized method that is applicable to any ploidy. This is especially important since our method generalizes independent mixture models so that the genotypes of individuals are considered and assigned in concert rather than one at a time. Because of its simple and modular nature, both our model and the inference procedures could be trivially inserted into existing methods. Perhaps even more importantly, the mathematical inference problem we solve is nearly identical to important inference problems proposed for analysis of copy number variation; the platforms that we tested our method on are of great importance for identifying copy number variants. 
Our method (or components of the model or inference algorithm) could be applied to the relative ratio intensities (due to copy number rather than ploidy) described in [16]. Our approach undoubtedly simplifies the model of meioses in polyploids. However, even when the assumptions of our meiotic model are violated, the anomalous or seemingly contradictory results ( e.g. parents with a ploidy different from some or all progeny in an Our software SuperMASSA is implemented in Python and freely available as an online application at http://statgen.esalq.usp.br/SuperMASSA. The data from the sugarcane loci analyzed are also available at this URL. The potato data analyzed is available in [23]. Supporting Information Supplement S1 Proof that both the dynamic programming branch and bound and geometric branch and bound find the MAP genotype configuration. We would sincerely like to thank Anete P. Souza and Thiago G. Marconi for making their sugarcane data available to us. Competing Interests: The authors have declared that no competing interests exist. Funding: This research was supported by Fundacao de Amparo a Pesquisa do Estado de Sao Paulo (grant 2008/52197-4 and 2008/54402-4). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. Hieter P, Griffiths T. Polyploidy–More Is More or Less. Science. 1999;285:210–211. [PubMed] Lander ES, Green P. Construction of multilocus genetic linkage maps in humans. Proc Natl Acad Sci USA. 1987;84:2363–2367. [PMC free article] [PubMed] Lander ES, Botstein D. Mapping Mendelian Factors Underlying Quantitative Traits Using RFLP Linkage Maps. Genetics. 1989;121:185–199. [PMC free article] [PubMed] Zeng ZB, Kao C, Basten CJ. Estimating the genetic architecture of quantitative traits. Genetical Research. 1999;74:279–289. [PubMed] Lewin HA, Larkin DM, Pontius J. Every genome sequence needs a good map. Genome Research. 2009;19:1925–1928. 
[PMC free article] [PubMed] Wu KK, Burnquist W, Sorrells ME, Tew TL, Moore PH, et al. The detection and estimation of linkage in polyploids using single-dose restriction fragments. TAG Theoretical and Applied Genetics. 1992;83 :294–300. [PubMed] Ripol MI, Churchill GA, Silva JAGD, Sorrells M. Statistical aspects of genetic mapping in autopolyploids. Gene. 1999;235:31–41. [PubMed] Baker P, Jackson P, Aitken K. Bayesian estimation of marker dosage in sugarcane and other autopolyploids. Theor Appl Genet. 2010;120:1653–72. [PubMed] 9. Alwala S, Kimbeng CA. 2010. 272 Molecular Genetic Linkage Mapping in Saccharum: Strategies, Resources and Achievements, CRC Press, Science Publishers, chapter 5. 1 edition. Garcia AAF, Kido EA, Meza AN, Souza HMB, Pinto LR, et al. Development of an integrated genetic map of a sugarcane (Saccharum spp.) commercial cross, based on a maximum-likelihood approach for estimation of linkage and linkage phases. Theor Appl Genet. 2006;112:298–314. [PubMed] 11. Oliveira KM, Pinto LR, Marconi TG, Margarido GRA, Pastina MM, et al. Functional integrated genetic linkage map based on EST-markers for a sugarcane (Saccharum spp.) commercial cross. Molecular Breeding. 2007;20:189–208. Wang J, Roe B, Macmil S, Yu Q, Murray JE, et al. Microcollinearity between autopolyploid sugarcane and diploid sorghum genomes. 2010 doi: 10.1186/1471-2164-11-261. [PMC free article] [PubMed] 13. Pastina MM, Pinto LR, Oliveira KM, Souza AP, Garcia AAF. 2010. 272 (2010) Molecular Mapping of Complex Traits, CRC Press, chapter 7. 1 edition. Galitski T, Saldanha AJ, Styles CA, Lander ES, Fink GR. Ploidy Regulation of Gene Expression. Science. 1999;285:251–254. [PubMed] Fan JB, Oliphant A, Shen R, Kermani BG, Garcia F, et al. Highly parallel SNP genotyping. Cold Spring Harbor symposia on quantitative biology. 2003;68:69–78. [PubMed] Oeth P, de Mistro G, Marnellos G, Shi T, van den Boom D. 2009. pp. 307–343. 
Single Nucleotide Polymor-phisms - Single Nucleotide Polymorphisms, Humana Press, chapter\Qualitative and Quantitative Genotyping Using Single Base Primer Extension Coupled with Matrix-Assisted Laser Desorption/Ionization Time-of-Flight Mass Spectrometry (MassARRAY)”. [PubMed] Akhunov E, Nicolet C, Dvorak J. Single nucleotide polymorphism genotyping in polyploidy wheat with the Illumina GoldenGate assay. Theor Appl Genet. 2009;119:507–17. [PMC free article] [PubMed] Nielsen R, Paul JS, Albrechtsen A, Song YS. Genotype and SNP calling from next-generation sequencing data. Nature Reviews Genetics. 2011;12:443–451. [PMC free article] [PubMed] Voorrips RE, Gort G, Vosman B. Genotype calling in tetraploid species from bi-allelic marker data using mixture models. BMC bioinformatics. 2011;12:172. [PMC free article] [PubMed] Fujisawa H, Eguchi S, Ushijima M, Miyata S, Miki Y, et al. Genotyping of single nucleotide polymorphism using model-based clustering. Bioinformatics (Oxford, England) 2004;20:718–26. [PubMed] Grivet L, Hont AD, Roques D, Feldmann P, Lanaud C, et al. RFLP Mapping in Cultivated Sugarcane (Saccharum spp.): Genome Organization in a Highly Polyploid and Aneuploid Interespecific Hybrid. Genetics. 1995;142:987–1000. [PMC free article] [PubMed] Anithakumari AM, Tang J, van Eck HJ, Visser RG, Leunissen JA, et al. A pipeline for high throughput detection and mapping of SNPs from EST databases. Molecular Breeding. 2010;26:65–75. [PMC free article] [PubMed] 23. Voorrips R, Gort G. fitTetra: fitTetra is an R package for assigning tetraploid genotype scores. 2011. R package version 1.0. 24. Sequenom 2007. Typer 4.0 manual. Storm N, Darnhofer-Patel B, van den Boom D, CP R. 2003. pp. 241–262. Single Nucleotide Polymorphisms - Methods and Protocols, Humana Press, chapter MALDI-TOF Mass Spectrometry-Based SNP Genotyping. [ Grivet L, Arruda P. Sugarcane genomics: depicting the complex genome of an important tropical crop. Current Opinion in Plant Biology. 2001;5:122–127. 
[PubMed] Jannoo N, Grivet L, David J, D'Hont A, Glaszmann JC. Differential chromosome pairing affinities at meiosis in polyploid sugarcane revealed by molecular markers. Heredity. 2004;93:460–467. [PubMed] 28. Singh RJ. Plant Cytogenetics. CRC Press, 2nd edition; 2002. 29. Arnborg S, Corneil DG, Proskurowski A. Complexity of finding embeddings in a k-tree. SIAM Journal on Algebraic and Discrete Methods. 1987;8:277–284. 30. Robertson N, Seymour PD. Graph minors. III. Planar tree-width. Journal of Combinatorial Theory, Series B. 1984;36:49–64. 31. Andersen SK, Olesen KG, Jensen FV. HUGIN, a shell for building Bayesian belief universes for expert systems. San Francisco, CA, USA: Morgan Kaufmann Publishers Inc; 1990. pp. 332–337. Serang O, MacCoss MJ, Noble WS. Efficient marginalization to compute protein posterior probabilities from shotgun mass spectrometry data. Journal of Proteome Research. 2010;9:5346–5357. [PMC free article] [PubMed] 33. Bilmes J, Zweig G. The Graphical Models Toolkit: An open source software system for speech and time-series processing. Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing. 2002.
This is adapted from versions of the game played with cards or with special picture tiles for children. It is also the basis for a popular American television quiz show in which a display with pairs of prizes is used. The game uses a double six domino set. The game is usually played by two players, but any number can play.

The Deal

Players get no hands. Instead all the tiles are dealt face down into a grid layout of 4 by 7 tiles.

The Play

In his turn, each player exposes any two tiles in the grid.
• If this pair totals to 12, he removes them from the grid and takes another turn, continuing to expose pairs of tiles and take them until he fails to get a total of 12.
• If the pair of exposed tiles does not total to 12, he turns them face down again and the next player takes his turn.
The hand is over when the grid is empty. The winner of the game is the first one to reach a total of 50 captured tiles (25 pairs), or another predetermined total.

Comments & Strategy

Obviously, this is a test of memory, so that the player who can envision the tiles correctly without seeing their faces is the player who will win. One important trick in play is to remember the total and not to think about the two halves of each tile. The other trick is to realize that a set of dominoes does not break down into simple pairs, like the playing card or picture card version of this game. The [0-0] and [6-6] have to pair up with each other, as do [0-1] and [5-6]. All other tiles have some options. For example, let's go for a total of 12 by getting a seven and a five. A seven can be made by picking any of the [1-6], [2-5] and [3-4] tiles; a five can be made from any of [0-5], [1-4] and [2-3] tiles. This gives nine possible combinations which will add to 12.
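The counting claims above are easy to verify mechanically. This short sketch (function names are ours) enumerates a double-six set and the pairs of distinct tiles totaling 12:

```python
from itertools import combinations

def pairs_totaling(target=12):
    """All pairs of distinct double-six dominoes whose pip totals
    sum to the target."""
    tiles = [(a, b) for a in range(7) for b in range(a, 7)]
    return [(s, t) for s, t in combinations(tiles, 2)
            if sum(s) + sum(t) == target]

def seven_five_pairs():
    """The 'seven plus five' combinations mentioned in the text."""
    sevens = [(1, 6), (2, 5), (3, 4)]
    fives = [(0, 5), (1, 4), (2, 3)]
    return [(s, f) for s in sevens for f in fives]
```

The seven-plus-five case indeed yields the nine combinations claimed; the full enumeration (including pairings like [0-0] with [6-6], six plus six, eight plus four, and so on) gives 34 distinct pairs in total.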
Networked Bennett Linkages in Grasshopper

Continuing the theme of my previous post, this shows how several of these linkages can be joined to form larger deployable structures. It works by defining an octahedron, based on 4 user positioned points and certain geometric conditions, which allow it to join to copies of itself along 4 of its edges. The slider then controls the single degree of freedom of the resulting over-constrained structural mechanism. At the moment all units are identical and the structure deploys to a flat plane, but I'm looking at ways of letting it curve and take on more interesting shapes. This is fairly simple to do for curvature in a single direction, to form vaults etc. But finding the necessary geometric conditions for doubly curved structures (such as domes) from networks of Bennett linkages is currently an open problem. GH feels like it might be the right tool to solve it though. Again this is based on the work of Y.Chen and Z.You which I linked to earlier. There's also a shorter paper here, and a nice overview of motion structures here. Will post the ghx as soon as I've cleaned it up a little. Last year I was designing some shelters based on this stuff. I was looking at ways of bracing it with tape springs or bi-stable struts (a fascinating subject in its own right, which deserves its own post). The joint still needs work – the version shown below adds unwanted degrees of freedom. September 2, 2010 at 6:58 pm any chance of posting that ghx definition? November 11, 2010 at 6:44 pm Hey Daniel, Have you posted that grasshopper script yet? If so, where can I find it? August 2, 2011 at 10:44 am [...] large deployable structures in Grasshopper. Scripted by Daniel [...] May 9, 2012 at 7:59 pm [...]Networked Bennett Linkages in Grasshopper « Space Symmetry Structure[...]…
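For anyone experimenting outside Grasshopper: the classic compatibility condition for a Bennett linkage, from the kinematics literature (see the Chen & You papers linked above), relates the two pairs of opposite link lengths a, b and their twist angles α, β by sin α / a = sin β / b. A small hypothetical checker, not code from the post:

```python
import math

def bennett_ratio_ok(a, alpha, b, beta, tol=1e-9):
    """Check the Bennett linkage compatibility condition
    sin(alpha)/a == sin(beta)/b for the two pairs of opposite links
    (lengths a, b; twist angles alpha, beta, in radians)."""
    return abs(math.sin(alpha) / a - math.sin(beta) / b) < tol
```

Any set of four user-positioned points that fails this ratio test cannot close into a mobile Bennett loop, which is one of the "certain geometric conditions" a parametric definition has to enforce.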
{"url":"http://spacesymmetrystructure.wordpress.com/2009/02/23/networked-bennett-linkages-in-grasshopper/","timestamp":"2014-04-18T03:01:05Z","content_type":null,"content_length":"51056","record_id":"<urn:uuid:b6f8c163-b1b6-4bfa-961d-81788cd4b4fa>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00307-ip-10-147-4-33.ec2.internal.warc.gz"}
Now with offline access functionality, CourseSmart offers instructors and students the freedom and convenience of online, offline, and mobile access using a single platform. CourseSmart eTextbooks do not include media or supplements that are packaged with the bound textbook.

The Blitzer Algebra Series combines mathematical accuracy with an engaging, friendly, and often fun presentation for maximum appeal. Blitzer's personality shows in his writing, as he draws readers into the material through relevant and thought-provoking applications. Every Blitzer page is interesting and relevant, ensuring that students will actually use their textbook to achieve success!

Table of Contents

1. Algebra, Mathematical Models, and Problem Solving: 1.1 Algebraic Expressions, Real Numbers, and Interval Notation; 1.2 Operations with Real Numbers and Simplifying Algebraic Expressions; 1.3 Graphing Equations; 1.4 Solving Linear Equations; Mid-Chapter Check Point (Sections 1.1–1.4); 1.5 Problem Solving and Using Formulas; 1.6 Properties of Integral Exponents; 1.7 Scientific Notation; Group Project; Summary; Review Exercises; Test

2. Functions and Linear Functions: 2.1 Introduction to Functions; 2.2 Graphs of Functions; 2.3 The Algebra of Functions; Mid-Chapter Check Point (Sections 2.1–2.3); 2.4 Linear Functions and Slope; 2.5 The Point-Slope Form of the Equation of a Line; Group Project; Summary; Review Exercises; Test; Cumulative Review Exercises (Chapters 1–2)

3. Systems of Linear Equations: 3.1 Systems of Linear Equations in Two Variables; 3.2 Problem Solving and Business Applications Using Systems of Equations; 3.3 Systems of Linear Equations in Three Variables; Mid-Chapter Check Point (Sections 3.1–3.3); 3.4 Matrix Solutions to Linear Systems; 3.5 Determinants and Cramer's Rule; Group Project; Summary; Review Exercises; Test; Cumulative Review Exercises (Chapters 1–3)

4. Inequalities and Problem Solving: 4.1 Solving Linear Inequalities; 4.2 Compound Inequalities; 4.3 Equations and Inequalities Involving Absolute Value; Mid-Chapter Check Point (Sections 4.1–4.3); 4.4 Linear Inequalities in Two Variables; 4.5 Linear Programming; Group Project; Summary; Review Exercises; Test; Cumulative Review Exercises (Chapters 1–4)

5. Polynomials, Polynomial Functions, and Factoring: 5.1 Introduction to Polynomials and Polynomial Functions; 5.2 Multiplication of Polynomials; 5.3 Greatest Common Factors and Factoring by Grouping; 5.4 Factoring Trinomials; Mid-Chapter Check Point (Sections 5.1–5.4); 5.5 Factoring Special Forms; 5.6 A General Factoring Strategy; 5.7 Polynomial Equations and Their Applications; Group Project; Summary; Review Exercises; Test; Cumulative Review Exercises (Chapters 1–5)

6. Rational Expressions, Functions, and Equations: 6.1 Rational Expressions and Functions: Multiplying and Dividing; 6.2 Adding and Subtracting Rational Expressions; 6.3 Complex Rational Expressions; 6.4 Division of Polynomials; Mid-Chapter Check Point (Sections 6.1–6.4); 6.5 Synthetic Division and the Remainder Theorem; 6.6 Rational Equations; 6.7 Formulas and Applications of Rational Equations; 6.8 Modeling Using Variation; Group Project; Summary; Review Exercises; Test; Cumulative Review Exercises (Chapters 1–6)

7. Radicals, Radical Functions, and Rational Exponents: 7.1 Radical Expressions and Functions; 7.2 Rational Exponents; 7.3 Multiplying and Simplifying Radical Expressions; 7.4 Adding, Subtracting, and Dividing Radical Expressions; Mid-Chapter Check Point (Sections 7.1–7.4); 7.5 Multiplying with More Than One Term and Rationalizing Denominators; 7.6 Radical Equations; 7.7 Complex Numbers; Group Project; Summary; Review Exercises; Test; Cumulative Review Exercises (Chapters 1–7)

8. Quadratic Equations and Functions: 8.1 The Square Root Property and Completing the Square; 8.2 The Quadratic Formula; 8.3 Quadratic Functions and Their Graphs; Mid-Chapter Check Point (Sections 8.1–8.3); 8.4 Equations Quadratic in Form; 8.5 Polynomial and Rational Inequalities; Group Project; Summary; Review Exercises; Test; Cumulative Review Exercises (Chapters 1–8)

9. Exponential and Logarithmic Functions: 9.1 Exponential Functions; 9.2 Composite and Inverse Functions; 9.3 Logarithmic Functions; 9.4 Properties of Logarithms; Mid-Chapter Check Point (Sections 9.1–9.4); 9.5 Exponential and Logarithmic Equations; 9.6 Exponential Growth and Decay: Modeling Data; Group Project; Summary; Review Exercises; Test; Cumulative Review Exercises (Chapters 1–9)

10. Conic Sections and Systems of Nonlinear Equations: 10.1 Distance and Midpoint Formulas; Circles; 10.2 The Ellipse; 10.3 The Hyperbola; Mid-Chapter Check Point (Sections 10.1–10.3); 10.4 The Parabola; Identifying Conic Sections; 10.5 Systems of Nonlinear Equations in Two Variables; Group Project; Summary; Review Exercises; Test; Cumulative Review Exercises (Chapters 1–10)

11. Sequences, Series, and the Binomial Theorem: 11.1 Sequences and Summation Notation; 11.2 Arithmetic Sequences; 11.3 Geometric Sequences and Series; Mid-Chapter Check Point (Sections 11.1–11.3); 11.4 The Binomial Theorem; Group Project; Summary; Review Exercises; Test; Cumulative Review Exercises (Chapters 1–11)

Where Did That Come From? Selected Proofs

Purchase Info
With CourseSmart eTextbooks and eResources, you save up to 60% off the price of new print textbooks, and can switch between studying online or offline to suit your needs. Once you have purchased your eTextbooks and added them to your CourseSmart bookshelf, you can access them anytime, anywhere.
Buy Access Intermediate Algebra for College Students, CourseSmart eTextbook, 6th Edition Format: Safari Book $81.99 | ISBN-13: 978-0-321-76039-5
{"url":"http://www.mypearsonstore.com/bookstore/intermediate-algebra-for-college-students-coursesmart-0321760395","timestamp":"2014-04-20T13:39:47Z","content_type":null,"content_length":"21529","record_id":"<urn:uuid:f90bde76-e838-4a02-bcd6-026a65db6bf1>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00419-ip-10-147-4-33.ec2.internal.warc.gz"}
Double Elimination [Winners: dragon dor & xMBKx] For example, I will go to and I will generate a sequence of numbers in 2 columns. These numbers will correspond to the team numbers. The odd man out will have a bye week. In week 3 Danryan and hornet95 received the bye. I just ran this sequence: 8 would have got the bye week if there were 15 teams remaining and this sequence came up. Re: DOUBLE Elimination *Round 4* FrancisBoyle wrote:For example, I will go to http://www.random.org and I will generate a sequence of numbers in 2 columns. These numbers will correspond to the team numbers. The odd man out will have a bye week. In week 3 Danryan and hornet95 received the bye. I just ran this sequence: 8 would have got the bye week if there were 15 teams remaining and this sequence came up. ok cool. thx for the explanation Re: DOUBLE Elimination *Round 4* sorry for all the technical questions but i was wondering.... you mention 2 losses and you are out and the last one remaining wins... if the final is between two perfect teams, is it a best of 3 sets of 3 games? or just the winner of the first set of three wins the tourny? Re: DOUBLE Elimination *Round 4* If 2 perfect teams enter into the finals, then they keep playing the same way as the rest of the tourney until 1 team loses 2 rounds. Re: DOUBLE Elimination *Round 4* FrancisBoyle wrote:If 2 perfect teams enter into the finals, then they keep playing the same way as the rest of the tourney until 1 team loses 2 rounds. thx! on that note... croat_ante and iblaskov won all three of their games against Johnny and vnuts so I guess we are all ready for the next round! Re: DOUBLE Elimination *Round 4* Final 4 games have been sent out. This tourney should get very interesting in the next 2 rounds. Re: DOUBLE Elimination *Round 4* Francis, my partner won't respond to pm's and hasn't joined the games. Would you mind if I replaced him? Re: DOUBLE Elimination *Round 4* Game 5739704 and Game 5739701 finished. 
That means Team Red Flag won 2 out of 3 in both match-ups. Re: DOUBLE Elimination *Round 4* Question: According to the other games that also have been finished, does Team Red Flag have 8 wins now and the other teams both have 6 wins?? So what will be the next round games? Re: DOUBLE Elimination *Round 4* I think you guys won the tournament. Well deserved, too! Re: DOUBLE Elimination - Winners *dragon dor & xMBKx* Yihhhhaaaaaaa. Thanks all, those were great games (I played the last 3 rounds as a substitute for Pascalleke). My second Tourney win with: Thanks mate Longstanding Xi-Member Playing Risk II on: http://eWarZone.com/ "...Niet geschoten, is altijd mis..."
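The random.org draw FrancisBoyle describes earlier in the thread (generate a random sequence of team numbers in two columns, pair them off, odd man out gets the bye) can be mimicked with any pseudo-random shuffle. A quick Python sketch of the same procedure (the team count and seed here are arbitrary, just for illustration):

```python
import random

def draw_round(teams, rng):
    """Shuffle the team numbers and pair them off; the odd one out gets the bye."""
    order = list(teams)
    rng.shuffle(order)
    bye = order.pop() if len(order) % 2 else None  # odd man out
    pairs = list(zip(order[0::2], order[1::2]))    # the "two columns"
    return pairs, bye

pairs, bye = draw_round(range(1, 16), random.Random(7))  # 15 teams remaining
print(pairs)
print(bye)  # one team sits out this round
```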
{"url":"http://www.conquerclub.com/forum/viewtopic.php?f=682&t=83671&start=75","timestamp":"2014-04-20T12:11:49Z","content_type":null,"content_length":"135125","record_id":"<urn:uuid:6458c828-bbc8-4772-9d16-f88c8e44b3e7>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00124-ip-10-147-4-33.ec2.internal.warc.gz"}
Vadim Kyrylov - Operations Research

Interactive Learning Objects for Operations Research

Interactive learning objects on this Web page are split by their implementation into two categories: Excel and Java programs. Both could be used for teaching Business and Information Technology.

Excel Spreadsheet Programs
Quantitative Decision Analysis

Java Applets
To run any applet, click on the respective image. Applets have been designed to open a new browser window and set the preferred size. Still, some browsers open a new tab instead. In this case resize the browser window as appropriate. Alternatively, while opening the applet, right-click and in the context menu select the command Open in a new window.

1. Two-Dimensional Probability Distribution Viewer. This applet demonstrates the uniform and normal probability distribution functions in two variables. Charts of the marginal and conditional distribution functions can also be displayed. These charts help students understand these nontrivial concepts. Besides the familiar probability density function f(x,y), this program displays the cumulative probability distribution function F(x,y). It is indeed hard to find on the Internet any charts of the cumulative function; nor do textbooks contain any good examples. So take your time now!

2. Generating a 2D Normal Random Vector. This applet generates a 2D Normal random vector with correlated components. Vectors can be viewed as points scattered on a plane. By changing the correlation coefficient and other parameters of the probability distribution, the user can see how they affect the scatter pattern.

The next four applets illustrate optimization problems with two decision variables and linear constraints. The limitation to just two variables is necessary for using planar geometry visualization.

3. Solving a System of Two Linear Equations. The intersection point of two lines is the solution. This is necessary for determining the corner points of the feasible region specified by linear constraints.

4. Determining the Feasible Region. We may need to know the feasible region to determine the range of solutions to a set of linear constraints. This demo builds upon the previous one. Now instead of a single intersection point you can determine the whole region where all linear constraints are simultaneously satisfied. The feasible region turns out to be a convex polygon that may be finite or semi-finite.

5. Solving a Linear Programming Problem. This applet adds a linear objective function and allows finding the optimal solution. This solution is always located on the feasible region boundary. If an infinite number of solutions exists (i.e. they cover a whole edge of the feasible polygon), only one point of this infinite set is determined.

6. Solving a Nonlinear Optimization Problem with Linear Constraints. This applet differs in that the objective function is nonlinear (quadratic or Cobb-Douglas). The approach used in this program is applicable to an objective function of any kind. Still, only an approximate optimal solution is found. This is the cost paid for the universal approach and the visualization effect, which is the main purpose of this applet.

7. 2D Nonlinear Function Viewer. This applet illustrates a nonlinear (quadratic or Cobb-Douglas) objective function in two variables (hence 2D). With different parameter values, the quadratic function can turn out to be convex, concave, or neither. Convexity and concavity are important in nonlinear optimization. When the function parameters are changed, the applet determines these properties and displays the answer.

8. Finding the Shortest Path in a Graph. This applet illustrates Dijkstra's algorithm. The graph can be directed or undirected. The applet also allows finding some random path for comparison with the shortest one. By default, edge weights are set to distances in pixels on the screen. The user can edit these weights as appropriate.

9. M/M/1 Waiting-Line Simulation. This applet demonstrates how to simulate simple discrete-event systems. Besides the concept of the queuing system itself, it shows the difference between the time-slice and next-event time advance mechanisms in computer simulation. The description also gives an example of computer model validation.
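Applets 3–5 above walk through the same pipeline a bare-bones two-variable linear-programming solver follows: intersect pairs of constraint boundary lines, keep the feasible intersections (the corner points of the convex polygon), and evaluate the objective at each corner, since the optimum always sits on the boundary. A minimal Python sketch of that idea; the constraint set and objective here are made-up illustrative numbers, not taken from the applets:

```python
from itertools import combinations

# Constraints in the form a*x + b*y <= c (x >= 0 and y >= 0 written the same way)
cons = [(-1, 0, 0), (0, -1, 0), (1, 2, 8), (3, 1, 9)]

def intersect(c1, c2):
    """Solve the 2x2 system formed by two constraint boundary lines (Cramer's rule)."""
    a1, b1, r1 = c1
    a2, b2, r2 = c2
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:
        return None  # parallel lines never intersect
    return ((r1 * b2 - r2 * b1) / det, (a1 * r2 - a2 * r1) / det)

def feasible(p):
    """A corner point counts only if it satisfies every constraint."""
    return all(a * p[0] + b * p[1] <= c + 1e-9 for a, b, c in cons)

corners = [p for c1, c2 in combinations(cons, 2)
           if (p := intersect(c1, c2)) and feasible(p)]
best = max(corners, key=lambda p: 2 * p[0] + 3 * p[1])  # maximize 2x + 3y
print(best, 2 * best[0] + 3 * best[1])  # the optimum sits at a vertex
```

Enumerating all corner points is only practical in two or three variables; the simplex method exists precisely because this enumeration blows up in higher dimensions.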
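The next-event time advance mechanism mentioned in applet 9 can be sketched in a few lines: instead of stepping the clock in fixed slices, the simulation jumps straight from one arrival or departure to the next. A hedged Python illustration of an M/M/1 queue (parameter values are arbitrary; the closed-form mean sojourn time W = 1/(mu - lambda) serves as the validation check the applet description mentions):

```python
import random

def mm1_mean_wait(lam, mu, n_customers, seed=0):
    """Next-event M/M/1 simulation: mean time each customer spends in the system."""
    rng = random.Random(seed)
    t_arrive = 0.0  # clock jumps from event to event, no fixed time slices
    t_free = 0.0    # when the single server next becomes free
    total = 0.0
    for _ in range(n_customers):
        t_arrive += rng.expovariate(lam)      # Poisson arrival stream
        start = max(t_arrive, t_free)         # wait in line if the server is busy
        t_free = start + rng.expovariate(mu)  # exponential service time
        total += t_free - t_arrive            # this customer's time in system
    return total / n_customers

# Queueing theory predicts W = 1/(mu - lam); with lam=0.5, mu=1.0 that is 2.0
print(mm1_mean_wait(0.5, 1.0, 100000))
```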
{"url":"http://www.rsu.edu/faculty/vkyrylov/JavaApplets/OperationsResearch/OperationsResearch.html","timestamp":"2014-04-20T01:03:16Z","content_type":null,"content_length":"8926","record_id":"<urn:uuid:c64e2754-668b-4fb4-99e4-49205e4c98f5>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00147-ip-10-147-4-33.ec2.internal.warc.gz"}
18" x 2 x pi / 3 = 37" = ~3', that's an average force of only 14.6lbs. (Trying not to hijack James post about his awesome gun.....) Steve, now that yer back on earth, I ran that same equation thru my calcualtor and got 38.04. Which I supoose is close enough that it probably doesnt matter. But, as I started breakng it down trying to figure it out, it occured to me maybe I didnt follow the "order of operations" when running thru the equation. But IIRC from my programming class days, Multipication and division carry the same "wieght" and in this case just work it left to right. The other thing I think my have skewed my answer is not using enough places to the right of the decimal point (I used 3.17.... cant mremember more than that...) I did figure out a couple things on my own: the "/3" referes to the arm moving thru a 120 degree arc, and 120 is one third of 360 degrees of the circle. Still dont understand what the 2 between 18 and pi is, or how you got from 36 inches to 14.6 pounds of effort. the other thing I may not have accounted for is the inches aprt of that... does that have an influence on the arc of travel the pump arm moves thru to the energy needed? (Edit: Wikipedia says pi is " 3.14159". Which brigns my 38.04 down to 37.69908, and thats much closer to your answer.) dr_subsonic's pneumatic research lab the Lunatic Fringe of American Airgunning Southwest Montana's headquarters for Airgunning Supremacy Proud Sponsor of team_subsonic
{"url":"http://www.network54.com/Forum/275684/message/1344278719/18%26quot%3B+x+2+x+pi+-+3+-+37%26quot%3B+-+~3'%2C+that's+an+average+force+of+only+14-6lbs-","timestamp":"2014-04-18T18:26:16Z","content_type":null,"content_length":"10130","record_id":"<urn:uuid:fff5cb40-6608-436e-b22f-db1f96cbfb52>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00405-ip-10-147-4-33.ec2.internal.warc.gz"}
UT Austin Junior Numerical Analysis and Applied Math Group

Welcome to the website for Jr. NAP! The Jr. Numerical Analysis and Applied Math (NAP) seminar is on Thursdays from 2:00-3:00 in RLM 10.176. If you are interested in giving a talk, please email svallelian_at_math

Subscribe to posts

3/20: Chris
Title: Randomized Methods in Numerical Linear Algebra
Abstract: In this talk we will introduce the basics of random matrix theory, and use these tools to develop randomized techniques for computing the eigenvalues and eigenvectors of large matrices. In particular, we will sketch the proofs of some error bounds for the Nyström method and for some simple SVD algorithms applied to low-rank matrices.

2/27: Tian
Title: Algorithms and Computational Complexity
Abstract: An algorithm is any well-defined computational procedure that takes some value, or set of values, as input and produces some value, or set of values, as output. In other words, algorithms are like road maps for accomplishing a given, well-defined task. One of the most important aspects of an algorithm is how fast it is. It is often easy to come up with an algorithm to solve a problem, but if the algorithm is too slow, it may not be worth trying at all. Since the exact speed of an algorithm depends on where the algorithm is run, as well as the exact details of its implementation, one typically talks about runtime relative to the size of the input. We will introduce the notation for computational complexity and also how to perform an algorithm analysis.

2/20: Charlie
Title: Mullins-Sekerka Problem and the Laplace Equation on Multiply Connected Regions
Abstract: The Mullins-Sekerka problem is a free boundary problem commonly used in modeling crystal growth or solidification and liquidation where material movement is governed by diffusion and no surface tension.
The talk will focus on why the Laplace equation in non-conventional regions is of interest, and the peculiarity of existence, uniqueness, and numerical methods under this circumstance.

2/13: Christina
Title: Numerical methods for multiscale inverse problems
Abstract: We will consider inverse problems for multiscale partial differential equations of the form $-\nabla\cdot\left(a^{\epsilon}\nabla u^{\epsilon}\right)+b^{\epsilon}u^{\epsilon} = f$ in which solution data is used to determine coefficients in the equation. Such problems contain both the general difficulty of finding an inverse and the challenge of multiscale modeling, which is hard even for forward computations. The problem in its full generality is typically ill-posed and one approach is to reduce the dimensionality of the original problem by just considering the inverse of an effective equation without the microscale $\epsilon$. We will here include microscale features directly in the inverse problem. In order to reduce the dimension of the unknowns and avoid ill-posedness, we will assume that the microscale can be accurately parametrized by piecewise smooth coefficients. We indicate in numerical examples how the technique can be applied to medical imaging and exploration seismology.

11/14: Rohit
Title: Regularity of the solution for obstacles exhibiting non-local behavior
Abstract: Starting from an optimal cash management problem and its formulation as a stochastic impulse control problem we will derive an obstacle problem where the obstacle exhibits non-local behavior. Generalizing the 1-d situation we will discuss some properties of the solution as well as introduce the notion of a quasi-variational inequality. The objective of the talk will be to relate the probabilistic interpretation of the problem to its analytic reformulation and discuss some applications.
10/31: Sara
Title: MRI Made Easy
Abstract: Magnetic resonance imaging (MRI) is a test that uses a magnetic field and pulses of radio wave energy to make pictures of organs and structures inside the body. This talk will provide a very brief introduction to the physics and techniques of MRI.

10/17: Charlie
Title: A Level Set Approach to the Mullins-Sekerka Problem and the Regularization of Layer Potentials
Abstract: We will look over the Mullins-Sekerka problems and introduce the level set approach, some difficulties and possible solutions to the local expansion of layer potentials.

10/10: Jamie
Title: An Overview of Numerical Methods for Conservation Laws
Abstract: We will first look at the mathematical properties of conservation laws, then look at a variety of numerical methods that attempt to solve them, hopefully in the process shedding some light on the difficulties that arise while attempting to model the general problem.

10/3: Jane
Title: Can a finite element method perform arbitrarily badly?
Abstract: The talk will simply go through the paper of the same title by Ivo Babuška and John Osborn, 1999. In that paper, they construct a toy elliptic boundary value problem which converges arbitrarily slowly in almost all reasonable norms. Moreover, adaptive procedures cannot save the convergence rate. The problem is 1D with piecewise polynomial elements, and the rest is just elementary arithmetic. But revisiting such an easy case may suggest a different view of classical methods.

9/26: Chris
Title: An eigenvalue optimization problem for graph partitioning
Abstract: We begin by reviewing the graph partitioning problem and its applications to data clustering. We then proceed to introduce a new non-convex graph partitioning objective where the optimality criterion is given by the sum of the Dirichlet eigenvalues of the partition components.
A relaxed formulation is identified and a novel rearrangement algorithm is proposed, which we show is strictly decreasing and converges in a finite number of iterations to a local minimum of the relaxed objective function. We end by discussing some applications, as well as connections to other problems such as Nonnegative Matrix Factorization and Reaction Diffusion equations. This is joint work with Braxton Osting and Édouard Oudet.
{"url":"https://sites.google.com/site/juniornumericalanalysis/","timestamp":"2014-04-21T02:24:58Z","content_type":null,"content_length":"39695","record_id":"<urn:uuid:d31118b4-b0f4-4b06-9feb-d87bc4926201>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00099-ip-10-147-4-33.ec2.internal.warc.gz"}
Hackers beware: quantum encryption is coming | EE Times News & Analysis

NEW YORK – Quantum encryption pioneers promise to put the world's first uncrackably secure networks online by early 2003. Based on the quantum properties of photons, quantum encryption guarantees absolutely secure optical communications. Three independent experiments recently have demonstrated such systems. Geneva-based id Quantique SA encoded a secure transmission on a 70-kilometer fiber-optic link in Europe; MagiQ Technologies Inc., here, used a 30-km link; and researchers at Northwestern University (Evanston, Ill.) demonstrated a 250-Mbit/second quantum-encrypted transmission over a short link.

"Our quantum random-number generator and our single-photon detector module are available now and are in use by several customers around the world," said Gregoire Ribordy, a manager at id Quantique. A beta version of a third product, a quantum-key distribution system, "has been fully tested, and we are in advanced discussions with several potential launch customers," he added.

Securing the Internet

For its part, MagiQ says that its Navajo system is currently at the alpha stage and promises real beta sites on selected campuses in the United States in the first quarter. Both companies are also talking about secure through-the-air communications with satellites. Northwestern, meanwhile, vows to have a 2.5-Gbit/s quantum-encryption technology capable of securing the Internet backbone in five years. It says that commercial partners are working with the university.

There is strong interest in quantum encryption because of its ability to completely eliminate the possibility of eavesdropping. Today's encryption/decryption methods are only as good as the length of the key: a 56- to 256-bit value that, combined with a one-way function, scrambles the data to be transmitted.
A common way to create such a one-way function is to multiply two large prime numbers, a simple operation for a computer to perform. However, going backward, that is, taking a large number and finding its prime factors, is very difficult for computers to do. Other methods use some hard mathematical problem to create one-way functions, but any scheme of that kind is vulnerable both to advances in computational power and to new breakthroughs in mathematics.

Brute force can work

The theory is that secret keys for one-way functions let only the receiver decrypt the scrambled bits, but in practice even the most secret key can be found by trial and error. For instance, the product of two large primes is difficult to crack, since there is no known efficient algorithm to find prime factors. But a brute-force approach, in which a hacker tries a large number of multiplications in the hope of hitting the result, might pay off. The standard 56-bit DES encryption code can be cracked on a supercomputer in a few hours; its next-generation successor, AES, ups the ante to a 256-bit key, but code-cracking computers are also speeding up, so the security is only temporary.

By contrast, "quantum cryptography offers the ultimate in secure communications; it's no longer a matter of how fast a computer an eavesdropper has," said Andy Hammond, vice president of marketing at MagiQ. Instead of depending on the computational difficulty of cracking one-way functions, quantum encryption creates uncrackable codes that employ the laws of physics to guarantee security. Different quantum states, such as photon polarization, can be used to represent 1s and 0s in a manner that cannot be observed without the receiver's discovering it. For instance, if hackers observe a polarized photon, then 50 percent of the time they will scramble the result, making it impossible to hide the eavesdropping attempt from the receiver.
The first quantum technology out of the gate will supplement current public-key systems. Security will be guaranteed only for the keys, by means of a technique that changes the keys so quickly (up to four times a second) that eavesdropping hackers would have to crack multiple AES codes used during a data transmission. Called quantum-key distribution (QKD), the scheme uses a new type of emitter/receiver for fiber-optic networks based on single photons. The emitter/receivers are slow (about 1 kbit/s) and limited to less than 100 km, but they offer unerring security that would only be possible for AES by making a new key for each transmission that is the same length as the data to be transmitted.

Earlier this year, id Quantique demonstrated its version of QKD over standard optical fibers installed between Geneva and Lausanne, Switzerland, a 67-km distance. At 1 kbit/s, its 256-bit keys were updated four times a second, greatly complicating the code-cracking task for eavesdroppers. Id Quantique says it is collaborating with European communications giants to add quantum security to satellite communications, and it plans to announce real customer installations soon.

In the United States, MagiQ Technologies recently demonstrated its Navajo QKD system over a 30-km fiber link. Navajo's secure communications link consists of two "black boxes" connected by optical fiber. Like id Quantique's QKD, Navajo implements the uncrackable BB84 quantum-encryption code proposed by Gilles Brassard and Charles Bennett in a 1984 paper titled "Quantum Cryptography: Public Key Distribution and Coin Tossing."

With BB84, each bit fed into a black box is encoded as a mixture of two equally likely nonorthogonal (separated by an angle other than 90°) quantum states, in this case photon polarizations. According to Heisenberg's uncertainty principle, it is impossible to distinguish with certainty between two nonorthogonal quantum states without making a measurement that will change the photon state detected by the receiver.
Encryption at 2.5 Gbits/s

Instead of QKD, Northwestern's approach proposes uncrackable quantum codes for the data itself, not just for the key. Northwestern University professors Prem Kumar and Horace Yuen have reported successful testing of a prototype that runs at 250 Mbits/s, and they promise a second-generation model within five years that will attain the 2.5 Gbits/s typical of Internet backbones. "No one else is doing quantum encryption at these speeds," said Kumar.

In QKD, an uncrackably secure key is transmitted first, after which a normal encryption/decryption method is used over insecure lines to send the real data. The algorithm of Kumar and Yuen, in contrast, sidesteps the secure-key route and instead secures the high-speed data streams themselves with quantum physics: they use quantum mechanics to encode the actual data being transmitted, not just the key to a one-way function. So even if hackers intercept the data transmission down the optical fiber, quantum physics denies them the ability to decode it because of quantum "noise," Kumar said.

Northwestern's patented technology applies a quantum polarization angle to each transmitted bit. If eavesdroppers try to decode the message, they must transgress Heisenberg's uncertainty principle; that is, their observation of the data introduces so much quantum noise as to render the result indecipherable, according to Kumar and Yuen. However, the intended receiver can use the secret key to remove enough noise to decode the encrypted data.

Northwestern's secret-key encryption (SKE) secures the data stream using the same basic encoding method MagiQ and id Quantique apply to secure keys. But the SKE system uses 4,096 different polarization angles, versus four for the QKD technique. So instead of the 50 percent chance that a hacker picks the wrong filter, as in QKD, the chance of picking the right one is only 1/4,096, or about 0.02 percent, with SKE.
Kumar and Yuen predict products within five years from Northwestern's industrial partners: Telcordia Technologies (Red Bank, N.J.) and BBN Technologies (Cambridge, Mass.).
{"url":"http://www.eetimes.com/document.asp?doc_id=1200212","timestamp":"2014-04-17T16:06:49Z","content_type":null,"content_length":"128811","record_id":"<urn:uuid:b6163130-09a9-4448-91b2-4ae82511a44e>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00277-ip-10-147-4-33.ec2.internal.warc.gz"}
Logarithmic Differentiation
February 22nd 2011, 01:08 PM #1 (joined Sep 2008)

I can get this far: [working attached as an image] But I get confused on where to go from here. Thanks in advance for any help.

Reply: Do you have to differentiate that function? What's the problem? For the first summand apply directly the derivative of $\ln x$, and for the second one use the chain rule, or $\ln (x^2-1)=\ln (x-1)+\ln (x+1)$.

OP: Sorry, I should have mentioned it was a differentiation problem. I get this far: [image] Is this right so far?

Reply: You need to use the chain rule on $\frac{1}{2}\ln(x^2-1)$. In other words, multiply by the derivative of $x^2-1$ and see what cancels.

OP: Sorry, I'm really struggling with this. It's been a while. Multiply what by the derivative of $x^2-1$ (2x)? Can you show me step by step where I went wrong?

Reply: The derivative of $\ln(\sqrt{x^2+1})$ is $\dfrac{x}{x^2+1}$. Now you tell us what about that you do not understand. It could be that you are simply not ready to do this question. In that case you need to have a sit-down, one-to-one with a tutor. We here do not provide that service.

OP: I'm not asking for tutoring. If I was ready to do this question I wouldn't have had to post it. I figured it out eventually, though. Thanks for the help, guys.

Reply: The purpose of logarithmic differentiation is to make use of logarithm rules to simplify the function, so that difficult functions can be differentiated more easily. Make use of the fact that $\displaystyle \ln{(x^2 - 1)} = \ln{[(x - 1)(x + 1)]} = \ln{(x- 1)} + \ln{(x + 1)}$. I'm sure you can differentiate the last term...
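The worked steps in the thread were posted as images and are lost here. Judging from the replies, the function appears to have been $y = \ln\!\left(x\sqrt{x^2-1}\right)$; under that assumption the chain-rule computation the helpers are pointing at runs as follows (a reconstruction, not the original posts):

```latex
y = \ln\!\left(x\sqrt{x^2-1}\right) = \ln x + \tfrac{1}{2}\ln(x^2-1)
\quad\Longrightarrow\quad
y' = \frac{1}{x} + \frac{1}{2}\cdot\frac{2x}{x^2-1}
   = \frac{1}{x} + \frac{x}{x^2-1}.
```

The same pattern with $x^2+1$ in place of $x^2-1$ gives the derivative quoted later in the thread: $\frac{d}{dx}\,\ln\sqrt{x^2+1} = \frac{1}{2}\cdot\frac{2x}{x^2+1} = \frac{x}{x^2+1}$.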
{"url":"http://mathhelpforum.com/calculus/172245-logarithmic-differentiation.html","timestamp":"2014-04-17T15:12:03Z","content_type":null,"content_length":"72026","record_id":"<urn:uuid:45c0a6dc-fb18-42cd-b944-50ef7b051c53>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00320-ip-10-147-4-33.ec2.internal.warc.gz"}
All People in Canada are the Same Age: This is Not the Fallacy

This step is not the source of the fallacy. This step is correctly summarizing the preceding steps. Step 2 states that the statement is true for n=1. Step 3 states that the only other thing needed is to prove that, whenever the statement is true for one number (say n = k), it must also be true for the next number (that is, for n = k+1). Steps 4 through 5 state that the only thing needed to accomplish this goal is to show that everyone in an arbitrary group G of k+1 people has the same age, under the assumption that, in every group of k people, everyone has the same age. Step 13 states that this has been accomplished. Therefore, from these steps, it follows that the statement is true for all n. Of course, we know it's not, so that means the fallacy must lie in one of the earlier steps. Why don't you go back to the list of steps in the proof and see if you can identify which one is wrong, now that you know it isn't this one?

This page last updated: May 26, 1998
Original Web Site Creator / Mathematical Content Developer: Philip Spencer
Current Network Coordinator and Contact Person: Joel Chan - mathnet@math.toronto.edu
{"url":"http://www.math.toronto.edu/mathnet/plain/falseProofs/guess32.html","timestamp":"2014-04-18T08:37:09Z","content_type":null,"content_length":"2448","record_id":"<urn:uuid:21c0e9a3-f08d-461f-ab67-7062d8f35a0d>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00059-ip-10-147-4-33.ec2.internal.warc.gz"}
Predicting the Labelling of a Graph via Minimum p-Seminorm Interpolation
Mark Herbster and Guy Lever
In: The 22nd Annual Conference on Learning Theory (COLT 2009), 18-21 Jun 2009, Montreal, Canada.

We study the problem of predicting the labelling of a graph. The graph is given and a trial sequence of (vertex, label) pairs is then incrementally revealed to the learner. On each trial a vertex is queried and the learner predicts a boolean label. The true label is then returned. The learner's goal is to minimise mistaken predictions. We propose \emph{minimum $p$-seminorm interpolation} to solve this problem. To this end we give a $p$-seminorm on the space of graph labellings. Thus on every trial we predict using the labelling which \emph{minimises} the $p$-seminorm and is also \emph{consistent} with the revealed (vertex, label) pairs. When $p=2$ this is the \emph{harmonic energy minimisation} procedure of Zhu et al., also called (Laplacian) \emph{interpolated regularisation} by Belkin et al. In the limit as $p \rightarrow 1$ this is equivalent to predicting with a label-consistent mincut. We give mistake bounds relative to a label-consistent mincut and a resistive cover of the graph. We say an edge is \emph{cut} with respect to a labelling if the connected vertices have disagreeing labels. We find that minimising the $p$-seminorm with $p = 1+\epsilon$, where $\epsilon \to 0$ as the graph diameter $D \to \infty$, gives a bound of $O(\Phi^2 \log D)$ versus a bound of $O(\Phi D)$ when $p=2$, where $\Phi$ is the number of cut edges.
{"url":"http://eprints.pascal-network.org/archive/00006758/","timestamp":"2014-04-18T10:36:35Z","content_type":null,"content_length":"8590","record_id":"<urn:uuid:c930e282-162f-4865-b79f-f326f525aa16>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00318-ip-10-147-4-33.ec2.internal.warc.gz"}
{"url":"http://openstudy.com/users/fatemah/asked","timestamp":"2014-04-18T21:13:22Z","content_type":null,"content_length":"69490","record_id":"<urn:uuid:9d2d0cf0-fcab-4256-9024-276a0ab169c7>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00332-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions - Re: Matheology § 222 Back to the roots
Date: Feb 19, 2013 6:43 AM
Author: mueckenh@rz.fh-augsburg.de
Subject: Re: Matheology § 222 Back to the roots

On 19 Feb., 02:36, Virgil <vir...@ligriv.com> wrote:
> In article
> <20086a5e-4a68-44dd-99b7-5a6b7c0c3...@x13g2000vby.googlegroups.com>,
> WM <mueck...@rz.fh-augsburg.de> wrote:
> > On 17 Feb., 22:24, Virgil <vir...@ligriv.com> wrote:
> > There is, however, a natural larger than any previously given natural.
> > Nevertheless it is a natural number and therefore finite.
> but for every one of them there is a successor which is also one of them.

This is true for the list as well as for its unchanged diagonal. You cannot get a diagonal that is longer than every line of the list.

> But for lines that are not finite there are no FISs equal to those lines.

For natural numbers that are not finite your statement may be relevant. But we can ignore it, since the premise is false.

> Does WM claim to know of a natural that does NOT have a successor
> natural?

No, that is just my argument. There is no last finite line in the list. Therefore the diagonal cannot surpass every line.

> Unless WM, or someone else, can name a d_n that d cannot not exceed,

Irrelevant. Find a part of the diagonal that is not as a line in the list.

Regards, WM
{"url":"http://mathforum.org/kb/plaintext.jspa?messageID=8371097","timestamp":"2014-04-16T08:11:44Z","content_type":null,"content_length":"2679","record_id":"<urn:uuid:f2c6b815-daf7-4f45-9ba9-50a674cefbd0>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00031-ip-10-147-4-33.ec2.internal.warc.gz"}
im trying to smooth out the local variation of a grayscale image. the image is stored in an array of structs. when iterating over each pixel of the image, i need to replace the gray level of the pixel with the average of the gray levels of the neighbours and the pixel itself. so for example,

A[3][3]      B[3][3]
1 2 3        _ _ _
4 5 6        _ _ _
7 8 9        _ _ _

- 5 has neighbours 1, 2, 3, 4, 6, 7, 8, 9. i need to find the average of them (and 5 itself) and put it in position B[1][1].
- 1 has neighbours 2, 4, 5. i need to find the average of them (and 1 itself) and put it in position B[0][0].

this is what i have atm:

for (r = 1; r < h-1; r++) {
    for (c = 1; c < w-1; c++) {
        tot = cpyimage[r-1][c-1].gray_level + cpyimage[r-1][c].gray_level + cpyimage[r-1][c+1].gray_level
            + cpyimage[r][c-1].gray_level   + cpyimage[r][c].gray_level   + cpyimage[r][c+1].gray_level
            + cpyimage[r+1][c-1].gray_level + cpyimage[r+1][c].gray_level + cpyimage[r+1][c+1].gray_level;
        ...
    }
}

but it misses out the edges. im having trouble averaging them because they have less than 8 neighbours, unlike the ones in the middle which have 8 neighbours. how do i go about doing this? can someone help point me in the right direction?
{"url":"http://www.dreamincode.net/forums/topic/331658-help-with-average-smoothing-in-c/","timestamp":"2014-04-20T15:21:38Z","content_type":null,"content_length":"143734","record_id":"<urn:uuid:9f8c8505-ece9-42fd-a32a-51748f629fd1>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00589-ip-10-147-4-33.ec2.internal.warc.gz"}
Physics Forums - View Single Post - Are we in a black hole?

I am happy to see a discussion about horizons, because this will give me the opportunity to ask some questions that have bothered me for a long time...

The cosmological event horizon is actually an observer-dependent horizon, the same as the Rindler horizon. What bothers me, especially in the case of the Unruh radiation, is the following: you can get the result of an observer-dependent horizon considering only the action of some field (a scalar field, for example) in flat spacetime viewed by an accelerated observer. This observer will detect a thermal bath of particles. The calculations and qualitative discussions I am aware of stop at that point. However, a thermal bath of particles should create a gravitational field that should perturb the curvature, making it non-vanishing. But does this make sense? The inertial observer and the accelerated observer, both in the "same" spacetime, would measure different curvatures.

There are other things that bother me, like the fact that everywhere it is mentioned that the thermal radiation is "emitted" by the horizon. The thermal radiation from horizons is a mathematical consequence of the Bogolyubov transformations between two different vacua. In general, these lead to particle creation when evaluating the number operators in different vacua, and I would expect this to be a homogeneous, isotropic and non-localized phenomenon of particle creation. These particles do not form a thermal distribution in the general case; thermal distributions appear only in the case of horizons. Thus I would expect the thermal case to be a special case of the general case of particle creation between two different vacua. However, everywhere one can read that "horizons radiate," meaning that they produce the thermal distribution of particles. According to the more general case, I would expect the thermal radiation in the case of horizons to be also a bath of particles, a kind of "diffuse" radiation, isotropic and homogeneous, but not radiation coming from the horizon.
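As background for the accelerated-observer case raised above (standard textbook material, not taken from the thread): the thermal bath detected by a uniformly accelerated observer has the Unruh temperature

```latex
T_U = \frac{\hbar\, a}{2\pi\, c\, k_B},
```

which for an everyday acceleration $a \approx 9.8\,\mathrm{m/s^2}$ is only about $4\times 10^{-20}\,\mathrm{K}$, so its gravitational back-reaction is utterly negligible in practice. The analogous observer-dependent temperature associated with the cosmological (de Sitter) horizon is the Gibbons-Hawking temperature $T = \hbar H/(2\pi k_B)$, with $H$ the Hubble rate.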
{"url":"http://www.physicsforums.com/showpost.php?p=886565&postcount=42","timestamp":"2014-04-16T13:55:16Z","content_type":null,"content_length":"8807","record_id":"<urn:uuid:5725e86b-209e-4e13-a516-50d9d5287230>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00250-ip-10-147-4-33.ec2.internal.warc.gz"}
Oscillation for a Class of Fractional Differential Equation Discrete Dynamics in Nature and Society Volume 2013 (2013), Article ID 390282, 6 pages Research Article Oscillation for a Class of Fractional Differential Equation School of Mathematical Sciences, University of Jinan, Jinan, Shandong 250022, China Received 3 May 2013; Accepted 21 June 2013 Academic Editor: Shurong Sun Copyright © 2013 Zhenlai Han et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. We consider the oscillation for a class of fractional differential equation for where is a real number and is the Liouville right-sided fractional derivative of order of . By generalized Riccati transformation technique, oscillation criteria for a class of nonlinear fractional differential equation are obtained. 1. Introduction Fractional differential equations have been of great interest recently. Apart from diverse areas of mathematics, fractional differential equations arise in rheology, dynamical processes in self-similar and porous structures, fluid flows, electrical networks, viscoelasticity, chemical physics, and many other branches of science. There have appeared lots of works in which fractional derivatives are used for a better description of considered material properties; mathematical modelling based on enhanced rheological models naturally leads to differential equations of fractional order and to the necessity of the formulation of initial conditions to such equations. It is caused both by the intensive development of the theory of fractional calculus itself and by the applications; see [1–6]. It should be noted that most of the papers and books on fractional calculus are devoted to the solvability of linear fractional differential equations. 
Recently, there are many papers dealing with the qualitative theory, especially the existence of solutions (or positive solutions) of nonlinear initial (or boundary) value problems for fractional differential equation (or system) by the use of techniques of nonlinear analysis (fixed-point theorems, Leray-Schauder theory, Adomian decomposition method, etc.); see [7–11]. The oscillation theory as a part of the qualitative theory of differential equations has been developed rapidly in the last decades, and there has been a great deal of work on the oscillatory behavior of integer order differential equations. However, there are only very few papers dealing with the oscillation of fractional differential equation; see [12–15]. Grace et al. [12] initiated the oscillatory theory of fractional differential equations where denotes the Riemann-Liouville differential operator of order with and the functions ,, and are continuous. By the expression of solution and some inequalities, oscillation criteria are obtained for a class of nonlinear fractional differential equations. The results are also stated when the Riemann-Liouville differential operator is replaced by Caputo’s differential operator. Chen [13] considered the oscillation of the fractional differential equation where is the Liouville right-sided fractional derivative of order of , is a quotient of odd positive integers, and are positive continuous functions on for a certain , and is a continuous function such that for a certain constant and for all . They established some oscillation criteria for the equation by using a generalized Riccati transformation technique and an inequality. In 2013, Chen [15] studied oscillatory behavior of the fractional differential equation with the form where is the Liouville right-sided fractional derivative of order of . 
To the best of our knowledge, nothing is known regarding the oscillatory behavior for the following fractional differential equation: where is a real number, is the Liouville right-sided fractional derivative of order of defined by for , here is the gamma function defined by for , and the following conditions are assumed to hold:(H[1]) and are two positive continuous functions on for a certain ;(H[2]) are continuous functions with for , and there exists some positive constant such that for ;(H[3]) are continuous function with for , and there exist positive constants such that for all . By a solution of (4), we mean a nontrivial function with , and satisfies (4) for . Our attention is restricted to those solutions of (4) which exist on and satisfy for any . A solution of (4) is said to be oscillatory if it is neither eventually positive nor eventually negative. Otherwise it is nonoscillatory. Equation (4) is said to be oscillatory if all its solutions are oscillatory. 2. Preliminaries For the convenience of the reader, we give some background materials from fractional calculus theory. These materials can be found in the recent literature; see [12, 13, 16, 17]. Definition 1 (see [16]). The Liouville right-sided fractional integral of order of a function on the half-axis is given by provided that the right side is pointwise defined on , where is the gamma Definition 2 (see [16]). The Liouville right-sided fractional derivative of order of a function on the half-axis is given by where , provided that the right side is pointwise defined on . The following lemma is fundamental in the proofs of our main results. Lemma 3 (see [13]). Let be a solution of (4) and Then Lemma 4 (see [17]). If and are nonnegative, then 3. Main Results Theorem 5. Suppose that (H[1])–(H[3]) and hold. Furthermore, assume that there exists a positive function such that where are defined as in . Then every solution of (4) is oscillatory. Proof. Suppose that is a nonoscillatory solution of (4). 
Without loss of generality, we may assume that is an eventually positive solution of (4). Then there exists such that where is defined as in ( 7). Therefore, it follows from (4) that Thus, is strictly increasing on and is eventually of one sign. Since for and (H[2]), we see that is eventually of one sign. We now claim that If not, then is eventually positive, and there exists such that . Since is strictly increasing on , it is clear that for . Therefore, from (8), we have Then, we get Integrating the above inequality from to , we have Letting , we see This contradicts (10). Hence, (14) holds. Define the function by the generalized Riccati substitution Then we have for . From (19), (4), (8), and (H[1])–(H[3]), it follows that Taking from Lemma 4 and (20) we get Integrating both sides of the inequality (22) from to , we obtain Taking the limit supremum of both sides of the above inequality as , we get which contradicts (11). The proof is complete. Theorem 6. Suppose that (H[1])–(H[3]) and (10) hold. Furthermore, suppose that there exist a positive function and a function , where , such that where , and has a nonpositive continuous partial derivative on with respect to the second variable and satisfies where , , and are defined as in Theorem 5. Then all solutions of (4) are oscillatory. Proof. Suppose that is a nonoscillatory solution of (4). Without loss of generality, we may assume that is an eventually positive solution of (4). We proceed as in the proof of Theorem 5 to get (22), that is, Multiplying the previous inequality by and integrating from to , for , we obtain Therefore, which is a contradiction to (26). The proof is complete. Next, we consider the case which yields that (10) does not hold. In this case, we have the following results. Theorem 7. Suppose that (H[1])–(H[3]) and (30) hold, is an increasing function, and that there exists a positive function such that (11) holds. 
Furthermore, assume that for every constant , Then every solution of (4) is oscillatory or satisfies Proof. Assume that is a nonoscillatory solution of (4). Without loss of generality, assume that is an eventually positive solution of (4). Proceeding as in the proof of Theorem 5, there are two cases for the sign of . The proof when is eventually negative is similar to that of Theorem 5 and hence is omitted. Next, assume that is eventually positive. Then there exists such that for . From (8), we get for . Thus, we get and . We now claim that . Assume not, that is, , then from (H[3]) we get Integrating both sides of the last inequality from to , we have Hence, from (8), we get Integrating both sides of the last inequality from to , we obtain Letting , from (31), we get This contradicts . Therefore, we have , that is, . In view of (7), we see that the proof is complete. Theorem 8. Suppose that (H[1])–(H[3]) and (30) hold and is an increasing function. Let and be defined as in Theorem 6 such that (26) holds. Furthermore, assume that for every constant , (31) holds. Then every solution of (4) is oscillatory or satisfies . Proof. Assume that is a nonoscillatory solution of (4). Without loss of generality, assume that is an eventually positive solution of (4). Proceeding as in the proof of Theorem 5, there are two cases for the sign of . The proof when is eventually negative is similar to that of Theorem 6 and hence is omitted. The proof when is eventually positive is similar to that of the proof of Theorem 7 and thus is omitted. The proof is complete. 4. Example Example 1. Consider the fractional differential equation In (36),, , , and . Take , . It is clear that conditions (H[1])–(H[3]) and (10) hold. Furthermore, taking , we have which shows that (11 ) holds. Therefore, by Theorem 5 every solution of (36) is oscillatory. Example 2. Consider the fractional differential equation In (38),. Take . It is clear that conditions (H[1])–(H[3]) and (30) hold. 
Taking , we have which shows that (11) holds. Furthermore, for every constant , we have which shows that (31) holds. Therefore, by Theorem 7 every solution of (38) is oscillatory or satisfies

Acknowledgments

The authors sincerely thank the reviewers for their valuable suggestions and useful comments that have led to the present improved version of the original paper. This research is supported by the Natural Science Foundation of China (11071143), the Natural Science Outstanding Youth Foundation of Shandong Province (JQ201119), the Shandong Provincial Natural Science Foundation (ZR2012AM009, ZR2011AL007), and the Natural Science Foundation of Educational Department of Shandong Province (J11LA01).

References

1. K. S. Miller and B. Ross, An Introduction to the Fractional Calculus and Fractional Differential Equations, Wiley-Interscience, John Wiley & Sons, New York, NY, USA, 1993.
2. K. B. Oldham and J. Spanier, The Fractional Calculus, Academic Press, New York, NY, USA, 1974.
3. I. Podlubny, Fractional Differential Equations, vol. 198 of Mathematics in Science and Engineering, Academic Press, San Diego, Calif, USA, 1999.
4. S. G. Samko, A. A. Kilbas, and O. I. Marichev, Fractional Integrals and Derivatives: Theory and Applications, Gordon and Breach Science Publishers, Yverdon, Switzerland, 1993.
5. A. A. Kilbas and J. J. Trujillo, "Differential equations of fractional order: methods, results and problems. I," Applicable Analysis, vol. 78, no. 1-2, pp. 153–192, 2001.
6. A. A. Kilbas and J. J. Trujillo, "Differential equations of fractional order: methods, results and problems. II," Applicable Analysis, vol. 81, no. 2, pp. 435–493, 2002.
7. D. Delbosco and L. Rodino, "Existence and uniqueness for a nonlinear fractional differential equation," Journal of Mathematical Analysis and Applications, vol. 204, no. 2, pp. 609–625, 1996.
8. H. Jafari and V. Daftardar-Gejji, "Positive solutions of nonlinear fractional boundary value problems using Adomian decomposition method," Applied Mathematics and Computation, vol. 180, no. 2, pp. 700–706, 2006.
9. Z. Bai and H. Lü, "Positive solutions for boundary value problem of nonlinear fractional differential equation," Journal of Mathematical Analysis and Applications, vol. 311, no. 2, pp. 495–505, 2005.
10. S. Sun, Y. Zhao, Z. Han, and Y. Li, "The existence of solutions for boundary value problem of fractional hybrid differential equations," Communications in Nonlinear Science and Numerical Simulation, vol. 17, no. 12, pp. 4961–4967, 2012.
11. Y. Zhao, S. Sun, Z. Han, and M. Zhang, "Positive solutions for boundary value problems of nonlinear fractional differential equations," Applied Mathematics and Computation, vol. 217, no. 16, pp. 6950–6958, 2011.
12. S. R. Grace, R. P. Agarwal, P. J. Y. Wong, and A. Zafer, "On the oscillation of fractional differential equations," Fractional Calculus and Applied Analysis, vol. 15, no. 2, pp. 222–231, 2012.
13. D.-X. Chen, "Oscillation criteria of fractional differential equations," Advances in Difference Equations, vol. 2012, article 33, pp. 1–18, 2012.
14. S. L. Marian, M. R. Sagayaraj, A. G. M. Selvam, and M. P. Loganathan, "Oscillation of fractional nonlinear difference equations," Mathematica Aeterna, vol. 2, no. 9-10, pp. 805–813, 2012.
15. D. Chen, "Oscillatory behavior of a class of fractional differential equations with damping," Universitatea Politehnica din Bucuresti Scientific Bulletin, Series A, vol. 75, no. 1, pp. 107–118, 2013.
16. A. A. Kilbas, H. M. Srivastava, and J. J. Trujillo, Theory and Applications of Fractional Differential Equations, vol. 204 of North-Holland Mathematics Studies, Elsevier Science B.V., Amsterdam, The Netherlands, 2006.
17. G. H. Hardy, J. E. Littlewood, and G. Pólya, Inequalities, Cambridge Mathematical Library, Cambridge University Press, Cambridge, Mass, USA, 1988.
Auburn, WA Geometry Tutor

Find an Auburn, WA Geometry Tutor

...My name is Bill, and I have been working as a chemist for several years. I have also been teaching chemistry part-time at Tacoma Community College. I have degrees from Santa Clara University and the University of Washington in Chemistry - majoring in Organic Chemistry.
12 Subjects: including geometry, chemistry, algebra 1, algebra 2

...From 2003 to 2005, I tutored all GED subjects and administered the practice tests to dozens of students at the Muckleshoot Tribal College in Auburn, WA. The SAT tests on a specific range of vocabulary, fairly abstract, but not technical terms, and on secondary meanings of words. Certain of these abstract words turn up over and over again on the SAT.
38 Subjects: including geometry, English, writing, GRE

...Prealgebra skills begin with review of all prior skills and recognizing how these patterns work with variables. Success in this area leads to success in Algebra class. I can share visual versions of concepts that may initially seem difficult for students, and give them the confidence to approach the world of "numbers and letters" with confidence.
12 Subjects: including geometry, chemistry, GRE, reading

...Most of my experience has been teaching high school physics and biology. I have experience privately tutoring students in geometry, algebra, and biology. I would be happy to help you learn any of these subjects.
7 Subjects: including geometry, physics, biology, algebra 1

...I currently reside in the beautiful state of Washington. I've been tutoring students for years, starting in high school as part of the National Honor Society and as the president of our school's literary club, Writers' Circle. As a member of NHS, I scheduled tutoring sessions before and after school in math and French to help fellow high school students who were struggling in those subjects.
35 Subjects: including geometry, English, reading, writing
BrainBashers : Puzzles and Brain Teasers

Daily 3-In-A-Row: Complete the grid without forming a 3-in-a-row.
Daily ABC Path: Place the letters A to Y in the grid so that every letter touches the previous one.
Daily Acrostics: Can you find the hidden quote?
Daily Battleships: Can you find all of the hidden ships?
Daily Bridges: Can you lay all of the bridges between the islands?
Daily CalcuDoku: Similar to Sudoku and Killer Sudoku except now you're dealing with +, -, x and ÷.
Daily Codewords: Complete the daily crossword-type puzzle by deciding which number represents each letter.
Daily Crosswords: A daily dose of the traditional crossword - in four levels of difficulty.
Daily Cryptograms: A dose of cryptograms - encrypted quotes from across the ages.
Daily Drop Quotes: A hidden quote; simply drop each letter into the appropriate squares in each row and column to reveal the quote.
Daily Fill-Ins: Complete the crossword-type puzzles by placing the given words into the grid.
Daily Fillomino: Fill in the missing numbers by creating coloured blocks of numbers.
Daily Futoshiki: Sudoku-type game with the addition of less-than and greater-than signs.
Daily Hitori: Eliminate numbers until there are no duplicates in any row or column.
Daily Jigsaw: BrainBashers jigsaw puzzle.
Daily Kakurasu: Find all of the blue squares such that all of the totals match the clues.
Daily Kakuro: Numerical crossword where you fill all of the blank squares in the grid with the numbers 1-9.
Daily Killer Sudoku: Sudoku with the added twist of making some of the squares add up.
Daily Light Up: Light up the grid with lots of light bulbs.
Daily Logic Puzzle: A logic puzzle from BrainBashers.
Daily MathemaGrids: Complete the grid, using the digits 1-9, and make the sums correct.
Daily Neighbours: Complete the grid, using each number once per row/column, whilst following the neighbours symbols.
Daily Net Slide: Slide all of the squares to reconnect the network.
Daily Network: Rotate all of the squares to reconnect the network.
Daily Nonogrids: Painting by numbers.
Daily Nurikabe: Find all of the islands in the ocean.
Daily Patchwords: A word puzzle where you drag tiles into their correct place to make valid words.
Daily Range: Place black squares into the grid and limit the visible range from each numbered cell.
Daily Rectangles: Break down the grid into rectangles of the correct size.
Daily Skyscrapers: Find where all of the skyscrapers are - some hide behind others.
Daily Slant: Replace all of the crosses with slanted lines.
Daily Slitherlink: Create a continuous loop around the grid following the clues given.
Daily Sudoku: Complete the grid such that every row, every column, and the nine 3x3 blocks contain the digits from 1 to 9.
Daily Tents: Find all of the tents in the forest.
Daily Word Searches: A daily word search.
Daily Word Twist: How many words can you find in the 4x4 grid?
West Coast Stat Views (on Observational Epidemiology and more)

The odd thing about the much publicized Jeopardy match between humans and IBM's Watson is how differently both sides are challenged by the game. Arguably the hardest part for the human players, acquiring and retaining information, is trivial for the computer, while certainly the hardest part for Watson, understanding everyday human language, is something almost all of us master as young children.

Natural language processing continues to chug along at a respectable pace. Things like Watson and even Google Translate represent remarkable advances. Still, they hardly seem like amazing advances in artificial intelligence. I'm not going to worry about the rise of the machines until they start beating us at games like Robert Abbott's Eleusis.

Abbott's game (old Eleusis -- you can buy a booklet of rules for the updated game from Mr. Abbott himself) made its national début in the Second Scientific American Book of Mathematical Puzzles and Diversions by Martin Gardner. It's easy to play but a bit complicated to score (not unnecessarily complicated -- there's a real flash of insight behind the process).

The dealer (sometimes referred to as 'Nature' or 'God' for what will be obvious reasons) writes a rule like "If the card on top is red, play a black card. If the card on top is black, then play a red card." on a piece of paper, then folds it and puts it away. The dealer then shuffles the deck, randomly selects a card, puts it face up in the center of the table, then deals out the rest evenly to the players (the dealer doesn't get a hand). If the number of cards isn't divisible by the number of players, the extra cards are put aside.

The first player selects any card from his or her hand and puts it on top of the starter card. Based on the hidden rule, the dealer says 'right' and the card stays on the pile or says 'wrong' and the card (called a mistake card) goes face up in front of the player.
The players continue in turn. The object for players is to have as few mistake cards as possible. The object for the dealer is to have the largest possible range in players' scores.

At the end of the first hand, the score is calculated for the dealer. The scoring method is clever but a bit complicated. For n players (excluding the dealer), have each player count his or her mistake cards, then multiply the smallest number by n-1 and subtract the product from the total number of mistake cards in front of the other players. For example, if there were four players with 7, 2, 9 and 8 mistake cards, you would multiply 2 (the lowest) times 3 (n-1) and subtract that from 24 (the sum of the rest), for a dealer score of 18.

In the second stage, the players take turns selecting cards from their mistake pile (leaving them face up so that other players can see what has been rejected). Play continues until someone goes out or until the dealer sees that no more cards can be accepted. At that point the rule is revealed.

Players' scores are then calculated with a formula similar to the one used for the dealer: each player multiplies his or her mistake cards by n-1, then subtracts the product from the total of the other players' mistake cards. If the difference is negative, the score is zero. The player who goes out first, or who has the fewest cards if no one goes out, gets an additional six points.

While most 'new' games are actually collections of old ideas with new packaging, Abbott managed to come up with two genuinely innovative ideas for Eleusis: the use of induction and the scoring of the dealer. As someone who has spent a lot of time studying games, I may be even more impressed with the second. One of the fundamental challenges of game design is coming up with rules that encourage strategies that make the game more enjoyable for all the players.
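The scoring formulas above are easy to mechanize. Here is a minimal sketch (my addition, not from the original post) that computes the dealer's score and the players' scores from a list of mistake-card counts; the six-point bonus for going out is left out for brevity:

```python
def dealer_score(mistakes):
    """Dealer score: total of the other players' mistake cards minus
    (n - 1) times the smallest mistake count."""
    n = len(mistakes)
    low = min(mistakes)
    others = sum(mistakes) - low
    return others - low * (n - 1)

def player_scores(mistakes):
    """Each player's score: total of the other players' mistake cards
    minus (n - 1) times their own count, floored at zero."""
    n = len(mistakes)
    total = sum(mistakes)
    return [max((total - m) - m * (n - 1), 0) for m in mistakes]

# Gardner's example: four players with 7, 2, 9 and 8 mistake cards.
print(dealer_score([7, 2, 9, 8]))    # 24 - 2*3 = 18
print(player_scores([7, 2, 9, 8]))   # [0, 18, 0, 0]
```

Notice how the dealer's formula rewards a spread of scores: a rule everyone cracks or a rule nobody cracks both score poorly for the dealer.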
In this case, that means giving the dealer an incentive to come up with rules that are both challenging enough to stump some of the players and simple enough that someone will spot the pattern.

Eleusis has often been used as a tool for teaching the scientific method. You recognize a pattern, form a hypothesis, and test it. Gardner discusses this analogy at length. At one point, he even brings William James and John Dewey into the conversation.

The New York Times said that Robert Abbott's games were "for lovers of the unfamiliar challenge." Any AI people out there up to that challenge?

2 comments:
1. Here is an old AI system that looked at this problem.
2. Interesting paper. Thanks for the link.
Partial differential equation

A differential equation which involves differentiating a function of more than one parameter. For example:

    (d^2/dt^2 - v^2 d^2/dx^2) y(x,t) = 0

where t is time, x is position, v is velocity and y is displacement, is a partial differential equation (abbreviated PDE), which can be used to describe a vibrating guitar string.
Base e, ln, Log

Date: 03/08/2002 at 05:28:55
From: Jerry Harper
Subject: Base e, ln, Log

First of all, I would like to thank you for telling us how the value of base e works. I have some problems below, in base e. I've tried everything, but can't come up with the same answer as the book, and they don't give a clear example, or next to none, of how they got their answer. Would you give me the complete clue on how they got their answer please?

3e^(2x-1) = 7    the textbook answer is 1.079

I have tried:
Div^Log(7+3) 2Log3 = 1.047....
Div^(log7+log3) 2Log3 = 1.3856
Div^(Log(e)7+Log(e)3) 2Log(e)3 = 1.666...

e^(x+1) = 8    the textbook answer is .924

I tried a similar approach, but no luck! We very much appreciate your site!

Date: 03/08/2002 at 09:07:17
From: Doctor Peterson
Subject: Re: Base e, ln, Log

Hi, Jerry. Thanks for your comments! We're always happy to see people both reading our site and asking questions.

You solve this kind of problem just the way you solve other basic algebra problems: undo what is done on the left, one step at a time, until you get the variable alone. Note what is being done:

    3 e^(2x-1) = 7

If you knew the value of x, you would

    1. multiply by 2
    2. subtract 1
    3. raise e to that power
    4. multiply by 3

and you'd get 7. How can we undo this? Think about putting on and taking off your shoes: in the morning, you first put on your socks, then your shoes; in the evening you take off your shoes, then your socks. You undo each step in the reverse order. Same here; undo each step this way:

    4. divide by 3
    3. take the natural log
    2. add 1
    1. divide by 2

Here we go:

    3 e^(2x-1) = 7
      e^(2x-1) = 7/3
          2x-1 = ln(7/3)
            2x = ln(7/3) + 1
             x = [ln(7/3) + 1]/2
               = [ln(2.33...) + 1]/2
               = [0.8473 + 1]/2
               = 1.8473/2
               = 0.9236

I think you must have copied the wrong answers! Of course, you can (and always should) check your answer by plugging it back in:

    3 e^(2*0.9236-1) = 3 e^0.8473 = 3*2.333... = 7

See if you can do the other problem the same way.

- Doctor Peterson, The Math Forum
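Both equations are quick to verify numerically. This small check (my addition, not part of the original exchange) computes both solutions and suggests the textbook's two printed answers were simply swapped between the problems:

```python
import math

# 3*e^(2x-1) = 7  ->  x = (ln(7/3) + 1) / 2
x1 = (math.log(7 / 3) + 1) / 2
# e^(x+1) = 8     ->  x = ln(8) - 1
x2 = math.log(8) - 1

print(round(x1, 4))   # 0.9236 -- the book's answer for the OTHER problem
print(round(x2, 4))   # 1.0794 -- matches the book's "1.079" for problem 1

# Plugging back in recovers the right-hand sides:
print(round(3 * math.exp(2 * x1 - 1), 6))   # 7.0
print(round(math.exp(x2 + 1), 6))           # 8.0
```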
Induced Representations of Locally Compact Groups Gratia von Neumann, Hermann Weyl, and a few physicists, it’s probably true that the right way for mathematicians to develop quantum mechanics is in functional analytic terms, with the unitary operators featured there requiring a stiff dose of representation theory, and right off the bat, we’re face-to-face with locally compact groups. From another point of view, the theory of the metaplectic group, developed by André Weil in the 1960s, is built around the unitary representation theory of the (locally compact) symplectic group, and this work, which is to a large degree an exploitation of the Stone-Von Neumann theorem, is central to the modern theory of metaplectic forms. Interestingly, a theme common to quantum mechanics and the inner life of the metaplectic group is the centrality of the Heisenberg group (in different flavors), and certain of its induced representations. This, indeed, is a dramatic illustration of a very striking and somewhat mysterious phenomenon, namely, that methodologically as well as structurally quantum physics and number theory are deeply intertwined, if the reader will excuse a cheap pun. The point is that induced representations of locally compact groups are of great importance and deep significance both to mathematicians and to (certain) physicists, and therefore a text such as the one under review is very desirable indeed. There are a number of sources available that deal with aspects of this material, including of course Weil’s original L’Intégration dans les Groupes Topologiques et ses Applications and Alain Robert’s Introduction to the Representation Theory of Compact and Locally Compact Groups. These books are, in a sense, non-negotiable as far as background in this subject goes: they set the stage for, e.g., Armand Borel’s terse Representations de Groupes Localement Compacts, which is by no means a leisurely introduction to the subject. 
Indeed, pedagogically speaking, it is Robert’s book which is the front-runner when it comes to preparing the reader for what Kaniuth and Taylor offer in the book under review. But it should be noted that Robert’s discussion of induced representations per se only occupies around seven pages, although the process of induction is prominently featured elsewhere in his book. Of course, in Kaniuth-Taylor, induced representations take central stage, and this is reasonable in light of the algebraic subtext of the authors’ discussion, exemplified by Mackey’s “machine.” This phrase, or, equivalently, “Mackey analysis” has to do with describing the dual group to a locally compact group from data about the dual group to a closed normal subgroup of the given group. Obviously induction is at the heart of all this. Mackey analysis is also literally the central chapter of this book. It is preceded by three chapters, respectively on “basics,” a ramified discussion of induced representations, and Mackey’s famous imprimitivity theorem to the effect that a system of imprimitivity for a locally compact group in the presence of a closed subgroup comes from some unitary representation of the subgroup. Then, in the wake of Mackey analysis, topological questions are addressed, particularly the matter of topologies on indicated dual spaces, and the book ends with “further applications.” Says the back cover in this connection: “An extensive introduction to Mackey analysis is applied to compute dual spaces for a wide variety of examples.” Indeed. Obviously this text is meant for, at least, advanced graduate students headed in the direction of representation theory of locally compact groups, and it is pitched at a professional level, as is consonant with the goals of the Cambridge Tracts in Mathematics. This is good and serious mathematics, presented well, and constitutes a valuable contribution to the discipline. 
Michael Berg is Professor of Mathematics at Loyola Marymount University in Los Angeles, CA.
How to Value a Vending Route

Have you ever tried to sell a route but weren't sure how to figure its value? Have you ever looked at purchasing a route but were unsure of how much you should pay? Determining the fair value of a vending route can be tough! I've heard someone say you should never pay more for a route than what the machines were worth, but that doesn't sound right. We all know some locations are better than others, and therefore more valuable. This article will provide a guideline for determining the fair value of a vending route which can serve as a starting point for negotiation. The rest of this post explains how the route evaluator works. If you just want to download and use the calculator, click here.

The formula we'll use for estimating the value of a vending route has three basic components: the value of the machines themselves, the estimated cost to service the machines, and the estimated monthly income after the cost of the product. Let's take a look at each in turn.

The Value of the Machines

I saw an ad recently for a few vending machines for sale. The seller claimed to pay $6,000 for each machine and is offering them at a great deal for only $2,000 each. At the same time, there were other sellers selling the exact same machine for about $400. Maybe this is common sense, but please don't think that what the seller paid for a machine or a location has any bearing on what it's actually worth. Do your own research and keep an eye on advertisements to get an idea of what machines are selling for.

The Cost to Service the Machines

If there are multiple machines in different locations all over town, it's going to cost you more in time and gas to make a run than if there are multiple machines all on one street. Even if you're doing the run yourself, use an hourly rate for the time it takes to service all the machines. If you were sick, or if you were to hire someone in the future, you would have to pay them.
Your time is valuable, and you shouldn't work for free. Also, estimate the amount of gas you'll use. You could try to take the current price of gas and the mpg of your vehicle to come up with a number, but it's probably easiest to take the standard mileage deduction. It's close enough.

The Estimated Monthly Income

Finally, we'll need to know how much the machines are bringing in. If you're buying a route, the seller should be able to give you an estimate of monthly sales. This is good enough to use for these calculations and to get an idea of what you're willing to pay, but I highly encourage you to ask for bank statements. After all, you're not buying a box of baseball cards at a garage sale here, you're buying a business. The seller should be able to back up his claims. From the current sales, you can estimate the cost of the product. This may vary by location, but an estimate of 50% should be reasonable.

Calculating the Present Value of the Vending Route

Now that we have the required info, we'll use what's called a present value calculation. Explaining this calculation in detail is beyond the scope of this post. Fortunately, Microsoft Excel can do the work for us! At this point, it may be helpful to open Excel and follow along. Pick a cell and type in "=PV(". The Present Value function takes four parameters that we're interested in: rate, nper, pmt, and fv.

The Rate is complicated and can differ from person to person, but you can think of it as a way of putting a value on what else you would do with the money if you weren't buying a vending route. For our purposes, 10% per year should be fine. All of our other numbers are in terms of months, however, so use 10% (.1) and divide by 12, which is about .0083.

nper stands for Number of Periods. This is how long you expect to operate the route. If you're selling the route and you've been running it for 10 years, or if you have a contract which guarantees the machines can stay for another 5 years, you can explain to the buyer that the route is very stable and therefore worth more. You might be justified in using 60 months. If you're buying a route, and the seller placed the machines three months ago in a strip mall with high turnover, you might not be able to count on keeping the location for more than a year. I usually start with 24 months and adjust from there.

pmt stands for Payment. This is the amount of net profit you estimate will be coming in from all machines per month. Remember to subtract your time, mileage, and cost of product from the monthly sales to estimate the monthly payment.

Finally, fv stands for Future Value. This is where the value of the machines comes in to play. If everything goes exactly as planned and at the end of two years you have to leave the location, you still own the machines. You could move them to a new location or sell them. If you assume you can keep the machines in good condition, it's probably a safe bet you can sell them for about what they're worth today.

An Example

Whew! That was a lot of information! Still with me? Let's consider a brief example. Let's say you see an ad for a route for sale in your area on the internet. The seller is asking $5,000 for three machines located in an office building. After talking with him on the phone, he says the machines do a total of $600 per month in sales (about $200 each machine). He says he services the machines once a week and it takes him an hour. He also tells you what the machines are. You're familiar with these machines, and you estimate they're worth $400 each ($1,200 total). You also know the office building is 10 miles away, so for this example, we'll estimate 20 miles of driving per week, 4 weeks per month, and 44 cents per mile, which comes to about $35.
You also figure you can do the run in one hour, but that it will take another half an hour of paperwork and running to get product. You think an hourly rate of $15 is reasonable, so 1.5 hours per week * 4 weeks per month * $15 per hour = $90 per month. Let's also assume product cost is 50% of sales. This means the $600 in sales is reduced to $175 (($600*50%) - $90 - $35). Expenses take their toll pretty quickly!

Finally, we'll assume that you can count on the machines staying where they are for another 2 years (24 months). We have all our information; let's plug it in to Excel. Type in "=PV(.0083, 24, 175, 1200)". The result I get is $4,778. Remember, even though we get an exact number, we did a lot of estimating. In this case, if I saw proof the machines were indeed making $600 per month in sales, I would be comfortable paying his asking price of $5,000. On the other hand, if everything else were the same except the seller said he paid $6,000 each for the machine and location and he wants to sell for $9,000 in order to pay off the loan, I would determine this deal is not worth pursuing.

I've done the work for you in Microsoft Excel. You can download the RouteEvaluator spreadsheet here. I encourage you to play with the numbers to see what happens. For example, if you were sure the route would remain for three years, you might be willing to pay more than $6,000. Or if instead you increased sales by $50 per month (maybe by adjusting prices) you might be willing to pay $7,000. Have fun!

One Response

Articles like this really grease the shafts of knowledge.
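If you don't have Excel handy, the PV calculation is only a few lines. The sketch below (my addition, using the standard annuity-plus-lump-sum formula) reproduces the example above; note that Excel's own PV function reports the result as a negative number under its cash-flow sign convention, while this sketch returns the magnitude:

```python
def route_value(rate, nper, pmt, fv):
    """Present value of `nper` monthly payments of `pmt` plus a
    resale value `fv` at the end, discounted at monthly `rate`.
    Matches the magnitude of Excel's =PV(rate, nper, pmt, fv)."""
    discount = (1 + rate) ** -nper
    annuity = pmt * (1 - discount) / rate   # value of the income stream
    residual = fv * discount                # value of the machines later
    return annuity + residual

# The example from the post: $175/month net for 24 months,
# machines worth $1,200 at the end, 10%/yr ~ .0083/month.
print(round(route_value(0.0083, 24, 175, 1200)))   # 4778
```

Playing with the inputs mirrors the "what if" paragraph above: raising nper to 36 or pmt to 225 pushes the value well past the asking price.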
An impossible case

In a previous blog entry (Inconsistent NullIf behaviour), Jeff Moden and Dan Halliday (twitter) both made a very similar observation. When executing this code

    select top(10) abs(checksum(newid()))%4
    from sys.objects

the only possible values returned can be 0, 1, 2 or 3. Anything else is a mathematical impossibility. Abs(checksum(newid())) will return a 'random' positive integer and the modulus (%) operator will return the remainder after the division. So that being the case, how can this code return NULL values?

    select top(10)
           case abs(checksum(newid()))%4
                when 0 then 0
                when 1 then 1
                when 2 then 2
                when 3 then 3 end
    from sys.objects

Does that mean that SQL Server is mathematically flawed? Should we avoid modulus? Once again we need to see what SQL Server is ACTUALLY executing. Within the execution plan, the defined values for the 'Compute Scalar' operator are shown below. Which, when applied as SQL code, would be

    select top (10)
           CASE WHEN abs(checksum(newid()))%(4)=(0) THEN (0) ELSE
           CASE WHEN abs(checksum(newid()))%(4)=(1) THEN (1) ELSE
           CASE WHEN abs(checksum(newid()))%(4)=(2) THEN (2) ELSE
           CASE WHEN abs(checksum(newid()))%(4)=(3) THEN (3) ELSE
           NULL END END END END
    from sys.objects

Now it's pretty clear where the NULL values are coming from: the simple CASE is expanded into nested searched CASEs, and newid() is re-evaluated in every WHEN branch, so the value tested against 0 is not the value tested against 1, and it is possible for all four tests to miss. Some people have suggested that the real problem is that newid() is non-deterministic, and as such gives a different value on each execution. I would disagree with that point of view; newid() should be non-deterministic.

What is the real world value of this knowledge? So what if you can't use a newid() in a case statement, does that really matter? When put like that, I would have to agree that, no, not really, but let us now expand upon this. Let us create a simplistic function in AdventureWorks

    Create Function fnGetOrderDate(@SalesOrderId integer)
    returns date
    with schemabinding
    as
    begin
       declare @OrderDate smalldatetime
       select @OrderDate = OrderDate
       from sales.SalesOrderHeader
       where SalesOrderID = @SalesOrderId
       return @OrderDate
    end

Then execute the following code

    select case datepart(MONTH,dbo.fnGetOrderDate(45038))
           when 1 then 'Jan'
           when 2 then 'Feb'
           when 3 then 'Mar'
           when 4 then 'Apr'
           when 5 then 'May'
           when 6 then 'Jun'
           when 7 then 'Jul'
           when 8 then 'Aug'
           when 9 then 'Sept'
           when 10 then 'Oct'
           when 11 then 'Nov'
           when 12 then 'Dec' end

with a trace on SP:StatementCompleted and SQL:BatchCompleted; we will then see this. But what will happen when we change the passed SalesOrderId to a different value? Try with 43659. Because the simple CASE is expanded the same way, the scalar function can be executed once per WHEN branch that is tested, so an order whose month matches a later branch costs more function calls. Now, this is probably saying more about inappropriate use of a scalar user defined function than the case statement itself, but it highlights how performance can be damaged with the combination of the two.
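The re-evaluation semantics are easy to demonstrate outside SQL Server. This Python sketch (my addition, a simulation rather than T-SQL) models the two behaviours: evaluating the random expression once per row never yields NULL, while evaluating it afresh in each WHEN branch, as the expanded plan does, produces NULLs regularly:

```python
import random

rng = random.Random(42)           # seeded so the demo is repeatable

def roll():
    """Stand-in for abs(checksum(newid())) % 4."""
    return rng.randrange(4)

def case_evaluate_once():
    v = roll()
    return {0: 0, 1: 1, 2: 2, 3: 3}[v]         # always matches

def case_reevaluate_per_branch():
    # Mirrors the expanded searched CASE: a fresh roll() per WHEN.
    for branch in (0, 1, 2, 3):
        if roll() == branch:
            return branch
    return None                                 # the "impossible" NULL

once = [case_evaluate_once() for _ in range(10000)]
per_branch = [case_reevaluate_per_branch() for _ in range(10000)]

print(None in once)                     # False
print(per_branch.count(None) / 10000)   # roughly (3/4)^4 = 0.316...
```

Each branch misses independently with probability 3/4, so about a third of the rows fall through all four WHENs to the implicit ELSE NULL.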
Morton numbers

Long time no posting, but I have excuses (also I'm posting some at openstreetmap user diaries). So anyway here's a cheap trick I came up with but which you might already know.

If you're indexing any georeferenced data, such as when doing any fun stuff with OpenStreetMap data, you've probably wanted to index by location among other things, and location is two or three dimensional (without loss of generality assume two as in GIS). So obviously you can combine latitude and longitude as one key and index by that, but that's only good for searching for exact pairs of values. If your index is for a hash table then you can't hope for anything more, but if it's for sorting of an array you can do a little better (well, here's my trick): convert the two numbers to fixed point and interleave their bits to make one number. This is better because two positions that are close to each other in an array sorted by this number probably are close to each other on the map. You could probably use floating point too if you stuff the exponent in the most significant bits and get a result similar to some degree. With fixed point you can then compare only the top couple of bits when searching in the array to locate something with a desired accuracy.

Converting to and from the interleaved bits form is straightforward and you can easily come up with an O(log(number of bits)) procedure (5 steps for 32 bit lat/lon) or use lookup tables as suggested by the Bit Twiddling Hacks page, where I learnt they're called Morton numbers. 32-bit lat/lon will give you a 64-bit number and that should be accurate enough for most uses if you map the whole -90 to 90 / -180 to 180 deg range to integers. Even 20-bit lat/lon (5 bytes for the index) gives you 0.0003 deg accuracy.

So what else can you do with this notation? Obviously you can compare two numbers and use bisection search in arrays or the different kinds of trees. You can not add or subtract them directly (or rather, you won't get useful results) but you can add / subtract individual coordinates without converting to normal notation and back. Here's how: First separate latitude from longitude by masking:
You can not add or subtract them directly (or rather, you won't get useful results), but you can add/subtract individual coordinates without converting to normal notation and back. Here's how. First separate latitude from longitude by masking:

uint64_t x = a & 0x5555555555555555;
uint64_t y = a & 0xaaaaaaaaaaaaaaaa;

Now you can subtract two numbers directly. You'll notice that the carry flags are correctly carried over the unused bits; you just need to mask them out of the result:

uint64_t difference(uint64_t a, uint64_t b)
{
	uint64_t ax = a & 0x5555555555555555, bx = b & 0x5555555555555555;
	uint64_t ay = a & 0xaaaaaaaaaaaaaaaa, by = b & 0xaaaaaaaaaaaaaaaa;
	return ((ax - bx) & 0x5555555555555555) |
	       ((ay - by) & 0xaaaaaaaaaaaaaaaa);
}

(you can also use the Masked Merge hack from the page I linked earlier). The result is signed two's complement with two sign bits in the top bits. Now, something much less obvious is that if you want to calculate the absolute difference, you can call abs() directly on the result of the subtraction and only mask out the unused bits afterwards. How does this work? The top bit in (ax - bx) always equals the sign bit even if ax and bx only use even bits (the top bit is odd), so this part is ok. If the number is positive then there's nothing to do with it. If it's negative, then abs negates it again (strips the minus). Conveniently -x equals ~(x - 1) in two's complement, so let's see what these two operations do to a negative (ax - bx). The ~, or bitwise negation, just works because it inverts all bits, including the ones we're interested in. The x - 1 part also works because it flips all the bits up to and including the first 1 bit starting from the lowest bit, and you'll find, although it may be tricky to see, that the first bit set in (ax - bx) is always even (or always odd).
uint64_t distance(uint64_t a, uint64_t b)
{
	uint64_t ax = a & 0x5555555555555555, bx = b & 0x5555555555555555;
	uint64_t ay = a & 0xaaaaaaaaaaaaaaaa, by = b & 0xaaaaaaaaaaaaaaaa;
	return (llabs(ax - bx) & 0x5555555555555555) |
	       (llabs(ay - by) & 0xaaaaaaaaaaaaaaaa);
}

Addition requires a little trick for the carry flags to work: just set all unused bits in bx and by:

uint64_t sum(uint64_t a, uint64_t b)
{
	uint64_t ax = a & 0x5555555555555555;
	uint64_t ay = a & 0xaaaaaaaaaaaaaaaa;
	uint64_t bx = b | 0xaaaaaaaaaaaaaaaa;
	uint64_t by = b | 0x5555555555555555;
	return ((ax + bx) & 0x5555555555555555) |
	       ((ay + by) & 0xaaaaaaaaaaaaaaaa);
}

ephemient Says:
August 3, 2009 at 9:18 pm | Reply
IEEE-754 floating point numbers are designed such that they can be compared/ordered simply by treating their binary representations as 2's-complement integers. So you don't need to switch to fixed-point to get "two positions that are close to each other in an array sorted by this number probably are close to each other on the map"; it even works when mixing floats. Looking at the high bits to "locate something with a desired accuracy" does have a pretty warped scale, though, and the arithmetic tricks here won't work either.

Eric Says:
September 12, 2009 at 4:57 am | Reply
I have a webapp that uses Google Maps and a database with lots of sites. I wanted a way of getting lat-lon into some 1D order. I hit upon the idea of scaling the lat & lon to 16-bit integers and interleaving the bits. I should have known I wasn't the only one! Today I googled and found out that my idea is quite widely used. I used (lon+180)*180 and (lat+90)*180 to get 16 & 15 bit integers. I'm using JavaScript and PHP/MySQL. So I would use this number to filter which sites I load from my DB. I get the two opposite corners, convert both to the integer to see what range I need. A small window will always give you a small integer range unless your window happens to cross a major division. Alas, that's exactly what happened! Adelaide (my city) is 138.6E. (138.6 + 180) * 180 = 0xE004.
So the 7/8 of the way around the world longitude line ran right through the area I was looking at! This rendered the filter useless, and I’ve since refined it to get around such problems, but it was just funny that I chose an easy trick with a scaling system that would work almost anywhere but here! btw Morton number in SQL: conv(bin(n1),4,10) + 2*conv(bin(n2),4,10) • balrog Says: September 13, 2009 at 7:38 pm | Reply Hi Eric, [plug]I can’t help asking if you’ve considered switching over to OpenStreetMap for your web app? Re looking up sites inside a rectangular region, I pondered the same issue and my solution (although not implemented yet) is for every pair of opposite corners to be converted to four (or fewer) integer ranges mapping to one, two or four db queries, depending on where exactly the two corners fall in that “grid”. Complexity wise making it four lookups instead of one makes no difference. So if the corners were at, say (-1,-1), (3, 3), then you’d split it up into four square lookups: (-1, -1)-(0, 0), (-4, 0)-(0, 4), (0, -4)-(4, 0), (0, 0)-(4,4), each power-of-two-sized, then, in my case, I will just have the server return all results that matched and have the javascript filter out unwanted results. Assuming the original query is usually for objects visible on the screen, the query will almost always have a screen like width:height ratio, so nearly square, and if it’s square, the area returned from the four queries will never sum up to more than twice the original area. Note that you can do (lat+90)*360 in the conversion to integers to get a better resolution, unless you have other use for that 16th bit there. Cheers Eric Says: September 14, 2009 at 8:34 am | Reply Mappage (my webapp) is here: The features I described are not yet on the live version though. OSM looks OK, and I’m generally a fan of free/open stuff. Adelaide is mostly covered now, and I suspect an old friend of mine has mapped most of the northern suburbs. 
GM was the best option when I started Mappage: for things like geocoding and translucent polygon overlay there was nothing else. Later it might be possible to decouple from GM so it can sit on something else, but unlikely. My solution to getting the rectangular range was much like what you suggested. Scale-wise I kept the scale the same in both directions, keeping the 16th bit clear so I could do comparisons without worrying about sign. Accuracy isn't important as it's only for rough sorting, and being in Australia the graticules are close to square.
Wheeling, IL Trigonometry Tutor

Find a Wheeling, IL Trigonometry Tutor

...I help them understand math easily and we have fun learning! They love the accomplishment of seeing their grades improve and knowing they understand what they're learning. Their confidence
26 Subjects: including trigonometry, Spanish, geometry, chemistry

...My methods have proven to be very successful. I have a Masters degree in applied mathematics and most coursework for a doctorate. This includes linear algebra, modern algebra, mathematical physics, topology, complex and real variable analysis and functional analysis in addition to calculus and differential equations.
18 Subjects: including trigonometry, physics, calculus, algebra 1

...I took the PRAXIS in 2005. I received a score of 281 out of a possible 300. In math I received a perfect 300.
24 Subjects: including trigonometry, calculus, geometry, GRE

...In college I received a Bachelors in Computer Science and was introduced to many languages, but PHP in particular was a language I fell in love with. Working in finance, a heavy emphasis was on Excel and I became adept at programming Excel macros with Visual Basic. I'm very comfortable with comp...
22 Subjects: including trigonometry, calculus, computer programming, ACT Math

...I have my bachelors in engineering and I can help you improve your grades and even score better in an exam. I have worked with high school students as well as college students. I have helped students excel in various exams.
14 Subjects: including trigonometry, geometry, GRE, algebra 1
Math Forum Discussions - Re: measured boundary conditions with pde toolbox
Date: Aug 5, 2009 6:21 PM
Author: Bruno Luong
Subject: Re: measured boundary conditions with pde toolbox

"Doug " <dhk@umd.edu> wrote in message <h5cmv4$lbc$1@fred.mathworks.com>...
> Has anybody tried using the pde toolbox with boundary conditions taken from experimental measurements (i.e., not a formula)?
> I see that assempde will take as its input either a boundary condition matrix or a boundary M-file, but both seem to give boundary conditions by specifying a simple formula (like x.^2+y.^2, from the example in the help files). I want to solve Poisson's equation using actual experimental data for the Neumann boundary conditions. It would seem reasonable to provide a matrix similar in form to the edge matrix "e", but containing local values of the Neumann boundary condition. Can it be done?
> Thanks in advance from a lowly experimentalist to all you pde gurus!

Unless you are referring to a Dirac (point-source) Neumann bc, pointwise local values of a Neumann bc do not make sense by themselves. It must be something you can *apply* to another function defined on the boundary. Specifically, mathematicians like to refer to it as a continuous linear functional on the fractional Sobolev space H^{1/2}; or, even more barbaric, an element of "the H^{-1/2} space". The latter does not have well-defined local values, unfortunately.
A review of experimental investigations on thermal phenomena in nanofluids Nanoparticle suspensions (nanofluids) have been recommended as a promising option for various engineering applications, due to the observed enhancement of thermophysical properties and improvement in the effectiveness of thermal phenomena. A number of investigations have been reported in the recent past, in order to quantify the thermo-fluidic behavior of nanofluids. This review is focused on examining and comparing the measurements of convective heat transfer and phase change in nanofluids, with an emphasis on the experimental techniques employed to measure the effective thermal conductivity, as well as to characterize the thermal performance of systems involving nanofluids. The modern trends in process intensification and device miniaturization have resulted in the quest for effective heat dissipation methods from microelectronic systems and packages, owing to the increased fluxes and the stringent limits in operating temperatures. Conventional methods of heat removal have been found rather inadequate to deal with such high intensities of heat fluxes. A number of studies have been reported in the recent past, on the heat transfer characteristics of suspensions of particulate solids in liquids, which are expected to be cooling fluids of enhanced capabilities, due to the much higher thermal conductivities of the suspended solid particles, compared to the base liquids. However, most of the earlier studies were focused on suspensions of millimeter or micron sized particles, which, although showed some enhancement in the cooling behavior, also exhibited problems such as sedimentation and clogging. The gravity of these problems has been more significant in systems using mini or micro-channels. A much more recent introduction into the domain of enhanced-property cooling fluids has been that of nanoparticle suspensions or nanofluids. 
Advances in nanotechnology have made it possible to synthesize particles in the size range of a few nanometers. These particles when suspended in common heat transfer fluids, produce the new category of fluids termed nanofluids. The observed advantages of nanofluids over heat transfer fluids with micron sized particles include better stability and lower penalty on pressure drop, along with reduced pipe wall abrasion, on top of higher effective thermal conductivity. It has been observed by various investigators that the suspension of nanoparticles in base fluids show anomalous enhancements in various thermophysical properties, which become increasingly helpful in making their use as cooling fluids more effective [1-4]. While the reasons for the anomalous enhancements in the effective properties of the suspensions have been under investigation using fundamental theoretical models such as molecular dynamics simulations [5,6], the practical application of nanofluids for developing cooling solutions, especially in miniature domains have already been undertaken extensively and effectively [7,8]. Quantitative analysis of the heat transfer capabilities of nanofluids based on experimental methods has been a topic of current interest. The present article attempts to review the various experimental techniques used to quantify the thermal conductivity, as well as to investigate and characterize thermal phenomena in nanofluids. Different measurement techniques for thermal conductivity are reviewed, and extensive discussions are presented on the characterization of thermal phenomena such as forced and free convection heat transfer, circulation in liquid loops, boiling and two phase flow in nanofluids, in the sections to follow. Thermal conductivity The techniques employed for measurement of thermal conductivity can be broadly classified into transient and steady state methods. 
The transient measurement techniques frequently used are the hot wire method, the hot strip method, the temperature oscillation method and the 3ω method. Steady-state measurement using a 'cut-bar apparatus' has also been reported. These methods are reviewed below. The short hot wire (SHW) method The transient short hot wire (SHW) method used to measure the thermal conductivity and thermal diffusivity of nanofluids has been described by Xie et al. [9,10]. The technique is based on the comparison of experimental data with a numerical solution of the two-dimensional transient heat conduction applied to a short wire with the same length-to-diameter ratio and boundary conditions as in the experimental setup. The experimental apparatus consists of a SHW probe and a teflon cell of 30 cm^3 volume. The dimensions of the SHW probe are shown in Figure 1. The SHW probe is mounted on the teflon cap of the cell. A short platinum wire of length 14.5 mm and 20 μm diameter is welded at both ends to platinum lead wires of 1.5 mm in diameter. The platinum probe is coated with a thin layer (1 μm) of alumina for insulation, thus preventing electrical leakage. Before and after the application of the Al[2]O[3 ]film coating, the effective length and radius of the hot wire and the thickness of the Al[2]O[3 ] insulation film are calibrated. Figure 1b shows the dimensions of the Teflon cell used for measurements in nanofluids. Two thermocouples located at the same height, at the upper and lower welding spots of the hot wire and lead wires, respectively, monitor the temperature homogeneity. The temperature fluctuations are minimized by placing the hot wire cell in a thermostatic bath at the measurement temperature. Figure 1. Short hot wire probe apparatus of Xie et al. [9]. 
In the calculation method, the dimensionless volume-averaged temperature rise of the hot wire, θ_v = (T_v - T_i)/(q_v r^2/λ), is approximated by a linear equation in the logarithm of the Fourier number, Fo = αt/r^2, where T_i and T_v are the initial liquid temperature and the volume-averaged hot-wire temperature, q_v the heat rate generated per unit volume, r the radius of the SHW, t the time, and λ and α the thermal conductivity and thermal diffusivity of the liquid, respectively. The coefficients of the linear equation, A and B, are determined by the least squares method over the range of Fourier numbers corresponding to the measuring period. The measured temperature rise of the wire, ΔT_v = T_v - T_i, is also approximated by a linear equation, with coefficients a and b determined by the least squares method for the time range before the onset of natural convection. The thermal conductivity (λ) and thermal diffusivity (α) of the nanofluid are obtained as λ = (VI/πl)(A/a) and α = r^2 exp[(b/a) - (B/A)], where l is the length of the hot wire, and V and I are the voltage and current supplied to the wire. The uncertainties of the thermal conductivity and thermal diffusivity measurements using the SHW method have been estimated to be within 1.0 and 5.0%, respectively.

Temperature oscillation technique

Das et al. [11] proposed and demonstrated the temperature oscillation method for estimating the thermal conductivity and thermal diffusivity of nanofluids. The method can be understood with the help of Figure 2, which shows the cylindrical fluid volume analyzed, with periodic temperature oscillations applied at surfaces A and B. The temperature oscillations are generated using Peltier elements attached to a reference layer. The Peltier elements are powered by a DC power source. The real, measurable phase shift and amplitude ratio of the temperature oscillation can be expressed as,

Figure 2. The fluid volume for analysis corresponding to the experimental setup of Das et al. [11].
where G is the phase shift, u amplitude in Kelvin, and L thickness of fluid sample in meter. The complex amplitude ratio between the mid-point of the specimen and the surface can be given by where α is the thermal diffusivity and the angular velocity, ω, is given by The phase and amplitude of temperature oscillation at the two surfaces as well as at the central point C, gives the thermal diffusivity of the fluid, from Equations 1 or 2. The temperature oscillation in the reference layer at the two boundaries of the test fluid yields the thermal conductivity. The frequency of temperature oscillation in the reference layer, in the Peltier element and that in the test fluid are the same. The complex amplitude ratio at x = -D (D being the thickness of the reference layer) and x = 0 is given by where λ is the thermal conductivity of the fluid. The real phase shift and amplitude attenuation of the reference layer is given by The thermal diffusivity of the reference layer being known either from Equations 7 or 8, the thermal conductivity of the specimen can be evaluated from Equation 6. The test cell is a flat cylindrical cell as shown in Figure 3, which is cooled on both of the ends using a thermostatic bath. DC power is applied to the Peltier element. A number of thermocouples measure the temperatures in the test section which are amplified, filtered, and fed to the data acquisition system. The frame of the cell is made of POM (polyoxymethylene), which acts as the first layer of insulation. The frame has a 40-mm diameter cavity to hold the test fluid. Two disk type reference materials of 40 mm diameter and 15 mm thickness are kept on top and bottom side of the cavity. The space for the test fluid has a dimension of 40 mm diameter and 8 mm thickness. The fluid is filled through a small hole in the body of the cell. 
Temperatures are measured at the interface of the Peltier element and the reference layer, at the interface of the reference layer and the test fluid, and at the central axial plane of the test fluid. The thermocouples are held precisely centralized. The entire cell is externally insulated. The experimental setup was calibrated by measuring the thermal diffusivity of demineralized and distilled water over the temperature range of 20 to 50°C. The results showed that the average deviation of the thermal diffusivity from the standard values was 2.7%. As the range of enhancement in the thermal conductivity of nanofluids is 2 to 36%, this range of accuracy was found to be acceptable.

Figure 3. Construction of the test cell used by Das et al. [11].

3ω method

The 3-omega method [12] used for measuring the thermal conductivity of nanofluids is a transient method. The device, fabricated using a micro-electro-mechanical systems (MEMS) technique, can measure the thermal conductivity of the nanofluid with a single droplet of the sample fluid. Figure 4 shows the nanofluid on a quartz substrate, which is modeled as a thermal resistance between the heater and the ambient. The total heat generated from the heater (Q_total) passes through either the nanofluid layer (Q_nf) or the substrate (Q_sub). The fluid-substrate interface resistance is neglected when the thermal diffusivities of the fluid and the substrate are similar. If ΔT_h is the measured temperature oscillation of the heater in the presence of the nanofluid, it can be shown that

Figure 4. Schematic of the experimental setup for the 3ω method reported by Oh et al. [12].
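One way to read the resistance model just described is that the fluid and the substrate act as parallel heat-flow paths, so the reciprocals of the temperature oscillations add. A sketch under that assumption (my reading of the model, not necessarily the paper's exact Equation 9):

```c
#include <math.h>

/* With the same heater power, 1/dT is proportional to the conductance
 * of a path, so 1/dT_h = 1/dT_nf + 1/dT_sub. dT_sub is the heater
 * oscillation measured in vacuum (substrate path only), dT_h the one
 * measured with the droplet in place; the fluid-only oscillation
 * dT_nf then follows by elimination. */
double fluid_oscillation(double dT_h, double dT_sub)
{
	return dT_h * dT_sub / (dT_sub - dT_h);
}
```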
The relationship between the temperature oscillation and the heat generation rate can be expressed as, where Q' is the heating power per unit length generated at the metal heater, k the thermal conductivity of the substrate, q the complex thermal wave number, ω the angular frequency of the input current, and ρ and C[p ]the substrate density and heat capacity, respectively. The temperature oscillation and the heat generation per unit heater length are related through Equation 10. It follows that a simple relationship between the temperature oscillations can be obtained as follows: ΔT[sub ]is the heater temperature oscillation due to the heat transfer in the quartz substrate alone (measured in vacuum). The nanofluid thermal conductivity k[nf ]is obtained from a least squares fit of ΔT[nf ]calculated from Equation 10. Microlitre hot strip devices for thermal characterization of nanofluids A simple device based on the transient hot strip (THS) method used for the investigations of nanofluids of volumes as small as 20 μL is reported in the literature by Casquillas et al. [13]. In this method, when the strip, in contact with a fluid of interest is heated up by a constant current, the temperature rise of the strip is monitored. Photolithography patterning of the strip was done using AZ5214 Shipley resist spin coated on a glass substrate. Electron beam evaporation deposition of Cr (5 nm)/Pt (50 nm)/Cr (5 nm) sandwich layer was followed by deposition of SiO[2 ](200 nm) cover layer deposition by PECVD (plasma enhanced chemical vapor deposition). The electrical contact areas of the sample were obtained by photolithography and reactive ion etching of SiO[2 ]layer with SF6 plasma, followed by chromium etching. The micro-reservoir for nanofluids was fabricated by soft lithography. The PDMS (polydimethylsiloxane) cover block was created from a 10:1 mixture of PDMS-curing agent. The PDMS was degassed at room temperature for 2 h and cured at 80°C for 3 h. 
A PDMS block 20 mm long, 10 mm wide, and 3 mm thick was cut, and a 5 mm diameter hole was drilled in its center for liquid handling. The PDMS block and the glass substrate were exposed to O2 plasma before the device was baked at 80°C for 3 h for irreversible bonding. The THS device, with a water droplet confined in the open hole, is shown in Figure 5. The current and voltage measurements were performed using a voltmeter (Agilent 34410A) and a function generator (Agilent 33220A) linked to a current source. The temperature variation of the strip was recorded by applying a constant current and monitoring the resistivity change with time, from which the liquid thermal conductivity was deduced.

Figure 5. THS device, with a water droplet confined in the open hole, as reported in [13].

The transient response of the platinum strip temperature can be described by the following expression for t > 0.2 s:

where T_0 is the intercept on the temperature axis of the T vs. ln(t) graph. The thermal diffusivity α_f depends on the thermal conductivity k, the density, and the specific heat capacity of the fluid. As a first-order approximation, it is possible to obtain the thermal conductivity from a measurement of α_f.

Steady-state measurement using a cut-bar apparatus

Steady-state measurement of the thermal conductivity of nanofluids using a cut-bar apparatus has been reported by Sobhan and Peterson [14]. The steady-state conduction in the nanofluid cell can be modeled as shown in Figure 6. The apparatus consists of a pair of copper rods (2.54 cm diameter) separated by an O-ring to form the test cell, as shown in Figure 7. Several thermocouples are soldered into the copper bars to measure surface temperatures and the heat flux. The test cell is placed in a vacuum chamber maintained at less than 0.15 Torr. External convection and radiation losses are thus minimized, and hence neglected.
The size of the test cell is kept small, such that convection currents do not set in, as indicated by an estimate of the Rayleigh number. The heat flux in the cut-bar apparatus is taken as the average of the heat fluxes from Equation 14 below, calculated from the temperature differences measured along the upper and lower copper bars:

q = k_copper (ΔT_bar / ΔZ_bar)     (14)

Figure 6. Heat flux paths in the steady-state measurement method reported in Sobhan et al. [14].

Figure 7. Test cell for steady-state measurement of thermal conductivity of nanofluids [14].

where q is the heat flux, k_copper the thermal conductivity of the copper bars, ΔT_bar the temperature difference along the copper bars, and ΔZ_bar the distance along the copper bars. The effective thermal conductivity of the nanoparticle suspension contained in the test cell can then be calculated, where k_eff is the effective thermal conductivity of the nanofluid, q the heat flux, ΔT_cell the average temperature difference between the two surfaces of the test cell, ΔZ_cell the distance between the two cell surfaces, k_O-ring the thermal conductivity of the rubber O-ring, A_O-ring the cross-sectional area of the rubber O-ring, and A_cell the cross-sectional area of the test cell. Baseline experiments using ethylene glycol and distilled water showed an accuracy of measurement within ±2.5%.

Comparison of thermal conductivity results

The transient hot wire (THW) method for experimentally estimating the thermal conductivity of solids and fluids is found to be the most accurate and reliable among the methods discussed in the previous sections. Most of the thermal conductivity measurements in nanofluids reported in the literature have been conducted using the transient hot wire method. The temperature oscillation method helps in estimating the temperature-dependent thermal conductivity of nanofluids. The steady-state method has the difficulty that steady-state conditions have to be attained while performing the measurements.
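A minimal sketch of the cut-bar data reduction described above: the bar flux follows Equation 14 directly, while the O-ring correction is written here as a parallel-path subtraction, an assumed form since the exact expression of Equation 15 is not reproduced in the text.

```c
#include <math.h>

/* Eq. 14: conduction flux inferred from one instrumented copper bar,
 * in W/m^2. The apparatus averages the upper- and lower-bar values. */
double bar_flux(double k_copper, double dT_bar, double dZ_bar)
{
	return k_copper * dT_bar / dZ_bar;
}

/* Effective conductivity of the fluid, treating the O-ring as a
 * conduction path in parallel with the fluid and subtracting it out.
 * The total conduction area is assumed to be A_cell + A_oring; this
 * is a sketch of the idea, not the paper's exact Equation 15. */
double k_effective(double q, double dT_cell, double dZ_cell,
                   double k_oring, double A_oring, double A_cell)
{
	double Q_total = q * (A_cell + A_oring);
	double Q_oring = k_oring * A_oring * dT_cell / dZ_cell;
	return (Q_total - Q_oring) * dZ_cell / (dT_cell * A_cell);
}
```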
A comparison of the thermal conductivity values of nanofluids obtained by various measurement methods and reported in literature is shown in Table 1. Viscosity, like thermal conductivity, influences the heat transfer behaviour of cooling fluids. Nanofluids are preferred as cooling fluids because of their improved heat removal capabilities. Since most of the cooling methods used involve forced circulation of the coolant, modification of properties of fluids which can result in an increased pumping power requirement could be critical. Hence, viscosity of the nanofluid, which influences the pumping power requirements in circulating loops, requires a close examination. Investigations [3,4,15-22] reported in the literature have shown that the viscosity of base fluids increases with the addition of nanoparticles. Praveen et al. [15] measured the viscosity of copper oxide nanoparticles dispersed in ethylene glycol and water. An LV DV-II+ Brookfield programmable viscometer was used for the viscosity measurement. The copper oxide nanoparticles with an average diameter of 29 nm and a particle density of 6.3 g/cc were dispersed in a 60:40 (by weight) ethylene glycol and water mixture, to prepare nanofluids with different volume concentrations (1, 2, 3, 4, 5, and 6.12%). The viscosity measurements were carried out in the temperature range of -35 to 50°C. The variation of the shear stress with shear strain was found to be linear for a 6.12% concentration of the nanofluid at -35°C, which confirmed that the fluid has a Newtonian behavior. At all concentrations, the viscosity value was found to be decreasing with an increase in the temperature and a decrease in concentration of the nanoparticles. The suspension with 6.12% concentration gave an absolute viscosity of around 420 centi-Poise at -35°C. Nguyen et al. [3] measured the temperature and particle size dependent viscosity of Al[2]O[3]-water and CuO-water nanofluids. 
The average particle sizes of the samples of Al[2]O[3 ]nanoparticles were 36 and 47 nm, and that of CuO nanoparticles was 29 nm. The viscosity was measured using a ViscoLab450 Viscometer (Cambridge Applied Systems, Massachusetts, USA). The apparatus measured viscosity of fluids based on the couette flow created by the rotary motion of a cylindrical piston inside a cylindrical chamber. The viscometer was having an accuracy and repeatability of ±1 and ±0.8%, respectively, in the range of 0 to 20 centi-Poise. The dynamic viscosities of nanofluids were measured for fluid temperatures ranging from 22 to 75°C, and particle volume fractions varying from 1 to 9.4%. Both Al[2]O[3]-water and CuO-water nanofluids showed an increase in the viscosity with an increase in the particle concentration, the largest increase being for the CuO-water nanofluid. The alumina particles with 47 nm were found to enhance viscosity more than the 36 nm nanoparticles. At 12% volume fraction, the 47-nm particles were found to enhance the viscosity 5.25 times, against a 3% increase by the 36-nm particles. The increase in the viscosity with respect to the particle volume fraction has been interpreted as due to the influence on the internal shear stress in the fluid. The increase in temperature has shown to decrease the viscosities for all nanofluids, which can be attributed to the decrease in inter-particle and inter-molecular adhesive forces. An interesting observation during viscosity measurements at higher temperatures was the hysteresis behaviour in nanofluids. It was observed that certain critical temperature exists, beyond which, on cooling down the nanofluid from a heated condition, it would not trace the same viscosity curve corresponding to the heating part of the cycle. This was interpreted as due to the thermal degradation of the surfactants at higher temperatures which would result in agglomeration of the particles. 
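For context, the classical baseline against which such viscosity measurements are usually compared is Einstein's dilute-suspension relation, μ_nf = μ_bf (1 + 2.5 φ); the measured nanofluid viscosities discussed above generally lie well above this line. A sketch of the baseline (not taken from the cited papers):

```c
#include <math.h>

/* Einstein's dilute-suspension viscosity model: the effective viscosity
 * grows linearly with particle volume fraction phi. Real nanofluid data
 * typically exceed this prediction, which is why the reviewed studies
 * measure viscosity rather than predict it. */
double einstein_viscosity(double mu_base, double phi)
{
	return mu_base * (1.0 + 2.5 * phi);
}
```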
A comparison of the viscosity values of nanofluids reported in literature [3,4,15-22] is shown in Table 2. Forced convection in nanofluids Forced convection heat transfer is one of the most widely investigated thermal phenomena in nanofluids [23-35], relevant to a number of engineering applications. Due to the observed improvement in the thermal conductivity, nanofluids are expected to provide enhanced convective heat transfer coefficients in convection. However, as the suspension of nanoparticles in the base fluids affect the thermophysical properties other than thermal conductivity also, such as the viscosity and the thermal capacity, quantification of the influence of nanoparticles on the heat transfer performance is essentially required. As the physical mechanisms by which the flow is set up in forced convection and natural convection are different, it is also required to investigate into the two scenarios individually. The case of the natural convection (thermosyphon) loops is another problem in itself, because the characteristic of the flow is similar to that of the forced convection loop, though the mechanism is buoyancy drive. Some of the important investigations on forced convection in nanofluids are reviewed in this section. Studies on free convection and thermosyphon loops will be discussed in the sections to follow. Convective heat transfer in fully developed laminar flow Experimental investigations on the convective heat transfer coefficient of water-Al[2]O[3 ]nanofluids in fully developed laminar flow regime have been reported by Hwang et al. [23]. Their experimental setup consisted of a circular tube of diameter 1.812 mm and length 2500 mm, with a test section having an externally insulated electrical heater supplying a constant surface heat flux (5000 W/m^2), a pump, a reservoir tank, and a cooler, as shown in Figure 8. 
T-type thermocouples were used to measure the tube wall temperatures, T_s(x), and the mean fluid temperatures at the inlet (T_m,i) and the exit. A differential pressure transducer was used to measure the pressure drop across the test section. The flow rate was held in the range of 0.4 to 21 mL/min. With the measured temperatures, heat flux, and flow rate, the local heat transfer coefficients were calculated as

h(x) = q'' / (T_s(x) - T_m(x))

where T_m(x) and h(x) are the mean temperature of the fluid and the local heat transfer coefficient. The mean temperature of the fluid at any axial location is given by

T_m(x) = T_m,i + q'' P x / (ṁ c_p)

where P, ṁ, and c_p are the surface perimeter, the mass flow rate, and the heat capacity, respectively. The Darcy friction factor for the flow of Al2O3-water nanofluids was calculated using the measured pressure drop in the pipe and plotted against the Reynolds number. The result was found to agree with the theoretical value for fully developed laminar flow obtained from f = 64/Re_D, as shown in Figure 9. The heat transfer coefficient measured for water agreed with the Shah equation to within 3%. The convective heat transfer coefficient for nanofluids was found to be enhanced by around 8% compared to pure water. It was proposed that the flattening of the fluid velocity profile in the presence of the nanoparticles could be one of the reasons for the enhancement in the heat transfer coefficient.

Figure 9. Variation of the friction factor for water-based nanofluids in fully developed laminar flow, as given by Hwang et al. [23].

Convective heat transfer under constant wall-temperature condition

Heris et al. [24] measured convective heat transfer in nanofluids in a circular tube subjected to a constant wall temperature condition. The test section consisted of a concentric tube assembly of 1 m length.
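The data reduction used by Hwang et al. [23] for the constant-heat-flux tube, described earlier, can be sketched in Python. The function names and all numerical values below are illustrative assumptions, not data from the experiment:

```python
import math

def mean_fluid_temp(T_m_in, q_flux, perimeter, x, m_dot, c_p):
    """Axial mean fluid temperature from an energy balance:
    T_m(x) = T_m,i + q'' * P * x / (m_dot * c_p)."""
    return T_m_in + q_flux * perimeter * x / (m_dot * c_p)

def local_h(q_flux, T_s_x, T_m_x):
    """Local heat transfer coefficient h(x) = q'' / (T_s(x) - T_m(x))."""
    return q_flux / (T_s_x - T_m_x)

# Illustrative numbers loosely based on the reported tube (values assumed)
D = 1.812e-3                 # tube diameter, m
P = math.pi * D              # surface perimeter, m
q_flux = 5000.0              # constant surface heat flux, W/m^2
m_dot = 3.0e-4               # mass flow rate, kg/s (assumed)
c_p = 4180.0                 # specific heat of water, J/(kg K)

T_m = mean_fluid_temp(25.0, q_flux, P, 0.5, m_dot, c_p)   # at x = 0.5 m
h = local_h(q_flux, 45.0, T_m)   # with an assumed wall temperature of 45°C
```

In practice, T_s(x) comes from the wall thermocouples at each axial station, so h(x) can be evaluated station by station along the tube.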
In this, the inner copper tube was of 6 mm diameter and 0.5 mm thickness, and the outer stainless steel tube was of 32 mm diameter, externally insulated with fiber glass. The experimental setup is shown schematically in Figure 10. The constant wall temperature condition was maintained by passing saturated steam through the annular section. The nanofluid flow rate was controlled by a reflux line with a valve. K-type thermocouples were used to measure the wall temperatures (T_w) and the bulk temperatures of the nanofluid at the inlet and the outlet (T_b1 and T_b2). A manometer was used to measure the pressure drop along the test section. The flow rate was calculated from a measurement of the time required to fill a glass vessel. The uncertainties associated with the temperature and flow rate measurements were found to be 1.0 and 2.0%, respectively. The convective heat transfer coefficient and the Nusselt number were calculated as

h = ṁ c_p (T_b2 - T_b1) / (π D L (T_w - T_b)_LM) and Nu = h D / k

where (T_w - T_b)_LM is the logarithmic mean temperature difference, A, D, and L are the cross-sectional area, diameter, and heated length of the pipe (with ṁ = ρ u A), and

(T_w - T_b)_LM = [(T_w - T_b1) - (T_w - T_b2)] / ln[(T_w - T_b1)/(T_w - T_b2)]

Convective heat transfer in thermally developing region

Anoop et al. [25] investigated the effect of the size of nanoparticles on forced convection heat transfer in nanofluids, focusing the study on the thermally developing region. The experimental forced circulation loop consisted of a pump, a heated test section (copper tube, 1200 mm length, 4.75 ± 0.05 mm inner diameter, 1.25 mm thickness), a cooling section, and a collecting tank, as shown in Figure 11. A constant laminar flow rate was maintained in the loop. A variable transformer connected to the electric circuit of the pump was used to vary the flow rates. The DC power source connected to the electrically insulated Ni-Cr wire wound uniformly around the pipe dissipated a maximum power of 200 W.
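The LMTD-based reduction used for the constant-wall-temperature test of Heris et al. [24], described earlier, can be sketched as follows. The function names and the sample values are illustrative assumptions:

```python
import math

def lmtd_wall(T_w, T_b1, T_b2):
    """Log-mean temperature difference (T_w - T_b)_LM between a
    constant-temperature wall and the inlet/outlet bulk temperatures."""
    dT1 = T_w - T_b1
    dT2 = T_w - T_b2
    return (dT1 - dT2) / math.log(dT1 / dT2)

def h_avg(m_dot, c_p, T_b1, T_b2, D, L, T_w):
    """Average heat transfer coefficient from an energy balance on the fluid:
    h = m_dot*c_p*(T_b2 - T_b1) / (pi*D*L * (T_w - T_b)_LM)."""
    Q = m_dot * c_p * (T_b2 - T_b1)
    return Q / (math.pi * D * L * lmtd_wall(T_w, T_b1, T_b2))

def nusselt(h, D, k):
    """Nu = h*D/k, with k the fluid thermal conductivity."""
    return h * D / k

# Illustrative numbers for a 6-mm tube of 1 m heated length (values assumed)
h = h_avg(m_dot=0.02, c_p=4180.0, T_b1=25.0, T_b2=45.0,
          D=0.006, L=1.0, T_w=100.0)
Nu = nusselt(h, 0.006, 0.6)
```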
T-type thermocouples were used to measure the wall temperatures as well as the fluid inlet and exit temperatures. Plug flow was maintained at the entrance using a series of wire meshes. A precise measuring jar and a stop watch were used to measure the flow rates. The local heat transfer coefficient and the local Nusselt number are defined by Equations 16, 17, and 19. The thermal conductivity value used was that at the bulk mean temperature. The density and specific heat of the nanofluid, dependent on the volume fraction φ, were given by the mixture rules

ρ_nf = (1 - φ) ρ_bf + φ ρ_p and (ρ c_p)_nf = (1 - φ) (ρ c_p)_bf + φ (ρ c_p)_p

The convective heat transfer coefficient was measured with nanofluids containing Al2O3 nanoparticles of average sizes 45 and 150 nm. In the developing flow region, and for a Reynolds number of 1500, the 45-nm particles gave a 25% enhancement in heat transfer compared with 11% by the 150-nm particles, for a concentration of 4% by weight, as shown in Figure 12. The enhancement in the heat transfer coefficient was also found to decrease from the developing to the fully developed region. For a concentration of 4% (by weight) of 45-nm particles and an approximate Reynolds number of 1500, the enhancement in the heat transfer coefficient was 31% at x/D = 63, while it was 10% at x/D = 244. The uncertainty in the measurement of thermal conductivity was found to be less than 2%, and that for viscosity was 0.5%. A systematic uncertainty analysis yielded maximum errors in the Reynolds number and the Nusselt number of around 3.24 and 2.45%, respectively.

Figure 12. Variation of heat transfer coefficient with particle size and Reynolds number as given by Anoop et al. [25].

Single-phase and two-phase heat transfer in microchannels

Lee et al. [26] investigated the use of nanofluids for single-phase and two-phase heat transfer in microchannels. The experimental setup used for the measurements is shown in Figure 13. The channels were fabricated by milling rectangular grooves, 215 μm wide and 821 μm deep, into the top surface of an oxygen-free copper block.
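The volume-fraction mixture rules for nanofluid density and heat capacity used in studies such as Anoop et al. [25], described earlier, translate directly into code. The standard volume-weighted forms are assumed here, and the property values are illustrative:

```python
def nanofluid_density(phi, rho_bf, rho_p):
    """Mixture rule rho_nf = (1 - phi)*rho_bf + phi*rho_p,
    where phi is the particle volume fraction."""
    return (1.0 - phi) * rho_bf + phi * rho_p

def nanofluid_cp(phi, rho_bf, cp_bf, rho_p, cp_p):
    """Heat-capacity mixture rule:
    (rho*cp)_nf = (1 - phi)*(rho*cp)_bf + phi*(rho*cp)_p,
    so cp_nf = (rho*cp)_nf / rho_nf."""
    rho_cp_nf = (1.0 - phi) * rho_bf * cp_bf + phi * rho_p * cp_p
    return rho_cp_nf / nanofluid_density(phi, rho_bf, rho_p)

# Water with 4% (by volume) Al2O3; property values are illustrative
rho = nanofluid_density(0.04, 998.0, 3970.0)
cp = nanofluid_cp(0.04, 998.0, 4182.0, 3970.0, 765.0)
```

Note that because the particles have a much lower specific heat than water, cp_nf drops noticeably even at a few percent loading, which is consistent with the reduced-specific-heat effect mentioned in several of the studies reviewed here.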
The block was inserted into a G-7 plastic housing and sealed on top with a polycarbonate cover plate. The method produced 21 parallel microchannels, each with a hydraulic diameter of 341 μm, occupying a total substrate area of 1 cm width and 4.48 cm length. Heating was provided by 12 cartridge heaters embedded in the bottom of the copper block. The fluid temperature and pressure were measured at the inlet and exit plenums of the housing. The bottom wall temperature was also measured, using K-type thermocouples inserted along the flow direction. A Yokogawa WT210 power meter was used to measure the electric power input to the copper block. A bypass was included immediately downstream of the flow meters to calibrate them. An HP 3852A data acquisition system was utilized in the setup. Heat loss from the copper block was estimated to be less than 5% of the electrical power input. The single-phase flow experiments in the laminar regime showed an enhancement in heat transfer with the nanoparticle concentration. The fluid and pipe wall temperatures were found to increase with the nanoparticle concentration, which was interpreted as due to the reduced specific heat of nanofluids. The enhancement in heat transfer was found to be smaller in the turbulent flow regime than in the laminar regime. In the case of two-phase heat transfer using nanofluids, it was observed that the chances of particles separating, getting deposited as clusters, and thus clogging passages in microchannels could make the method less preferable.

Convective heat transfer in confined laminar radial flows

Impinging jets with or without confinement, as well as fluid flow between fixed or rotating disks with axial injection, have applications in turbomachinery and localized cooling. Gherasim et al. [27] experimentally investigated the heat transfer enhancement capabilities of coolants with Al2O3 nanoparticles suspended in water inside a radial flow cooling system.
The test rig was as shown in Figure 14. Parametric studies were performed on heat transfer inside the space delimited by the nozzle and the heated disk (aluminum, 30 cm diameter, 7.5 cm thick), with an adjustable separating distance between them. The disk was heated with seven symmetrically implanted 200 W cartridge heating elements, one at the center of the disk and the other six spaced at 60° from each other at approximately half the radial distance. Thermally insulated K-type thermocouples were used to measure the temperatures. The heated disk was insulated using a 1.5-cm Teflon disk and a 3-cm thick insulating foam board. The periphery of the test section was surrounded by insulating foam. The concentric inlet and outlet tubes were insulated from each other using a plastic sleeve and a layer of air. The fluid mass flow rate was calculated from the time required to accumulate a certain quantity of fluid. The heat flux was varied by changing the voltage applied to the heating elements. The applied power was calculated from the measured voltage and current. The Reynolds number, as defined in Equation 22, and the Nusselt number (Equation 19) were calculated, where the hydraulic diameter D_h is given by 2δ, δ being the distance separating the disks. The local heat transfer coefficient h_r is obtained as

h_r = q'' / (T_w,r - T_b,r)

The bulk temperature at a given radial section (T_b,r) was calculated from an energy balance on the fluid between the inlet and the radial location, where T_b,r and T_b,i are the bulk temperatures at a given radius and at the inlet. Considering all the uncertainties in the experimental measurements, the average relative errors in the Nusselt number calculations were estimated as 12.1, 11.5, and 11% for particle volume concentrations of 2, 4, and 6%, respectively. The experiments were aimed at investigating the effect of nanofluids in a steady laminar flow between the disk and a flat plate, with axial entry and radial exit.
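The data reduction for this radial-flow geometry can be sketched as follows. The Reynolds number expression is derived here from continuity with D_h = 2δ, and the energy-balance form of the bulk temperature is an inferred sketch (the exact control volume used by Gherasim et al. [27] may differ); all symbols and numbers are illustrative:

```python
import math

def reynolds_radial(m_dot, r, mu):
    """Gap Reynolds number with D_h = 2*delta. From continuity, the mean
    radial velocity at radius r is V = m_dot/(rho*2*pi*r*delta), so
    Re = rho*V*D_h/mu = m_dot/(pi*r*mu); delta cancels out."""
    return m_dot / (math.pi * r * mu)

def bulk_temp_at_radius(T_b_in, q_flux, r_in, r, m_dot, c_p):
    """Energy balance on the fluid over the heated annulus between r_in
    and r (a sketch, assuming uniform heat flux):
    T_b,r = T_b,i + q'' * pi * (r^2 - r_in^2) / (m_dot * c_p)."""
    return T_b_in + q_flux * math.pi * (r**2 - r_in**2) / (m_dot * c_p)

def local_h_r(q_flux, T_w_r, T_b_r):
    """Local heat transfer coefficient h_r = q'' / (T_w,r - T_b,r)."""
    return q_flux / (T_w_r - T_b_r)
```

A notable feature of this geometry is that the local Reynolds number decreases as 1/r along the flow path, so the flow decelerates continuously from the nozzle outward.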
The heat transfer coefficient was found to increase with the particle concentration and the flow rate, and to decrease with an increasing gap between the disks. A review of the important investigations on forced convection heat transfer in nanofluids, presented above, reveals the following general inferences. Though not extensively, attention has been devoted to exploring the fluid dynamic and thermal performance of nanofluids under various physical situations. Convective heat transfer studies have been carried out in the developing region [25,34] as well as under fully developed conditions [15]. Studies have been reported pertaining to the laminar [23,24,27-29], transition [32,35], and turbulent [28,33] regimes of flow. Single-phase and two-phase flows have been analyzed with axial and radial flow directions [27]. Constant heat flux [25,28] and constant temperature [24,29] boundary conditions have been investigated. Studies have also been reported on flow and heat transfer in compact passages such as microchannels [26,30]. A comparison of the convective heat transfer coefficients for different nanofluids at various flow and heat transfer conditions reported in the literature [23-35] is shown in Table 3.

Table 3. Convective heat transfer coefficient and frictional effects

Almost all of the above investigations have shown that the performance of nanofluids in forced convection heat transfer is better than that of the base fluid. However, there have been studies which reported deterioration in convective heat transfer in ethylene glycol based titanate nanofluids [31]. It is generally noticed that the percentage enhancement in heat transfer is much more than the corresponding enhancement in thermal conductivity. This fact is often attributed to the disruption of the thermal boundary layer by particle movement [25].
The enhancement of the heat transfer capabilities of fluids allows higher heat transfer rates to be accomplished without incorporating any modifications to existing heat exchangers. It also effectively leads to a reduction in the pumping power requirements in practical applications, as a lower flow rate will produce the required amount of heat transfer. These factors, in general, make the use of nanofluids in forced circulation loops attractive, leading to better performance and the resulting advantage in energy efficiency.

Natural convection loops using nanofluids

Many of the investigations on natural convection phenomena in nanofluids deal with stagnant columns of the liquid, and in these studies a possibility of reduction of the heat transfer coefficient has been observed [36]. Some investigators have discussed the reasons for this behavior and have suggested that it may be due to the reduction in the temperature gradients within the fluid resulting from the enhancement of the fluid thermal conductivity. However, natural circulation loops present a different scenario compared to convection in liquid columns, as the circulation is developed through the thermosyphon effect. It is of interest to look into some of the investigations on natural circulation loops with nanofluids and understand the heat transfer performance under the influence of the nanoparticles. A few important articles on this topic are reviewed below. Some investigations on natural convection in stagnant fluid columns and pool boiling heat transfer are also discussed.

Noie et al. [37] reported an enhancement in heat transfer when nanofluids were used in a two-phase closed thermosyphon (TPCT). The TPCT was made of a copper tube (20 mm internal diameter, 1 mm thick, 1000 mm long), with an evaporator section (300 mm long) and a condenser section (400 mm long). Heating was provided by a nickel-chrome wire electric heater wound around the evaporator section, with a nominal power of 1000 W.
The experimental setup was as shown in Figure 15. The input power is given by

Q_in = V I - Q_loss

where Q_loss is the total heat loss from the evaporator section by radiation and free convection:

Q_loss = Q_rad + Q_conv

The radiation and free convection heat transfer rates were calculated as

Q_rad = ε σ A (T_s^4 - T_amb^4) and Q_conv = h A (T_s - T_amb)

In the above, the free convection heat transfer coefficient h was determined using an empirical correlation. The total heat loss was estimated to be about 2.49% of the input power to the evaporator section. As shown in Figure 15, LM35 temperature sensors were mounted on the TPCT along the evaporator, adiabatic, and condenser sections. Precise thermometers were used in the condenser section to read the inlet and outlet temperatures of the coolant water. All the measured data were monitored using a data acquisition system. The quantity of heat transferred to the coolant water was calculated as

Q_out = ṁ c_p (T_out - T_in)

The efficiency of the TPCT was expressed as the ratio of the output heat by condensation to the input heat by evaporation:

η = Q_out / Q_in

Considering the measurement errors of parameters such as the current, the voltage, the inlet and outlet temperatures of the cooling water, and the mass flow rate, and neglecting the effect of Q_loss, the maximum uncertainty of the efficiency was calculated as 5.41%. Figure 16 shows that the efficiency of the TPCT increases with nanoparticle concentration at all input powers. For an input power of 97.1 W, the 1% nanofluid gives an efficiency of 85.6%, compared to 75.1% for pure water.

Figure 16. Variation of efficiency of TPCT with nanoparticle concentration and input power as given by Noie et al. [37].

Nayak et al. [38] investigated the single-phase natural circulation behavior of nanofluids in a rectangular loop. The test facility was made of glass tubes with 26.9 mm inner diameter, and had a heating section at the bottom and a cooling section at the top, as shown in Figure 17. The volumetric expansion of the fluid was accommodated by an expansion tank, which also ensured that the loop remained full of water.
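The TPCT energy balance and efficiency definition used by Noie et al. [37], described earlier, can be sketched in Python. The emissivity, free-convection coefficient, and sample numbers are illustrative assumptions:

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def evaporator_heat_loss(eps, A, T_s, T_amb, h_fc):
    """Radiation plus free-convection loss from the evaporator surface:
    Q_loss = eps*sigma*A*(T_s^4 - T_amb^4) + h_fc*A*(T_s - T_amb).
    Temperatures must be in kelvin for the radiation term."""
    q_rad = eps * SIGMA * A * (T_s**4 - T_amb**4)
    q_conv = h_fc * A * (T_s - T_amb)
    return q_rad + q_conv

def tpct_efficiency(V, I, Q_loss, m_dot, c_p, T_out, T_in):
    """eta = Q_condenser / Q_evaporator, with Q_evaporator = V*I - Q_loss
    and Q_condenser = m_dot*c_p*(T_out - T_in) on the coolant side."""
    q_in = V * I - Q_loss
    q_out = m_dot * c_p * (T_out - T_in)
    return q_out / q_in

# Illustrative case: ~100 W input, small surface loss, water coolant
loss = evaporator_heat_loss(0.2, 0.02, 380.0, 300.0, 5.0)
eta = tpct_efficiency(50.0, 2.0, loss, 0.01, 4180.0, 22.0, 20.0)
```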
Thermocouples were used to measure the instantaneous local temperatures, and a pressure transducer installed in the horizontal leg of the loop measured the flow rate. The loop was insulated to minimize heat losses to the ambient. The measurement accuracy was ±0.4% (±1.1°C) for the thermocouples, ±0.25% for the flow rate measurement, and ±0.5% of the range for the power (0 to 1250 W) and the pressure drop (-100 to +100 Pa). Experimental results showed that the steady-state flow rate of nanofluids in the thermosyphon loop is higher compared to pure water, the flow rate being increased by 20 to 35% depending on the nanoparticle concentration and the heat input.

Khandekar et al. [39] reported investigations on the thermal performance of a closed two-phase thermosyphon system, using pure water and water-based nanofluids of Al2O3, CuO, and laponite clay as working fluids. The setup shown in Figure 18 has a pressure transducer fitted to the thermosyphon to monitor the proper initial vacuum level and the subsequent saturation pressure profiles. Four mica-insulated surface heaters (116 mm × 48 mm) were mounted on the outer surface of a copper heating block (120 mm × 50 mm × 50 mm) with a center bore to accommodate the thermosyphon container, which acts as the evaporator. The finned-tube condenser was made of 40 square copper fins (70 mm × 70 mm × 1 mm), brazed at a pitch of 6.5 mm. The inlet and outlet of the shell side were designed so as to produce cross-flow conditions over the condenser fins. K-type thermocouples were used to measure the temperature at important axial locations on the thermosyphon tube. A PC-based data acquisition system (NI-PCI-4351, National Instruments) was used to acquire the data. The thermal resistance is defined as

R = (T_e - T_c) / Q

where T_e and T_c are average values of the temperatures measured by the thermocouples in the evaporator and condenser sections.
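The thermal resistance defined above is straightforward to compute from the thermocouple readings. This is a minimal sketch; the simple arithmetic averaging of the readings is an assumption:

```python
def thermal_resistance(T_evap_readings, T_cond_readings, Q):
    """Overall thermal resistance R = (T_e - T_c)/Q in K/W, where T_e and
    T_c are the averages of the evaporator and condenser thermocouple
    readings and Q is the heat throughput in W."""
    T_e = sum(T_evap_readings) / len(T_evap_readings)
    T_c = sum(T_cond_readings) / len(T_cond_readings)
    return (T_e - T_c) / Q

# Illustrative readings (deg C) and heat input (W)
R = thermal_resistance([101.0, 99.5, 100.5], [40.0, 41.0], 120.0)
```

A lower R means a better-performing thermosyphon, which is why the deterioration reported for nanofluid-charged devices shows up directly as a rise in this resistance.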
The basic mechanisms of heat transfer in a gravity-assisted thermosyphon are nucleate pool boiling in the evaporator and film-wise condensation in the condenser section [14]. The boiling and condensation heat transfer rates are influenced by the thermophysical properties of the working fluid and the characteristic features of the solid substrate. The major limitations of the gravity-assisted thermosyphon are the dry-out limitation, the counter-current flow limitation (CCFL) or flooding, and the boiling limitation. It was noticed that if the filling ratio (FR) is more than 40%, the dry-out phenomenon is not observed and the maximum heat flux is limited by the CCFL/flooding or the boiling limitation (BL). The thermal performance of the system was found to deteriorate when nanofluids were used as working fluids. The deterioration was maximum with laponite and minimum for aluminum oxide nanofluids. The increased thermal conductivity of the nanofluids showed no effect on the nucleate pool boiling heat transfer coefficient. It was suggested that the physical interaction of nanoparticles with the nucleating cavities influences the boiling characteristics of the nanofluids. The deterioration of the thermal performance of the nanofluid in the closed two-phase thermosyphon was attributed to the improvement in wettability due to entrapment of nanoparticles in the grooves present on the surface. Improved critical heat flux values were also observed, an effect that was also attributed to the increased wettability characteristics of nanofluids. Natural convection heat transfer is a preferred mode as it is comparatively noiseless and has no pumping power requirement. The use of Al2O3/water nanofluids in closed two-phase thermosyphon systems [37] was shown to increase the efficiency by 14.7% compared to water.
In rectangular loops [38] with water-based nanofluids, the flow instabilities were found to decrease and the circulation rates improved, compared to the base fluid. At the same time, there have been observations [39] that in two-phase thermosyphon loops, water-based nanofluids with suspended metal oxides have inferior thermal performance compared to the base fluids, which was explained as due to the increased surface wettability of nanofluids.

Studies in stagnant columns

Experimental investigations have been reported on natural convection in stagnant columns, as well as on pool boiling heat transfer in nanofluids. Measurement of the critical heat flux (CHF) in pool boiling has also been reported. Putra et al. [40] experimentally investigated natural convection inside a horizontal cylinder heated from one side and cooled from the other. The effects of the particle concentration, the material of the particles, and the geometry of the containing cavity on natural convection were investigated. A systematic and definite deterioration of natural convection was observed, and the deterioration increased with particle concentration. Copper oxide nanofluids showed larger deterioration than aluminum oxide nanofluids. With a 4% Al2O3 concentration, an L/D ratio of 1.5 showed a higher value of the Nusselt number compared to an L/D ratio of 0.5. Liu et al. [41] studied the boiling heat transfer characteristics of nanofluids in a flat heat pipe evaporator with a micro-grooved heating surface. The nucleate boiling heat transfer coefficient and the CHF of water-CuO nanofluids at different operating pressures and particle concentrations were measured. For nanoparticle mass concentrations of less than 1%, the heat transfer coefficient and the CHF were found to increase. Above 1% by weight, the CHF was almost constant and the heat transfer coefficient deteriorated. This was explained as due to a decrease in the surface roughness and the solid-liquid contact angle.
The heat transfer coefficient and the CHF of nanofluids were found to increase with a reduction in the pressure. At atmospheric pressure, the heat transfer coefficient and the CHF showed 25 and 50% enhancement, respectively, compared to 150 and 200% enhancement at a pressure of 7.4 kPa. Boiling heat transfer on a high-temperature silver sphere immersed in a TiO2 nanofluid was investigated by Lotfi et al. [42]. A 10 mm diameter silver sphere heated to 700°C was immersed in the nanofluid at 90°C to study the boiling heat transfer and quenching capabilities. The film boiling heat flux in the TiO2 nanofluid was found to be lower than that in water. The accumulation of nanoparticles at the liquid-vapor interface was found to reduce the vapor removal rate from the film, creating a thick vapor film barrier which reduced the minimum film boiling heat flux. Experiments by Narayan et al. [43] showed that surface orientation has an influence on pool boiling heat transfer in nanoparticle suspensions. A smooth heated tube was suspended at different orientations in nanofluids to study the pool boiling performance. The pool boiling heat transfer was found to be maximum for the horizontal inclination. Al2O3-water nanofluids of 47-nm particles at 1% concentration by weight showed enhancement in pool boiling heat transfer performance over that of water. With an increase in concentration and particle size, the performance of the nanofluids decreased. For vertical and 45° inclination orientations, nanofluids showed inferior performance compared to pure water. Coursey and Kim [44] investigated the effect of surface wettability on the boiling performance of nanofluids. In the experiments, heater surfaces altered to varying degrees by oxidation or by depositing metal were characterized through surface energy measurements and boiling heat transfer (CHF) measurements.
It was found that the CHF of poorly wetting systems could be improved by up to 37% by the use of nanofluids, while surfaces with good wetting characteristics showed less improvement.

Conclusions

Suspending nanoparticles in base fluids has proven to show considerable effects on various thermophysical properties, which influence the heat transfer performance. This article focused on some of the recently reported investigations on convective heat transfer and phase change in nanofluids. It also presented some discussion of the experimental techniques employed to measure the effective thermal conductivity, as well as to characterize the thermal performance of systems involving nanofluids. The thermal conductivity of nanofluids has been measured using transient and steady-state methods, of which the transient hot wire method is found to be the most versatile, accurate, and reliable. A review of the important investigations on forced convection heat transfer at various flow and heat transfer conditions has shown that the performance of nanofluids in forced convection is better than that of the base fluid. It has also been noticed that the percentage enhancement in heat transfer is much more than the corresponding enhancement in thermal conductivity. The use of nanofluids in thermosyphon loops has shown an increase in efficiency, a decrease in flow instabilities, and an increase in flow rates. There have also been observations that in two-phase thermosyphon loops, the increased wettability of nanofluids may adversely affect the thermal performance compared to that of the base fluid. Investigation of natural convection inside a horizontal cylinder heated from one side and cooled from the other has shown deterioration in heat transfer when nanofluids are used. At low nanoparticle mass concentrations, the CHF was found to increase in a flat heat pipe.
In pool boiling heat transfer in nanoparticle suspensions, the orientation of the heater surface is found to have an influence on the heat transfer rate, the maximum being for the horizontal orientation. It has been noticed that for poorly wetting surfaces, the CHF can be increased by the use of nanofluids. Of the various applications proposed, the use of nanofluids in closed circulation loops for sensible heat removal is found to be the most attractive, and these can become part of steady-state heat exchange systems. The enhancement of the heat transfer capability of fluids with suspended nanoparticles makes their use in convection loops and thermosyphons an interesting option, leading to better system performance and the resulting advantage in energy efficiency.

Abbreviations

BL: boiling limitation; CCFL: counter current flow limitation; CHF: critical heat flux; MEMS: micro electro-mechanical systems; PDMS: polydimethylsiloxane; PECVD: plasma enhanced chemical vapor deposition; POM: polyoxymethylene; SHW: short hot wire; THS: transient hot strip; TPCT: two-phase closed thermosyphon.

Authors' contributions

ST compiled the studies conducted on thermal conductivity, viscosity, free and forced convection, and boiling phenomena, and compared and analysed the results. CBS contributed in conceptualizing the manuscript and revising it critically to improve the technical content.

References

1. Lee S, Choi US, Li S, Eastman JA: Measuring thermal conductivity of fluids containing oxide nanoparticles. ASME J Heat Transf 1999, 121:280-289. 2. Xie H, Wang J, Xi T, Liu Y, Ai F: Thermal conductivity enhancement of suspensions containing nanosized alumina particles. J Appl Phys 2002, 91:4568-4572. 3. Nguyen CT, Desgranges F, Roy G, Galanis N, Maré T, Boucher S, Angue Mintsa H: Temperature and particle-size dependent viscosity data for water-based nanofluids - hysteresis phenomenon. Int J Heat Fluid Flow 2007, 28:1492-1506. 4.
Duangthongsuk W, Wongwises S: Measurement of temperature-dependent thermal conductivity and viscosity of TiO2-water nanofluids. Exp Therm Fluid Sci 2009, 33:706-714. 5. Sobhan CB, Sankar N, Mathew N, Ratnapal R: Molecular dynamics modeling of thermal conductivity of engineering fluids and its enhancement using nanoparticles. In CANEUS 2006 Micro-Nano Technology for Aerospace Applications. Toulouse, France; 2006. 6. Sankar N, Mathew N, Sobhan CB: Molecular dynamics modeling of thermal conductivity enhancement in metal nanoparticle suspensions. Int Commun Heat Mass Transf 2008, 35:867-872. 7. Pantzali MN, Kanaris AG, Antoniadis KD, Mouza AA, Paras SV: Effect of nanofluids on the performance of a miniature plate heat exchanger with modulated surface. Int J Heat Fluid Flow 2009, 30:691-699. 8. Nguyen CT, Roy G, Gauthier C, Galanis N: Heat transfer enhancement using Al2O3-water nanofluid for an electronic liquid cooling system. Appl Therm Eng 2007, 27:1501-1506. 9. Xie HQ, Gu H, Fujii M, Zhang X: Short hot wire technique for measuring thermal conductivity and thermal diffusivity of various materials. Meas Sci Technol 2006, 17:208-214. 10. Zhang X, Gu H, Fujii M: Effective thermal conductivity and thermal diffusivity of nanofluids containing spherical and cylindrical nanoparticles. Exp Therm Fluid Sci 2007, 31:593-599. 11. Das SK, Putra N, Thiesen P, Roetzel W: Temperature dependence of thermal conductivity enhancement for nanofluids. ASME J Heat Transf 2003, 125:567-574. 12. Oh DW, Jain A, Eaton JK, Goodson KE, Lee JS: Thermal conductivity measurement and sedimentation detection of aluminum oxide nanofluids by using the 3ω method. Int J Heat Fluid Flow 2008, 29:1456-1461. 13. Casquillas GV, Berre ML, Peroz C, Chen Y, Greffet JJ: Microlitre hot strip devices for thermal characterization of nanofluids.
Microelectron Eng 2007, 84:1194-1197. 14. Sobhan CB, Peterson GP: Microscale and Nanoscale Heat Transfer: Fundamentals and Engineering Applications. Boca Raton: Taylor and Francis/CRC Press; 2008. 15. Praveen KN, Devdatta PK, Misra D, Das DK: Viscosity of copper oxide nanoparticles dispersed in ethylene glycol and water mixture. Exp Therm Fluid Sci 2007, 32:397-402. 16. Nguyen CT, Desgranges F, Roy G, Galanis N, Maré T, Boucher S, Angue Mintsa H: Viscosity data for Al2O3-water nanofluid - hysteresis: is heat transfer enhancement using nanofluids reliable? Int J Therm Sci 2008, 47:103-111. 17. Chen H, Ding Y, Lapkin A: Rheological behaviour of nanofluids containing tube/rod-like nanoparticles. Powder Technol 2009, 194:132-141. 18. Phuoc TX, Massoudi M: Experimental observations of the effects of shear rates and particle concentration on the viscosity of Fe2O3-deionized water nanofluids. Int J Therm Sci 2009, 48:1294-1301. 19. Garg P, Alvarado JL, Marsh C, Carlson TA, Kessler DA, Annamalai K: An experimental study on the effect of ultrasonication on viscosity and heat transfer performance of multi-wall carbon nanotube-based aqueous nanofluids. Int J Heat Mass Transf 2009, 52:5090-5101. 20. Murshed SMS, Leong KC, Yang C: Investigations of thermal conductivity and viscosity of nanofluids. Int J Therm Sci 2008, 47:560-568. 21. Chen H, Witharana S, Jin Y, Kim C, Ding Y: Predicting thermal conductivity of liquid suspensions of nanoparticles (nanofluids) based on rheology. Particuology 2009, 7:151-157. 22. Lee JH, Hwang KS, Jang SP, Lee BH, Kim JH, Choi SUS, Choi CJ: Effective viscosities and thermal conductivities of aqueous nanofluids containing low volume concentrations of Al2O3 nanoparticles. Int J Heat Mass Transf 2008, 51:2651-2656. 23.
Hwang KS, Jang SP, Choi US: Flow and convective heat transfer characteristics of water-based Al2O3 nanofluids in fully developed laminar flow regime. Int J Heat Mass Transf 2009, 52:193-199. 24. Heris SZ, Esfahany MN, Etemad SGh: Experimental investigation of convective heat transfer of Al2O3/water nanofluid in circular tube. Int J Heat Fluid Flow 2007, 28:203-210. 25. Anoop KB, Sundararajan T, Das SK: Effect of particle size on the convective heat transfer in nanofluid in the developing region. Int J Heat Mass Transf 2009, 52:2189-2195. 26. Lee J, Mudawar I: Assessment of the effectiveness of nanofluids for single-phase and two-phase heat transfer in micro-channels. Int J Heat Mass Transf 2007, 50:452-463. 27. Gherasim I, Roy G, Nguyen CT, Vo-Ngoc D: Experimental investigation of nanofluids in confined laminar radial flows. Int J Therm Sci 2009, 48:1486-1493. 28. Kim D, Kwon Y, Cho Y, Li C, Cheong S, Hwang Y, Lee J, Hong D, Moon S: Convective heat transfer characteristics of nanofluids under laminar and turbulent flow conditions. Curr Appl Phys 2009, 9:119-123. 29. Heris SZ, Etemad SGh, Esfahany MN: Experimental investigation of oxide nanofluids laminar flow convective heat transfer. Int Commun Heat Mass Transf 2006, 33:529-535. 30. Jung JY, Oh HS, Kwak HY: Forced convective heat transfer of nanofluids in microchannels. Int J Heat Mass Transf 2009, 52:466-472. 31. Ding Y, Chen H, He Y, Lapkin A, Yeganeh M, Siller L, Butenko YV: Forced convective heat transfer of nanofluids. Adv Powder Technol 2007, 18:813-824. 32. Sharma KV, Sundar LS, Sarma PK: Estimation of heat transfer coefficient and friction factor in the transition flow with low volume concentration of Al2O3 nanofluid flowing in a circular tube and with twisted tape insert. Int Commun Heat Mass Transf 2009, 36:503-507.
Publisher Full Text 33. Duangthongsuk W, Wongwises S: Heat transfer enhancement and pressure drop characteristics of TiO[2]-water nanofluid in a double-tube counter flow heat exchanger. Int J Heat Mass Transf 2009, 52:2059-2067. Publisher Full Text 34. Ding Y, Alias H, Wen D, Williams RA: Heat transfer of aqueous suspensions of carbon nanotubes (CNT nanofluids). Int J Heat Mass Transf 2006, 49:240-250. Publisher Full Text 35. Yu W, France DM, Smith DS, Singh D, Timofeeva EV, Routbort JL: Heat transfer to a silicon carbide/water nanofluid. Int J Heat Mass Transf 2009, 52:3606-3612. Publisher Full Text 36. Chang BH, Mills AF, Hernandez E: Natural convection of microparticle suspensions in thin enclosures. Int J Heat Mass Transf 2008, 51:1332-1341. Publisher Full Text 37. Noie SH, Heris SZ, Kahani M, Nowee SM: Heat transfer enhancement using Al[2]O[3]/water nanofluid in a two-phase closed thermosyphon. Int J Heat Fluid Flow 2009, 30:700-705. Publisher Full Text 38. Nayak AK, Gartia MR, Vijayan PK: An experimental investigation of single-phase natural circulation behaviour in a rectangular loop with Al[2]O[3 ]nanofluids. Exp Therm Fluid Sci 2008, 33:184-189. Publisher Full Text 39. Khandekar S, Joshi YM, Mehta B: Thermal performance of closed two-phase thermosyphon using nanofluids. Int J Therm Sci 2008, 47:659-667. Publisher Full Text 40. Putra N, Roetzel W, Das SK: Natural convection of nano-fluids. Heat Mass Transf 2003, 39:775-784. Publisher Full Text 41. Liu Z, Xiong JG, Bao R: Boiling heat transfer characteristics of nanofluids in a flat heat pipe evaporator with micro-grooved heating surface. Int J Multiphase Flow 2007, 33:1284-1295. Publisher Full Text 42. Lotfi H, Shafii MB: Boiling heat transfer on a high temperature silver sphere in nanofluid. Int J Therm Sci 2009, 48:2215-2220. Publisher Full Text 43. Narayan GP, Anoop KB, Sateesh G, Das SK: Effect of surface orientation on pool boiling heat transfer of nanoparticle suspensions. 
Int J Multiphase Flow 2008, 34:145-160. Publisher Full Text 44. Coursey JS, Kim J: Nanofluid boiling: the effect of surface wettability. Int J Heat Fluid Flow 2008, 29:1577-1585. Publisher Full Text Sign up to receive new article alerts from Nanoscale Research Letters
{"url":"http://www.nanoscalereslett.com/content/6/1/377","timestamp":"2014-04-18T10:57:55Z","content_type":null,"content_length":"173484","record_id":"<urn:uuid:d16aa0c5-4bd0-4303-8792-5943c7ba617a>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00629-ip-10-147-4-33.ec2.internal.warc.gz"}
intersecting intervals

None of the intervals can be unbounded, i.e. left or right rays; otherwise there is a counter-example. Name the interval with the least right endpoint $I_1$. Every interval that meets $I_1$ must contain the right endpoint of $I_1$, so if no eight of the intervals share a common point, at most six intervals share a point with $I_1$. So we have at least 43 intervals which have no point in common with $I_1$. Of those, name the interval with the least right endpoint $I_2$. Again there are at most six intervals that may share a point with $I_2$, leaving at least 36 that have no point in common with either of the two named. We continue, getting $I_3$ from the remaining 36, then $I_4$ from 29, $I_5$ from 22, $I_6$ from 15, $I_7$ from 8, and so forth, until we get $I_8$ from the at least one interval that remains. Now you have 8 intervals that are pairwise disjoint.
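The greedy step in the argument above can be sanity-checked in code. The following C++ sketch is not part of the original post; the function name and the representation of closed intervals as (left, right) pairs are my own choices:

```cpp
#include <algorithm>
#include <utility>
#include <vector>

// Greedy selection sketch: repeatedly take the remaining interval with the
// least right endpoint and discard everything that meets it. Intervals are
// closed and represented as (left, right) pairs.
std::vector<std::pair<double, double>>
greedyDisjoint(std::vector<std::pair<double, double>> intervals) {
    std::sort(intervals.begin(), intervals.end(),
              [](const std::pair<double, double>& a,
                 const std::pair<double, double>& b) {
                  return a.second < b.second;  // order by right endpoint
              });
    std::vector<std::pair<double, double>> chosen;
    for (const auto& iv : intervals) {
        // Disjoint from every chosen interval iff it starts strictly after
        // the right endpoint of the last interval chosen.
        if (chosen.empty() || iv.first > chosen.back().second) {
            chosen.push_back(iv);
        }
    }
    return chosen;
}
```

Every interval that meets the one with least right endpoint contains that endpoint, which is exactly the pigeonhole step in the proof.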
{"url":"http://mathhelpforum.com/discrete-math/7369-intersecting-intervals-print.html","timestamp":"2014-04-21T02:39:13Z","content_type":null,"content_length":"4848","record_id":"<urn:uuid:a74b8aea-df9d-44b0-b3ec-9e591c70afb2>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00495-ip-10-147-4-33.ec2.internal.warc.gz"}
Predicting the Labelling of a Graph via Minimum p-Seminorm Interpolation
Mark Herbster and Guy Lever
In: The 22nd Annual Conference on Learning Theory (COLT 2009), 18-21 Jun 2009, Montreal, Canada.
We study the problem of predicting the labelling of a graph. The graph is given and a trial sequence of (vertex, label) pairs is then incrementally revealed to the learner. On each trial a vertex is queried and the learner predicts a boolean label. The true label is then returned. The learner's goal is to minimise mistaken predictions. We propose minimum $p$-seminorm interpolation to solve this problem. To this end we give a $p$-seminorm on the space of graph labellings. Thus on every trial we predict using the labelling which minimises the $p$-seminorm and is also consistent with the revealed (vertex, label) pairs. When $p=2$ this is the harmonic energy minimisation procedure of Zhu et al., also called (Laplacian) interpolated regularisation by Belkin et al. In the limit as $p \rightarrow 1$ this is equivalent to predicting with a label-consistent mincut. We give mistake bounds relative to a label-consistent mincut and a resistive cover of the graph. We say an edge is cut with respect to a labelling if the connected vertices have disagreeing labels. We find that minimising the $p$-seminorm with $p = 1 + \epsilon$, where $\epsilon \rightarrow 0$ as the graph diameter $D \rightarrow \infty$, gives a bound of $O(\mathrm{cut}^2 \log D)$ versus a bound of $O(\mathrm{cut} \cdot D)$ when $p=2$, where $\mathrm{cut}$ is the number of cut edges.
{"url":"http://eprints.pascal-network.org/archive/00006758/","timestamp":"2014-04-18T10:36:35Z","content_type":null,"content_length":"8590","record_id":"<urn:uuid:c930e282-162f-4865-b79f-f326f525aa16>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00318-ip-10-147-4-33.ec2.internal.warc.gz"}
How to access the size of another class
March 5th, 2011, 10:56 AM #1
Junior Member, Join Date: Mar 2011

Hello, I'm having trouble with multiple parts in my polynomial program. First of all, I can't seem to find a way to access the mySize variable from the Polynomial class; also, I'm not quite sure how to combine the like terms; and finally, could you give me some tips for the multiplication function in the Polynomial class? Any help would be appreciated.

#include <cassert>
using namespace std;
#include "Polynomial.h"

//----- Member and friend functions for Monomial -----

Monomial operator+(Monomial& a, Monomial& b)
{
    Monomial res;
    for (int i = 0; i < mySize; i++) {  // Attempting to access mySize from here and combine like terms
        if (a.exp == b.exp) {
            res.coef = a.coef + b.coef;
            res.exp = a.exp;
        }
        else if (a.exp != b.exp) {
            res.coef = a.coef;
            res.exp = a.exp;
        }
        else if (b.exp != a.exp) {
            res.coef = b.coef;
            res.exp = b.exp;
        }
    }
    return res;
}

Monomial operator-(Monomial& a, Monomial& b)
{
    Monomial res;
    for (int i = 0; i < mySize; i++) {  // Attempting to access mySize from here and combine like terms
        if (a.exp == b.exp) {
            res.coef = a.coef - b.coef;
            res.exp = a.exp;
        }
        else if (a.exp != b.exp) {
            res.coef = a.coef;
            res.exp = a.exp;
        }
        else if (b.exp != a.exp) {
            res.coef = b.coef;
            res.exp = b.exp;
        }
    }
    return res;
}

Monomial operator*(Monomial& a, Monomial& b)
{
    Monomial res;
    res.coef = a.coef * b.coef;
    res.exp = a.exp + b.exp;
    return res;
}

ostream& operator<<(ostream& out, const Monomial& mono)
{
    out << mono.coef;
    if (mono.exp > 0)
        out << "x^" << mono.exp;
    return out;
}

//----- Member and friend functions for Polynomial -----
Polynomial::Polynomial()
    : mySize(0)
{ }

bool Polynomial::empty() const
{
    return mySize == 0;
}

void Polynomial::display(ostream& out) const
{
    for (int i = mySize - 1; i >= 0; i--) {
        if (i != mySize - 1)
            out << " + ";
        out << myArray[i];
    }
}

ostream& operator<<(ostream& out, const Polynomial& aList)
{
    aList.display(out);
    return out;
}

void Polynomial::insert(ElementType item, int pos)
{
    if (mySize == CAPACITY)
        cerr << "*** No space for list element -- terminating "
                "execution ***\n";
    if (pos < 0 || pos > mySize)
        cerr << "*** Illegal location to insert -- " << pos
             << ". List unchanged. ***\n";

    // First shift array elements right to make room for item
    for (int i = mySize; i > pos; i--)
        myArray[i] = myArray[i - 1];

    // Now insert item at position pos and increase list size
    myArray[pos] = item;
    mySize++;
}

void Polynomial::erase(int pos)
{
    if (mySize == 0)
        cerr << "*** List is empty ***\n";
    if (pos < 0 || pos >= mySize)
        cerr << "Illegal location to delete -- " << pos
             << ". List unchanged. ***\n";

    // Shift array elements left to close the gap
    for (int i = pos; i < mySize; i++)
        myArray[i] = myArray[i + 1];

    // Decrease list size
    mySize--;
}

Polynomial operator+(Polynomial& p1, Polynomial& p2)
{
    Polynomial res;
    int i = 0;
    if (p1.mySize > p2.mySize) {
        res.mySize = p1.mySize;
    }
    else {
        res.mySize = p2.mySize;
    }
    for (i = 0; i < res.mySize; i++) {
        res.myArray[i] = p1.myArray[i] + p2.myArray[i];
    }
    return res;
}

Polynomial operator*(Polynomial& p1, Polynomial& p2)
{
    // Multiplication does not work properly
    Polynomial res;
    int i = 0;
    if (p1.mySize > p2.mySize) {
        res.mySize = p1.mySize;
    }
    else {
        res.mySize = p2.mySize;
    }
    /*for (i = 0; i < res.mySize; i++) {
        res.myArray[i] = p1.myArray[i] * p2.myArray[i];
    }*/
    for (i = 0; i < res.mySize; i++) {
        for (int j = 0; j < res.mySize; j++) {
            res.myArray[i + j] = (p1.myArray[i] * p2.myArray[j]);
        }
    }
    return res;
}

// Polynomial.h

#include <iostream>

#ifndef PNOM
#define PNOM

const int CAPACITY = 1024;

class Monomial
{
    int coef;
    int exp;
    // int size;

 public:
    Monomial();
    /*--- Construct a Monomial object.
          Precondition:  None
          Postcondition: An empty Monomial object has been constructed. ---*/
    Monomial(int c, int p) { coef = c; exp = p; }
    /*--- Construct a Monomial object with specified coefficient and exponent.
          Precondition:  None
          Postcondition: A Monomial object has been constructed with the
              specified coefficient and exponent. ---*/

    friend Monomial operator+(Monomial&, Monomial&);
    friend Monomial operator-(Monomial&, Monomial&);
    friend Monomial operator*(Monomial&, Monomial&);
    friend class Polynomial;

    int getExponent()    { return exp; }
    int getCoefficient() { return coef; }

    friend ostream& operator<<(ostream& out, const Monomial& mono);
    /*--- Overloading the OUTPUT operator for Monomials.
          Precondition:  None.
          Postcondition: The coefficient and exponent (if != 0) of the
              monomial are displayed in the default output device. ---*/
};

typedef Monomial ElementType;

class Polynomial
{
    friend class Monomial;

 public:
    Polynomial();
    /*--- Construct a List object.
          Precondition:  None
          Postcondition: An empty List object has been constructed;
              mySize is 0. ---*/

    /***** empty operation *****/
    bool empty() const;
    /*--- Check if a list is empty.
          Precondition:  None
          Postcondition: true is returned if the list is empty,
              false if not. ---*/

    /***** insert and erase *****/
    void insert(ElementType item, int pos);
    /*--- Insert a value into the list at a given position.
          Precondition:  item is the value to be inserted; there is room
              in the array (mySize < CAPACITY); and the position satisfies
              0 <= pos <= mySize.
          Postcondition: item has been inserted into the list at the
              position determined by pos (provided there is room and pos
              is a legal position). ---*/

    void erase(int pos);
    /*--- Remove a value from the list at a given position.
          Precondition:  The list is not empty and the position satisfies
              0 <= pos < mySize.
          Postcondition: element at the position determined by pos has
              been removed (provided pos is a legal position). ---*/
    friend Polynomial operator+(Polynomial&, Polynomial&);
    friend Polynomial operator*(Polynomial&, Polynomial&);

    /***** output *****/
    void display(ostream& out) const;

 private:
    int mySize;                     // current size of list stored in myArray
    ElementType myArray[CAPACITY];  // array to store the Monomials
}; //--- end of List class

//------ Prototype of output operator
ostream& operator<<(ostream& out, const Polynomial& p);

#endif

Re: How to access the size of another class

> First of all, I can't seem to find a way to access the mySize variable from the Polynomial class

I'm not sure what your problem is; need more details.

> combine the like terms

Any two Monomials within a Polynomial should have different exponents. If you end up with more than one with the same exponent during a computation, just add their coefficients.

> if you could give me some tips for the multiplication function in Polynomial class

Write down the steps for doing polynomial multiplication on paper. The programming logic will be very similar.

Re: How to access the size of another class

mySize is only in Polynomial. So how does it make sense to access mySize from a Monomial?

Re: How to access the size of another class

Well, for now I suppose the mySize can be ignored; however, for multiplication I implemented this piece of code but the result is not correct:

for (i = 0; i < p1.mySize; i++) {
    for (int j = 0; j < p2.mySize; j++) {
        res.myArray[i + j] = (p1.myArray[i] * p2.myArray[j]);
    }
}

I tried using += but I get an error stating that 'ElementType' does not define this operator. I appreciate your help.

Re: How to access the size of another class

Well, you could define operator+= for Monomial. In fact, that's usually a more useful thing to define than operator+. Now that I look more closely at your code, I'm wondering why you're trying to do Polynomial operations in a Monomial operator.
This:

Monomial operator+(Monomial& a, Monomial& b)

should have no need to know anything about other Monomials, only those two that are passed. The point of confusion may be: what to do if the two monomials have different exponents? Well, you can't handle that situation, so just throw an exception or an assertion failure or something like that. This will help make sure you never pass invalid arguments to the function.

Re: How to access the size of another class

All right, well, I was able to make the multiplication function work by adding a cout inside the second for loop and taking out the one in the main function. Now my problem is adding the like terms and sorting the polynomial. I understand that all I'd have to do is compare the exponents and add the coefficients, but I am unsure of where to put it and how to access the separate terms after each

Re: How to access the size of another class

If your multiplication function is truly "working", then it shouldn't matter where you do the output....

Re: How to access the size of another class

Is this for a school assignment where you need to use the 'friend' option? If not, stop using 'friend' altogether. It's bad by design.

Re: How to access the size of another class

> If not, stop using 'friend' altogether. It's bad by design.

Sorry, that's nonsense. All non-member friend operator overloads are bad by design?
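To make the accumulation point concrete, here is a small standalone sketch, not the thread's classes: polyMultiply and the coefficient-vector representation are illustrative only. Each partial product a[i] * b[j] belongs to degree i + j and must be added into the result; the poster's operator* assigns instead of accumulating:

```cpp
#include <cstddef>
#include <vector>

// Coefficient-vector sketch (illustrative, not the thread's code):
// c[k] holds the coefficient of x^k. Assumes both inputs are non-empty.
std::vector<int> polyMultiply(const std::vector<int>& a,
                              const std::vector<int>& b) {
    std::vector<int> c(a.size() + b.size() - 1, 0);
    for (std::size_t i = 0; i < a.size(); ++i)
        for (std::size_t j = 0; j < b.size(); ++j)
            c[i + j] += a[i] * b[j];  // accumulate, do not overwrite
    return c;
}
```

For example, (1 + 2x)(3 + x) = 3 + 7x + 2x^2, i.e. coefficients {1, 2} times {3, 1} yield {3, 7, 2}. The += here is also where a Monomial operator+= that asserts matching exponents, as suggested above, would naturally be used.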
{"url":"http://forums.codeguru.com/showthread.php?509601-How-to-access-the-size-of-another-class&p=2001638","timestamp":"2014-04-16T19:45:16Z","content_type":null,"content_length":"123748","record_id":"<urn:uuid:ea8f4554-81c2-46a9-8d6c-8e7c158cc0b8>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00116-ip-10-147-4-33.ec2.internal.warc.gz"}
Parallel Matrix Multiplication with the Task Parallel Library (TPL)

I brushed off some old benchmarking code used in my clustering application and decided to see what I can do using today's multi-core hardware. When writing computationally intensive algorithms, we have a number of considerations to evaluate. The best (IMHO) algorithms to parallelize are data parallel algorithms without loop-carried dependencies. You may think nothing is special about matrix multiplication, but it actually points out a couple of performance implications when writing CLR applications. I originally wrote seven different implementations of matrix multiplication in C# – that's right, seven.

• Dumb
• Standard
• Single
• Unsafe Single
• Jagged
• Jagged from C++
• Stack Allocated

Dumb: double[N,N], real type: float64[0...,0...]

The easiest way to do matrix multiplication is with a .NET multidimensional array with i,j,k ordering in the loops. The problems are twofold. First, the i,j,k ordering accesses memory in a hectic fashion, causing data in varied locations to be pulled in. Second, it is using a multidimensional array. Yes, the .NET multidimensional array is convenient, but it is very slow. Let's look at the C# and the IL:

C[i, j] += A[i, k] * B[k, j];

IL of C#:

ldloc.s i
ldloc.s j
call instance float64& float64[0...,0...]::Address(int32, int32)
dup
ldobj float64
ldloc.1
ldloc.s i
ldloc.s k
call instance float64 float64[0...,0...]::Get(int32, int32)
ldloc.2
ldloc.s k
ldloc.s j
call instance float64 float64[0...,0...]::Get(int32, int32)
mul
add
stobj float64

If you notice the ::Address and ::Get parts, these are method calls! Yes, when you use a multidimensional array, you are using a class instance. So every access, assignment, and read incurs the cost of a method call. When you are dealing with an N^3 algorithm, that is N^3 method calls, making this implementation much slower than other methods.
Standard: double[N,N], real type float64[0...,0...]

This implementation rearranges the loop ordering to i,k,j in order to optimize memory access to the arrays. No other changes are made from the Dumb implementation. The Standard implementation is what is used as the base of all other multidimensional implementations.

Single: double[N * N], real type float64[]

Instead of creating a multidimensional array, we create a single block of memory. The float64[] type is a block of memory instead of a class. The downside here is that we have to calculate all offsets ourselves.

Unsafe Single, real type float64[]

This method is the same as the single-dimensional array, except that the pointers to the arrays are fixed and pointers are used in unsafe C#.

Jagged, double[N][N], real type float64[][]

This is the same implementation as Standard, except that we use arrays of arrays instead of a multidimensional array. It takes an extra step to initialize, but it is a series of blocks of raw memory, eliminating the method-call overhead. It is typically 30% faster than the multidimensional array.

Jagged from C++, double[N][N], real type float64[][]

This is a bit more difficult. When writing these algorithms, we let the JIT compiler optimize for us. The C++ compiler is unfortunately a lot better, but it isn't real-time. I ported the code from the Jagged implementation to C++/CLI and enabled heavy optimization. Once compiled, I disassembled the DLL and converted the IL to C#. The result is this implementation, which is harder to read, but it is really fast.

Stack Allocated, stackalloc double[N * N], real type float64*

This implementation utilizes the rarely used stackalloc keyword. Using this implementation is very problematic as you may get a StackOverflowException depending on your current stack usage.

Enough, what does it mean?!

Because of the way matrix multiplication works, we can multiply a row without impacting the result of other row multiplications. That's right, we have a data parallel algorithm!
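The i,k,j reordering behind the Standard implementation is language-neutral. Here is a minimal C++ sketch of the same idea (the post's code is C#, so this block is purely illustrative and not from the post); with i,k,j the innermost loop walks both B and C row-wise, which is much friendlier to the cache:

```cpp
#include <cstddef>
#include <vector>

// Row-major n x n matrices stored flat; C = A * B with i,k,j loop ordering.
std::vector<double> multiplyIKJ(const std::vector<double>& A,
                                const std::vector<double>& B, int n) {
    std::vector<double> C(static_cast<std::size_t>(n) * n, 0.0);
    for (int i = 0; i < n; ++i)
        for (int k = 0; k < n; ++k) {
            const double aik = A[i * n + k];  // hoisted: fixed during j loop
            for (int j = 0; j < n; ++j)
                C[i * n + j] += aik * B[k * n + j];
        }
    return C;
}
```

Swapping the k and j loops back to i,j,k gives the same result but strides through B column-wise, which is the "hectic" access pattern described under Dumb.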
We need to identify the chunks of work to parallelize. We can go about this in a couple of different ways: first, manual threading; second, using the ThreadPool; third, using the new Task Parallel Library. In this post I am going to use the third option as it makes things very nice for us. We don't want to deal with scheduling or synchronization ourselves.

We can do a couple of different schemes. We can use Parallel.For and add each row's multiplication to the list of work to be done. This is very easy:

for ( int i = 0; i < N; i++ )
{
    for ( int k = 0; k < N; k++ )
    {
        for ( int j = 0; j < N; j++ )
        {
            double[] Ci = C[i];
            Ci[j] = ( A[i][k] * B[k][j] ) + Ci[j];
        }
    }
}

Parallel.For( 0, N, i =>
{
    for ( int k = 0; k < N; k++ )
    {
        for ( int j = 0; j < N; j++ )
        {
            double[] Ci = C[i];
            Ci[j] = ( A[i][k] * B[k][j] ) + Ci[j];
        }
    }
} );

The bad news is that this can be too granular. Another way to look at it is over-parallelization. The overhead of scheduling and threading hurts our performance. This method does give us the smallest synchronization time. In the worst case, all rows are calculated except for one, which is started at the completion of the penultimate calculation. So the worst case is we wait for a single row.

If we are willing to put in more work, we can optimize the row striping to group the calculations into groups of rows in order to minimize synchronization and threading overhead. Looking at the number of processors available, we can make the number of chunks relative to the number of processors.

IEnumerable<Tuple<int, double[][]>> PartitionData( int N, double[][] A )
{
    int pieces = ( ( N % ChunkFactor ) == 0 )
                     ? N / ChunkFactor
                     : ( (int) ( N / ( (float) ChunkFactor ) ) + 1 );

    int remaining = N;
    int currentRow = 0;

    while ( remaining > 0 )
    {
        if ( remaining < ChunkFactor )
        {
            ChunkFactor = remaining;
        }

        remaining = remaining - ChunkFactor;
        var ai = new double[ChunkFactor][];
        for ( int i = 0; i < ChunkFactor; i++ )
        {
            ai[i] = A[currentRow + i];
        }

        int oldRow = currentRow;
        currentRow += ChunkFactor;
        yield return new Tuple<int, double[][]>( oldRow, ai );
    }
}

Here we partner the row to start on in the result matrix C with the pointer to the head of the row of A to start on. We could use the jagged implementation, but for pure speed, I am sticking with the fastest implementation I have:

void Multiply( Tuple<int, double[][]> A, double[][] B, double[][] C )
{
    int size = A.Item2.GetLength( 0 );
    int cols = B[0].Length;
    double[][] ai = A.Item2;

    int i = 0;
    int offset = A.Item1;
    do
    {
        int k = 0;
        do
        {
            int j = 0;
            do
            {
                double[] ci = C[offset];
                ci[j] = ( ai[i][k] * B[k][j] ) + ci[j];
                j++;
            } while ( j < cols );
            k++;
        } while ( k < cols );
        i++;
        offset++;
    } while ( i < size );
}

It is a little more complicated to write this heavily optimized version, but it pays off. Actually doing the multiplication is very simple now using the TPL:

Parallel.ForEach( PartitionData( N, A ), item => Multiply( item, B, C ) );

Enough! Show me the numbers!

OK. You have patiently waited and I will give you the results. The parallel numbers are for my quad-core machine with 4GB RAM. The Y-axis is millions of algorithmic steps per second. The X-axis is the matrix size.

First, we can look at the performance of the various single-threaded implementations. As you can see, there is a 5x difference between the easiest way and my most optimized implementation that does not use unsafe code. So if you need to multiply matrices – use jagged arrays!
The bulge you see on the left when the matrix size is small is the result of the Cache Memory Swap Die (CMSD) envelope. When the data size is small, the system leverages cache, which is the fastest. When the data becomes too large, it has to store the data in main memory, which is still pretty fast. What you don't see is the swap and die levels. When the data set becomes very large, the system must use the swap file and store data on disk, which is levels of magnitude slower than memory. When the data set becomes too large, the hard drive thrashes and you are constantly paging data in. At this point, your performance dies.

There is a problem with the chunking. Depending on how many pieces, the load of the processors, and the worst-case job completion, the performance can vary for the chunking implementation. Below you can see the relative performance of the optimized chunking implementation against the simple parallelization and my heuristic. I ran the chunking algorithm with many different granule sizes and recorded the best and worst values to give a performance range. The overlap between actual and max is the CPU being used by other processes out of my control – one of the issues with granule calculation is that you can't predict the future. One possible thing we could do is to utilize coroutines to adjust the granule size as time goes on in order to adjust to machine load.

At best, we are calculating over 4x faster than the best single-threaded implementation. Yes, better than 4x with 4 cores. We are taking advantage of caching when we row-stripe the data. This performance becomes even more significant when running this code across a cluster – you can get over 5x with 4 machines. Unfortunately, I never had the chance to work with four quad-core machines in a cluster to test out a massively parallel implementation.

As you can see, small changes in algorithm implementation can have quite adverse effects on performance.
Also, parallelizing data parallel algorithms can be incredibly simple and provide a great performance boost with very little effort. I would love to do some work showing the parallelization of wavefront, four square, and other data-dependency schemes and their effect on performance and the effort needed to parallelize them, but we will see how much time I have. Another fun problem might be a massively parallel N-queens or Knight's tour solver. You can find all of the source code for these benchmarks on my blog's GitHub page.
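As an illustration of how little the row-striping scheme actually requires, here is a C++ analogue using plain std::thread. Nothing in this block is from the post: the names and the band-partitioning heuristic are my own, and std::thread stands in for the TPL's work scheduling.

```cpp
#include <algorithm>
#include <cstddef>
#include <functional>
#include <thread>
#include <vector>

// Each worker multiplies a contiguous band of rows of row-major n x n
// matrices, so the only synchronization needed is joining the threads.
void multiplyRows(const std::vector<double>& A, const std::vector<double>& B,
                  std::vector<double>& C, int n, int rowBegin, int rowEnd) {
    for (int i = rowBegin; i < rowEnd; ++i)
        for (int k = 0; k < n; ++k) {
            const double aik = A[i * n + k];
            for (int j = 0; j < n; ++j)
                C[i * n + j] += aik * B[k * n + j];
        }
}

std::vector<double> parallelMultiply(const std::vector<double>& A,
                                     const std::vector<double>& B,
                                     int n, int workers) {
    std::vector<double> C(static_cast<std::size_t>(n) * n, 0.0);
    std::vector<std::thread> pool;
    const int band = (n + workers - 1) / workers;  // rows per worker
    for (int w = 0; w < workers; ++w) {
        const int begin = w * band;
        const int end = std::min(n, begin + band);
        if (begin >= end) break;  // more workers than rows
        pool.emplace_back(multiplyRows, std::cref(A), std::cref(B),
                          std::ref(C), n, begin, end);
    }
    for (std::thread& t : pool) t.join();
    return C;
}
```

Because no two workers ever write the same row of C, no locks are needed, which is the same property the post exploits with Parallel.ForEach over row chunks.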
{"url":"http://innovatian.com/2010/03/parallel-matrix-multiplication-with-the-task-parallel-library-tpl/","timestamp":"2014-04-19T17:15:43Z","content_type":null,"content_length":"44644","record_id":"<urn:uuid:5d00352d-3869-4f82-947f-98a514ba1c37>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00123-ip-10-147-4-33.ec2.internal.warc.gz"}
A Survey of Inductive Inference Results
Results 1 - 10 of 28

1. (1999). Cited by 116 (22 self).
Motivation: Predicting the secondary structure of a protein (alpha-helix, beta-sheet, coil) is an important step towards elucidating its three dimensional structure, as well as its function. Presently, the best predictors are based on machine learning approaches, in particular neural network architectures with a fixed, and relatively short, input window of amino acids, centered at the prediction site. Although a fixed small window avoids overfitting problems, it does not permit capturing variable long-ranged information. Results: We introduce a family of novel architectures which can learn to make predictions based on variable ranges of dependencies. These architectures extend recurrent neural networks, introducing non-causal bidirectional dynamics to capture both upstream and downstream information. The prediction algorithm is completed by the use of mixtures of estimators that leverage evolutionary information, expressed in terms of multiple alignments, both at the input and output levels. While our system currently achieves an overall performance close to 76% correct prediction, at least comparable to the best existing systems, the main emphasis here is on the development of new algorithmic ideas. Availability: The executable program for predicting protein secondary structure is available from the authors free of charge. Contact: pfbaldi@ics.uci.edu, gpollast@ics.uci.edu, brunak@cbs.dtu.dk, paolo@dsi.unifi.it

2. In: Proceedings 5th Annual ACM Workshop on Computational Learning Theory, July 27-29, Pittsburgh, 1992. Cited by 32 (26 self).
The present paper deals with strong-monotonic, monotonic and weak-monotonic language learning from positive data as well as from positive and negative examples. The three notions of monotonicity reflect different formalizations of the requirement that the learner has to produce always better and better generalizations when fed more and more data on the concept to be learnt. We characterize strong-monotonic, monotonic, weak-monotonic and finite language learning from positive data in terms of recursively generable finite sets, thereby solving a problem of Angluin (1980). Moreover, we study monotonic inference with iteratively working learning devices which are of special interest in applications. In particular, it is proved that strong-monotonic inference can be performed with iteratively learning devices without limiting the inference capabilities, while monotonic and weak-monotonic inference cannot.

3. (1993). Cited by 21 (13 self).
In the present paper strong-monotonic, monotonic and weak-monotonic reasoning is studied in the context of algorithmic language learning theory from positive as well as from positive and negative data. Strong-monotonicity describes the requirement to only produce better and better generalizations when more and more data are fed to the inference device. Monotonic learning reflects the eventual interplay between generalization and restriction during the process of inferring a language. However, it is demanded that for any two hypotheses the one output later has to be at least as good as the previously produced one with respect to the language to be learnt. Weak-monotonicity is the analogue of cumulativity in learning theory. We relate all these notions one to the other as well as to previously studied modes of identification, thereby in particular obtaining a strong hierarchy.

4. Information and Computation, 1995. Cited by 20 (7 self).
The present paper deals with monotonic and dual monotonic language learning from positive as well as from positive and negative examples. The three notions of monotonicity reflect different formalizations of the requirement that the learner has to produce better and better generalizations when fed more and more data on the concept to be learned.

5. (1994). Cited by 19 (13 self).
In designing learning algorithms it seems quite reasonable to construct them in a way such that all data the algorithm already has obtained are correctly and completely reflected in the hypothesis the algorithm outputs on these data. However, this approach may totally fail, i.e., it may lead to the unsolvability of the learning problem, or it may exclude any efficient solution of it. In particular, we present a natural learning problem and prove that it can be solved in polynomial time if and only if the algorithm is allowed to ignore data.

6. Journal of the ACM, 1993. Cited by 10 (3 self).
This paper contributes toward the goal of understanding how a computer can be programmed to learn by isolating features of incremental learning algorithms that theoretically enhance their learning potential. In particular, we examine the effects of imposing a limit on the amount of information that a learning algorithm can hold in its memory as it attempts to ... This work was facilitated by an international agreement under NSF Grant 9119540.

7. Cited by 8 (1 self).
This paper initiates a study in which it is demonstrated that certain concepts (represented by functions) can be learned, but only in the event that certain relevant subconcepts (also represented by functions) have been previously learned. In other words, the Soar project presents empirical evidence that learning how to learn is viable for computers, and this paper proves that doing so is the only way possible for computers to make certain inferences.

8. Annals of Mathematics and Artificial Intelligence, 1995. Cited by 5 (2 self).
View of Learning: To implement a program that somehow "learns" it is necessary to fix a set of concepts to be learned and develop a representation for the concepts and examples of the concepts. In order to investigate general properties of machine learning it is necessary to work in as representation-independent a fashion as possible. In this work, we consider machines that learn programs for recursive functions. Several authors have argued that such studies are general enough to include a wide array of learning situations [2,3,22,23,24]. For example, a behavior to be learned can be modeled as a set of stimulus and response pairs. Assuming that any behavior associates only one response to each possible stimulus, behaviors can be viewed as functions from stimuli to responses. Some behaviors, such as anger, are not easily modeled as functions. Our primary interest, however, concerns the learning of fundamental behaviors such as reading (mapping symbols to sounds), recognition (mapping pa...

9. Bull. Inf. Cybern., 1995. Cited by 3 (1 self).
The present paper deals with the learnability of indexed families L of uniformly recursive languages from positive data. We consider the influence of three monotonicity demands and their dual counterparts on the efficiency of the learning process. The efficiency of learning is measured in dependence on the number of mind changes a learning algorithm is allowed to perform. The three notions of (dual) monotonicity reflect different formalizations of the requirement that the learner has to produce better and better (specializations) generalizations when fed more and more data on the target concept.

10. (1995). Cited by 3 (3 self).
The present paper deals with the learnability of indexed families of uniformly recursive languages by single inductive inference machines (abbr. IIM) and teams of IIMs from positive and both positive and negative data. We study the learning power of single IIMs in dependence on the hypothesis space and the number of allowed anomalies the synthesized language may have. Our results are fourfold. First, we show that allowing anomalies does not increase the learning power as long as inference from positive and negative data is considered. Second, we establish an infinite hierarchy in the number of allowed anomalies for learning from positive data. Third, we prove that every learnable indexed family L may even be inferred with respect to the hypothesis space L itself. Fourth, we characterize learning with anomalies from positive data. Finally, we investigate the error correcting power of team learners, and relate the inference capabilities of teams in dependence on their size to one another...
Patent application title: SYSTEMS AND METHODS FOR LARGE-SCALE RANDOMIZED OPTIMIZATION FOR PROBLEMS WITH DECOMPOSABLE LOSS FUNCTIONS

Systems and methods directed toward processing optimization problems using loss functions, wherein a loss function is decomposed into at least one stratum loss function, a loss is decreased for each stratum loss function to a predefined stratum loss threshold individually using gradient descent, and the overall loss is decreased to a predefined threshold for the loss function by appropriately ordering the processing of the strata and spending appropriate processing time in each stratum. Other embodiments and aspects are also described herein.

CLAIMS

1. A system comprising: at least one processor; a system memory operatively coupled to the at least one processor; and at least one optimization module communicatively coupled to the system memory, wherein the at least one optimization module is adapted to: decompose a primary loss function into at least one stratum loss function; decrease a stratum loss to a predefined stratum loss threshold for each of the at least one stratum loss function by processing each of the at least one stratum loss function individually using gradient descent; and decrease a primary loss to a predefined primary loss threshold of the primary loss function by processing each of the at least one stratum loss function according to a stratum sequence, wherein a processing time of each of the at least one stratum loss function is proportional to a weight of the at least one stratum loss function.

2. The system according to claim 1, wherein processing each of the at least one stratum loss function individually using gradient descent comprises using stochastic gradient descent.

3. The system according to claim 1, wherein the stratum sequence is decomposed into consecutive, independent and identically distributed cycles.
4. The system according to claim 1, wherein each of the at least one stratum loss function corresponds to a partition of an underlying dataset.

5. The system according to claim 1, wherein the predefined primary loss threshold comprises a minimum primary loss.

6. The system according to claim 1, wherein the stratum sequence is selected to establish convergence to at least one stationary point of the primary loss function.

7. The system according to claim 1, further comprising: at least one matrix factorization module communicatively coupled to the system memory, wherein the at least one matrix factorization module is adapted to: partition a matrix as a union of at least one matrix stratum; determine a sequence of the at least one matrix stratum; decompose a matrix loss function into at least one matrix stratum loss function, each at least one stratum loss function configured to minimize a loss of at least one matrix stratum; decrease a matrix stratum loss to a predefined matrix stratum loss threshold for each at least one matrix stratum loss function by processing each at least one matrix stratum loss function individually using stochastic gradient descent; and decrease a matrix loss to a predefined matrix loss threshold of the matrix loss function by processing the at least one matrix stratum loss function according to a matrix stratum sequence, wherein a processing time of each of the at least one matrix stratum loss function is proportional to a weight of the at least one stratum.

8. The system according to claim 7, wherein each of the at least one matrix stratum comprises one or more interchangeable blocks.

9. The system according to claim 8, wherein each of the at least one matrix stratum is processed in parallel.

10. The system according to claim 9, wherein the at least one matrix factorization module is further adapted to: communicate with at least one processing node; wherein processing the at least one matrix stratum is distributed across the at least one processing node.
11. A computer program product comprising: a computer readable storage medium having computer readable program code embodied therewith, the computer readable program code comprising: computer readable program code configured to decompose a primary loss function into at least one stratum loss function; computer readable program code configured to decrease a stratum loss to a predefined stratum loss threshold for each at least one stratum loss function by processing each at least one stratum loss function individually using gradient descent; and computer readable program code configured to decrease a primary loss to a predefined primary loss threshold of the primary loss function by processing each of the at least one stratum loss function according to a stratum sequence, wherein a processing time of each of the at least one stratum loss function is proportional to a weight of the at least one stratum loss function.

BACKGROUND

[0001] As Web 2.0 and enterprise-cloud applications have proliferated, data collection processes increasingly require the ability to efficiently and effectively handle web-scale datasets. Such processes include, but are not limited to, chemical and mechanical manufacturing optimizations and economic regret formulations, where the corresponding chemical or mechanical waste or economic loss is minimized. Of particular interest is the analysis of "dyadic data," which concerns discovering and capturing interactions between two entities. For example, certain applications involve topic detection and keyword search, where the corresponding entities are documents and terms. Other examples concern news personalization with user and story entities, and recommendation systems with user and item entities.
In large applications, these problems often involve matrices with millions of rows (e.g., distinct customers) and millions of columns (e.g., distinct items), ultimately resulting in billions of populated cells (e.g., transactions between customers and items). In these data collection applications, the corresponding optimization problem is to compute row and column profiles such that the loss between the "predicted" cells (from the corresponding row and column profiles) and the actual cells is minimized.

BRIEF SUMMARY

[0002] In summary, one aspect provides a system comprising: at least one processor; a system memory operatively coupled to the at least one processor; and at least one optimization module communicatively coupled to the system memory, wherein the at least one optimization module is adapted to: decompose a primary loss function into at least one stratum loss function; decrease a stratum loss to a predefined stratum loss threshold for each at least one stratum loss function by processing each at least one stratum loss function individually using gradient descent; and decrease a primary loss to a predefined primary loss threshold of the primary loss function by processing each of the at least one stratum loss function according to a stratum sequence, wherein a processing time of each of the at least one stratum loss function is proportional to a weight of the at least one stratum loss function.
Another aspect provides a method comprising: decomposing a primary loss function into at least one stratum loss function; decreasing a stratum loss to a predefined stratum loss threshold for each at least one stratum loss function by processing each at least one stratum loss function individually using gradient descent; and decreasing a primary loss to a predefined primary loss threshold of the primary loss function by processing each of the at least one stratum loss function according to a stratum sequence, wherein a processing time of each of the at least one stratum loss function is proportional to a weight of the at least one stratum loss function. A further aspect provides a computer program product comprising: a computer readable storage medium having computer readable program code embodied therewith, the computer readable program code comprising: computer readable program code configured to decompose a primary loss function into at least one stratum loss function; computer readable program code configured to decrease a stratum loss to a predefined stratum loss threshold for each at least one stratum loss function by processing each at least one stratum loss function individually using gradient descent; and computer readable program code configured to decrease a primary loss to a predefined primary loss threshold of the primary loss function by processing each of the at least one stratum loss function according to a stratum sequence, wherein a processing time of each of the at least one stratum loss function is proportional to a weight of the at least one stratum loss function. The foregoing is a summary and thus may contain simplifications, generalizations, and omissions of detail; consequently, those skilled in the art will appreciate that the summary is illustrative only and is not intended to be in any way limiting. 
For a better understanding of the embodiments, together with other and further features and advantages thereof, reference is made to the following description, taken in conjunction with the accompanying drawings. The scope of the invention will be pointed out in the appended claims.

BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWINGS

[0006] FIG. 1 provides an example of stochastic gradient descent (SGD).
FIGS. 2A-2D provide graphs depicting example stratified stochastic gradient descent (SSGD) cycles for two strata.
FIG. 3 provides a graph depicting an example of 100 cycles of SSGD.
FIG. 4 provides an example matrix modeling a movie feedback system.
FIG. 5 provides an example of generalized matrix factorization.
FIG. 6 provides an example of stochastic gradient descent (SGD).
FIG. 7 provides an example SGD matrix factorization process.
FIGS. 8A-8C provide an example SSGD process.
FIG. 9 provides an example distributed stochastic gradient descent (DSGD) matrix factorization process.
FIG. 10 provides a graph of results from DSGD comparison experiments.
FIG. 11 provides another graph of results from DSGD comparison experiments.
FIG. 12 illustrates an example computer system.

DETAILED DESCRIPTION

[0018] It will be readily understood that the components of the embodiments, as generally described and illustrated in the figures herein, may be arranged and designed in a wide variety of different configurations in addition to the described example embodiments. Thus, the following more detailed description of the example embodiments, as represented in the figures, is not intended to limit the scope of the claims, but is merely representative of certain example embodiments. Reference throughout this specification to an "embodiment" or "embodiment(s)" means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment.
Thus, the appearances of "embodiment" or "embodiment(s)" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of example embodiments. One skilled in the relevant art will recognize, however, that aspects can be practiced without one or more of the specific details, or with other methods, components, materials, et cetera. In other instances, well-known structures, materials, or operations are not shown or described in detail to avoid prolixity. Data collection and processing functions are continually being re-designed to handle the increasing size and scope of modern web-scale datasets, such as those associated with Web 2.0 and enterprise-cloud applications. One common data processing challenge concerns optimization problems, where a large number of parameters must be optimized. As such, optimization problems generally involve using and determining certain approximations concerning data from a subject data set. Loss functions provide a process for representing the loss associated with an optimization problem approximation varying from a desired or true value. Accordingly, a loss function may be minimized or decreased, for example, to a predefined threshold to achieve a desired outcome for an optimization problem. Loss functions have many applications, including matrix factorization, chemical and manufacturing processes, and economic regret formulations. For example, a loss function may be configured to minimize the economic loss resulting from a non-conforming product produced according to a certain manufacturing process. According to current technology, loss functions process the subject data set or process in only one comprehensive iteration.
For example, a loss function using stochastic gradient descent (SGD) according to current technology would run SGD on an entire loss function configured for the dataset or process. In addition, existing technology mainly provides for specialized loss functions that are configured for specific applications, such as a particular manufacturing process or economic event. Embodiments provide for a loss function process generalized to operate within any applicable loss-minimization problem. According to embodiments, applicable loss minimization problems may include loss minimization problems in which the loss function has a decomposable form. As a non-limiting example, embodiments provide that a decomposable form may involve expressing a loss function as a weighted sum of loss minimization components. As a non-limiting example, a component may be comprised of a local loss function. According to embodiments, each component may be considered a stratum, and the loss for each component may be processed to provide a local or stratum loss. For example, if a loss function has multiple components, a process according to embodiments will minimize a first component, then a second component, and so forth until all components have been minimized. Embodiments provide that the stratum losses may be summed to provide a global loss representing the minimum loss for the overall loss function. A common optimization technique is continuous gradient descent (CGD), which finds a minimum θ* of a loss function L. CGD involves determining a starting point θ_0, computing a gradient L'(θ_0), and moving toward the minimum in the opposite direction of the gradient. Throughout the disclosure, θ represents arguments to a function or process, unless stated otherwise. The following differential equation is associated with CGD: ∂θ(t)/∂t = -L'(θ(t)), with boundary condition θ(0) = θ_0. Under certain conditions, it can be shown that asymptotically θ(t) → θ*.
Discrete gradient descent (DGD) provides a computer-friendly process for determining a minimum loss by finding a minimum θ* of a loss function L. DGD involves ascertaining a starting point θ_0, computing a gradient L'(θ_n), and moving toward the minimum in the opposite direction of the gradient. The following difference equation is associated with the DGD method: θ_(n+1) = θ_n - ε_n L'(θ_n). In addition, under certain conditions, DGD approximates CGD and converges to a local minimum. Stochastic gradient descent (SGD) determines the parameters that minimize the loss function L by using noisy observations L̂'(θ) of L'(θ), the function's gradient with respect to θ. Starting with some initial value θ_0, SGD refines the parameter value by iterating the stochastic difference equation as follows: θ_(n+1) = θ_n - ε_n L̂'(θ_n), (1) where n denotes the step number and {ε_n} is a sequence of decreasing step sizes. Since -L'(θ_n) is the direction of steepest descent, equation (1) constitutes a noisy version of discrete gradient descent. Among other characteristics, stochastic approximation may be used to show that, under certain regularity conditions, the noise in the gradient estimates "averages out" and SGD converges to the set of stationary points satisfying L'(θ) = 0. These stationary points may represent certain characteristics, including minima, maxima, or saddle points. Convergence to a maximum or saddle point may be considered unlikely because the noise in the gradient estimates reduces the likelihood of getting stuck at such points. Thus {θ_n} typically converges to a local minimum of L. A variety of methods may be used to increase the likelihood of finding a global minimum, including, but not limited to, running SGD multiple times starting from a set of randomly chosen initial solutions. Referring to FIG. 1, therein is depicted an example of SGD. A graph 101 illustrates SGD that finds a minimum θ* of a function L.
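As a minimal sketch of the update in equation (1), the following runs SGD on a simple quadratic loss L(θ) = (θ - 3)^2 whose exact gradient 2(θ - 3) is observed through additive noise. The particular loss, noise level, step count, and the ε_n = 1/n schedule used here are illustrative assumptions, not specifics from the disclosure:

```python
import random

def sgd(grad_est, theta0, n_steps=20000, seed=0):
    """Iterate theta_{n+1} = theta_n - eps_n * grad_est(theta_n),
    equation (1), with the decreasing step sizes eps_n = 1/n."""
    rng = random.Random(seed)
    theta = theta0
    for n in range(1, n_steps + 1):
        eps = 1.0 / n
        theta = theta - eps * grad_est(theta, rng)
    return theta

# Illustrative loss L(theta) = (theta - 3)^2; its exact gradient
# 2*(theta - 3) is observed here only through additive Gaussian noise.
def noisy_grad(theta, rng):
    return 2.0 * (theta - 3.0) + rng.gauss(0.0, 1.0)

theta_star = sgd(noisy_grad, theta0=0.0)
```

Even though every individual step uses a perturbed gradient, the noise averages out under the stated step-size conditions and `theta_star` settles near the true minimizer θ* = 3.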
SGD involves determining a starting point θ_0, determining an approximate gradient L̂'(θ_0), and moving "approximately" toward the minimum in the opposite direction of the gradient. Also shown in FIG. 1 is a stochastic difference equation 102 associated with SGD. In addition, under certain conditions, such as n → ∞, SGD approximates CGD. As demonstrated in the SGD graph 101 of FIG. 1, the gradient representation is "noisy" because the process does not compute the exact gradient but rather approximations of the gradient. However, this process is faster because such approximate gradients can typically be computed much faster than the exact gradients. Although the SGD descent comprises more noise than, for example, CGD or DGD, it arrives at the minimum nonetheless. In addition, SGD converges more efficiently than other GD processes because the advantage of being able to take more steps per unit of computation time outweighs the disadvantage of taking steps in only the approximately best direction. In addition, one may use an additional projection, Π_H, that keeps the iterate in a given constraint set H. For example, processes may be directed toward nonnegative matrix factorizations, which may correspond to setting H = {θ: θ ≥ 0}. Such processes may take the following form: θ_(n+1) = Π_H[θ_n - ε_n L̂'(θ_n)]. (2) In addition to the set of stationary points, the projected process may converge to a set of "chain recurrent" points, which may be influenced by the boundary of the constraint set H. Embodiments break the optimization problem into smaller sub-problems and describe effective, parallel and distributed techniques for such data collection processes. In addition, embodiments provide a process for solving optimization problems and, in particular, for optimization problems minimizing loss through one or more loss functions.
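The projected update in equation (2) can be sketched for the one-dimensional case of H = {θ: θ ≥ 0}, where Π_H is simply clipping at zero. The loss and noise level below are illustrative assumptions; the unconstrained minimizer is placed outside H on purpose, so the iterates settle on the boundary of H (a point where z ≠ 0 in the terminology that follows):

```python
import random

def projected_sgd(grad_est, theta0, n_steps=5000, seed=0):
    """theta_{n+1} = P_H[theta_n - eps_n * grad_est(theta_n)],
    equation (2), with H = {theta : theta >= 0}; the projection
    P_H onto this H is componentwise clipping at zero."""
    rng = random.Random(seed)
    theta = theta0
    for n in range(1, n_steps + 1):
        eps = 1.0 / n
        theta = max(0.0, theta - eps * grad_est(theta, rng))  # P_H
    return theta

# Unconstrained minimum of L(theta) = (theta + 2)^2 lies at -2,
# outside H, so the projected iterates converge to the boundary point 0.
def noisy_grad(theta, rng):
    return 2.0 * (theta + 2.0) + rng.gauss(0.0, 0.5)

theta_star = projected_sgd(noisy_grad, theta0=5.0)
```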
Embodiments provide for a general stratified stochastic gradient descent (SSGD) process, wherein one or more loss functions are expressed as a weighted sum of loss function terms. According to embodiments, SSGD may be applied to a large range of optimization problems where applicable loss functions have a decomposable form. In addition, further embodiments provide methods and systems for applying SSGD to obtain an efficient distributed stochastic gradient descent (DSGD) matrix factorization process. According to embodiments, the SSGD loss function L(θ) is decomposed into a weighted sum of loss functions L (θ) as follows: (θ)+ . . . +w (θ), (3) , without a loss of generality, 0<w ≦1 and Σw =1. Index s is referred to as the stratum, L as the loss function of stratum s, and w as the weight of stratum s. In general, strata often correspond to a part or partition of some underlying dataset. As a non-limiting example, L may be the loss incurred on the respective partition, wherein the overall loss is obtained by summing up the weighted per-partition losses. However, the decomposition of L may be arbitrary, wherein there may or may not be an underlying data partitioning. According to embodiments, there is some freedom of choice of w , such as being altered to arbitrary values by appropriately modifying the stratum loss functions. Embodiments provide that this freedom of choice gives opportunities for optimization. Embodiments run SSGD on a single stratum at a time, but switch strata in a way that guarantees correctness. The following non-limiting example provides an illustration of the process according to embodiments. In a potentially random stratum sequence {γ }, where each γ , takes values in {1, . . . , q} and the stratum is determined to use the n iteration. Using a noisy observation {circumflex over (L)}'γ of the gradient L'γ , an update rule θ -.di-elect cons. {circumflex over (L)}'.sub.γ )] may be obtained. 
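The stratum-switching scheme can be sketched with two illustrative quadratic stratum losses (an assumption for this sketch, not taken from the disclosure). Each update uses only the current stratum's gradient, and the number of steps spent in each stratum per cycle is proportional to its weight, as the convergence discussion below requires:

```python
# Two assumed stratum losses:
#   L_1(theta) = (theta - 1)^2  with weight w_1 = 0.75
#   L_2(theta) = (theta - 5)^2  with weight w_2 = 0.25
# The overall loss L = 0.75*L_1 + 0.25*L_2, per equation (3), is
# minimized at theta* = 0.75*1 + 0.25*5 = 2.
stratum_grads = [lambda t: 2.0 * (t - 1.0), lambda t: 2.0 * (t - 5.0)]
steps_per_cycle = [3, 1]  # visits per cycle proportional to 0.75 : 0.25

theta, n = 0.0, 0
for _ in range(5000):                 # cycles
    for grad, k in zip(stratum_grads, steps_per_cycle):
        for _ in range(k):            # "time" in stratum s ~ weight w_s
            n += 1
            theta -= (1.0 / n) * grad(theta)  # eps_n = 1/n
```

Every individual step moves in the "wrong" direction (toward that stratum's own minimizer, 1 or 5), yet because the visit fractions match the weights and the step sizes decrease, the iterates converge to the minimizer of the overall weighted loss.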
The sequence {γ_n} has to be chosen carefully in order to establish convergence to the stationary, or chain-recurrent, points of L. In addition, because each step of the process proceeds approximately in the "wrong" direction, that is, -L'_(γ_n)(θ_n) rather than -L'(θ_n), it may not be obvious that the process will lead to convergence at all. However, certain embodiments provide that SSGD will indeed converge under appropriate regularity conditions provided that, in essence, the "time" spent on each stratum is proportional to its weight. According to embodiments, step-size conditions involve the sequence {ε_n}, which may converge to 0 at the "right speed." Embodiments provide for at least two step-size conditions, including the following: (1) a first step-size condition provides that the step sizes may slowly approach zero in that ε_n → 0 and Σ_n ε_n = ∞; and (2) the step sizes decrease "quickly enough" in that Σ_n ε_n^2 < ∞. The simplest valid choice is ε_n = 1/n. A non-limiting example of a sufficient set of loss conditions includes H being a hyperrectangle, L being bounded in H, and L being twice continuously differentiable. Regarding stratification, embodiments require that the estimates L̂'_s(θ) of the gradient L'_s(θ) of stratum s be unbiased, have bounded second moment for θ ∈ H, and not depend on the past. DSGD according to embodiments satisfies these conditions by design. Embodiments provide for a first condition wherein the step sizes satisfy (ε_n - ε_(n+1))/ε_n = O(ε_n) and the γ_n are chosen such that the directions "average out correctly" in the sense that, for any θ ∈ H: lim_(n→∞) ε_n Σ_(i=0)^(n-1) [L'_(γ_i)(θ) - L'(θ)] = 0. A non-restrictive example provides that if ε_n were equal to 1/n, then the nth term would represent the empirical average deviation from the true gradient over the first n steps.
In addition, if all conditions hold, then the sequence {θ_n} converges almost surely to the set of limit points in H of the "projected ODE," taken over all initial conditions: θ̇ = -L'(θ) + z. In this non-restrictive example, z is the minimum force required to keep the solution in H, wherein the limit points consist of the set of stationary points of L in H (z = 0), as well as a set of chain-recurrent points on the boundary of H (z ≠ 0). According to embodiments, SSGD may converge to a local minimum, such that demonstrating the correctness of SSGD may involve demonstrating that the first condition, described above, holds. As a non-limiting example, sufficient conditions on L(θ), the step sizes {ε_n}, and the stratum sequence {γ_n} may be provided such that the first condition holds. According to embodiments, the sequence {γ_n} may be regenerative, such that there exists an increasing sequence of finite random indices 0 = β(0) < β(1) < β(2) < . . . that serves to decompose {γ_n} into consecutive, independent and identically distributed (i.i.d.) cycles {C_k}, with C_k = {γ_(β(k-1)), γ_(β(k-1)+1), . . . , γ_(β(k)-1)} for k ≥ 1. Embodiments provide that the cycles need not directly correspond to strata. As such, embodiments utilize strategies in which a cycle comprises multiple strata. A non-restrictive and illustrative example provides that at each β(i), the stratum may be selected according to a probability distribution that is independent of past selections, and the future sequence of selections after step β(i) looks probabilistically identical to the sequence of selections after step β(0). In addition, the length τ_k of the kth cycle may be given by τ_k = β(k) - β(k-1). Furthermore, letting I_(γ_n = s) be the indicator variable for the event that stratum s is chosen in the nth step results in the following definition: X_k(s) = Σ_(n=β(k-1))^(β(k)-1) (I_(γ_n = s) - w_s), 1 ≤ s ≤ q.
It follows from the regenerative property that the pairs {(X_k(s), τ_k)} are i.i.d. for each s. Embodiments provide for a third principle in which, under regularity conditions, any regenerative sequence {γ_n} may be selected such that E[X_1(s)] = 0 for all strata. The third principle provides that if L(θ) is differentiable on H and sup_(θ∈H) |L'_s(θ)| < ∞ for 1 ≤ s ≤ q; if ε_n = O(n^(-α)) for some α ∈ (0.5, 1] and (ε_n - ε_(n+1))/ε_n = O(ε_n); and if {γ_n} is regenerative with E[τ_1^(1/α)] < ∞ and E[X_1(s)] = 0 for 1 ≤ s ≤ q, then the first condition holds. The condition E[X_1(s)] = 0 essentially requires that, for each stratum s, the expected fraction of visits to s in a cycle equals w_s. The finite-moment condition is typically satisfied whenever the number of successive steps taken within a stratum is bounded with probability one. Referring now to FIGS. 2A-2D, therein is depicted an example of cycles of SSGD for two strata, L_1 and L_2, according to an embodiment. FIG. 2A illustrates a first cycle for stratum L_1 (201A), FIG. 2B illustrates a first cycle for stratum L_2 (201B), FIG. 2C illustrates a second cycle for L_1 (201C), and FIG. 2D illustrates a second cycle for L_2 (201D). In FIG. 3, therein is provided a graph depicting 100 cycles of SSGD according to embodiments, wherein the paths have converged to a local minimum 301. As described above, one illustrative and non-restrictive example use of a loss function may involve minimizing the loss of approximating missing matrix entries during a matrix factorization process. According to current technology, low-rank matrix factorizations are being used to handle modern web-scale datasets because, for example, they are fundamental to a variety of data collection tasks that have been applied to massive datasets with increased frequency.
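One simple regenerative scheme satisfying E[X_1(s)] = 0 is to draw the stratum independently from the weight distribution {w_s} at every step, so that each step is its own cycle (τ_k = 1) and the expected fraction of visits to s trivially equals w_s. The weights below are illustrative assumptions; the simulation just checks the empirical visit fractions against them:

```python
import random

# Illustrative stratum weights summing to 1 (an assumption for the sketch).
weights = [0.5, 0.3, 0.2]
rng = random.Random(0)

# Draw gamma_n i.i.d. from {w_s}: each step is a regenerative cycle of
# length tau_k = 1, and E[X_k(s)] = P(gamma_n = s) - w_s = 0 by design.
n_steps = 100000
visits = [0, 0, 0]
for _ in range(n_steps):
    s = rng.choices(range(3), weights=weights)[0]
    visits[s] += 1
fractions = [v / n_steps for v in visits]
```

By the law of large numbers, `fractions` approaches `weights`, which is exactly the "expected fraction of visits to s in a cycle equals w_s" requirement stated above.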
Low-rank matrix factorizations may be used for many processes, including analysis of "dyadic data," which aims at discovering and capturing the interactions between two entities. Illustrative interactions may involve viewer ratings of movies, online purchases of goods, or click-throughs on web sites. An increasingly prevalent use of such data involves making assumptions about user interests based on past interactions, such as predicting which books a user will be interested in based on his past ratings of other books. At modern data scales, distributed processes for matrix factorization are essential to achieving reasonable performance. For example, a large matrix according to modern data scales may be comprised of millions of rows, millions of columns, and billions of non-zero elements. However, in practice, exact factorization is generally not practical or desired, so virtually all matrix factorization processes actually produce low-rank approximations. A prominent application of matrix factorization involves minimizing a "loss function" that measures the discrepancy between an original input matrix and the product of the factors returned by the process. Use of the term "matrix factorization" herein refers to such loss-function matrix factorizations, unless specified otherwise. Such factorizations are at the center of the widely known "Netflix® contest" of recommending movies to customers. Netflix® is a registered trademark of Netflix, Inc. Netflix, Inc. provides tens of thousands of movies for rental to more than fifteen million customers. Each customer is able to provide a feedback rating for each movie on a scale of 1 to 5 stars. The following illustrates a simple, non-limiting example of a movie feedback matrix. Certain feedback ratings in the matrix, represented by question marks (?), are unknown, for example, because the user has not yet rated the movie:

            Movie 1   Movie 2   Movie 3
  Alice   (    ?         4         2   )
  Bob     (    3         2         ?   )
  Charlie (    5         ?         3   )

Each entry may contain additional data, such as the date of the rating or click history information. A main goal of factorization is to predict the missing entries in the feedback matrix. According to the Netflix® recommender system, entries with a high predicted rating may be selectively recommended to users for viewing. In addition to this recommender system, other related recommender systems have been attempted according to existing technologies, such as product recommender systems utilized by Amazon® and eBay®, content recommender systems such as the system provided by Digg®, and music recommender systems such as the system provided by Last.fm®. Amazon® is a trademark of Amazon.com, Inc. or its affiliates. Digg® is a registered trademark of Digg Inc. eBay® is a registered trademark of eBay Inc. Last.fm® is a registered trademark of Audioscrobbler Limited LLC. The traditional matrix factorization problem may be defined according to the following: given an m×n matrix V and a rank r, find an m×r matrix W and an r×n matrix H such that V=WH. A primary goal is to obtain a low-rank approximation V≈WH, where the quality of the approximation may be described by an application-dependent loss function L. Methods may be configured to find the following value:

  argmin_{W,H} L(V, W, H)

This value represents the choice of W and H that gives rise to the smallest loss. For example, assuming that missing ratings are coded with the value zero (0), loss functions for recommender systems are often based on the following nonzero squared loss:

  L_NZSL = Σ_{(i,j): V_ij ≠ 0} (V_ij − [WH]_ij)²,

where regularization terms are usually incorporated into the function, such as user and subject biases (e.g., movie biases), time drifts, and implicit feedback. Referring to FIG. 4, therein is depicted an example matrix modeling a movie feedback system and an associated loss function for determining missing matrix values according to an embodiment.
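As a concrete illustration of the nonzero squared loss described above, the following sketch evaluates it for the small feedback matrix of the example, with zeros encoding unknown ratings; the rank-1 factor values are arbitrary and chosen only for illustration:

```python
import numpy as np

def nonzero_squared_loss(V, W, H):
    """Nonzero squared loss: sum of (V_ij - [WH]_ij)^2 over entries
    where V_ij != 0 (zeros encode missing ratings)."""
    WH = W @ H
    mask = V != 0
    return float(np.sum((V[mask] - WH[mask]) ** 2))

# Feedback matrix from the example above; 0 encodes "?" (unrated).
V = np.array([[0., 4., 2.],
              [3., 2., 0.],
              [5., 0., 3.]])
# Illustrative rank-1 factors (r = 1): one factor per user and per movie.
W = np.array([[2.0], [1.0], [2.0]])   # user factors (m x r)
H = np.array([[1.5, 2.0, 1.0]])       # movie factors (r x n)
print(nonzero_squared_loss(V, W, H))  # 7.25
```

Only the six observed ratings contribute; the three "?" entries are skipped entirely.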
A matrix 401 has three rows 402 labeled Alice, Bob, and Charlie and three columns 403 labeled Movie 1, Movie 2, and Movie 3. The rows 402 represent movie feedback system users, while the columns 403 represent movies available within the system. Each user is associated with a "user factor" 404 and each movie is associated with a "movie factor" 405. In the example depicted in FIG. 4, a user's estimated movie rating 406 is found by multiplying the respective movie 405 and user 404 factors, which are represented by the matrix entries enclosed within parentheses. Matrix entries not enclosed in parentheses represent actual movie ratings 407 given by system users, and question marks (?) are entries with unknown feedback. Although only one user factor 404 and one movie factor 405 are depicted in FIG. 4, multiple factors are possible. For example, a movie factor 405 may involve the level of violence, the time period of the movie, or an intended demographic, while user factors 404 may involve whether the user likes certain movie categories or genres, such as science fiction. If multiple movie and user factors 404, 405 are used, the estimated movie rating 406 may be determined through a dot product operation. FIG. 4 provides a first loss function 408 associated with the matrix 401, wherein V represents the actual movie rating 407, W represents the user factor 404, and H represents the movie factor 405. A second loss function 409 represents the first loss function 408 augmented to take bias 410 into account. For example, bias 410 may remove certain user biases, such as users who rate nearly every movie highly, or users who give nearly every movie a low rating. A third loss function 411 modifies the second loss function 409 to implement regularization 412, wherein the number of user factors 404 and the number of movie factors 405 are selected for applicability to data outside of the training data.
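The dot-product prediction described for FIG. 4 can be sketched as follows; the factor values below are hypothetical and not taken from the figure:

```python
import numpy as np

# Hypothetical factor vectors with r = 2 latent factors.
alice = np.array([1.0, 0.5])    # user factor row W_{i*}
movie1 = np.array([2.0, 4.0])   # movie factor column H_{*j}

# Estimated rating = dot product of user and movie factors.
estimated_rating = float(alice @ movie1)
print(estimated_rating)  # 1.0*2.0 + 0.5*4.0 = 4.0
```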
For example, regularization serves to prevent "overfitting" of the data, wherein the factors 404, 405 become customized for the training data but are less effective for data outside of this training set. A fourth loss function 413 is configured to account for time in the factorization process. For example, certain movies may be viewed and rated more highly during certain times, such as seasonal movies, and a user's preferences may change over time. Referring to FIG. 5, therein is depicted an example of generalized matrix factorization, which may be utilized for various problems, including, but not limited to, machine learning problems such as recommender systems, text indexing, medical diagnosis, or face recognition. A generalized matrix 501 is illustrated in FIG. 5, and is comprised of row factors (W) 503, column factors (H) 502, and an input matrix (V) 504. V 504 represents training data as an m×n matrix. W 503 and H 502 represent the parameter space. W 503 contains the row factors and is of size m×r, wherein r represents the number of latent factors, and H 502 contains the column factors and is of size r×n. Embodiments provide for model processes to determine the loss at a particular element. Such models may include certain factors, such as prediction error, regularization, and auxiliary information. As such, a model process 505 is provided in FIG. 5 that is configured to determine the loss at element (i,j), wherein Z represents a training subset of the indexes in V, because not all values are known. Embodiments provide for loss functions, like L_NZSL above, that may be decomposed into a sum of local losses over a subset of the entries in V. According to embodiments, such loss functions may be written according to the following:

  L = Σ_{(i,j)∈Z} l(V_ij, W_{i*}, H_{*j}),  (4)

for training set Z ⊆ {1, 2, . . . , m}×{1, 2, . . . , n} and local loss function l, where A_{i*} and A_{*j} denote row i and column j of matrix A, respectively.
Many conventional loss functions, such as squared loss, generalized Kullback-Leibler divergence (GKL), and L_p regularization, may also be decomposed in such a manner. In addition, a given loss function L has the potential to be decomposed in multiple ways. Accordingly, certain embodiments focus primarily on the class of nonzero decompositions, in which Z={(i,j): V_ij ≠ 0} refers to the nonzero entries in V. Such decompositions may occur naturally when zeros represent missing data. Although certain embodiments may use nonzero decompositions, embodiments as described herein are not so limited, as any appropriate decomposition may be used. SGD may be applied to matrix factorization by, inter alia, setting θ=(W,H) and decomposing the loss L, for example, as in equation (4), for an appropriate training set Z and local loss function l. The local loss at position z=(i,j) may be denoted by L_z(θ) = l(V_ij, W_{i*}, H_{*j}). Using the sum rule for differentiation, L'(θ) may be defined according to the following: L'(θ) = Σ_{z∈Z} L'_z(θ). In addition, DGD methods may exploit the summation form of L'(θ); for example, they may compute the local gradients L'_z(θ) in parallel and sum them up. In contrast, SGD obtains noisy gradient estimates by scaling up just one of the local gradients, for example, L̂'(θ) = N L'_z(θ), where N=|Z| and the training point z is chosen randomly from the training set. FIG. 6 provides an example SGD process for matrix factorization, including an example matrix 601, and an associated loss function 602 and epoch 603 determination, both configured for SGD. Replacing exact gradients (DGD) by noisy estimates (SGD) is beneficial for multiple reasons. A main reason is that exact gradient computation is costly, whereas noisy estimates are quick and easy to obtain. In a given amount of time, many quick-and-dirty SGD updates may be performed instead of a few carefully planned DGD steps.
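A single SGD step of this kind, using the scaled local gradient N·L'_z(θ) as the noisy estimate, might be sketched as follows. This is a sketch for the local squared loss l = (V_ij − W_{i*}·H_{*j})²; the helper name and step size are illustrative:

```python
import numpy as np

def sgd_step(V, W, H, i, j, eps, N):
    """One SGD step on training point z = (i, j) for local squared
    loss l = (V_ij - W_i* . H_*j)^2; the noisy gradient estimate
    scales the local gradient by N = |Z| (see text)."""
    err = V[i, j] - W[i, :] @ H[:, j]
    # Local gradients of l with respect to W_i* and H_*j.
    grad_W = -2.0 * err * H[:, j]
    grad_H = -2.0 * err * W[i, :]
    W[i, :] -= eps * N * grad_W   # only row W_i* changes
    H[:, j] -= eps * N * grad_H   # only column H_*j changes
    return W, H
```

Note that the step touches only one row of W and one column of H, which is the structural property exploited later for distribution.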
The noisy SGD process may also allow for the escaping of local minima, such as those with a small basin of attraction, especially in the beginning, when step sizes are likely to be large. In addition, SGD is able to exploit repetition within the data. Parameter updates based on data from a certain row or column may also decrease the loss in similar rows and columns. Thus, the more similarity in the data, the better SGD is likely to perform. Accordingly, the increased number of steps may lead to faster convergence, which has been observed in certain cases of large-scale matrix factorization. Recently, more programmer-friendly parallel processing frameworks, such as MapReduce, have been used for data collection and processing. A result is that web-scale matrix factorizations have become more practicable and of increasing interest to consumers and users of massive data. MapReduce may be used to factor an input matrix, but may also be used to efficiently construct an input matrix from massive, detailed raw data, such as customer transactions. Existing technology has facilitated distributed processing through parallel matrix factorization processes implemented on a MapReduce cluster. However, the choice of process was generally driven by the ease with which it could be distributed. To compute W and H on MapReduce, processes according to existing technology generally start with certain initial factors, such as W_0 and H_0, and iteratively improve on them. The m×n input matrix V may then be partitioned into d_1×d_2 blocks, which are distributed in a MapReduce cluster.
Both row and column factors may be blocked conformingly, as depicted in the following example matrix, where superscripts are used to refer to individual blocks:

             H^1          H^2         . . .   H^{d_2}
  W^1     (  V^{11}       V^{12}      . . .   V^{1 d_2}    )
  W^2     (  V^{21}       V^{22}      . . .   V^{2 d_2}    )
  . . .
  W^{d_1} (  V^{d_1 1}    V^{d_1 2}   . . .   V^{d_1 d_2}  )

Such processes are designed such that each block V^{ij} can be processed independently in the map phase, taking only the corresponding blocks of factors W^i and H^j as input. In addition, some processes directly update the factors in the map phase, wherein either d_1=m or d_2=n to avoid overlap, while others aggregate the results in a reduce phase. Factorization processes may be classified into specialized processes, which are designed for a particular loss, and generic processes, which work for a wide variety of loss functions. Currently, specialized processes only exist for a small class of loss functions, such as EM-based and multiplicative-update methods for GKL loss. The latter multiplicative-update (MULT) approach may also be applied to squared loss and to nonnegative matrix factorization with an "exponential" loss function (exponential NMF). Essentially, each of these example processes takes a previously developed parallel matrix factorization method and directly distributes it across the MapReduce cluster. For example, the widely used alternating least squares (ALS) method may handle factorization problems with a nonzero squared loss function and an optional weighted L2 regularization term. This approach requires a double-partitioning of V, with one partition by row and another partition by column. In addition, this method may require that each of the factor matrices, namely W and H, fit alternately in main memory. On the other hand, generic processes are able to handle differentiable loss functions that decompose into summation form.
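The conforming blocking above can be sketched with NumPy; the helper name is illustrative and the sketch assumes m and n divide evenly by d_1 and d_2:

```python
import numpy as np

def block_conformingly(V, W, H, d1, d2):
    """Partition V into a d1 x d2 grid of blocks; block W by rows
    and H by columns so that block V^{ij} can be processed with
    only W^i and H^j as input (a sketch; assumes even division)."""
    V_blocks = [np.array_split(rows, d2, axis=1)
                for rows in np.array_split(V, d1, axis=0)]
    W_blocks = np.array_split(W, d1, axis=0)   # W^1, ..., W^{d1}
    H_blocks = np.array_split(H, d2, axis=1)   # H^1, ..., H^{d2}
    return V_blocks, W_blocks, H_blocks
```

Each map task would then receive a triple (V^{ij}, W^i, H^j); the product W^i H^j has exactly the shape of V^{ij}.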
One common approach is distributed gradient descent, which distributes exact gradient computation across a computer cluster and then performs centralized parameter updates using quasi-Newton methods such as L-BFGS-B. Partitioned SGD approaches make use of a similar idea, where SGD is run independently and in parallel on partitions of the dataset, and parameters are averaged after each pass over the data (PSGD) or once at the end (ISGD). However, these approaches have not previously been applied to matrix factorization and, similarly to L-BFGS-B, exhibit slow convergence in practice and need to store the full factor matrices in memory. This latter limitation is very often a serious drawback. For example, for large factorization problems, it is crucial that both the input matrix and the factors be distributed. Distributing SGD is complicated by the fact that individual steps depend on each other. For example, equation (2) demonstrates that θ_n has to be known before θ_{n+1} can be computed. This characteristic leads to synchronization overhead that defies efforts to provide distributed processing. Nonetheless, in the case of matrix factorization, there are structural properties that can be exploited using SSGD, as described below. Embodiments provide for a process for approximately factoring large matrices. Embodiments incorporate, inter alia, stochastic gradient descent (SGD), an iterative stochastic optimization process. According to certain embodiments, characteristics of the matrix factorization problem are exploited through SSGD, which may function on web-scale datasets, for example, using MapReduce. In addition, a variant of SSGD according to embodiments may operate in a fully distributed environment, thus providing a "distributed" SGD (DSGD). Embodiments provide that the convergence properties of DSGD may be established using certain processes, including, but not limited to, stochastic approximation theory and regenerative process theory.
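For contrast with the approach of the embodiments, the PSGD parameter-averaging scheme mentioned above can be sketched as follows; this is toy code, and `toy_pass` is a stand-in for an arbitrary local SGD pass over one partition:

```python
import numpy as np

def psgd(partitions, theta0, sgd_pass, n_epochs):
    """PSGD sketch: run SGD independently on each partition of the
    data (sequentially here; in parallel in a real deployment) and
    average the parameters after every pass over the data."""
    theta = np.asarray(theta0, dtype=float)
    for _ in range(n_epochs):
        results = [sgd_pass(theta.copy(), part) for part in partitions]
        theta = np.mean(results, axis=0)   # parameter averaging
    return theta

# Toy local pass: one sweep pulling theta toward each data point.
def toy_pass(theta, part):
    for x in part:
        theta -= 0.5 * (theta - x)
    return theta

theta = psgd([np.array([1., 2.]), np.array([3., 4.])],
             np.array([0.]), toy_pass, n_epochs=10)
print(theta)
```

Note that every worker must hold the full parameter vector, which is the memory limitation criticized in the text.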
The ability to perform SGD in a distributed environment is crucial to processing data of modern dimensions and scales. For example, current data scales may result in sparse, high-dimensional matrices, such as 16 bytes/matrix entry with 100 or more factors. In addition, modern data systems may require the creation of large, gigabyte-size models for processing. Current technology requires many scans using iterative processes, resulting in expensive computations, including calculations involving many factors per matrix entry, inner products, and CPU-bound processes. As such, DSGD according to embodiments has certain advantages over existing technology, including, but not limited to, significantly faster convergence and superior scalability because, inter alia, the process is able to operate in a distributed environment. The SGD process demonstrates good performance in non-parallel environments and is very effective for matrix factorization, such as in a sequential setting. SGD may also be run in a distributed fashion when the input matrix exhibits a "d-monomial" structure. For example, a block-diagonal matrix with k blocks is d-monomial for all d≤k. Although few input matrices are d-monomial, the matrix may always be represented as a union of pieces that are each d-monomial. The pieces comprising the union may overlap; such pieces are commonly referred to as "strata." DSGD according to certain embodiments may repeatedly and carefully select one or more strata and process them in a distributed fashion. Embodiments use stratification techniques to derive a distributed factorization process, for example, a distributed factorization process with demonstrable convergence guarantees. Embodiments provide for DSGD, a process for low-rank matrix factorization wherein both data and factors may be fully distributed.
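The d-monomial property just described can be checked mechanically; the following is a hypothetical sketch over subsets of (i, j) index pairs:

```python
def is_d_monomial(subsets):
    """A stratum partitioned into subsets Z_s^1..Z_s^d of (i, j)
    index pairs is d-monomial if all subsets are nonempty and no
    two subsets share a row index or a column index (sketch)."""
    if not all(subsets):
        return False
    for a in range(len(subsets)):
        rows_a = {i for i, _ in subsets[a]}
        cols_a = {j for _, j in subsets[a]}
        for b in range(a + 1, len(subsets)):
            if rows_a & {i for i, _ in subsets[b]}:
                return False
            if cols_a & {j for _, j in subsets[b]}:
                return False
    return True

# Two diagonal blocks of a block-diagonal matrix: 2-monomial.
diagonal = [[(0, 0), (0, 1), (1, 1)], [(2, 2), (3, 2), (3, 3)]]
print(is_d_monomial(diagonal))              # True
print(is_d_monomial([[(0, 0)], [(0, 1)]]))  # False: row 0 shared
```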
In addition, memory requirements are low for DSGD configured according to embodiments, which may be scaled to large matrices, such as matrices with millions of rows, millions of columns, and billions of non-zero elements. According to embodiments, DSGD is a generic process because it may be used for a variety of different loss functions, including, but not limited to, classes of factorizations that minimize a "non-zero loss." Classes of non-zero loss functions have many applications, including operations wherein a zero represents missing data and, therefore, would conventionally be ignored when computing loss. As a non-limiting example, one use of non-zero loss functions involves estimating missing values, such as a rating that a customer would likely give to a previously unseen movie. Embodiments may utilize loss functions L having the following summation form: L(θ) = Σ_{z∈Z} L_z(θ). According to embodiments, a first definition provides that two training points z_1, z_2 ∈ Z are interchangeable if, for all loss functions L having a summation form according to embodiments, all θ ∈ H, and all ε > 0:

  L'_{z_1}(θ) = L'_{z_1}(θ − ε L'_{z_2}(θ)) and L'_{z_2}(θ) = L'_{z_2}(θ − ε L'_{z_1}(θ))  (5)

In addition, embodiments provide that two disjoint sets of training points Z_1, Z_2 ⊆ Z are interchangeable if z_1 and z_2 are interchangeable for every z_1 ∈ Z_1 and z_2 ∈ Z_2. In addition, embodiments provide for swapping the order of SGD steps that involve interchangeable training points without affecting the final outcome. Furthermore, when z_n and z_{n+1} are interchangeable, the SGD steps become parallelized according to the following:

  θ_{n+2} = θ_n − ε L̂'(θ_n, z_n) − ε L̂'(θ_{n+1}, z_{n+1})
          = θ_n − ε L̂'(θ_n, z_n) − ε L̂'(θ_n, z_{n+1})

Embodiments provide for utilizing a simple criterion to determine interchangeability.
According to embodiments, a first principle provides that two training points z_1=(i_1,j_1) ∈ Z and z_2=(i_2,j_2) ∈ Z may be interchangeable if they do not share a row or column, that is, i_1 ≠ i_2 and j_1 ≠ j_2. This is a direct consequence of the decomposition of the global loss into a sum of local losses. As such, since L_ij(θ) = l(V_ij, W_{i*}, H_{*j}), embodiments provide for the following, for 1 ≤ k ≤ r:

  ∂/∂W_{i'k} L_ij(θ) = { 0 if i ≠ i';  ∂/∂W_{ik} l(V_ij, W_{i*}, H_{*j}) otherwise }
  ∂/∂H_{kj'} L_ij(θ) = { 0 if j ≠ j';  ∂/∂H_{kj} l(V_ij, W_{i*}, H_{*j}) otherwise }

The partial derivatives of L_ij from equation (4) may only depend on V_ij, row W_{i*}, and column H_{*j}, while the gradient in equation (1) may only be nonzero for row W_{i*} and column H_{*j}. When i_1 ≠ i_2 and j_1 ≠ j_2, both θ and θ − ε L'_{z_1}(θ) agree on the values of W_{i_2*} and H_{*j_2} for any choice of θ, in accordance with the second part of equation (5). If two blocks, or submatrices, of V share neither rows nor columns, then the sets of training points contained in these blocks are interchangeable. Embodiments exploit the structure of the matrix factorization problem to derive a distributed process for matrix factorization via SGD. FIG. 7 provides an example process of SGD for matrix factorization. The following depicts a block-diagonal training matrix according to a non-limiting simple matrix example:

           H^1    H^2   . . .   H^d
  W^1   (  Z^1     0    . . .    0   )
  W^2   (   0     Z^2   . . .    0   )          (6)
  . . .
  W^d   (   0      0    . . .   Z^d  )

For a given training set Z, the corresponding training matrix, also denoted Z, is obtained by zeroing out the elements in V that are not in Z; these elements usually represent missing data or held-out data for validation. In this non-limiting example, the rows and columns are blocked conformingly, and the set of training points in block Z^b is denoted by Z^b. Embodiments exploit a key property wherein, according to the first principle described above, Z^i is interchangeable with Z^j for i≠j.
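The first principle can be verified numerically: two SGD steps on training points that share neither a row nor a column touch disjoint parts of (W, H) and therefore commute. A sketch, reusing the local squared loss as an illustrative choice:

```python
import numpy as np

def sgd_step(V, W, H, i, j, eps):
    """In-place SGD update on training point (i, j) for the local
    squared loss; touches only row W_i* and column H_*j."""
    err = V[i, j] - W[i, :] @ H[:, j]
    grad_W = -2.0 * err * H[:, j]
    grad_H = -2.0 * err * W[i, :]
    W[i, :] -= eps * grad_W
    H[:, j] -= eps * grad_H

V = np.array([[0., 4., 2.], [3., 2., 0.], [5., 0., 3.]])
rng = np.random.default_rng(0)
W0, H0 = rng.random((3, 2)), rng.random((2, 3))

# z1 = (0, 1) and z2 = (2, 0) share neither row nor column, so the
# two SGD steps may be applied in either order with the same result.
W1, H1 = W0.copy(), H0.copy()
sgd_step(V, W1, H1, 0, 1, 0.1)
sgd_step(V, W1, H1, 2, 0, 0.1)

W2, H2 = W0.copy(), H0.copy()
sgd_step(V, W2, H2, 2, 0, 0.1)
sgd_step(V, W2, H2, 0, 1, 0.1)

print(np.allclose(W1, W2) and np.allclose(H1, H2))  # True
```

Had the two points shared a row or column, the second step would read values modified by the first, and the orders would differ.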
In this non-limiting example, for some T ∈ [1,∞), T steps of SGD are run on Z, starting from some initial point θ_0 and using a fixed step size ε. According to embodiments, an instance of the SGD process may be described by a training sequence w=(z_0, z_1, . . . , z_{T−1}) of T training points. Designating θ_0(w) = θ_0, the following definition is ascertained according to an embodiment: θ_{n+1}(w) = θ_n(w) + ε Y_n(w), where the update term Y_n(w) = −N L'_{z_n}(θ_n(w)) represents a scaled negative gradient estimate. In addition, θ_T(w) may be written according to the following:

  θ_T(w) = θ_0 + ε Σ_{n=0}^{T−1} Y_n(w)  (7)

As an illustrative and non-restrictive example, denote by σ_b(w) the subsequence of training points from block Z^b; the subsequence has length T_b(w) = |σ_b(w)|. According to embodiments, SGD may run on each block independently and the results summed together. Using the formulations of θ_T(w) and θ_n(w) provided above, embodiments provide for the following second principle:

  θ_T(w) = θ_0 + ε Σ_{b=1}^{d} Σ_{k=0}^{T_b(w)−1} Y_k(σ_b(w))  (8)

According to embodiments, a one-to-one correspondence is established between the update terms Y_n(w) in equation (7) and Y_k(σ_b(w)) in equation (8) (i.e., the second principle). The (k+1)-st element in σ_b(w), that is, the (k+1)-st element from block Z^b in w, may be denoted by z_{b,k}. The zero-based position of this element in w may be denoted by π(z_{b,k}). Accordingly, embodiments arrive at the following: z_{b,k} = w_{π(z_{b,k})}. For the first element z_{b,0} from block b, all previous elements z_n with n < π(z_{b,0}) belong to blocks other than b. In addition, because the training matrix is block-diagonal, blocks have pairwise disjoint rows and pairwise disjoint columns. Thus, embodiments provide that, according to the first principle described above, z_{b,0} is interchangeable with each of the z_n for n < π(z_{b,0}).
As such, the z_n may be eliminated one by one according to the following:

  Y_{π(z_{b,0})}(w) = −N L'_{z_{b,0}}(θ_{π(z_{b,0})}(w))
                    = −N L'_{z_{b,0}}(θ_{π(z_{b,0})−1}(w))
                    = . . .
                    = −N L'_{z_{b,0}}(θ_0)
                    = Y_0(σ_b(w))

The update terms from elements not in block b may thus be safely removed, and, by induction on k, the following may be obtained:

  Y_{π(z_{b,k})}(w) = −N L'_{z_{b,k}}(θ_0 + ε Σ_{n=0}^{π(z_{b,k})−1} Y_n(w))
                    = −N L'_{z_{b,k}}(θ_0 + ε Σ_{l=0}^{k−1} Y_l(σ_b(w)))
                    = Y_k(σ_b(w))  (9)

According to embodiments, the assertion of the second principle results from the following:

  θ_T(w) = θ_0 + ε Σ_{n=0}^{T−1} Y_n(w)
         = θ_0 + ε Σ_{b=1}^{d} Σ_{k=0}^{T_b(w)−1} Y_{π(z_{b,k})}(w)
         = θ_0 + ε Σ_{b=1}^{d} Σ_{k=0}^{T_b(w)−1} Y_k(σ_b(w)),

where the update terms are first re-ordered, followed by the use of equation (9). In the non-limiting simple matrix example, first described above, the fact that Z is block-diagonal is utilized only to establish interchangeability between blocks. As such, embodiments provide that the second principle, described above, may also apply when the matrix is not block-diagonal but may still be divided into interchangeable sets of training points. Embodiments provide that the second principle, described above, may be exploited for distributed processing on MapReduce. According to embodiments, W and H are blocked conformingly to Z, as provided in equation (6), and processing is divided into d independent map tasks J_1, . . . , J_d. Embodiments provide that task J_b is responsible for subsequence σ_b(w): it receives Z^b, W^b, and H^b as input, performs the block-local updates of σ_b(w), and produces updated factor matrices W^b_new and H^b_new.
According to embodiments, by the second principle, the following matrices may be obtained:

  W' = ( W^1_new, . . . , W^d_new )  (blocked by rows)  and  H' = ( H^1_new, . . . , H^d_new )  (blocked by columns),

where W' and H' are the matrices obtained by running sequential SGD on w. Since each task accesses different parts of both the training data and the factor matrices, embodiments provide that the data may be distributed across multiple nodes and the tasks can run simultaneously. Embodiments provide that the DSGD process stratifies the training set Z into a set S={Z_1, . . . , Z_q} of q strata so that each individual stratum Z_s ⊆ Z may be processed in a distributed fashion. Embodiments ensure that each stratum is "d-monomial," as defined below. According to embodiments, the d-monomial property generalizes the block-diagonal structure, while still permitting the application of techniques such as those discussed with the non-limiting simple matrix example, introduced above. In general, the parallelism parameter d is greater than or equal to the number of available processing tasks. Embodiments provide for a second definition wherein a stratum Z_s is d-monomial if it can be partitioned into d nonempty and mutually disjoint subsets Z_s^1, Z_s^2, . . . , Z_s^d such that i ≠ i' and j ≠ j' whenever (i,j) ∈ Z_s^{b_1} and (i',j') ∈ Z_s^{b_2} with b_1 ≠ b_2. In addition, a training matrix Z_s is d-monomial if it is constructed from a d-monomial stratum Z_s. Embodiments may stratify a training set according to the second definition, provided above, in many ways. One non-limiting example utilizes data-independent blocking, while other non-limiting examples may employ more advanced strategies that may further improve convergence speed. According to embodiments, the rows and columns of Z are permuted randomly; then d×d blocks of size (m/d)×(n/d) are created, and the factor matrices W and H are blocked conformingly. This process ensures that the expected number of training points in each of the blocks is the same, namely, N/d².
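Under the blocking just described, a candidate stratum picks one block per block-row such that no two chosen blocks share a block-row or block-column, i.e., the blocks (i, j_i) for a permutation (j_1, . . . , j_d). A minimal sketch enumerating these strata:

```python
from itertools import permutations

def strata(d):
    """Enumerate all d! strata of a d x d blocking: the stratum for
    permutation (j_0, ..., j_{d-1}) consists of blocks (i, j_i),
    which pairwise share no block-row and no block-column."""
    return [[(i, j) for i, j in enumerate(perm)]
            for perm in permutations(range(d))]

print(strata(2))
# [[(0, 0), (1, 1)], [(0, 1), (1, 0)]] -- the "diagonal" and
# "anti-diagonal" strata, matching the two d = 2 templates below.
```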
For a permutation j_1, . . . , j_d of 1, 2, . . . , d, a stratum may then be defined as Z_s = Z^{1 j_1} ∪ Z^{2 j_2} ∪ . . . ∪ Z^{d j_d}, where the substratum Z^{ij} denotes the set of training points that fall within block Z^{ij}. Embodiments provide that a stratum Z_s may be represented by a template Z̃_s that displays each block Z^{ij} corresponding to a substratum of Z_s, with all other blocks represented by zero matrices. As such, when d=2, for example, two strata may be obtained, which may be represented by the following templates:

  Z̃_1 = ( Z^11    0   )   and   Z̃_2 = (  0     Z^12 )
        (  0    Z^22 )               ( Z^21    0   )

The set S of possible strata contains d! elements, one for each possible permutation of 1, 2, . . . , d. In addition, different strata may overlap when d > 2. Furthermore, there is no need to materialize these strata, as they are constructed on-the-fly by processing only the respective blocks of Z. The actual number of training points in stratum Z_s may be denoted by N_s = |Z_s|. In addition, given a training point (i,j) ∈ Z_s, the derivative L'_s(θ) = Σ_{(k,l)∈Z_s} L'_{kl}(θ) may be estimated by L̂'_s(θ) = N_s L'_{ij}(θ). Embodiments may group the individual steps of DSGD into "subepochs" that each process one of the strata. According to embodiments, DSGD may use a sequence {(ξ_k, T_k)}, where ξ_k denotes the stratum selector used in the k-th subepoch, and T_k the number of steps to run on the selected stratum. This sequence of pairs uniquely determines an SSGD stratum sequence in which γ_{T_0+ . . . +T_{k−1}+1} = . . . = γ_{T_0+ . . . +T_k} = ξ_k. Embodiments provide that the {(ξ_k, T_k)} sequence may be selected such that the underlying SSGD process, and hence the DSGD factorization process, is guaranteed to converge. According to embodiments, once a stratum ξ_k has been selected, T_k SGD steps may be performed on Z_{ξ_k}. In addition, embodiments provide that these steps may be distributed and performed in parallel. Referring to FIGS.
8A-8C, therein is illustrated an example DSGD process according to an embodiment. DSGD according to embodiments provides for a first step wherein data is blocked and distributed. For example, FIG. 8A illustrates a matrix (V) 801A that has been partitioned into a 3×3 grid according to an embodiment. V 801A has interchangeability properties such that V 801A may be processed on the diagonal, as illustrated by matrix (V) 801B of FIG. 8B, in parallel. The example DSGD process as depicted in FIGS. 8A-8C provides for the following steps according to an embodiment: (1) select a diagonal set of blocks, for example, a set of interchangeable blocks, one per row; (2) perform SGD on the diagonal, in parallel; (3) merge the results; and (4) proceed to the next diagonal set of blocks. According to embodiments, steps (1)-(3) comprise a "cycle." In addition, the set of blocks selected through step (1) is described herein as a "stratum." Embodiments provide that a set of blocks does not necessarily have to be diagonal, as this configuration is purely for demonstrative purposes. Any appropriate configuration will operate within the process as described herein. For example, embodiments provide for any configuration wherein the applicable portions of data may be processed in parallel, as illustrated in FIG. 8C, where the sequential simulation of SGD is processed in parallel according to embodiments. As depicted in FIGS. 8A-8C, performing SGD over each stratum only estimates the gradient over three blocks, while the total gradient is the sum of the loss over all nine blocks of V 801A. Processing each stratum individually produces a gradient in the wrong direction. However, if the process progresses from stratum to stratum, such that all of the data are eventually taken into consideration, the resultant gradients will average out and produce the minimum loss. In addition, according to embodiments, each stratum may be processed in parallel and in a distributed manner. Referring to FIG.
9, therein is depicted a DSGD process for matrix factorization according to an embodiment, wherein an epoch is defined as a sequence of d subepochs. According to embodiments, an epoch may roughly correspond to processing the entire training set once. Embodiments provide that executing the DSGD process depicted in FIG. 9 on d nodes in a shared-nothing environment, such as MapReduce, may only require that the input matrix be distributed once. Accordingly, the only data that may require transmission between nodes during subsequent processing may be the small factor matrices. As a non-limiting example, if node i stores blocks W^i, Z^{i1}, Z^{i2}, . . . , Z^{id}, for 1≤i≤d, then only the matrices H^1, H^2, . . . , H^d need to be transmitted. Embodiments provide that, by construction, parallel processing leads to the same update terms as the corresponding sequential SGD on Z_{ξ_k}, and the correctness of DSGD may be implied by the correctness of the underlying SSGD process. The following provides a non-limiting training example according to embodiments. Processing a subepoch (i.e., a stratum) according to embodiments does not comprise generating a global training sequence and then distributing it among blocks. Rather, embodiments provide that each task generates a local training sequence directly for its corresponding block. This reduces communication cost and avoids the bottleneck of centralized computation. Good training results are more likely when, inter alia, the local training sequence covers a large part of the local block, and the training sequence is randomized. In the non-limiting training example, a block Z^{ij} is processed by randomly selecting training points from Z^{ij} such that each point is selected precisely once. This ensures that many different training points are selected in the non-limiting training example, while at the same time maximizing randomness.
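One epoch of this kind, with d subepochs each processing a stratum of mutually interchangeable blocks, might be sketched as follows. This is a simplified in-memory simulation: the cyclic-shift stratum schedule and the local squared loss are illustrative choices, and in a real deployment the inner loop over blocks would run as parallel map tasks:

```python
import numpy as np

def sgd_on_block(V, W, H, rows, cols, eps):
    """Run SGD over each nonzero training point of one block
    precisely once, in random order (local squared loss)."""
    points = [(i, j) for i in rows for j in cols if V[i, j] != 0]
    np.random.shuffle(points)
    for i, j in points:
        err = V[i, j] - W[i, :] @ H[:, j]
        grad_W = 2.0 * err * H[:, j]
        grad_H = 2.0 * err * W[i, :]
        W[i, :] += eps * grad_W
        H[:, j] += eps * grad_H

def dsgd_epoch(V, W, H, d, eps):
    """One DSGD epoch = d subepochs. Subepoch k processes the
    stratum of blocks (b, (b + k) mod d); those blocks share no
    rows or columns, so the inner loop could run on d nodes in
    parallel (sequential here for illustration)."""
    m, n = V.shape
    row_edges = np.linspace(0, m, d + 1, dtype=int)
    col_edges = np.linspace(0, n, d + 1, dtype=int)
    for k in range(d):              # subepochs (strata)
        for b in range(d):          # blocks of the stratum: parallelizable
            c = (b + k) % d
            sgd_on_block(V, W, H,
                         range(row_edges[b], row_edges[b + 1]),
                         range(col_edges[c], col_edges[c + 1]), eps)
```

Over the d subepochs of one epoch, every block of V is processed exactly once, matching the notion of an epoch as one pass over the training set.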
The third principle, discussed above, implicitly assumes sampling with replacement, but embodiments provide that it may be extended to cover other strategies as well, including redefining a stratum to consist of a single training point and redefining the stratum weights w_s accordingly. In addition, the non-limiting training example concerns stratum selection. According to embodiments, a stratum sequence (ξ_k, T_k) determines which of the strata are chosen in each subepoch and how many steps are run on that stratum. The training sequences selected in the non-limiting training example have length T_k = N_{ξ_k}, the number of training points in the selected stratum. For the data-independent blocking scheme, each block Z^{ij} occurs in (d−1)! of the d! strata. As such, embodiments provide that all strata do not have to be processed in order to cover the entire training set of the non-limiting training example. The non-limiting training example processes a large part of the training set in each epoch, while at the same time maximizing randomization. Accordingly, in each epoch, a sequence of d strata is selected such that the d strata jointly cover the entire training set. In addition, the sequence is picked uniformly and at random from all such sequences of d strata. This selection process is covered by the third principle, introduced above, wherein each epoch corresponds to a regenerative cycle. The third principle implies that w_s corresponds to the long-term fraction of steps run on stratum Z_s (otherwise E[X_1(s)] ≠ 0 for some s). Conversely, this strategy corresponds to a specific choice of stratum weights, namely w_s = N_s/((d−1)! N). This is due to selecting each stratum s equally often in the long run, and always performing N_s steps on stratum s. Accordingly, embodiments provide that {w_s} satisfies equation (3) for all Z and L of the form depicted in equation (4) if and only if the following holds:

  Σ_{s: Z^{ij} ⊆ Z_s} w_s N / N_s = 1  (10)

for each substratum Z^{ij}. Accordingly, strategies ensuring that each Z^{ij} appears in exactly (d−1)!
strata imply equation (10) for the choice of w_s. Furthermore, the non-limiting training example concerns step size selection. Stochastic approximation according to current technology often works with step size sequences roughly of the form ε_n = 1/n^α with α ∈ (0.5, 1]. The third principle, discussed above, guarantees asymptotic convergence for such choices. However, deviation from these choices may allow for faster convergence over a finite number of executed steps. In contrast to SGD in general, embodiments may determine the current loss after every epoch. Accordingly, embodiments may check whether an epoch decreased or increased the loss. Embodiments employ a heuristic called "bold driver," which is often used for gradient descent. Starting from an initial step size ε_0, embodiments increase the step size by a small percentage (for example, 5%) whenever a decrease of loss is experienced, and drastically decrease the step size (for example, by 50%) whenever an increase of loss is experienced. Within each epoch, the step size remains fixed. Selecting ε_0 according to embodiments leverages the fact that many compute nodes are available. A small sample of Z (for example, 0.1%) may be replicated to each node. Different step sizes are attempted in parallel. Initially, embodiments may make a pass over the sample for step sizes 1, 1/2, 1/4, . . . , 1/2^(d-1), which occur in parallel at all d nodes. The step size that gives the best result is selected as ε_0. As long as loss decreases, a variation of this process is repeated after every epoch, wherein step sizes within a factor of [1/2, 2] of the current step size are attempted. Eventually, the chosen step size will become too large and the value of the loss will increase. As a non-limiting example, this may happen when the iterate has moved closer to the global solution than to the local solution of a given sample.
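One way to realize the epoch construction described above (a sequence of d strata that jointly cover all d×d blocks under the data-independent blocking scheme) is to shift a single random permutation. This Python sketch is illustrative only; the function name is an assumption, and shifting one permutation samples only a subset of all valid stratum sequences rather than all of them uniformly.

```python
import random

def epoch_strata(d):
    """Pick a sequence of d strata that jointly cover the whole training
    set: stratum k pairs row block i with column block (sigma[i] + k) % d,
    so across k = 0..d-1 every one of the d*d blocks is visited exactly
    once; the random permutation sigma randomizes the chosen sequence."""
    sigma = list(range(d))
    random.shuffle(sigma)
    return [[(i, (sigma[i] + k) % d) for i in range(d)] for k in range(d)]
```

Within each stratum the d blocks occupy distinct rows and distinct columns, so they can be processed in parallel on the d nodes without conflicting updates.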
Embodiments provide that responsive to detecting an increase of loss, the process may switch to the bold driver method for the rest of the process. Certain experiments, referred to herein as the "DSGD comparison experiments," reveal that DSGD according to embodiments often converges faster than alternative methods. The DSGD comparison experiments implemented DSGD according to embodiments on top of MapReduce, along with implementations of PSGD, L-BFGS, and ALS methods. DSGD, PSGD, and L-BFGS are generic methods that work with a wide variety of loss functions, whereas ALS is restricted to quadratic loss functions. Two different implementations and compute clusters were utilized: a first implementation for in-memory experiments, and a second for large scale-out experiments on very large datasets using Hadoop, an open-source MapReduce implementation. The in-memory implementation is based on R and C, and uses R's snowfall package to implement MapReduce. This implementation targeted datasets that are small enough to fit in aggregate memory, for example, datasets with up to a few billion nonzero entries. The input matrix was blocked and distributed across the cluster before running each experiment. The second implementation was based on Hadoop. The DSGD comparison experiments involving PSGD and DSGD used adaptive step size computation based on a sample of roughly 1 million data points. The bold driver was used as soon as an increase in loss was observed. The Netflix® competition dataset was used for the DSGD comparison experiments on real data. The dataset contains a small subset of movie ratings given by Netflix® users, specifically, 100 million anonymized, time-stamped ratings from roughly 480 thousand customers on roughly 18 thousand movies. A synthetic dataset with 10 million rows, 1 million columns, and 1 billion nonzero entries was used for larger-scale DSGD comparison experiments on in-memory implementations.
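The "bold driver" heuristic and the sample-based selection of the initial step size described above can be sketched as follows. The 5%/50% factors and the 1, 1/2, 1/4, ... grid come from the text; the function names and the stand-in `sample_loss` callback are assumptions for illustration.

```python
def bold_driver(eps, prev_loss, curr_loss, up=1.05, down=0.5):
    """Between epochs: grow the step size by 5% after a decrease in loss,
    cut it by 50% after an increase (factors taken from the text)."""
    return eps * up if curr_loss < prev_loss else eps * down

def pick_initial_step(sample_loss, d):
    """Try step sizes 1, 1/2, ..., 1/2**(d-1), one per node in parallel in
    the text; sample_loss(eps) stands in for the loss after one pass over
    the replicated sample at step size eps. Keep the best candidate."""
    candidates = [2.0 ** -k for k in range(d)]
    return min(candidates, key=sample_loss)
```

Within each epoch the step size stays fixed; `bold_driver` is only applied between epochs, once the per-epoch loss is known.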
Matrices W* and H* were generated by repeatedly sampling values from a Gaussian(0,10) distribution. Subsequently, 1 billion entries from the product W*H* were sampled, and Gaussian(0,1) noise was added to each sample. This procedure ensures that there exists a reasonable low-rank factorization. For all DSGD comparison experiments, the input matrix was centered around its mean. The starting points W and H were chosen by sampling entries uniformly and at random from [-0.5, 0.5] and, generally, rank r=50 was used. The DSGD comparison experiments used the loss functions plain nonzero squared loss (L_NZSL) and nonzero squared loss with an L2 regularization term (L_L2). These loss functions may be expressed as follows:

L_NZSL = Σ_{(i,j)∈Z} (V_ij - [WH]_ij)^2

L_L2 = L_NZSL + λ(||W||_F^2 + ||H||_F^2)

In addition, for synthetic data and L_L2, a "principled" value of λ=0.1 was used. This choice of λ is "natural" in that the resulting minimum-loss factors correspond to the "maximum a posteriori" Bayesian estimator of W and H under the Gaussian-based procedure used to generate the synthetic data. Representative results of the DSGD comparison experiments are depicted in FIGS. 10 and 11. Although specific loss functions are described herein, these functions are not exhaustive as embodiments are configured to operate using any applicable loss function. For example, certain embodiments may use the following additional loss functions:

L_L2w = L_NZSL + λ(||N_1 W||_F^2 + ||H N_2||_F^2)

L_GKL = Σ_{(i,j)∈Z} (V_ij log(V_ij/[WH]_ij) - V_ij) + Σ_{i,j} [WH]_ij

In addition, certain other experiments involving DSGD according to embodiments reveal that DSGD has good scalability properties, including, but not limited to, when implemented using Hadoop.
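A direct computation of L_NZSL and L_L2 over the revealed entries can be sketched as follows. The dict-of-entries representation and the convention that row j of H holds the factor of column j are illustrative assumptions, and the regularizer is taken to add the two Frobenius norms, the standard form for an L2 penalty.

```python
import numpy as np

def nzsl(V_entries, W, H):
    """Plain nonzero squared loss L_NZSL over the revealed entries of V."""
    return sum((v - W[i] @ H[j]) ** 2 for (i, j), v in V_entries.items())

def l2_loss(V_entries, W, H, lam):
    """L_NZSL plus the L2 regularization term lam*(||W||_F^2 + ||H||_F^2)."""
    return nzsl(V_entries, W, H) + lam * (np.sum(W ** 2) + np.sum(H ** 2))
```

Both functions touch only the revealed entries of V, which is what makes per-block evaluation and the per-epoch loss check described earlier cheap.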
In addition, certain other experimental observations include that DSGD according to embodiments may require only simple aggregates per tuple, and that increased randomization improves convergence. Referring to FIG. 12, it will be readily understood that embodiments may be implemented using any of a wide variety of devices or combinations of devices. An example device that may be used in implementing one or more embodiments includes a computing device in the form of a computer 1210. In this regard, the computer 1210 may minimize a loss function by decomposing the loss function into component loss functions and minimizing each of the component loss functions according to a sequence using gradient descent. A user can interface with (for example, enter commands and information) the computer 1210 through input devices 1240. A monitor or other type of device can also be connected to the system bus 1222 via an interface, such as an output interface 1250. In addition to a monitor, computers may also include other peripheral output devices. The computer 1210 may operate in a networked or distributed environment using logical connections to one or more other remote computers or databases. In addition, remote devices 1270 may communicate with the computer 1210 through certain network interfaces 1260. The logical connections may include a network, such as a local area network (LAN) or a wide area network (WAN), but may also include other networks/buses. It should be noted as well that certain embodiments may be implemented as a system, method or computer program product. Accordingly, aspects of the invention may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, et cetera) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a "circuit," "module" or "system."
Furthermore, aspects of the invention may take the form of a computer program product embodied in one or more computer readable medium(s) having computer readable program code embodied therewith. Any combination of one or more computer readable medium(s) may be utilized. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. 
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, et cetera, or any suitable combination of the foregoing. Computer program code for carrying out operations for aspects of the invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java®, Smalltalk, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer (device), partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider). Aspects of the invention are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatuses (systems) and computer program products according to example embodiments. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. 
These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer program instructions may also be stored in a computer readable medium that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the computer readable medium produce an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks. The computer program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. This disclosure has been presented for purposes of illustration and description but is not intended to be exhaustive or limiting. Many modifications and variations will be apparent to those of ordinary skill in the art. The example embodiments were chosen and described in order to explain principles and practical application, and to enable others of ordinary skill in the art to understand the disclosure for various embodiments with various modifications as are suited to the particular use contemplated. 
Although illustrated example embodiments have been described herein with reference to the accompanying drawings, it is to be understood that embodiments are not limited to those precise example embodiments, and that various other changes and modifications may be effected therein by one skilled in the art without departing from the scope or spirit of the disclosure. Patent applications by John Sismanis, San Jose, CA US Patent applications by Peter Jay Haas, San Jose, CA US Patent applications by International Business Machines Corporation Patent applications in class MACHINE LEARNING
How many kilos is 719 lbs?
You asked: How many kilos is 719 lbs?
719 pounds is 326.13291403 kilograms (the mass).
Trig functions
April 29th 2009, 05:53 PM #1
Oct 2008
1) Find the exact measures of sin2x, cos2x, and tan2x using the double angle formulas, given that sinx = -12/13. Show all work.
2) Find the exact value of the trig function given that sinx = 12/13 and cosy = -4/5 and you are in Quadrant 2. Show all work.
3) Find all solutions of the equation cosx + sin2x = 0 in the interval [0, 2π). For this one, I changed sin2x to 2sinxcosx, then factored out a cosx, for it to be cosx(1 + 2sinx), but am unsure where to go from there. Any help is appreciated!
April 30th 2009, 05:42 AM #2
1. To use the double angle formulae, you need the value of $\cos{x}$ and the quadrant in which x resides (III or IV, since $\sin{x} < 0$). Use the Pythagorean identity or a reference triangle to find the value of $\cos{x}$.
2. "Find the exact value of the trig function ..." What trig function? Is there more information to this question?
3. $\cos{x}(1 + 2\sin{x}) = 0$. Set each factor equal to zero: $\cos{x} = 0$ and $1 + 2\sin{x} = 0$. Solutions for x will be angles from the unit circle; for the first factor, $\cos{x} = 0$ at $x = \frac{\pi}{2}$ and $\frac{3\pi}{2}$. You solve for the second factor.
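For question 3), setting each factor to zero gives cos x = 0 (so x = π/2 and 3π/2) and sin x = -1/2 (so x = 7π/6 and 11π/6). A quick numerical check in Python; the variable names are just for illustration:

```python
import math

# Candidate solutions of cos(x) + sin(2x) = 0 on [0, 2*pi), from the
# factorization cos(x) * (1 + 2*sin(x)) = 0:
#   cos(x) = 0     ->  x = pi/2, 3*pi/2
#   sin(x) = -1/2  ->  x = 7*pi/6, 11*pi/6
solutions = [math.pi / 2, 3 * math.pi / 2, 7 * math.pi / 6, 11 * math.pi / 6]
residuals = [math.cos(x) + math.sin(2 * x) for x in solutions]
```

Each residual is zero up to floating-point error, confirming all four angles satisfy the original equation.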
Two persons of equal weight are hanging by their hands from the ends of a rope hung over a frictionless pulley. They begin to climb the rope. One person can climb at twice the speed of the other with respect to the rope. Who will get to the top first?
Have you got the free body diagram?
Yes I have, but I don't seem to get where the speed factor will come in.
The distance covered is not the same. If one climbs faster, the other guy might also climb a shorter distance.
So will they get to the top together?
:) You'll have to calculate that. They might.
Well, I think I got it without solving. Since there is no external force, momentum of the system is conserved; and since their masses are equal, the rate at which they climb up the rope relative to the ground is equal. Hence they will reach at the same time. Is that fine? Correct me if I am wrong.
Yup, their velocity relative to the ground is equal.
Le Monde puzzle [#4]
A fairly simple puzzle in this weekend's Le Monde magazine: given five points on a line such that their pairwise distances are 1,2,4,…,14,18,20, find the respective positions of the five points over the line and deduce the missing distances. Without loss of generality, we can set the first point at 0 and the fifth point at 20. The three remaining points are in between 1 and 19, but again without loss of generality we can choose the fourth point to be at 18. (Else the second point would be at 2.) Finding the remaining points can be done by the R code

i <- 1
while (i==1){
  x <- c(0, sort(sample(1:17, 2)), 18, 20)  # candidate positions, fourth point fixed at 18
  a <- sort(as.vector(dist(x)))             # the ten pairwise distances
  if ((a[1]==1)&&(a[2]==2)&&(a[3]==4)&&(a[8]==14)) i <- 0
}

Removing 18 from the above takes about the same time! A solution (modulo permutations) is 0, 13, 14, 18, 20.
7 Responses to "Le Monde puzzle [#4]"
1. [...] Sudoku-like puzzle from the weekend edition of Le Monde. The object it starts with is a 9×9 table where each entry is an integer and where neighbours [...]
3. Cool! "while (TRUE)" works too, right?
   Yes indeed. And certainly more elegant.
4. And the symmetric: 0 2 6 7 20.
   Yes this was my meaning of "permutation" but symmetry would have been clearer!!!
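The randomized R search can also be done exhaustively. Here is a Python equivalent (function name illustrative) that fixes the endpoints at 0 and 20 and applies the same four sorted-distance tests as the R code:

```python
from itertools import combinations

def solve():
    """Brute force the puzzle: endpoints fixed at 0 and 20 (WLOG), three
    interior points; keep configurations whose ten sorted pairwise
    distances start with 1, 2, 4 and have 14 in eighth position, the
    same tests applied by the R snippet."""
    found = []
    for mid in combinations(range(1, 20), 3):
        pts = (0,) + mid + (20,)
        d = sorted(abs(p - q) for p, q in combinations(pts, 2))
        if d[:3] == [1, 2, 4] and d[7] == 14:
            found.append(pts)
    return found
```

Only C(19,3) = 969 interior configurations exist, so the exhaustive scan is instantaneous and recovers both the solution 0, 13, 14, 18, 20 and its mirror 0, 2, 6, 7, 20.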
Page:Wittengenstein - Tractatus Logico-Philosophicus, 1922.djvu/135
This page has been proofread, but needs to be validated.
5.474 The number of necessary fundamental operations depends only on our notation.
5.475 It is only a question of constructing a system of signs of a definite number of dimensions — of a definite mathematical multiplicity.
5.476 It is clear that we are not concerned here with a number of primitive ideas which must be signified but with the expression of a rule.
5.5 Every truth-function is a result of the successive application of the operation (- - - - T)($\xi$, . . . .) to elementary propositions. This operation denies all the propositions in the right-hand bracket and I call it the negation of these propositions.
5.501 An expression in brackets whose terms are propositions I indicate — if the order of the terms in the bracket is indifferent — by a sign of the form "$(\bar \xi)$". "$\xi$" is a variable whose values are the terms of the expression in brackets, and the line over the variable indicates that it stands for all its values in the bracket. (Thus if $\xi$ has the 3 values P, Q, R, then $(\bar \xi)$=(P, Q, R).) The values of the variables must be determined. The determination is the description of the propositions which the variable stands for. How the description of the terms of the expression in brackets takes place is unessential. We may distinguish 3 kinds of description: 1. Direct enumeration. In this case we can place simply its constant values instead of the variable. 2. Giving a function "$fx$" whose values for all values of "$x$" are the propositions to be described. 3. Giving a formal law, according to which those propositions are constructed. In this case the
Multiple Choice Identify the letter of the choice that best completes the statement or answers the question. 1. In a regression and correlation analysis if r^2 = 1, then a. SSE = SST b. SSE = 1 c. SSR = SSE d. SSR = SST 2. If the coefficient of determination is a positive value, then the regression equation a. must have a positive slope b. must have a negative slope c. could have either a positive or a negative slope d. must have a positive y intercept 3. If the coefficient of correlation is a positive value, then the slope of the regression line a. must also be positive b. can be either negative or positive c. can be zero d. can not be zero 4. If the coefficient of determination is 0.81, the coefficient of correlation a. is 0.6561 b. could be either + 0.9 or - 0.9 c. must be positive d. must be negative 5. If a data set has SST = 2,000 and SSE = 800, then the coefficient of determination is 6. If the coefficient of correlation is a negative value, then the coefficient of determination a. must also be negative b. must be zero c. can be either negative or positive d. must be positive 7. If the coefficient of determination is 0.9, the percentage of variation in the dependent variable explained by the variation in the independent variable a. is 0.90% b. is 90%. c. is 0.81% d. can be any positive value Exhibit 14-1 The following information regarding a dependent variable (Y) and an independent variable (X) is provided. SSE = 6 SST = 16 8. Refer to Exhibit 14-1. The MSE is Exhibit 14-7 You are given the following information about y and x. y x Dependent Variable Independent Variable 9. Refer to Exhibit 14-7. The least squares estimate of b[1] equals a. -0.7647 b. -0.13 c. 21.4 d. 16.412 Exhibit 14-9 A regression and correlation analysis resulted in the following information regarding a dependent variable (y) and an independent variable (x). n = 10 Sx = 55 Sy = 55 Sx^2 = 385 Sy^2 = 385 Sxy = 220 10. Refer to Exhibit 14-9. The point estimate of y when x = 20 is
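Several of these questions reduce to the decomposition SST = SSR + SSE and the identity r² = SSR/SST = 1 - SSE/SST. A small Python check (the helper name is illustrative), applied to question 5 (SST = 2,000, SSE = 800) and to the data of Exhibit 14-1 (SST = 16, SSE = 6):

```python
def r_squared(sst, sse):
    """Coefficient of determination: r^2 = SSR / SST = 1 - SSE / SST,
    using the decomposition SST = SSR + SSE."""
    return 1.0 - sse / sst

q5 = r_squared(2000, 800)        # question 5: SST = 2,000, SSE = 800
exhibit_14_1 = r_squared(16, 6)  # Exhibit 14-1: SST = 16, SSE = 6
```

This gives r² = 0.6 for question 5 and r² = 0.625 for Exhibit 14-1; the sign of r itself must come from the slope of the regression line, which is why question 4's r² = 0.81 allows r to be either +0.9 or -0.9.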
A.: Reachability analysis of dynamical systems having piecewise-constant derivatives. Theoret Results 1 - 10 of 89 , 2000 "... The purpose of this paper is twofold: (a) to provide a tutorial introduction to some key concepts from the theory of computational complexity, highlighting their relevance to systems and control theory, and (b) to survey the relatively recent research activity lying at the interface between these fi ..." Cited by 116 (21 self) Add to MetaCart The purpose of this paper is twofold: (a) to provide a tutorial introduction to some key concepts from the theory of computational complexity, highlighting their relevance to systems and control theory, and (b) to survey the relatively recent research activity lying at the interface between these fields. We begin with a brief introduction to models of computation, the concepts of undecidability, polynomial time algorithms, NP-completeness, and the implications of intractability results. We then survey a number of problems that arise in systems and control theory, some of them classical, some of them related to current research. We discuss them from the point of view of computational complexity and also point out many open problems. In particular, we consider problems related to stability or stabilizability of linear systems with parametric uncertainty, robust control, time-varying linear systems, nonlinear and hybrid systems, and stochastic optimal control. - IEEE Transactions on Automatic Control , 1999 "... In this pap e we prove in a constructive way, the ee ale b e we e pie- a#ne syste and a broad class of hybridsyste de e d by inte line dynamics, automata, and propositional logic. By focusing our inveon the forme class, we show through countethat obse ability and controllability prope rtie cannot b ..." 
Cited by 93 (14 self) In this paper we prove, in a constructive way, the equivalence between piecewise-affine systems and a broad class of hybrid systems described by interacting linear dynamics, automata, and propositional logic. By focusing our investigation on the former class, we show through counterexamples that observability and controllability properties cannot be easily deduced from those of the composing linear subsystems. Instead, we propose practical numerical tests based on mixed-integer linear programming. Keywords: hybrid systems, controllability, observability, piecewise-linear systems, piecewise-affine systems, mixed-integer linear programming. I. Introduction. In recent years both control and computer science have been attracted by hybrid systems [1], [2], [23], [25], [26], because they provide a unified framework for describing processes evolving according to continuous dynamics, discrete dynamics, and logic rules. The interest is mainly motivated by the large variety of practical situations, for instance real-time systems, where physical processes interact with digital controllers. Several modeling formalisms h... , 2000 "... In this work we suggest a novel methodology for synthesizing switching controllers for continuous and hybrid systems whose dynamics are defined by linear differential equations. We formulate the synthesis problem as finding the conditions upon which a controller should switch the behavior of the sys ..." Cited by 78 (8 self) In this work we suggest a novel methodology for synthesizing switching controllers for continuous and hybrid systems whose dynamics are defined by linear differential equations. We formulate the synthesis problem as finding the conditions upon which a controller should switch the behavior of the system from one "mode" to another in order to avoid a set of bad states, and propose an abstract algorithm which solves the problem by an iterative computation of reachable states.
We have implemented a concrete version of the algorithm, which uses a new approximation scheme for reachability analysis of linear systems. - in P.E. Camurati, H. Eveking (Eds.), Proc. CHARME'95, LNCS 987 , 1995 "... Abstract. In this paper we present a method for modeling asynchronous digital circuits by timed automata. The constructed timed automata serve as\mechanical " and veri able objects for asynchronous sequential machines in the same sense that (untimed) automata do for synchronous machines. These ..." Cited by 59 (14 self) Add to MetaCart Abstract. In this paper we present a method for modeling asynchronous digital circuits by timed automata. The constructed timed automata serve as\mechanical &quot; and veri able objects for asynchronous sequential machines in the same sense that (untimed) automata do for synchronous machines. These results, combined with recent results concerning the analysis and synthesis of timed automata provide for the systematic treatment of a large class of problems that could be treated by conventional simulation methods only in an ad-hoc fashion. The problems that can be solved due to the results presented in this paper include: the reachability analysis of a circuit with uncertainties in gate delays and input arrival times, inferring the necessary timing constraints on input signals that guaranteeaproper functioning of a circuit and calculating the delay characteristics of the components required inorder to meet some given behavioral speci cations. Notwithstanding the existence of negative theoretical results concerning the worst-case complexity of timed automata analysis algorithms, initial experimentation with the Kronos tool for timing analysis suggest that timed automata derived from circuits might not be so hard to analyze in practice. 1 , 1997 "... 
this paper, we consider simple classes of nonlinear systems and prove that basic questions related to their stability and controllability are either undecidable or computationally intractable (NP-hard). As a special case, we consider a class of hybrid systems in which the state space is partitioned into tw ..." Cited by 34 (10 self) In this paper, we consider simple classes of nonlinear systems and prove that basic questions related to their stability and controllability are either undecidable or computationally intractable (NP-hard). As a special case, we consider a class of hybrid systems in which the state space is partitioned into two halfspaces, and the dynamics in each halfspace correspond to a different linear system. - In HSCC’2001, number 2034 in LNCS , 2001 "... Abstract. In this paper we develop an algorithm for solving the reachability problem of two-dimensional piece-wise rectangular differential inclusions. Our procedure is not based on the computation of the reach-set but rather on the computation of the limit of individual trajectories. A key idea is ..." Cited by 33 (13 self) Abstract. In this paper we develop an algorithm for solving the reachability problem of two-dimensional piece-wise rectangular differential inclusions. Our procedure is not based on the computation of the reach-set but rather on the computation of the limit of individual trajectories. A key idea is the use of one-dimensional affine Poincaré maps for which we can easily compute the fixpoints. As a first step, we show that between any two points linked by an arbitrary trajectory there always exists a trajectory without self-crossings. Thus, solving the reachability problem requires considering only those. We prove that, indeed, there are only finitely many "qualitative types" of those trajectories. The last step consists in giving a decision procedure for each of them. These procedures are essentially based on the analysis of the limits of extreme trajectories.
We illustrate our algorithm on a simple model of a swimmer spinning around a whirlpool. 1 - In Hybrid Systems: Computation and Control , 2000 "... In this paper, we formulate the problem of characterizing the stability of a piecewise affin (PWA) system as a verification problem. The basic idea is to take the whole R^n as the set of initial conditions, and check that all the trajectories go to the origin. More precisely, we test for semi-global ..." Cited by 29 (8 self) Add to MetaCart In this paper, we formulate the problem of characterizing the stability of a piecewise affin (PWA) system as a verification problem. The basic idea is to take the whole R^n as the set of initial conditions, and check that all the trajectories go to the origin. More precisely, we test for semi-global stability by restricting the set of initial conditions to an (arbitrarily large) bounded set X(0), and label as "asymptotically stable in T steps" the trajectories that enter an in variant set around the origin within a finite time T ,or as "unstable in T steps" the trajectories which enter a (very large) set X_inst . Subsets of X (0) leadin ton2W of the two previous cases are labeled as "nv classifiable in T steps". The domain of asymptotical stability in T steps is a subset of the domain of attraction ofan equilibrium poin t, an has the practicalmeanca of collectin inPv)v convW2xvP from which the settlin time of the system is smaller than T . In addition it can be computed algorithmically i... - IEEE Transactions on Software Engineering , 1999 "... The purpose of this paper is to describe a method for simulation of recently introduced fluid stochastic Petri nets. Since such nets result in rather complex set of partial differential equations, numerical solution becomes a formidable task. Because of a mixed, discrete and continuous state space, ..." 
Cited by 29 (6 self) Add to MetaCart The purpose of this paper is to describe a method for simulation of recently introduced fluid stochastic Petri nets. Since such nets result in rather complex set of partial differential equations, numerical solution becomes a formidable task. Because of a mixed, discrete and continuous state space, simulative solution also poses some interesting challenges, which are addressed in the paper. 1 - in Proceedings of the 39th IEEE Conference on Decision and Control , 2000 "... In this paper we propose a procedure for synthesizing piecewise linear optimal controllers for hybrid systems and investigate conditions for closed-loop stability. Hybrid systems are modeled in discrete-time within the mixed logical dynamical (MLD) framework[8], or, equivalently [7], as piecewise af ..." Cited by 28 (7 self) Add to MetaCart In this paper we propose a procedure for synthesizing piecewise linear optimal controllers for hybrid systems and investigate conditions for closed-loop stability. Hybrid systems are modeled in discrete-time within the mixed logical dynamical (MLD) framework[8], or, equivalently [7], as piecewise affine (PWA) systems. A stabilizing controller is obtained by designing a model predictive controller (MPC), which is based on the minimization of a weighted 1/∞-norm of the tracking error and the input trajectories over a finite horizon. The control law is obtained by solving a mixed-integer linear program (MILP) which depends on the current state. Although efficient branch and bound algorithms exist to solve MILPs, these are known to be NP-hard problems, which may prevent their on-line solution if the sampling-time is too small for the available computation power. Rather than solving the MILP on line, in this paper we propose a different approach where all the computation is moved off line, by solving a multiparametric MILP (mp-MILP). 
As the resulting control law is piecewise affine, on-line computation is drastically reduced to a simple linear function evaluation. An example of piecewise linear optimal control of the heat exchange system [16] shows the potential of the method.
Rapid mixing of Gibbs sampling on graphs that are sparse on average
Seminar Room 1, Newton Institute
We overcome an obstacle of most techniques for analysing the mixing time of the Glauber dynamics: they are stated in terms of the maximal degree, and are therefore insufficient for Erdős-Rényi random graphs, where the maximum degree grows as order (log n)/(log log n). We show that for most natural models defined on G(n,d/n), if the "temperature" is high enough (as a function of d only) then the mixing time of the Glauber dynamics is polynomial. This proves in particular a conjecture of Dyer et al. by establishing rapid mixing of random colourings on G(n,d/n) with a constant number of colours.
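The Glauber dynamics discussed in the abstract is easy to state concretely. Below is a minimal sketch (not from the talk; the function names, the seed, and the choice of q are my own) of heat-bath Glauber dynamics for proper q-colourings of an Erdős-Rényi graph G(n, d/n): at each step a uniformly random vertex is recoloured with a uniform colour not used by its neighbours.

```python
import random

def erdos_renyi(n, p, rng):
    """Sample G(n, p) as an adjacency list."""
    adj = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                adj[i].append(j)
                adj[j].append(i)
    return adj

def greedy_colouring(adj, q):
    """Initial proper colouring; succeeds whenever q > max degree."""
    col = [0] * len(adj)
    for v in range(len(adj)):
        used = {col[u] for u in adj[v] if u < v}
        col[v] = next(c for c in range(q) if c not in used)
    return col

def glauber_step(adj, col, q, rng):
    """Heat-bath update: recolour a random vertex uniformly among
    the colours not currently used by its neighbours."""
    v = rng.randrange(len(adj))
    used = {col[u] for u in adj[v]}
    allowed = [c for c in range(q) if c not in used]
    col[v] = rng.choice(allowed)   # non-empty since q > max degree

rng = random.Random(0)
n, d = 200, 3
adj = erdos_renyi(n, d / n, rng)
q = max(len(nb) for nb in adj) + 2   # enough colours for the chain to move
col = greedy_colouring(adj, q)
for _ in range(10 * n):
    glauber_step(adj, col, q, rng)
# every update preserves properness, so the chain stays a proper colouring
assert all(col[u] != col[v] for u in range(n) for v in adj[u])
```

The talk's result concerns how many such steps are needed before `col` is close to a uniform proper colouring; the sketch only shows the chain itself.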
Math Forum: Ask Dr. Math FAQ: Grazing Animals 1. A donkey is attached by a rope to a point on the perimeter of a circular field. How long should the rope be in terms of the radius of the field so that the donkey can reach exactly half the field and eat half the grass? From: Doctor Anthony Subject: Donkey in Field This problem comes up from time to time in the guise of the length of the rope required to tether a goat (instead of a donkey) on the boundary of a circular field such that the goat can eat exactly half the grass in the field. Here is a diagram to refer to while we go through the working: Draw a circle with suitable radius r. Now take a point C on the circumference and with a slightly larger radius R draw an arc of a circle to cut the first circle in points A and B. Join AC and BC. Let O be the centre of the first circle of radius r. Let angle OCA = x (radians). This will also be equal to angle OCB. The area we require is made up of a sector of a circle radius R with angle 2x at the centre, C, of this circle, plus two small segments of the first circle of radius r cut off by the chords AC and BC. The area of the sector of circle R is (1/2)R^2*2x = R^2*x The area of the two segments = 2[(1/2)r^2(pi-2x) - (1/2)r^2sin(pi-2x)] = r^2[pi - 2x - sin(2x)] We also have R = 2rcos(x) so R^2*x = 4r^2*x*cos^2(x) We add the two elements of area and equate to (1/2)pi*r^2 4r^2*x*cos^2(x) + r^2[pi-2x-sin(2x)] = (1/2)pi*r^2 divide out r^2 4x*cos^2(x) + pi - 2x - sin(2x) = (1/2)pi 4x*cos^2(x) + (1/2)pi - 2x - sin(2x) = 0 We must solve this for x and we can then find R/r from R/r = 2cos(x) Newton-Raphson is a suitable method for solving this equation, using a starting value for x at about 0.7 radians. The solution I get is x = 0.95284786466 and from this cos(x) = 0.579364236509 and so finally R/r = 2cos(x) = 1.15872847 - Doctor Anthony, The Math Forum 2. A cow is tethered in a field using a 50-ft. rope tied to one corner of the outside of a 20 ft. by 10 ft. barn. 
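Doctor Anthony's final equation for the donkey problem can be checked numerically. A small sketch (using bisection instead of Newton-Raphson, to avoid writing out the derivative; the bracket [0.7, 1.2] is an assumption based on his suggested starting value of 0.7):

```python
import math

def f(x):
    # 4x*cos^2(x) + pi/2 - 2x - sin(2x) = 0, from equating the grazed
    # area to half the field as derived above
    return 4 * x * math.cos(x) ** 2 + math.pi / 2 - 2 * x - math.sin(2 * x)

def bisect(g, lo, hi, tol=1e-12):
    # g changes sign on [lo, hi]; halve the bracket until it is tiny
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(lo) * g(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

x = bisect(f, 0.7, 1.2)
print(x)                 # 0.95284786466...
print(2 * math.cos(x))   # R/r = 1.15872847...
```

This reproduces the values quoted in the answer: x ≈ 0.95284786466 and R/r = 2cos(x) ≈ 1.15872847.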
What is the total area that the cow is capable of grazing? From: Dr. Ken Subject: Grazing Cow I have made a sketch of this problem using The Geometer's Sketchpad. [The sketch of the barn does not reproduce here.] Basically, the shape of the grazing area is circular with a little chunk out of it. With the barn drawn as in my sketch, the chunk that's missing from the grazing circle is in the lower left-hand corner. The right half of the grazing area, then, is just the area of the semicircle, which is 1250π. To find the area of the remaining grazing area (including the area of the barn, which you subtract later), you use integration. Do you see how the grazing area in the lower left-hand corner is the intersection of two circles? You need to find that intersection point, integrate to find the area of the grazing area to the left of the intersection point, and then do the same for the area to the right of the intersection point. - Doctor Ken, The Math Forum 3. We can generalize this problem slightly by allowing the cow to be tethered at any point along the longer wall of the barn. From: Dr. Peterson Subject: Grazing Cow Let's say that the cow is x feet from one end of the longer wall of the barn and y = 20-x feet from the other. The heavy curve in the figure shows the maximum extent of the cow's range, which consists of several arcs of circles. First there is an arc of radius 50 centered at the tether point P; then when the rope wraps around either corner of the barn, it follows an arc of radius 50-x or 50-y feet respectively, centered at that corner (A or D); when it wraps around another corner (B or C) the radius becomes 50-x-10 or 50-y-10 respectively. We don't need to worry about all these arcs, because the last few overlap and do not add any area to the region that can be grazed. (You'll have to be careful to determine which arcs intersect for a given location of P.)
The total area consists of:
□ a semicircle (red) with radius 50;
□ a quarter-circle (magenta) with radius 50-x;
□ two sectors (yellow) with radii 50-y and 40-x that meet at point Q where two circles intersect; and
□ a quadrilateral BCDQ (green) determined by the intersection point Q and two walls of the barn.
The hard part is to find where the circles intersect, so we can get the angles of the yellow sectors. The area of the quadrilateral is easy, if we think of it as the difference between the triangle BQD determined by the intersection point and the diagonal of the barn, and the right triangle BCD forming half of the barn itself. You can use Heron's formula to find the area of BQD. So how do we find where the circles intersect? Applying the Law of Cosines to triangle BQD, we can find angles BDQ and DBQ; and since we know angles ADB and ABD, we can determine the angles we need for the two sectors. There's some work left to do, but it's all doable, and it will be easier if x is a constant rather than a variable. Finally, if the rope is short enough (less than half the perimeter of the barn), the problem simplifies immensely. With no overlap to consider, the area is merely the sum of half- and quarter-circles.
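For that simple short-rope case, the area can be written out directly. A minimal sketch (the function and its guard condition are mine, not Dr. Peterson's; it assumes the rope is also too short to wrap past the second corners B and C, i.e. rope ≤ 10 + min(x, y)):

```python
import math

def short_rope_area(rope, x, long_wall=20.0, short_wall=10.0):
    """Grazing area for a rope tethered x feet along the 20-ft wall,
    when the rope cannot wrap past corners B or C: a semicircle of
    radius `rope` plus a quarter-circle around each near corner."""
    y = long_wall - x
    assert rope <= short_wall + min(x, y), "rope reaches a second corner"
    return (0.5 * math.pi * rope ** 2
            + 0.25 * math.pi * max(rope - x, 0.0) ** 2
            + 0.25 * math.pi * max(rope - y, 0.0) ** 2)

# 15-ft rope tethered at the midpoint (x = y = 10):
# 112.5*pi + 6.25*pi + 6.25*pi = 125*pi
print(short_rope_area(15, 10))   # 125*pi, about 392.699
```

The `max(..., 0.0)` terms simply drop a quarter-circle when the rope does not reach that corner at all.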
Why is c the symbol for the speed of light? Updated by PG 2004. Original by Philip Gibbs 1997. "As for c, that is the speed of light in vacuum, and if you ask why c, the answer is that it is the initial letter of celeritas, the Latin word meaning speed." - Isaac Asimov in "C for Celeritas" (1959) [1] A Short Answer Although c is now the universal symbol for the speed of light, the most common symbol in the nineteenth century was an upper-case V which Maxwell had started using in 1865. That was the notation adopted by Einstein for his first few papers on relativity from 1905. The origins of the letter c being used for the speed of light can be traced back to a paper of 1856 by Weber and Kohlrausch [2]. They defined and measured a quantity denoted by c that they used in an electrodynamics force law equation. It became known as Weber's constant and was later shown to have a theoretical value equal to the speed of light times the square root of two. In 1894 Paul Drude modified the usage of Weber's constant so that the letter c became the symbol for the speed of electrodynamic waves [3]. In optics Drude continued to follow Maxwell in using an upper-case V for the speed of light. Progressively the c notation was used for the speed of light in all contexts as it was picked up by Max Planck, Hendrik Lorentz and other influential physicists. By 1907 when Einstein switched from V to c in his papers, it had become the standard symbol for the speed of light in vacuum for electrodynamics, optics, thermodynamics and relativity. Weber apparently meant c to stand for "constant" in his force law, but there is evidence that physicists such as Lorentz and Einstein were accustomed to a common convention that c could be used as a variable for velocity. This usage can be traced back to the classic Latin texts in which c stood for "celeritas" meaning "speed".
The uncommon English word "celerity" is still used when referring to the speed of wave propagation in fluids. The same Latin root is found in more familiar words such as acceleration and even celebrity, a word used when fame comes quickly. Although the c symbol was adapted from Weber's constant, it was probably thought appropriate for it to represent the velocity of light later on because of this Latin interpretation. So history provides an ambiguous answer to the question "Why is c the symbol for the speed of light?", and it is reasonable to think of c as standing for either "constant" or "celeritas". The Long Answer In 1992 Scott Chase wrote on sci.physics that "anyone who read hundreds of books by Isaac Asimov knows that the Latin word for `speed' is `celeritas', hence the symbol `c' for the speed of light". Asimov had written an article entitled "C for Celeritas" in a sci-fi magazine in 1959 and had reprinted it in some of his later books [1]. Scott was the first editor of the Physics FAQ on Usenet and Asimov's explanation was later included in the relativity section as the "probable" answer to the question "Why is c the symbol for the speed of light?". Since then, Asimov's answer has become a factoid repeated in many articles and books. But if you go back and read his essay you discover that Asimov merely stated his case in one sentence, and made no further attempt to justify his theory for the origin of the "c" notation. So is his claim really born out by history, or was c originally introduced as a variable standing for something else? The special theory of relativity is based on the principle that the speed of light is constant; so did c stand for "constant", or did it simply appear by accident in some text where all the other likely variables for speed had already been used up? These questions have been asked repeatedly on usenet, and now after much searching through old papers and books the answers can be revealed. 
A lower-case c has been consistently used to denote the speed of light in textbooks on relativity almost without exception since such books started to be written. For example, the notation was used in the earliest books on relativity by Lorentz (1909) [4], Carmichael (1913) [5], Silberstein (1914) [6], Cunningham (1915) [7], and Tolman (1917) [8]. That was not the case just a few years before. In his earliest papers on relativity from 1905—1907 Einstein began by using an upper-case V for the speed of light [9]. At that time he was also writing papers about the thermodynamics of radiation, and in those he used an upper-case L [10]. All of these papers appeared in volumes of the German periodical Annalen Der Physik. Einstein's notation changed suddenly in 1907 in a paper for the journal Jahrbuch der Radioaktivität und Elektronik [11]. There he used the lower case c, and his most famous equation E = mc^2 came into being. It is not difficult to find where the upper case V had come from. Maxwell used it extensively in his publications on electrodynamics from as early as 1865 [12]. It was the principal symbol for the speed of light in his 1873 treatise on electrodynamics [13]. By the 1890s Maxwell's book was in wide circulation around the world and there were translations available in French and German. It is no surprise then that the upper-case V is found in use in such papers as the 1887 report of Michelson and Morley on their attempt to find seasonal variations in the speed of light [14]. That was written in the United States, but the same notation was also found across Europe, from papers by Oliver Lodge [15] and Joseph Larmor [16] in England, to the lecture notes of Poincaré in France [17], and the textbooks of Paul Drude in Germany [18] and Lorentz in the Netherlands [19]. Einstein's education at the Polytechnik in Zurich had not covered Maxwell's theory of Electrodynamics in the detail he would have liked.
But he had read a number of extra textbooks on the new Electrodynamics as self study, so he would have been familiar with the standard notations. From 1905 he wrote his first papers on relativity, and there is nothing extraordinary in his choice of the symbol V for the speed of light [9]. Why then, did he change it to c in 1907? At that time he still worked as a clerk in the Bern patent office, but for the previous two years he had been in regular correspondence with eminent physicists such as Max Laue, Max Planck, Wilhelm Wien and Johannes Stark. Stark was the editor of the Jahrbuch, and had asked Einstein to write the article in which he was to first use the letter c. Einstein mentioned to Stark that it was hard for him to find the time to read published scientific articles in order to acquaint himself with all the work others have done in the field, but he had seen papers by Lorentz, Cohn, Mosengeil and Planck [20]. Lorentz and Planck in particular had been using c for the speed of light in their work. Lorentz had won the 1902 Nobel prize for physics, and it is not surprising that physicists in Germany had now taken up the same notation. It is also not surprising that Einstein, who was looking for an academic position, aligned himself to the same conventions at that time. Another reason for him to make the switch was that the letter c is simply more practical. The upper-case V would have been easily confused with the lower case v appearing in the equations of relativity for the velocity of moving bodies or frames of reference. Einstein must have found this confusion inconvenient, especially in his handwritten notes. Looking back at papers of the late 1890s, we find that Max Planck and Paul Drude in particular were using the symbol c at that time. The name of Drude is less well known to us today. He worked on relations between the physical constants and high precision measurements of their value. These were considered to be highly worthy pursuits of the time.
Drude had been a student of Voigt, who himself had used a Greek ω for the speed of light when he wrote down an almost complete form of the Lorentz transformations in 1887 [43]. Voigt's ω was later used by a few other physicists [44, 45], but Drude did not use his teacher's notation. Drude first used the symbol c in 1894, and in doing so he referenced a paper by Kirchhoff [3]. As already mentioned, Paul Drude also used V. In fact he made a distinction of using V in the theory of optics for the directly-measured speed of light in vacuum, whereas he used c for the electromagnetic constant that was the theoretical speed of electromagnetic waves. This is seen especially clearly in his book "Theory of Optics" of 1900 [21], which is divided into two parts with V used in the first and c in the second part. Although Maxwell's theory of light predicted that they had the same value, it was only with the theory of relativity that these two things were established as fundamentally the same constant. Other notations vied against Drude's and Maxwell's for acceptance. Herglotz [46] opted for an elaborate script B, while Himstedt [47], Helmholtz [48] and Hertz [49] wrote the equations of electrodynamics with the letter A for the reciprocal of the speed of light. In 1899 Planck backed Drude by using c, when he wrote a paper introducing what we now call the Planck scale of units based on the constants of electrodynamics, quantum theory and gravity [22]. Drude and Planck were both editors of the prestigious journal Annalen Der Physik, so they would have had regular contact with most of the physicists of central Europe. Lorentz was next to change notation. When he started writing about light speed in 1887 he used an upper case A [23], but then switched to Maxwell's upper case V [24]. He wrote a book in 1895 [25] that contained the equations for length contraction, and was cited by Einstein in his 1907 paper. While Drude had started to use c, Lorentz was still using V in this book. 
He continued to use V until 1899 [26], but by 1903 when he wrote an encyclopedia article on electrodynamics [27] he too used c. Max Abraham was another early user of the symbol c in 1902, in a paper that was seen by Einstein [28]. From Drude's original influence, followed by Planck and Lorentz, by 1907 the c symbol had become the prevailing notation in Germanic science and it made perfect sense for Einstein to adopt it. In France and England the electromagnetic constant was symbolised by a lower case v rather than Drude's c. This was directly due to Maxwell, who wrote up a table of experimental results for direct measurements of the speed of light on the one hand and electromagnetic experiments on the other. He used V for the former and v for the latter. Maxwell described a whole suite of possible experiments in electromagnetism to determine v. Those that had not already been done were performed one after the other in England and France over the three decades that followed [29]. In this context, lower case v was always used for the quantity measured. But using v was doomed to pass away once authors had to write relativistic equations involving moving bodies, because v was just too common a symbol for velocity. The equations were much clearer when something more distinct was used for the velocity of light to differentiate it from the velocity of moving bodies. While Maxwell always used v in this way, he also had a minor use for the symbol c in his widely read treatise of 1873. Near the end he included a section about the German electromagnetic theory that had been an incomplete precursor to his own formulation [30]. This theory, expounded by Gauss, Neumann, Weber, and Kirchhoff, attempted to combine the laws of Coulomb and Ampère into a single action-at-a-distance force law. The first versions appeared in Gauss's notes in 1835 [31], and the complete form was published by Weber in 1846 [32].
Many physicists of the time were heavily involved in the process of defining the units of electricity. Coulomb's law of electrostatic force could be used to give one definition of the unit of charge while Ampère's force law for currents in wires gave another. The ratio between these units had the dimension of a velocity, so it became of great practical importance to measure its value. In 1856 Weber and Kohlrausch published the first accurate measurement [2]. To give a theoretical backing they rewrote Weber's force law in terms of the measured constant and used the symbol c. This c appeared in numerous subsequent papers by German physicists such as Kirchhoff, Clausius, Himstedt, and Helmholtz, who referred to it as "Weber's constant". That continued until the 1870s, when Helmholtz discredited Weber's force law on the grounds of energy conservation, and Maxwell's more complete theory of propagating waves prevailed. Two papers using Weber's force law are of particular note. One by Kirchhoff [33] and another by Riemann [34] related Weber's constant to the velocity at which electricity propagated. They found this speed to be Weber's constant divided by the square root of two and it was very close to the measured speed of light. It was already known from experiments by Faraday that light was affected by magnetic fields, so there was already much speculation that light could be an electrodynamic phenomenon. This was the inspiration for Maxwell's work on electrodynamics, so it is natural that he finally included a discussion of the force law in his treatise [30]. The odd thing is that when Maxwell wrote down the force law, he changed the variable c so that it was smaller than Weber's constant by a factor of the square root of two. 
So Maxwell was probably the first to use c for a value equal to the speed of light, although he defined it as the speed of electricity through wires. So c was used as Weber's constant having a value of the speed of light times the square root of two, and this can be related to the later use of c for the speed of light itself. Firstly, when Maxwell wrote Weber's force law in his treatise in 1873, he modified the scale of c in the equation so that it reduced by a factor of the square root of two. Secondly, when Drude first used c in 1894 for the speed of light [3], the paper by Kirchhoff that he cited [35] was using c for Weber's constant, so Drude had made the same adjustment as Maxwell. It is impossible to say if Drude copied the notation from Maxwell, but he did go one step further in explicitly naming his c as the velocity of electrodynamic waves which by Maxwell's theory was also the speed of light. He seems to have been the first to do so, with Lorentz, Planck, and others following suit a few years later. So to understand why c became the symbol for the speed of light we now have to find out why Weber used it in his force law. In the paper of 1856 [2] Weber's constant was introduced with these words "and the constant c represents that relative speed, that the electrical masses e and e must have and keep, if they are not to affect each other." So it appears that c originated as a letter standing for "constant" rather than "celeritas". However, it had nothing to do with the constancy of the speed of light until much later. Despite this, there could still be some substance to Asimov's claim that c is the initial letter of "celeritas". It is true, after all, that c is also often used for the speed of sound, and it is commonly used as the velocity constant in the wave equation. Furthermore, this usage was around before relativity.
Starting with the Latin manuscripts of the 17th century, such as Galileo's "De Motu Antiquiora" or Newton's "Principia", we find that they often use the word "celeritas" for speed. However, their writing style was very geometric and descriptive. They did not tend to write down formulae where speed is given a symbol. But an example of the letter c being used for speed can be found from the eighteenth century. In 1716 Jacob Hermann published a Latin text called Phoronomia, meaning the science of motion [36]. In it he developed Newton's mechanics in a form more familiar to us now, except for the Latin symbols. His version of the basic Newtonian equation F = ma was dc = p dt, where c stands for "celeritas" meaning speed, and p stands for "potentia", meaning force. Apart from in relativity, the most pervasive use of c to represent a speed today is in the wave equation. In 1747 Jean d'Alembert made a mathematical study of the vibrating string and discovered the one dimensional wave equation, but he wrote it without the velocity constant. Euler generalised d'Alembert's equation to include the velocity, denoting it by the letter a [38]. The general solution is y = f(x - at) + g(x + at), representing two waves of fixed shape travelling in opposite directions with velocity a. Euler was one of the most prolific mathematicians of all time. He wrote hundreds of manuscripts and most of them were in Latin. If anyone established a convention for using c for "celeritas", it has to have been Euler. In 1759 he studied the vibrations of a drum, and moved on to the 2-dimensional wave equation. This he wrote in the form we are looking for with c now the velocity constant [39]. The wave equation became a subject of much discussion, being investigated by all the great mathematicians of the époque including Lagrange, Fourier, Laplace, and Bernoulli. Through their works, Euler's form of the wave equation with c for the speed of wave propagation was carved in stone for good.
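The d'Alembert form mentioned above is easy to verify numerically: any u(x,t) = f(x - at) + g(x + at) should satisfy the wave equation u_tt = a^2 u_xx. A quick finite-difference check, with f = sin and g = cos chosen arbitrarily as my example (not Euler's):

```python
import math

a = 2.0   # wave speed, in Euler's notation the letter a

def u(x, t):
    # travelling-wave form f(x - at) + g(x + at), here with f = sin, g = cos
    return math.sin(x - a * t) + math.cos(x + a * t)

# second derivatives by central differences at an arbitrary point
h = 1e-4
x0, t0 = 0.3, 0.7
u_tt = (u(x0, t0 + h) - 2 * u(x0, t0) + u(x0, t0 - h)) / h**2
u_xx = (u(x0 + h, t0) - 2 * u(x0, t0) + u(x0 - h, t0)) / h**2
print(abs(u_tt - a**2 * u_xx))   # small: the wave equation holds
```

The residual is tiny (limited only by the finite-difference step), illustrating why one symbol for the wave speed, whether a or c, is all the equation needs.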
To a first approximation, sound waves are also governed by the same wave equation in three dimensions, so it is not surprising that the speed of sound also came to be denoted by the symbol c. This predates relativity and can be found, for example, in Lord Rayleigh's classic text "Theory of Sound" [40]. Physicists of the nineteenth century would have read the classic Latin texts on physics, and would have been aware that c could stand for "celeritas". As an example, Lorentz used c in 1899 for the speed of the Earth through the ether [41]. We even know that Einstein used it for speed outside relativity, because in a letter to a friend about a patent for a flying machine, he used c for the speed of air flowing at a mere 4.9 m/s [42]. In conclusion, although we can trace c back to Weber's force law where it most likely stood for "constant", it is possible that its use persisted because c could stand for "celeritas" and had therefore become a conventional symbol for speed. We cannot tell for sure how Drude, Lorentz, Planck or Einstein thought about their notation, so there can be no definitive answer for what it stood for then. The only logical answer is that when you use the symbol c, it stands for whatever possibility you prefer. [1] Isaac Asimov "C for Celeritas" in "The Magazine of Fantasy and Science Fiction", Nov-59 (1959), reprinted in "Of Time, Space, and Other Things", Discus (1975), and "Asimov On Physics", Doubleday (1976) [2] R. Kohlrausch and W.E. Weber, "Ueber die Elektricitätsmenge, welche bei galvanischen Strömen durch den Querschnitt der Kette fliesst", Annalen der Physik, 99, pg 10 (1856) [3] P. Drude, "Zum Studium des elektrischen Resonators", Göttingen Nachrichten (1894), pgs 189—223 [4] H.A. Lorentz, "The theory of Electrons and its applications to the phenomena of light and radiant heat". A course of lectures delivered in Columbia University, New York, in March and April 1906, Leiden (1909) [5] R.D. 
Carmichael, "The Theory of Relativity", John Wiley & Sons (1913) [6] L. Silberstein, "The Theory of Relativity", Macmillan (1914) [7] E. Cunningham, "The Principle of Relativity", Cambridge University Press (1914) [8] R.C. Tolman, "The Theory of the Relativity of Motion", University of California Press (1917) [9] A. Einstein, From "The Collected Papers, Vol 2, The Swiss Years: Writings, 1900—1909", English Translation, he wrote five papers using V, e.g. "On the Electrodynamics of Moving Bodies", Annalen Der Physik 17, pgs 891—921 (1905), "On the Inertia of Energy Required by the Relativity Principle", Annalen Der Physik 23, pgs 371—384 (1907) [10] A. Einstein, e.g. "On the Theory of Light Production and Light Absorption", Annalen Der Physik, 20, pgs 199—206 (1906) [11] A. Einstein, "On the Relativity Principle and the Conclusions Drawn From It", Jahrbuch der Radioaktivität und Elektronik 4, pgs 411—462 (1907) [12] J. Clerk Maxwell, "A dynamical theory of the electromagnetic field", Philos. Trans. Roy. Soc. 155, pgs 459—512 (1865). Abstract: Proceedings of the Royal Society of London 13, pgs 531—536 [13] J. Clerk Maxwell, "A Treatise on Electricity and Magnetism", Oxford Clarendon Press (1873) [14] A.A. Michelson and E.W. Morley, "On the Relative Motion of the Earth and the Luminiferous Ether", Amer. J. Sci. 34, pgs 333—345 (1887), Philos. Mag. 24, pgs 449—463 (1887) [15] O. Lodge, "Aberration Problems", Phil. Trans. Roy. Soc. 184, pgs 729—804 (1893) [16] J. Larmor, "A Dynamical Theory of the Electric and Luminiferous Medium I", Phil. Trans. Roy. Soc. 185, pgs 719—822 (1894) [17] H. Poincaré, "Cours de physique mathématique. Electricité et optique. La lumière et les théories électrodynamiques" (1900) [18] P. Drude, "Physik des Äthers auf elektromagnetischer Grundlage", Verlag F. Enke, Stuttgart (1894) [19] H. Lorentz, "Versuch einer Theorie der elektrischen und optischen Erscheinungen in bewegten Körpern", Leiden (1895) [20] A. 
Einstein, from "The Collected Papers, Vol 5, The Swiss Years: Correspondence, 1902—1914", English Translation, Doc 58.
[21] P. Drude, "The theory of optics", translated from German by C.R. Mann and R.A. Millikan, New York, Longmans, Green, and Co. (1902)
[22] M. Planck, "Über irreversible Strahlungsvorgänge", Verl. d. Kgl. Akad. d. Wiss. (1899)
[23] H.A. Lorentz, "De l'Influence du Mouvement de la Terre sur les Phenomenes Lumineux", Arch. Neerl. 21, pg 103 (1887)
[24] H.A. Lorentz, "On the Reflection of Light by Moving Bodies", Versl. Kon. Akad. Wetensch Amsterdam I, 74 (1892)
[25] H.A. Lorentz, "Versuch einer Theorie der elektrischen und optischen Erscheinungen in bewegten Körpern", Leiden (1895)
[26] H.A. Lorentz, "Théorie simplifiée des phenomènes electriques et optiques dans des corps en mouvement", Proc. Roy. Acad. Amsterdam I 427 (1899)
[27] H.A. Lorentz, "Maxwells elektromagnetische Theorie", Encyclopädie der Mathematischen Wissenschaften. Leipzig, Teubner (1903)
[28] M. Abraham, "Prinzipien der Dynamik des Elektrons", Annalen der Physik 10, pgs 105—179 (1903)
[29] e.g. J.J. Thomson and G.F.C. Searle, "A Determination of `v', the Ratio of the Electromagnetic Unit of Electricity to the Electrostatic Unit", Proc. Roy. Soc. Lond. 181, pg 583 (1890), M. Hurmuzescu, "Nouvelle determination du rapport v entre les unites electrostatiques et electromagnetiques", Ann. de Chim. et de Phys., 7a serie T. X April 1897, pg 433 (1897)
[30] J. Clerk Maxwell, "A Treatise on Electricity and Magnetism", Oxford Clarendon Press, Vol II; Chapter 23, section 849 (1873)
[31] K.F. Gauss, "Zur mathematischen Theorie der elektrodynamischen Wirkung" (1835), in "Werke", Göttingen 1867; Vol. V, pg 602
[32] W. Weber, "Elektrodynamische Maassbestimmingen über ein allgemeines Grundgesetz der elektrischen Wirkung", Abh. Leibnizens Ges., Leipzig (1846)
[33] G. Kirchhoff, "Ueber die Bewegung der Elektricität in Leitern", Ann. Phys. Chem. 102, 529—544 (1857)
[34] G.F.B.
Riemann, "Ein Beitrag zur Elektrodynamik", Annalen der Physik und Chemie, pg 131 (1867)
[35] G. Kirchhoff, "Zur Theorie der Entladung einer Leydener Flasche", Pogg. Ann. 121 (1864)
[36] J. Hermann, "Phoronomia", Amsterdam, Wetsten (1716)
[37] J. d'Alembert, "Recherches sur les cordes vibrantes", L'Académie Royal des Sciences (1747)
[38] L. Euler, "De La Propagation Du Son", Memoires de l'acadamie des sciences de Berlin [15] (1759), 1766, pgs 185—209, in "Opera physica miscellanea epistolae. Volumen primum", pg 432
[39] L. Euler, "Eclaircissemens Plus Detailles Sur La Generation et La Propagation Du Son Et Sur La Formation De L'Echo", Memoires de l'acadamie des sciences de Berlin [21] (1765), 1767, pgs 335—363, in "Opera physica miscellanea epistolae. Volumen primum", pg 540
[40] J.W. Strutt, "Theory of Sound", Vol 1, pg 251, McMillan and Co. (1877)
[41] H.A. Lorentz, "Stokes' Theory of Aberration in the Supposition of a Variable Density of the Aether", Proc. Roy. Acad. Amsterdam I, pg 443 (1899)
[42] A. Einstein, "The Collected Papers, Vol 5, The Swiss Years: Correspondence, 1902—1914", English Translation, Doc 86 (1907)
[43] W. Voigt, "Ueber das Doppler'sche Princip", Goett. Nachr. 2, pg 41 (1887)
[44] E. Cohn, "Zur Elektrodynamik bewegter Systeme. II", Sitzungsberichte der Königlich Preussischen Akademie der Wissenschaften zu Berlin, der physikalisch-mathematischen Classe (1904)
[45] M. Brillouin, "Le mouvement de la Terre et la vitesse de la lumière", comptes rendu 140, pg 1674 (1905)
[46] G. Herglotz, "Zur Elektronentheorie", Nachrichten von der Gesellschaft 6, pg 357 (1903)
[47] F. Himstedt, "Ueber die Schwingungen eines Magneten unter dem dämpfenden Einfluß einer Kupferkugel", Nachrichten von der Gesellschaft 11, pg 308 (1875)
[48] H. Helmholtz, Berlin: Verl. d. Kgl. Akad. d. Wiss. (1892)
[49] H. Hertz, "Electric Waves", Macmillan (1893)
A Domain Equation for Bisimulation

PROCEEDINGS OF THE REX WORKSHOP ON SEMANTICS: FOUNDATIONS AND APPLICATIONS, VOLUME 666 OF LECTURE NOTES IN COMPUTER SCIENCE, 1998. Cited by 48 (10 self). Canonical solutions of domain equations are shown to be final coalgebras, not only in a category of non-standard sets (as already known), but also in categories of metric spaces and partial orders. Coalgebras are simple categorical structures generalizing the notion of post-fixed point. They are also used here for giving a new comprehensive presentation of the (still) non-standard theory of non-well-founded sets (as non-standard sets are usually called). This paper is meant to provide a basis to a more general project aiming at a full exploitation of the finality of the domains in the semantics of programming languages --- concurrent ones among them. Such a final semantics enjoys uniformity and generality. For instance, semantic observational equivalences like bisimulation can be derived as instances of a single `coalgebraic' definition (introduced elsewhere), which is parametric in the functor appearing in the domain equation. Some properties of this general form of equivalence are also studied in this paper.

1999. Cited by 45 (19 self). In this dissertation we investigate presheaf models for concurrent computation.
Our aim is to provide a systematic treatment of bisimulation for a wide range of concurrent process calculi. Bisimilarity is defined abstractly in terms of open maps as in the work of Joyal, Nielsen and Winskel. Their work inspired this thesis by suggesting that presheaf categories could provide abstract models for concurrency with a built-in notion of bisimulation. We show how ...

THEORETICAL COMPUTER SCIENCE, 1992. Cited by 40 (3 self). This paper establishes a new property of predomains recursively defined using the cartesian product, disjoint union, partial function space and convex powerdomain constructors. We prove that the partial order on such a recursive predomain D is the greatest fixed point of a certain monotone operator associated to D. This provides a structurally defined family of proof principles for these recursive predomains: to show that one element of D approximates another, it suffices to find a binary relation containing the two elements that is a post-fixed point for the associated monotone operator. The statement of the proof principles is independent of any of the various methods available for explicit construction of recursive predomains. Following Milner and Tofte [10], the method of proof is called co-induction. It closely resembles the way bisimulations are used in concurrent process calculi [9]. Two specific instances of the co-induction principle already occur in work of Abramsky [2, 1] in the form of `internal full abstraction' theorems for denotational semantics of SCCS and the lazy lambda calculus.
In the first case post-fixed binary relations are precisely Abramsky's partial bisimulations, whereas in the second case they are his applicative bisimulations. The co-induction principle also provides an apparently useful tool for reasoning about equality of elements of recursively defined datatypes in (strict or lazy) higher order functional programming languages.

1996. Cited by 37 (3 self). This paper provides foundations for a reasoning principle (coinduction) for establishing the equality of potentially infinite elements of self-referencing (or circular) data types. As is well known, such data types not only form the core of the denotational approach to the semantics of programming languages [SS71], but also arise explicitly as recursive data types in functional programming languages like Standard ML [MTH90] or Haskell [HPJW92]. In the latter context, the coinduction principle provides a powerful technique for establishing the equality of programs with values in recursive data types (see examples herein and in [Pit94]).

Fundamenta Informaticae, 1992. Cited by 32 (12 self). Weak Observational Congruence (woc) defined on CCS agents is not a bisimulation since it does not require two states reached by bisimilar computations of woc agents to be still woc, e.g.
α.τ.β.nil and α.β.nil are woc but τ.β.nil and β.nil are not. This fact prevents us from characterizing CCS semantics (when τ is considered invisible) as a final algebra, since the semantic function would induce an equivalence over the agents that is both a congruence and a bisimulation. In the paper we introduce a new behavioural equivalence for CCS agents, which is the coarsest among those bisimulations which are also congruences. We call it Dynamic Observational Congruence because it expresses a natural notion of equivalence for concurrent systems required to simulate each other in the presence of dynamic, i.e. run time, (re)configurations. We provide an algebraic characterization of Dynamic Congruence in terms of a universal property of finality. Furthermore we introduce Progressing Bisimulatio...

Cited by 29 (4 self). We seek a unified account of modularity for computational effects. We begin by reformulating Moggi's monadic paradigm for modelling computational effects using the notion of enriched Lawvere theory, together with its relationship with strong monads; this emphasises the importance of the operations that produce the effects. Effects qua theories are then combined by appropriate bifunctors on the category of theories. We give a theory for the sum of computational effects, which in particular yields Moggi's exceptions monad transformer and an interactive input/output monad transformer. We further give a theory of the commutative combination of effects, their tensor, which yields Moggi's side-effects monad transformer.
Finally we give a theory of operation transformers, for redefining operations when adding new effects; we derive explicit forms for the operation transformers associated to the above monad transformers.

In Proc. 11th CONCUR, volume 1877 of LNCS, 2000. Cited by 25 (2 self). In this paper we describe how to build semantic models that support both nondeterministic choice and probabilistic choice. Several models exist that support both of these constructs, but none that we know of satisfies all the laws one would like. Using domain-theoretic techniques, we show how models can be devised using the "standard model" for probabilistic choice, and then applying modified domain-theoretic models for nondeterministic choice. These models are distinguished by the fact that the expected laws for nondeterministic choice and probabilistic choice remain valid. We also describe some potential applications of our model to aspects of security.

ENTCS, 2003. Cited by 24 (1 self). This paper presents a domain model for a process algebra featuring both probabilistic and nondeterministic choice. The former is modelled using the probabilistic powerdomain of Jones and Plotkin, while the latter is modelled by a geometrically convex variant of the Plotkin powerdomain.
The main result is to show that the expected laws for probability and nondeterminism are sound and complete with respect to the model. We also present an operational semantics for the process algebra, and we show that the domain model is fully abstract with respect to probabilistic bisimilarity.
Washington Calculus Tutor ...I also know that exploring many methods is the best way to build conceptual understanding of math. In many math classrooms today, teachers show their students one way to solve a problem, and then the students simply mimic a series of steps. This approach does not promote conceptual understanding! 16 Subjects: including calculus, English, writing, geometry ...Between 2002 and 2006, I was a lecturer at the Ethiopian Civil Service University and during that time I taught more than 12 different engineering courses for undergraduate urban engineering students. Between 2006 and 2011 I was a research assistant at the University of Wyoming and I used to cov... 14 Subjects: including calculus, chemistry, physics, geometry ...Though I am located in Arlington, Virginia, I am happy to travel to meet students, particularly to areas that are easily accessible via Metro.I work as a professional economist, where I utilize econometric models and concepts regularly using both STATA and Excel. I have also had extensive course... 16 Subjects: including calculus, geometry, statistics, ACT Math ...My current GPA is 3.55 and I have gotten A's in Physics I (mechanics) and II (electronics) at Georgia Tech. I have also gotten a 5 on the AP Calculus AB exam, and have completed Calculus I-III at Tech. I was a peer tutor in physics in high school and am currently a physics and statics tutor at my university. 23 Subjects: including calculus, physics, geometry, algebra 2 ...It is my hope to be able to help more students achieve high grades in their math courses. I have a bachelor and masters degree in engineering, and scored 740 (out of 800) on the GRE quantitative. I took AP calculus in high school and got a score of 5 (maximum) on the AP exam. 34 Subjects: including calculus, physics, geometry, statistics
Integer Division Of all the elemental operations, division is the most complicated and can consume the most resources (in either silicon, to implement the algorithm in hardware, or in time, to implement the algorithm in software). In many computer applications, division is less frequently used than addition, subtraction or multiplication. As a result, some microprocessors that are designed for digital signal processing (DSP) or embedded processor applications do not have a divide instruction (they also usually omit floating point support as well). Recently I did some preliminary work on the design of the code generation phase for a compiler that would target a digital signal processor. This processor does not have a divide instruction and I had no idea how long it would take to implement the run time function needed to support integer division in software. The answer, it turns out, is "it depends". If all that is needed is a basic division function, and performance is not a major issue, the runtime function is fairly straight forward. A high performance division function is more complicated and would take more time to implement and test. The division function that is included here is of the former variety - a basic binary integer division function. I have also included some references on higher performance algorithms, but these are, as my professors used to say, left as exercises to the reader. The integer division algorithm included here is a so called "radix two" division algorithm. One computation step is needed for each binary digit. There are radix 4, 8, 16 and even 256 algorithms, which are faster, but are more difficult to implement. The main reference I used in implementing my algorithm was Digital Computer Arithmetic by Cavanaugh. Several other references on high radix division are also listed below. • Digital Computer Arithmetic: Design and Implementation by Joseph J. F. Cavanaugh, McGraw-Hill, 1984. 
This is an extremely valuable reference work, but it may be out of print. This is one of the best surveys I've seen on digital computer arithmetic and hardware design.

• Computer Organization and Design: The Hardware Software Interface by David A. Patterson and John L. Hennessy, Morgan Kaufmann Press. This book has a brief (compared to Cavanaugh) section on digital arithmetic. It is an excellent book on computer architecture and should be read by anyone designing a digital signal processor.

• An Analysis of Division Algorithms and Implementations by Stuart F. Oberman and Michael J. Flynn, Stanford University Computer Systems Laboratory, CSL-TR-95-675. This paper is available via ftp in postscript, compressed with zip (this file can be uncompressed with GNU unzip).

• High-radix Division with Approximate Quotient-digit Estimation by Peter Fenwick, Department of Computer Science, University of Auckland, New Zealand (p_fenwick@cs.auckland.ac.nz). This paper is available on the World Wide Web. Professor Fenwick's paper describes a high-radix division algorithm with some refinements of his own. The University of Paderborn site also has a pointer to what is supposed to be a postscript version of Prof. Fenwick's paper. However, when I tried to download it, all I got was garbage. The postscript can also be downloaded from the University of Auckland.

My integer division algorithm is written in C++ and is included below. The file can be downloaded here.

Division is the process of repeated subtraction. Like the long division we learned in grade school, a binary division algorithm works from the high order digits to the low order digits and generates a quotient (division result) with each step. The division algorithm is divided into two steps:

1. Shift the upper bits of the dividend (the number we are dividing into) into the remainder.

2. Subtract the divisor from the value in the remainder. The high order bit of the result becomes a bit of the quotient (division result): a set sign bit means the divisor did not fit, so the quotient bit is the inverted sign bit.
Copyright stuff

/* Use of this program, for any purpose, is granted the author, Ian Kaplan,
   as long as this copyright notice is included in the source code or any
   source code derived from this program. The user assumes all
   responsibility for using this code.

   Ian Kaplan, October 1996 */

void unsigned_divide(unsigned int dividend,
                     unsigned int divisor,
                     unsigned int &quotient,
                     unsigned int &remainder )
{
  unsigned int t, num_bits;
  unsigned int q, bit, d;
  int i;

  remainder = 0;
  quotient = 0;

  if (divisor == 0)
    return;

  if (divisor > dividend) {
    remainder = dividend;
    return;
  }

  if (divisor == dividend) {
    quotient = 1;
    return;
  }

  num_bits = 32;

  /* shift dividend bits into the remainder until the remainder
     is at least as large as the divisor */
  while (remainder < divisor) {
    bit = (dividend & 0x80000000) >> 31;
    remainder = (remainder << 1) | bit;
    d = dividend;
    dividend = dividend << 1;
    num_bits--;
  }

  /* The loop, above, always goes one iteration too far.
     To avoid inserting an "if" statement inside the loop
     the last iteration is simply reversed. */
  dividend = d;
  remainder = remainder >> 1;
  num_bits++;

  /* one quotient bit is produced for each remaining dividend bit */
  for (i = 0; i < num_bits; i++) {
    bit = (dividend & 0x80000000) >> 31;
    remainder = (remainder << 1) | bit;
    t = remainder - divisor;
    q = !((t & 0x80000000) >> 31);
    dividend = dividend << 1;
    quotient = (quotient << 1) | q;
    if (q) {
      remainder = t;
    }
  }
}  /* unsigned_divide */

#define ABS(x) ((x) < 0 ? -(x) : (x))

void signed_divide(int dividend,
                   int divisor,
                   int &quotient,
                   int &remainder )
{
  unsigned int dend, dor;
  unsigned int q, r;

  dend = ABS(dividend);
  dor  = ABS(divisor);

  unsigned_divide( dend, dor, q, r );

  /* the sign of the remainder is the same as the sign of the dividend
     and the quotient is negated if the signs of the operands are
     opposite */
  quotient = q;
  if (dividend < 0) {
    remainder = -r;
    if (divisor > 0)
      quotient = -q;
  }
  else { /* positive dividend */
    remainder = r;
    if (divisor < 0)
      quotient = -q;
  }
}  /* signed_divide */

Ian Kaplan
October, 1996
Statistics 480 > Hansen > Notes > Solution Set #5 | StudyBlue
Find measures of angles, massive error !

December 12th 2009, 11:39 AM #1

So this is the triangle I'm dealing with. I have to find the measures of the angles x & y. First thing I tried to do was find the measure of the angle @ A.

a/SinA = c/SinC
66/SinA = 25/Sin10.5

and then I end up with 28 degrees for the angle @ A. Maybe I'm looking at it the wrong way but beyond that I have no idea how to proceed with this question. Any help ?

angle B = 180 - (angle C + angle A)

The issue here is that I'm getting 28 degrees with the calculation 66/SinA = 25/Sin10.5. The measurement for angle A is obviously not 28 degrees, so basically I'm ending up with an error and I can't understand why, because as far as I understand my procedure is correct. I want to solve for angle A; then, because I already have angle C, I can solve for angle B. From angle B I could find the angle next to (x), thereby finding x, because the sum of the two would be 180. Makes sense now?

Yes, I get 28 as well. The only thing I can think of is the 10.5 angle is wrong or one of the lengths is wrong. Doesn't matter, just keep on going.

So $180 - (28.76+10.5) = 140.74$

So $y =140.74$

Now use the cosine law to find the right hand side; then you can work out x.
Your thinking is correct: the angle at A is 28.7574 degrees. (see my attached image) But it is NOT the angle CAB, it is the angle EAB or angle CEB.

$\angle CEB = \arcsin\left(\dfrac{66 \sin(10.5^\circ)}{25}\right) = 28.7574^\circ$

$\angle CEB = \angle EAB$

$\angle CAB = 180 - 28.7574 = 151.2426$

$y = \angle ABD = 180 - (10.5 + 151.2426) = 180 - 161.7426 = 18.2574$

$x = \angle ADC = 161.7426$

That is what is shown on the post by bigwave.
Expander Graphs, Random Matrices and Quantum Chaos

A basic problem in the theory of expander graphs, formulated by Lubotzky and Weiss, is to what extent being an expander family for a family of Cayley graphs is a property of the groups alone, independent of the choice of generators. While recently Alon, Lubotzky and Wigderson constructed an example demonstrating that expansion is not in general a group property, the problem is open for "natural" families of groups. In particular for SL(2, p) numerical experiments indicate that it might be an expander family for "generic" choices of generators (Independence Conjecture). A basic conjecture in Quantum Chaos, formulated by Bohigas, Giannoni, and Schmit, asserts that the eigenvalues of a quantized chaotic Hamiltonian behave like the spectrum of a typical member of the appropriate ensemble of random matrices. Both conjectures can be viewed as asserting that a deterministically constructed spectrum "generically" behaves like the spectrum of a large random matrix: "in the bulk" (Quantum Chaos Conjecture) and at the "edge of the spectrum" (Independence Conjecture). After explaining this approach in the context of the spectra of random walks on groups, we review some recent related results and numerical experiments.
Hi, This Is Not A Complicated Question But I Don't ... | Chegg.com

This is not a complicated question but I don't have any past examples to refer to for equations, so your help would be greatly appreciated. I did the easy percentage slip calculation already.

A 6 pole, 50 Hz, 970 rpm, 3-phase induction motor running at an input power of Pin = 17456 W develops a useful torque of 150 N-m. Assume friction and windage losses = 1016 W and total stator losses = 700 W.

a) % Slip: (120 x 50)/6 = 1000 rpm; (1000 - 970)/1000 x 100 = 3% SLIP
b) Air gap power, P_ag
c) Rotor losses, P_r
d) Mechanical power developed, P_dev
e) Output mechanical power, P_out
f) Efficiency

THANKS VERY MUCH IN ADVANCE!!!

Electrical Engineering
Learning fixed-dimension linear thresholds from fragmented data

(2001) Learning fixed-dimension linear thresholds from fragmented data. INFORMATION AND COMPUTATION, 171 (1). pp. 98-122. ISSN 0890-5401

We investigate PAC-learning in a situation in which examples (consisting of an input vector and 0/1 label) have some of the components of the input vector concealed from the learner. This is a special case of restricted focus of attention (RFA) learning. Our interest here is in 1-RFA learning, where only a single component of an input vector is given, for each example. We argue that 1-RFA learning merits special consideration within the wider field of RFA learning. It is the most restrictive form of RFA learning (so that positive results apply in general), and it models a type of "data fusion" scenario, where we have sets of observations from a number of separate sensors, but these sensors are uncorrelated sources. Within this setting we study the well-known class of linear threshold functions, the characteristic functions of Euclidean half-spaces. The sample complexity (i.e., sample-size requirement as a function of the parameters) of this learning problem is affected by the input distribution. We show that the sample complexity is always finite, for any given input distribution, but we also exhibit methods for defining "bad" input distributions for which the sample complexity can grow arbitrarily fast. We identify fairly general sufficient conditions for an input distribution to give rise to sample complexity that is polynomial in the PAC parameters epsilon^-1 and delta^-1. We give an algorithm whose sample complexity is polynomial in these parameters and the dimension (number of input components), for input distributions that satisfy our conditions.
The run-time is polynomial in epsilon^-1 and delta^-1 provided that the dimension is any constant. We show how to adapt the algorithm to handle uniform misclassification noise. (C) 2001 Elsevier Science.
The length of a rectangle is 2 more than twice the width. Find the dimensions of the rectangle if the perimeter is 76. If x represents the width of the rectangle, then which expression represents the perimeter?

Number of results: 65,569

The length of a rectangle is one inch less than twice its width. The diagonal of the rectangle is two inches more than its length. Find the area of the rectangle
Thursday, August 12, 2010 at 9:25pm by Anonymous

The length of a rectangle is 1ft. more than twice the width, and the area of the rectangle is 66 ft^2. Find the dimensions of the rectangle. length: width:
Tuesday, April 1, 2014 at 9:18am by Roderick

the length of a rectangle is 3 yds more than twice its width and the area of the rectangle is 77yds find the dimensions of the rectangle the length and width
Friday, February 22, 2013 at 11:55am by carmen

the length of a rectangle is 5m more than twice its width. and the area of the rectangle is 88m^2. find the dimensions of the rectangle.
Thursday, May 5, 2011 at 8:37pm by Shantelle

beginning algebra
The length of a rectangle is 1ft more than twice its width, and the area of the rectangle is 28ft^2. Find the dimensions of the rectangle. l= w=
Wednesday, February 9, 2011 at 6:09pm by Rachal

How do I find the dimensions of a rectangle if the length of the rectangle is 1 yard more than twice its width, and the area of the rectangle is 66 yards^2.
Thursday, March 7, 2013 at 8:48pm by Terry

The length of a rectangle is 5 in. more than twice its width.? How do I find the width of the rectangle. The length = (2 x width) + 5 inches. Do you have any other information about the rectangle such as the area or the perimeter. I don't think there is enough information here...
Area = w x 2w Larger rectangle = w+4 for width and 2w+4 ... Wednesday, November 29, 2006 at 1:54pm by samantha The length of a rectangle is 5cm more than twice the width. The perimeter of the rectangle is 34 cm. Find the dimensions of the rectangle. So confused on how to do this problem please help me thanks. Wednesday, March 13, 2013 at 7:05pm by steph The length of a rectangle is 5cm more than twice the width. The perimeter of the rectangle is 34 cm. Find the dimensions of the rectangle. So confused on how to do this problem please help me thanks. Wednesday, March 13, 2013 at 7:15pm by steph the length of a rectangle is 5 ft more than twice the width. a)if x represents the width of the rctangle, represent the perimeter of the rectangle in terms of x. b) if the perimeter if the rectangle is 2 ft more than eight times the width of the rectangle, find the dimensions ... Saturday, September 12, 2009 at 6:45pm by anthony The length of a rectangle is 5 in. more than twice its width.? How do I find the width of the rectangle. duplicate post Divide the length by 2 it's going to be a decimal Wednesday, July 26, 2006 at 9:10pm by Diana The length of a rectangle is 3ft more than twice its width, and the area of the rectangle is 54ft^2 . Find the dimensions of the rectangle. The other problem worked but how do you SOLVE it. I need to be able to do similar problems. I know the factors multiply to get the area ... Wednesday, February 27, 2013 at 6:37pm by Anonymous The length of a rectangle is 2 inches more than twice its width. If the perimeter of the rectangle is 34 inches, find the dimensions of the rectangle Tuesday, January 22, 2008 at 7:11pm by amy The length of a rectangle is 5 cm more than twice the width. The perimeter of the rectangle is 34 cm. Find the dimensions of the rectangle. Wednesday, April 1, 2009 at 8:19pm by Angie The length of a rectangle is 10 feet more than twice its width. The perimeter of the rectangle is 170 feet. 
Find the dimensions of the rectangle. Thursday, November 29, 2007 at 10:44am by shelly math please help The length of a rectangle is 8 in. more than twice its width. If the perimeter of the rectangle is 34 in., find the width of the rectangle.is it 1. 2 in. 2. 3 in. 3. 4 in. 4. 5 in. Thursday, January 24, 2008 at 11:02pm by Kara the length of a rectangle is 5 centimeters more than twice its width. if the perimeter of the rectangle is 94 centimeters, what are the length and width of the rectangle? Thursday, May 10, 2012 at 10:16pm by Michael The length of a rectangle is 3 in. more than twice its width. If the perimeter of the rectangle is 18 in., find the width of the rectangle. Sunday, November 25, 2007 at 6:42pm by mlb The length of a rectangle is 9 in. more than twice its width. If the perimeter of the rectangle is 48 in., find the width of the rectangle. a) 4 in b) 5 in c) 6 in d) 7 in Thursday, January 10, 2008 at 3:26pm by bb The length of a rectangle is 2 in. more than twice its width. If the perimeter of the rectangle is 28 in., find the width of the rectangle. Wednesday, June 20, 2012 at 8:24pm by steven The length of the rectangle is 4 more than twice its width and its perimeter is 70m. What are the dimensions of the rectangle? Tuesday, January 15, 2013 at 8:00am by Anonymous Can someone help me just to do the setup of this the two equations. I can do the rest is just the step up THAT I AM HAVING DIFFICULTY. The length of a rectangle is 2in. more than twice its width. If the perimeter of the rectangle is 52in. wHAT IS THE LENGTH AND WIDTH OF THE ... Thursday, April 12, 2007 at 4:59pm by jasort120 The length of a rectangle is 2 more than twice its width. If the area of the rectangle is 24 sq. m., find its dimensions. Friday, July 27, 2012 at 5:58am by Bianca EASy QUESTION JUST NEED ANSWER!!!!! 
the area of a rectangle is length times width the perimeter of a rectangle is twice the length + twice the width Monday, September 1, 2008 at 5:58pm by Damon length of a rectangle is 5y more than twice its width and area of the rectangle is 42 yards ^2, find the dimensions Monday, November 19, 2012 at 2:18pm by Brian The length of a rectangle is 10 feet more than twice its width. The perimeter of the rectangle is 170 feet. Find the dimensions of the rectangle. ANS: Length= 60 feet, Width= 25 feet I already know the answer, its in my review sheet. If someone can explain to me how you get ... Thursday, November 29, 2007 at 11:14am by shelly How would you factor the problem: 7-23r+6r to the second power (only 6r is to the second power) And if a length of a rectangle i 3 more than twice the width and the are is 90 cm squared than what are the dimensions of the rectangle? Oh its not rectangle i its rectangle IS ... Tuesday, January 23, 2007 at 6:00pm by Judy Algebra - 2 more questions... ^^ The length of a certain rectangle is 20m and the length increased by 100m, the perimeter of the new rectangle would be twice the perimeter of the original rectangle. What are the dimensions of the original rectangle? I started by saying : L=w+20 >>> l+w+20=P &&& 2(L+... Wednesday, November 28, 2007 at 10:08pm by tchrwill The length of a rectangle is 6 inches more than twice its width. If the perimeter of the rectangle is 54 inches, find the width of the rectangle. Tuesday, July 22, 2008 at 12:09pm by JoAnn the length of a rectangle is 3cm more than twice the width. the perimeter is 84cm. find the length Wednesday, April 7, 2010 at 9:44pm by Brett If the length of a rectangle is twice its width, w, write an expression to represent the perimeter of the rectangle. If the perimeter of the rectangle is 72 inches, what is the length and width of the rectangle? Tuesday, September 6, 2011 at 4:45pm by Maddie a rectangle has a length of 17 inches more than twice its height. 
If the perimeter of a rectangle is 304 inches, find its dimensions. Saturday, September 15, 2012 at 11:37am by bill The length of a rectangle is 5ft less than twice its width. The area of the rectanle is 75ft^2. Find the length of the rectangle Monday, April 11, 2011 at 2:17pm by Renee alg 150 help the width of a rectangle is 9 less than twice the length. if the area os the rectangle is 41cm^2 what is the length of the diagonal? Sunday, February 10, 2013 at 6:51pm by eddie the length of a rectangle is 3cm more than twice the width. the perimeter is 84cm. find the length Please show me the steps on how you get the answer. Thanks Wednesday, April 7, 2010 at 10:13pm by Brett A rectangle has an area of 72 square inches. The length of the longest side is twice the length of the shortest side. What is the length of the rectangle? Monday, April 25, 2011 at 7:37pm by dasha the length of a rectangle is 4cm less then twice its width. the perimeter of the rectangle is 34cm. what are the demensions of the rectangle? Thursday, April 28, 2011 at 1:49pm by Anonymous The length of a rectangle is 4 in. more than twice its width. If the perimeter of the rectangle is 32 in., find the width of the rectangle let x = width length is then 2x+4 so 2(2x+4)+2x=32 Solve for x 4x + 8 +2x = 32 6x=24 x=4 thanks Tuesday, June 5, 2007 at 1:08am by veronica the perimeter of a rectangle is twice the sum of its length and its width. the perimeter is 40 meters and its lenth is 2 meters more then twice its width. what is the length? Monday, April 4, 2011 at 3:23pm by help fast please!!! the perimeter of a rectangle is twice the sum of its length and its width. the perimeter is 40 meters and its lenth is 2 meters more then twice its width. what is the length? Monday, April 4, 2011 at 4:03pm by help fast please!!! help help help The length of a rectangle is twice the width. 
If the length is increased by 4 inches and the width is decreased by 1 inch, a new rectangle is formed whose perimeter is 198 inches. Find the dimensions of the original rectangle. Friday, October 19, 2007 at 7:04pm by anonymous the length of a rectangle is 5 ft more than twice the width(x) would be written as: length = 2x+5 Now you can proceed to calculate the perimeter=2(W+L), and attempt part (b). Saturday, September 12, 2009 at 6:45pm by MathMate a rectangle has a length of 13 units. the perimeter of a rectangle is twice its length. complete the table to find the perimeter of the rectangle if its width is 5,6,7 or 8 units wide Saturday, March 23, 2013 at 4:15pm by taffy the length of a rectangle is 2 centimetres less than twice the width. the area of the rectangle is 180 square centimetres. find the length and width of the rectangle. Thursday, February 4, 2010 at 10:28am by jke The length of a rectangle is 3 feet less than twice the width of the rectangle. If the perimeter of the rectangle is 414 feet, find the width and the length. Sunday, July 29, 2012 at 9:32pm by Alonzo the perimeter of a rectangle is 62 m. then length is 10 m more than twice the width. find the dimensions what is the length? what is the width? Sunday, June 28, 2009 at 6:17pm by scooby9132002 the length of a rectangle is 6 ffet more than twice the width. if the length is 24 feet, what is the width Monday, April 11, 2011 at 8:51pm by antonio algebra & trigonometry The area of a rectangle is 12 square inches. The length is 5 more than twice the width. Find the length and the width. Thanks Tuesday, February 11, 2014 at 7:46pm by daniel math alegbra represent the given condition using a sing variable, x. The length and width of a rectangle whose length is 12 centimeters more than its width. the width of the rectangle is _____. The length of the rectangle is ____. Sunday, July 24, 2011 at 12:52am by charlene The variable W represents the width of a rectangle. 
The length of the rectangle, L, is 5 less than twice the width. Write an expression for the length of the rectangle in terms of W. Write a simplified expression for the perimeter of the rectangle in terms of W. Find the ... Saturday, November 16, 2013 at 8:19pm by Unknown if the perimeter of a rectangle is 3 times the length, plus twice the width. what is the equation to find the perimeter of the rectangle? I think it's P= 3 x l(length)+2. Is this right? Tuesday, September 24, 2013 at 9:05pm by sharon math ,correction Directions: Solve each of the following problems. Be sure to show the equation used for the solution. The length of a rectangle is 2in. more than twice its width. If the perimeter of the rectangle is 34 in., find the dimensions of the rectangle. Let L = length and W = width. L... Monday, February 5, 2007 at 9:10pm by jasmine20 the length of a rectangle is one foot more than twice its width. If all sides were increased by 7feet the perimeter of the new triangle would be 122 ft. find the new dimensions of the new rectangle Thursday, February 23, 2012 at 8:24pm by jessica the perimeter of a rectangle is 46 feet. The width is 3ft less than twice the length. find the length and width of the rectangle Wednesday, September 21, 2011 at 3:32pm by Anonymous the length of a rectangle exceeds its breadth by 4cm. The length and breadth are increased by 3 cm .The area of the new rectangle will be 81cm square more than the given rectangle .Find the dimensions of the given rectangle? Tuesday, August 14, 2012 at 12:07pm by TUHITUHI the perimeter of a rectangle is twice the sum of its width and it length. the perimeter is 40meters and its lenght is 2 meters more than twice its width. Thursday, November 18, 2010 at 4:16pm by kara The area of a rectangle is 44yds squared, and the length of the rectangle is 3yds less than twice the width. 
what is the length and width Sunday, February 16, 2014 at 7:17pm by Gerald The perimeter of a rectangle is 128m the length is 4m more than twice the width Monday, August 23, 2010 at 5:42pm by Jeff the perimeter of a rectangle is 32 m. The length is 4m more than twice the width Sunday, January 9, 2011 at 2:15pm by Ronda the length of a rectangle is 1 cm less than twice its width. the area is 28 cm squared. find the dimensions of hte rectangle. Just translate the English into "math" "the length of a rectangle is 1 cm less than twice its width" l=2w-1 "the area is 28" w(2w-1) = 28 solve as a ... Tuesday, May 8, 2007 at 8:55pm by anonymous A. One board is one-third the length of another. Six times the sum of the length of the short board and -10 is equal to the length of the longer board decreased by 11 inches. Find the length of the longer board. B. The length of a rectangle is 4 feet more than twice the width... Thursday, July 11, 2013 at 5:14pm by gary The length of a rectangle is 3cm less than twice the width.The perimeter of the rectangles of the sides of the rectangle is 24cm.Find the lengths of the sides of the rectangle. Wednesday, November 24, 2010 at 8:04am by rooi HELP ! I just don't get this. The perimeter of a rectangle is 40 m. The length is 5m more than twice the width. Find the dimensions. What is the length? What is the width? I need to simplify my answer... PLEASE HELP ME> Friday, May 21, 2010 at 4:59pm by JOHNNY The width of a rectangle is x inches long. The length of that rectangle is 5 inches less than Twice the width. If the perimeter measures 26 inches, what is the measure of the length and width of the Friday, April 27, 2012 at 6:16pm by marie the length of a rectangle is 8cm more than the width and its area is 172cm^2 .Find a) the width od the rectangle; b) the length of the diagonal of the rectangle, giving your answer correct to 2 decimal places Sunday, February 16, 2014 at 5:54am by nooria The perimeter of a rectangle is 128m. 
The length is 4m more than twice the width. Find the dimensions. Tuesday, June 16, 2009 at 12:23am by micheal The perimeter of a rectangle is 92m. The length is 4m more than twice the width. Find the dimensions. Sunday, January 3, 2010 at 1:53pm by Mike The perimeter of a rectangle is 96m. The length is 6m more than twice the width. Find the dimension. Wednesday, November 10, 2010 at 10:36am by Kayla The length of a rectangle is 1 meter more than 5 times its width. The perimeter of the rectangle is 98 meters. Find the length and width of the rectangle. Tuesday, October 29, 2013 at 8:09pm by Miah The area of a rectangle is 32 in.² and the length is twice the width. What is the perimeter of the rectangle? Sunday, March 2, 2014 at 3:20pm by Abby Basic Algebra The width of a rectangle is x inches long. The length of that rectangle is 5 inches less than twice the width. If the perimeter measures 26 inches, what is the measure of the length and width of the rectangle? Show all the work and the algebra you used to solve this problem. Wednesday, April 11, 2012 at 10:48am by KP THE LENGTH OF A RECTANGLE IS 3M MORE THAN 2 TIMES ITS WIDTH. IF THE AREA OF THE RECTANGLE IS 99CM^2, FIND THE DIMENSIONS OF THE RECTANGLE TO THE NEAREST THOUSANDTH. If L = length and W = width, L = 2W + 3 and LW = 99. Substitute and solve for either L or W. Sunday, June 24, 2007 at 10:34am by TB You have 300 meters of fencing with which to build two enclosures. One will be a square, and the other will be a rectangle where the length of the base is exactly twice the length of the height. Give the dimensions of the square and rectangle that minimize the combined area. Tuesday, June 5, 2012 at 9:18pm by KK The length of a rectangle is twice its width. If the area of the rectangle is 200 m squared, find its perimeter.
Monday, January 17, 2011 at 8:46pm by Joseph an architect wants to draw a rectangle with a diagonal of 13 inches.the length of the rectangle is to be 2 inches more thn twice the width.what dimensions should she make the triangle. (a)set up an equation: (b)solve the equation: (c)write a sentence that answers the quation: Sunday, May 6, 2012 at 12:05pm by sharmila write a polynomial that expresses the area of a rectangle whose length is 3 feet more than twice the width Tuesday, February 9, 2010 at 7:30am by Dawn The perimeter of a rectangle is 90 meters. The length is 9 meters more than twice the width. Find the dimensions. I need the length and the width. Thanks! Sunday, October 12, 2008 at 5:43pm by Kay the perimeter of a rectangle is 136 m. The length is 2m more than twice the width. Find the dimensions what is the length? ____m? what is the width ____m? Thursday, December 17, 2009 at 1:50pm by akaray Math Symbols Let "p" represent 'the area of rectangle ABCD is 50 square inches'. Let "q" represent 'the perimeter of rectangle ABCD is 30 inches'. Let "r" represent "the length of rectangle ABCD is twice the width" My question: use symbols to represent the collowing conjunction: The area ... Monday, August 21, 2006 at 9:28pm by Kasey Math Word Problem The length of a rectangle is twice the width. The area is 242 yd^2. Find the length and the width of the rectangle. The width would be? The length would be? I am not sure if I am to take 242 and sqrt it to give me 58,564 and then divide by 4 or what. I am lost with this. Sunday, August 22, 2010 at 12:29pm by MBK let the length be L "..the width of a rectangle is 5 inches shorter than the length" so width = L - 5 Twice the width + twice the length = 80 2L + 2(L-5) = 80 take it from there. once you have L, evaluate L-5 for the width Sunday, June 16, 2013 at 3:17am by Reiny 6. The practice court at the park is a rectangle. Its width is exactly twice its length. The area of the practice field is 450 ft2. 
What are the length and width of the rectangle? Wednesday, April 10, 2013 at 6:37pm by Adán 6. The practice court at the park is a rectangle. Its width is exactly twice its length. The area of the practice field is 450 ft2. What are the length and width of the rectangle? Wednesday, April 10, 2013 at 7:08pm by Adán The perimeter of a rectangle is 66 m the length is 9m more than twice the width I am having trouble finding the dimensions Thursday, June 26, 2008 at 7:35am by Monica .identify what each leters represents. the perimeter of a rectangle is twice its length plus twice its width Tuesday, November 9, 2010 at 7:22pm by william The length and breadth of a rectangle are in the ration 8:5. The length is 10.5 centimeters more than the breadth. What are the length and breads of the rectangle? Kindly give steps also. Thursday, July 14, 2011 at 11:50am by Sunil Basic algebra The length of a rectangle is 9 feet longer than twice the width. If the perimeter is 90 feet, find the length and width of the rectangle? Thursday, October 11, 2012 at 2:23pm by Anonymous The length of the smaller rectangle at the right is 1 inch less than twice its width. Both the dimensions of the larger rectangle are 2 inches longer than the smaller rectangle. The area of the shaded region is 86 square inches. What are the dimensions of the smaller rectangle? Thursday, August 4, 2011 at 4:57pm by Alyssa a rectangle has length which is twice the width. if the diagonal is the 5 times the square root of 5 meters long, find the perimeter of the rectangle. Tuesday, March 1, 2011 at 2:45am by ed Algebra 8th grade The length of a rectangle is twice its width. If the area is 30 square units, find the dimensions of the rectangle. Wednesday, August 22, 2012 at 9:57pm by Anonymous A. In triangle, the second angle measures twice the first, and the third angle measures 5 more than the second. If the sum of the angles' measures is 180^0, find the measure of each angle. b. 
The price of a pack of gum today is 63 cent. This is 3 cent more than three times the... Wednesday, July 10, 2013 at 11:02pm by gary The length of a rectangle is more than double the width, and the area of the rectangle is . Find the dimensions of the rectangle. Sunday, December 1, 2013 at 10:42pm by Sara The length of the smaller rectangle at the right is 1 inch less than twice its width. Both the dimensions of the larger rectangle are 2 inches longer than the smaller rectangle. The area of the shaded region is 86 square inches. What is the area of the larger rectangle? Thursday, August 4, 2011 at 4:57pm by Alyssa Length of rectanngle is 1 cm more than width. If length of rectangle is doubled the area is increased by 30cm2. find dimensions of original rectangle. Saturday, November 17, 2012 at 6:10am by Dipti The length of a rectangle is 7 centimeters less than twice its width. Its area is 72 square meters. Find the dimensions of the rectangle. Tuesday, January 23, 2007 at 7:01pm by Anonymous Algebra world problem The length of a rectangle is 7 centimeters less than twice its width. Its area is 72 square meters. Find the dimensions of the rectangle. Tuesday, July 23, 2013 at 4:06pm by Anonymous Algebra - 2 more questions... ^^ Thank you for helping on the other problem, but could you maybe explain how I should solve these two problems? 1. In 1985, Barry was 13 years old and his father was 43. In what year will Barry's age be two-fifths of his father's age? 2. The length of a certain rectangle is 20m... Wednesday, November 28, 2007 at 10:08pm by Jason algebra , help Can someone help me set up the equations thanks. Directions: Solve each of the following applications. Give all answers to the nearest thousandth. Problem: Geometry. The length of a rectangle is 1 cm longer than its width. If the diagonal of the rectangle is 4 cm, what are the... 
Tuesday, February 20, 2007 at 9:22pm by jas20 Quadratic Equation The dimensions of a rectangle are such that its length is 3 in. more than its width. If the length were doubled and if the width were decreased by 1in., the area would be increased by 150 in.^2. What are the length and width of the rectangle? Tuesday, March 15, 2011 at 10:36pm by Brandon
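Many of the questions above reduce to the same setup: express the length in terms of the width x, set twice the sum of length and width equal to the perimeter, and solve. For instance, the worked thread with width x, length 2x + 4, and perimeter 32 gives 2((2x + 4) + x) = 32, so x = 4. A minimal sketch of that calculation (the function name and parameters are mine, not from any of the threads):

```python
# Generic solver for "length = slope*width + offset, perimeter = P" problems.
# From P = 2*(length + width) = 2*((slope*x + offset) + x):
#   x = (P/2 - offset) / (slope + 1)
def rectangle_width(perimeter, slope=2, offset=4):
    return (perimeter / 2 - offset) / (slope + 1)

w = rectangle_width(32)             # width x with length 2x + 4, perimeter 32
print(w, 2 * w + 4)                 # 4.0 12.0
print(rectangle_width(170, 2, 10))  # 25.0 (the "perimeter 170" thread: width 25, length 60)
```

Changing `slope` and `offset` covers the variants above, e.g. "length is 10 feet more than twice the width, perimeter 170 feet" reproduces that thread's stated answer of width 25 and length 60.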
Three Essays on - Review of Financial Studies, 1997

Cited by 237 (12 self)

This article provides a Markov model for the term structure of credit risk spreads. The model is based on Jarrow and Turnbull (1995), with the bankruptcy process following a discrete state space Markov chain in credit ratings. The parameters of this process are easily estimated using observable data. This model is useful for pricing and hedging corporate debt with imbedded options, for pricing and hedging OTC derivatives with counterparty risk, for pricing and hedging (foreign) government bonds subject to default risk (e.g., municipal bonds), for pricing and hedging credit derivatives, and for risk management. This article presents a simple model for valuing risky debt that explicitly incorporates a firm's credit rating as an indicator of the likelihood of default. As such, this article presents an arbitrage-free model for the term structure of credit risk spreads and their evolution through time. This model will prove useful for the pricing and hedging of corporate debt with imbedded options, for the pricing and hedging of OTC derivatives with counterparty risk, for the pricing and hedging of (foreign) government bonds subject to default risk (e.g., municipal bonds), and for the pricing and hedging of credit derivatives (e.g. credit sensitive notes and spread adjusted notes). This model can also...

We would like to thank John Tierney of Lehman Brothers for providing the bond index price data, and Tal Schwartz for computational assistance. We would also like to acknowledge helpful comments received from an anonymous referee. Send all correspondence to Robert A. Jarrow, Johnson Graduate School of Management, Cornell University, Ithaca, NY 14853. The Review of Financial Studies, Summer 1997, Vol. 10, No. 2, pp. 481-523. 1997 The Review of Financial Studies 0893-9454/97/$1.50

- Mathematical Finance, 2005

Cited by 15 (1 self)

Recent advances in the theory of credit risk allow the use of standard term structure machinery for default risk modeling and estimation. The empirical literature in this area often interprets the drift adjustments of the default intensity's diffusion state variables as the only default risk premium. We show that this interpretation implies a restriction on the form of possible default risk premia, which can be justified through exact and approximate notions of "diversifiable default risk." The equivalence between the empirical and martingale default intensities that follows from diversifiable default risk greatly facilitates the pricing and management of credit risk. We emphasize that this is not an equivalence in distribution, and illustrate its importance using credit spread dynamics estimated in Duffee (1999). We also argue that the assumption of diversifiability is implicitly used in certain existing models of mortgage-backed securities.

, 2008

Classically, in reduced form default models the instantaneous default intensity $\lambda$ is the modelling object and survival probabilities are given by the Laplace transform of $A_t = \int_0^t \lambda_s \, ds$. Instead, recent literature has shown a tendency towards specifying the process A directly. We will refer to A as the cumulative hazard process. We present a new cumulative hazard based framework where survival probabilities are still obtained in closed form but where A belongs to the class of self-similar additive processes, also termed Sato processes. We analyze two specifications for the cumulative hazard process: Sato-Gamma and Sato-IG processes, where the unit time distribution $A_1$ is described by a Gamma law and an Inverse Gaussian law respectively. The models are calibrated to data on the single names included in the iTraxx Europe index and compared with two Ornstein-Uhlenbeck type intensity models. It is shown how the Sato models achieve similar calibration errors with fewer parameters, and with more stable parameter estimates in time.
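To make the cumulative hazard concrete: for a deterministic intensity path λ_s, the survival probability is P(τ > t) = exp(-A_t) with A_t = ∫₀ᵗ λ_s ds. A minimal sketch, assuming a made-up linear intensity path (not taken from any of the papers above), approximating the integral with a Riemann sum:

```python
import math

# Cumulative hazard A_t = integral of the intensity lambda_s from 0 to t,
# approximated with a left-endpoint Riemann sum.
def cumulative_hazard(intensity, t, steps=10_000):
    dt = t / steps
    return sum(intensity(i * dt) for i in range(steps)) * dt

# Hypothetical intensity path: hazard rate rising linearly from 1% per year.
lam = lambda s: 0.01 + 0.004 * s

A5 = cumulative_hazard(lam, 5.0)  # exact value: 0.05 + 0.004 * 12.5 = 0.1
print(round(math.exp(-A5), 4))    # survival probability to t = 5, about 0.9048
```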
Cheyney Prealgebra Tutor ...Once we identified that the problem was not in their understanding of the new material, but with their more basic skills, they were able to improve their performance. I can also help students improve their strategies on standardized tests. Determining which problems to solve first, which ones t... 18 Subjects: including prealgebra, calculus, statistics, GRE Latoya graduated from the University of Pittsburgh in December of 2007 with a Bachelor's degree in Psychology and Medicine and a minor in Chemistry. Currently, she is pursuing her Master's in Physician Assistance. Her goal is to practice pediatric medicine in inner city poverty stricken communities. 13 Subjects: including prealgebra, chemistry, geometry, biology ...The better that we can connect the information the easier it will be to get better grades. Your child's learning style is important to his/her growth in the educational environment. Knowing his /her strengths and weaknesses will aid him/her in developing strategies to cope with and to overcome his/her learning difficulties. 30 Subjects: including prealgebra, chemistry, calculus, writing ...I have successfully tutored students at local Universities in PA and in NJ and have helped failing students rise to A-'s by the end of the semester. I would enjoy the opportunity to help you !I have taught Algebra 1 in the Philadelphia School District for two years. I have also had to teach it ... 26 Subjects: including prealgebra, chemistry, GRE, biology ...One-on-One tutoring is a wonderful way to give your child the personal attention he/she needs to be a successful learner. I tutor students throughout Delaware county. My available hours are mainly evenings and weekends. 17 Subjects: including prealgebra, reading, writing, algebra 1
Newbie to Derivative needs help

January 30th 2009, 11:27 AM #1

There are 2 problems I am having particular trouble with.

QUESTION 1
A coal-burning electrical generating plant emits sulfur dioxide into the surrounding air. The concentration, C(x), in parts per million, is given approximately by C(x) = 0.1/x^2 where x is the distance from the plant in miles. So the derivative is -.2x^-3 i believe? And then I have to evaluate C(2) and C'(2). C(2) = -.2(2)^3 = -.025. C'(2) = I am unsure of how to acquire this.

QUESTION 2
Use the definition of the derivative to find f'(x) if f(x) = x^3. I know the derivative is 3x^2 but when I try to do it step by step by hand I get screwed up somewhere.

Any help or tips in the right direction would be appreciated.

January 30th 2009, 02:38 PM #2

So the derivative is -.2x^-3 i believe?

correct ... $C'(x) = -\frac{0.2}{x^3}$

C(2) = -.2(2)^3 = -.025

no ... $C(2) = \frac{0.1}{2^2}$

C'(2) = I am unsure of how to acquire this.

$C'(2) = -\frac{0.2}{2^3}$

QUESTION 2: Use the definition of the derivative to find f'(x) if f(x) = x^3.

below ...

$f(x+h) = (x+h)^3 = x^3 + 3x^2h + 3xh^2 + h^3$

$f(x) = x^3$

$f'(x) = \lim_{h \to 0} \frac{f(x+h) - f(x)}{h}$

$f'(x) = \lim_{h \to 0} \frac{(x^3 + 3x^2h + 3xh^2 + h^3) - (x^3)}{h}$

$f'(x) = \lim_{h \to 0} \frac{3x^2h + 3xh^2 + h^3}{h}$

$f'(x) = \lim_{h \to 0} \frac{h(3x^2 + 3xh + h^2)}{h}$

$f'(x) = \lim_{h \to 0} (3x^2 + 3xh + h^2) = 3x^2$

January 30th 2009, 06:20 PM #3

Much appreciated skeeter!
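The two answers in the thread can be sanity-checked numerically with a central-difference approximation of the derivative (the helper below is my own sketch, not from the thread):

```python
# Central-difference approximation: f'(x) ~ (f(x+h) - f(x-h)) / (2h).
def deriv(f, x, h=1e-6):
    return (f(x + h) - f(x - h)) / (2 * h)

C = lambda x: 0.1 / x**2
print(C(2))         # 0.025
print(deriv(C, 2))  # about -0.025, matching C'(x) = -0.2/x^3 at x = 2

f = lambda x: x**3
print(deriv(f, 2))  # about 12.0, matching f'(x) = 3x^2 at x = 2
```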
Simplifying Radicals . . . Rationalize the Denominator

What does "Rationalizing the Denominator" mean?

Rationalizing the Denominator simply means to remove all radicals from the denominator of a fraction without changing the value of the fraction. When the denominator is rationalized, the original fraction is converted to the simplest equivalent fraction which does not have radicals in the denominator. By removing all radicals from the denominator, all numbers in the denominator will be converted to rational numbers (hence the term, "Rationalizing the Denominator").

For example, the fraction 1/√2 has a square root in the denominator. (Note that the square root of 2 is an irrational number - a non-terminating decimal without a repeating pattern.) Rationalizing the Denominator means the fraction will be rewritten as the simplest equivalent fraction which does not have radicals in the denominator: √2/2. (Note that the denominator is now the rational number 2.)

Why Rationalize the Denominator of a fraction? Why not leave the denominator alone? What difference does it make? What is the point?

Rationalizing the Denominator is the standard way of simplifying fractions so that they (fractions) can be readily understood and easily compared with other fractions. The keys to understanding why Rationalizing the Denominator is important are simplification and standardization.

Example of how the use of integers is standardized

Suppose you were telling a friend you had 5 apples yesterday, but today you only have 4 apples because you ate 1 apple. How would you present the numbers 5 and 4?

Statement 1: "I had 5 apples yesterday. However, I ate 1 apple and now I have only 4 apples." This statement is straightforward and makes it easy to compare how many apples you had yesterday with the number you have today (the numbers 5 and 4).
But that is not the only way you could communicate the numbers 5 and 4. Statement 2: "I had 6 minus 1 apples yesterday; but, only 100 minus 96 apples today." This statement makes it difficult to compare how many apples you had yesterday with the number you have today (the numbers 5 and 4) without completing a subtraction problem. Statements 1 & 2 are mathematically equivalent. Each shows exactly how many apples you have on both days. However, Statement 1 presents the numbers 5 and 4 as single integers . . . the standard for clarity and ease of comparison. Example of how the use of fractions is standardized Reducing a fraction to its simplest form makes it easier to understand and compare with other fractions in the majority of cases. For example, the following four fractions have the exactly the same numerical value: However, if you were telling a friend you have half an apple, you would probably use the fraction because it is easy to understand (it is the most reduced form of the fraction). But you could use any of the four fractions listed. You could say you had of an apple. Fractions are generally reduced to the simplest form because it makes them easier to understand. Rationalizing the denominator of a fraction simplifies and standardizes fractions in the same way Removing all radicals from the denominator of every fraction is a convention which allows fractions to be compared more easily. The value of the fraction is not changed, but it is easier to understand and compare with other fractions. Is it ever necessary to rationalize the numerator of a fraction instead of a denominator? Yes. When studying calculus, it will be necessary to find limits. In this case, rationalizing the numerator of a fraction rather than the denominator is part of the process. However, the basic technique for rationalization either the denominator or numerator is the same. 
How to rationalize the denominator of a fraction

The denominator is a monomial (1 term). To rationalize the denominator, (1) multiply the denominator by a number (or expression) which will remove the radical from the denominator, and (2) multiply the numerator by the same number (or expression), so the value of the fraction is unchanged.

The denominator is a binomial (2 terms). To rationalize the denominator, (1) multiply the denominator by an expression which is the conjugate of the denominator, and (2) multiply the numerator by the same expression. The conjugate of "A+B" is "A-B". The conjugate of "A-B" is "A+B".
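As a quick numeric sanity check (a sketch of my own, not part of the original lesson), Python's math module can confirm that rationalizing leaves a fraction's value unchanged, using 1/√2 for the monomial case and 1/(1 + √2) for the binomial case:

```python
import math

# Sketch: rationalizing the denominator does not change a fraction's value.
# (The example fractions here are my own choices for illustration.)
root2 = math.sqrt(2)

# Monomial denominator: 1/sqrt(2) becomes sqrt(2)/2 after multiplying
# numerator and denominator by sqrt(2).
assert math.isclose(1 / root2, root2 / 2)

# Binomial denominator: 1/(1 + sqrt(2)) times the conjugate
# (1 - sqrt(2))/(1 - sqrt(2)) simplifies to sqrt(2) - 1.
assert math.isclose(1 / (1 + root2), root2 - 1)
```

Both assertions pass: the rationalized form is the same number, just written with a rational denominator.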
Most popular books about science, and even about mathematics, tiptoe around equations as if they were something to be hidden from the reader's tender eyes. Dana Mackenzie starts from the opposite premise: He celebrates equations. No history of art would be complete without pictures. Why, then, should a history of mathematics--the universal language of science--keep the masterpieces of the subject hidden behind a veil? The Universe in Zero Words tells the history of twenty-four great and beautiful equations that have shaped mathematics, science, and society--from the elementary (1+1=2) to the sophisticated (the Black-Scholes formula for financial derivatives), and from the famous (E=mc²) to the arcane (Hamilton's quaternion equations). Mackenzie, who has been called "a popular-science ace" by Booklist magazine, lucidly explains what each equation means, who discovered it (and how), and how it has affected our lives. Illustrated in color throughout, the book tells the human and often-surprising stories behind the invention or discovery of the equations, from how a bad cigar changed the course of quantum mechanics to why whales (if they could communicate with us) would teach us a totally different concept of geometry. At the same time, the book shows why these equations have something timeless to say about the universe, and how they do it with an economy (zero words) that no other form of human expression can match. The Universe in Zero Words is the ultimate introduction and guide to equations that have changed the world. Dana Mackenzie is a frequent contributor to Science, Discover, and New Scientist, and writes the biennial series What's Happening in the Mathematical Sciences for the American Mathematical Society. In 2012, he received the prestigious Communications Award from the Joint Policy Board for Mathematics. He has a PhD in mathematics from Princeton and was a mathematics professor for thirteen years before becoming a full-time writer.
"Quietly learned and beautifully illustrated, Mackenzie's book is a celebration of the succinct and the singular in human expression."--Nature "The equations Mackenzie exhibits in this wonderful book represent 24 of the most profound discoveries in the history of Mathematics. . . . Mackenzie's writing is understated and clear. The complex ideas he explains so lucidly are beautiful in themselves, but this book is physically beautiful too, imaginatively illustrated and stylishly designed to complement its subject."--Irish Times "[M]ackenzie provides interesting insights regarding the equations, such as relating whale communications to a model of a non-Euclidean geometry or the role of cigar smoke in the quantization of angular momentum of quantum particles. . . . The book is an enjoyable read . . ."--Choice "This well-designed and accessible book will delight and inform the student, mathematician or historian in your life and it may also help you rediscover your forbidden love for mathematics."--Devorah Bennu, GrrlScientist "With a book that is both short and very easy to read, Mackenzie manages to introduce a very wide scope of ideas, and to produce a condensate of the history of mathematics that is at the same time enlightening and engaging. He succeeds in discussing highly advanced science while remaining very comprehensible, and in popularizing mathematics and physics while also giving food for thought to the specialist. His Universe in Zero Words will therefore seduce any scientist, but also anyone with some curiosity and desire to get more familiar with the history of human thinking and knowledge."--Jean-Baptiste Gramain, London Mathematical Society Newsletter
Kildeer, IL Math Tutor

Find a Kildeer, IL Math Tutor

...I have tutored Trigonometry to many high school students. I have worked with many students who have difficulty memorizing the unit circle in degrees and radians with their values. Some of the topics I have worked with are converting angles from radians to degrees and degrees to radians, graphing trigonometric functions, applying properties and identities, and the law of cosines and law of sines. 11 Subjects: including algebra 1, algebra 2, calculus, ACT Math

...Quadratic Functions. Polynomial Functions of Higher Degree. Dividing Polynomials: Long Division and Synthetic Division. 17 Subjects: including geometry, statistics, discrete math, ACT Math

...I am also well versed in the use of the sword, specifically long sword and the Japanese katana. I have a uniqueness to my martial arts. For me, I love to learn the martial arts for its art. 16 Subjects: including geometry, SAT math, English, algebra 1

...I usually meet with students in the evening at our local library. I have also met with students in their homes and occasionally at a local coffee shop. Students will typically meet with me twice a week for 30 minutes to an hour each session. 13 Subjects: including trigonometry, algebra 1, algebra 2, biology

...I have over eight years of experience tutoring math. My students have ranged from middle school to college. I pride myself on being able to push past any difficulty a student is faced with. I love this subject and am extremely competent teaching it. 21 Subjects: including SAT math, prealgebra, GRE, GMAT
Cramer's Rule: setting up determinants, which to use

When you're doing Cramer's Rule, like if you have three equations in three unknowns, then you'll have four matrices that you have to find the determinants of. Is there any easy way to remember how to set those up, and which to use for finding what variable?

Try reviewing the pictures here. In words: Pick the variable you want to solve for. Take the other-side-of-the-"equals" column, and insert that column of values in place of the column (in the coefficient matrix) representing that variable. Find the value of that determinant. Divide by the value of the determinant of the original coefficient matrix. The result is the value of the variable you picked. And that's all there is to it!

Re: Cramer's Rule: setting up determinants, which to use

That was easier to remember than what they had in the book. Thnx
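That recipe translates directly into a short program. Here is an illustrative sketch in Python (the function names `det3` and `cramer3` are my own) for a 3x3 system, replacing one column of the coefficient matrix at a time with the right-hand-side column:

```python
def det3(m):
    """Determinant of a 3x3 matrix via cofactor expansion along the first row."""
    a, b, c = m[0]
    d, e, f = m[1]
    g, h, i = m[2]
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def cramer3(A, b):
    """Solve Ax = b for a 3x3 coefficient matrix A using Cramer's rule."""
    D = det3(A)
    if D == 0:
        raise ValueError("coefficient matrix is singular")
    xs = []
    for col in range(3):
        # Replace column `col` of A with the right-hand-side vector b.
        Ai = [row[:col] + [b[r]] + row[col + 1:] for r, row in enumerate(A)]
        xs.append(det3(Ai) / D)
    return xs
```

For example, for the system 2x + y - z = 8, -3x - y + 2z = -11, -2x + y + 2z = -3, `cramer3([[2, 1, -1], [-3, -1, 2], [-2, 1, 2]], [8, -11, -3])` returns `[2.0, 3.0, -1.0]`.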
Upper triangularization method (AX=B)

October 17th 2009, 12:13 PM #1
Oct 2009

Upper triangularization method (AX=B)

function X = uptrbk(A,B)
% Input  - A is an N x N nonsingular matrix
%        - B is an N x 1 matrix
% Output - X is an N x 1 matrix containing the solution to AX=B
% Initialize X and the temporary storage matrix C
N = size(A,1);
X = zeros(N,1);
C = zeros(1,N+1);
% Form the augmented matrix: Aug = [A|B]
Aug = [A B];
for p = 1:N-1
    % Partial pivoting for column p
    [Y,j] = max(abs(Aug(p:N,p)));
    % Interchange row p and row j+p-1
    C = Aug(p,:);
    Aug(p,:) = Aug(j+p-1,:);
    Aug(j+p-1,:) = C;
    if Aug(p,p) == 0
        error('A was singular. No unique solution')
    end
    % Elimination process for column p
    for k = p+1:N
        m = Aug(k,p)/Aug(p,p);
        Aug(k,p:N+1) = Aug(k,p:N+1) - m*Aug(p,p:N+1);
    end
end
% Back substitution on [U|Y]
X(N) = Aug(N,N+1)/Aug(N,N);
for k = N-1:-1:1
    X(k) = (Aug(k,N+1) - Aug(k,k+1:N)*X(k+1:N))/Aug(k,k);
end

The above program constructs the solution to AX=B by reducing the augmented matrix [A B] to upper triangular form and then performing back substitution. I am trying to modify this program so that it will solve M linear systems with the same coefficient matrix A but different column matrices B. The M linear systems look like AX1 = B1, AX2 = B2, ..., AXm = Bm (1, 2 and m are in subscript). Please help!!!

October 17th 2009, 02:06 PM #2
Grand Panjandrum
Nov 2005

My guess is it should work as it is, just make B the matrix of the column vectors. All right, you will have to modify the back substitution to do back sub for all the columns of augmentation (this can be done by looping over the extra columns, though the back substitution function might just work if you give it all the augmentation cols at once).
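For what it's worth, here is a sketch of that multi-right-hand-side idea in Python rather than MATLAB (the function name `solve_multi` is my own): carry all M columns through the elimination on [A | B1 ... Bm], then back-substitute once per column.

```python
def solve_multi(A, B):
    """Solve A X = B for every column of B; A is n x n, B is n x m."""
    n = len(A)
    m = len(B[0])
    # Form the augmented matrix [A | B], working on a copy.
    aug = [list(A[i]) + list(B[i]) for i in range(n)]
    for p in range(n - 1):
        # Partial pivoting: move the largest |entry| in column p up to row p.
        j = max(range(p, n), key=lambda r: abs(aug[r][p]))
        aug[p], aug[j] = aug[j], aug[p]
        if aug[p][p] == 0:
            raise ValueError("A is singular; no unique solution")
        # Eliminate column p below the pivot, across all n + m columns.
        for k in range(p + 1, n):
            mult = aug[k][p] / aug[p][p]
            for c in range(p, n + m):
                aug[k][c] -= mult * aug[p][c]
    # Back substitution, repeated once per right-hand-side column.
    X = [[0.0] * m for _ in range(n)]
    for col in range(m):
        for i in range(n - 1, -1, -1):
            s = aug[i][n + col] - sum(aug[i][jj] * X[jj][col] for jj in range(i + 1, n))
            X[i][col] = s / aug[i][i]
    return X
```

The elimination cost is paid once for all M systems, which is exactly the saving the poster is after.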
for Sharing Research Data

M. Bun and Thaler, J., “Dual Lower Bounds for Approximate Degree and Markov-Bernstein Inequalities”, Automata, Languages, and Programming, vol. 7965, pp. 303-314, 2013. The ε-approximate degree of a Boolean function f: {−1,1}^n → {−1,1} is the minimum degree of a real polynomial that approximates f to within ε in the ℓ∞ norm. We prove several lower bounds on this important complexity measure by explicitly constructing solutions to the dual of an appropriate linear program. Our first result resolves the ε-approximate degree of the two-level AND-OR tree for any constant ε>0. We show that this quantity is Θ(√n), closing a line of incrementally larger lower bounds [3,11,21,30,32]. The same lower bound was recently obtained independently by Sherstov using related techniques [25]. Our second result gives an explicit dual polynomial that witnesses a tight lower bound for the approximate degree of any symmetric Boolean function, addressing a question of Špalek [34]. Our final contribution is to reprove several Markov-type inequalities from approximation theory by constructing explicit dual solutions to natural linear programs. These inequalities underlie the proofs of many of the best-known approximate degree lower bounds, and have important uses throughout theoretical computer science. K. Chandrasekaran, Thaler, J., Ullman, J., and Wan, A., “Faster Private Release of Marginals on Small Databases”, vol. abs/1304.3754, 2013. We study the problem of answering $k$-way marginal queries on a database $D \in (\{0,1\}^d)^n$, while preserving differential privacy. The answer to a $k$-way marginal query is the fraction of the database's records $x \in \{0,1\}^d$ with a given value in each of a given set of up to $k$ columns.
Marginal queries enable a rich class of statistical analyses on a dataset, and designing efficient algorithms for privately answering marginal queries has been identified as an important open problem in private data analysis. For any $k$, we give a differentially private online algorithm that runs in time $$\min\{\exp(d^{1-\Omega(1/\sqrt{k})}),\ \exp(d / \log^{.99} d)\}$$ per query and answers any (possibly superpolynomially long and adaptively chosen) sequence of $k$-way marginal queries up to error at most $\pm .01$ on every query, provided $n \gtrsim d^{.51}$. To the best of our knowledge, this is the first algorithm capable of privately answering marginal queries with a non-trivial worst-case accuracy guarantee on a database of size $\mathrm{poly}(d, k)$ in time $\exp(o(d))$. Our algorithms are a variant of the private multiplicative weights algorithm (Hardt and Rothblum, FOCS '10), but using a different low-weight representation of the database. We derive our low-weight representation using approximations to the OR function by low-degree polynomials with coefficients of bounded $L_1$-norm. We also prove a strong limitation on our approach that is of independent approximation-theoretic interest. Specifically, we show that for any $k = o(\log d)$, any polynomial with coefficients of $L_1$-norm $\mathrm{poly}(d)$ that pointwise approximates the $d$-variate OR function on all inputs of Hamming weight at most $k$ must have degree $d^{1-O(1/\sqrt{k})}$. L. Sweeney, Yasnoff, W. A., and Shortliffe, E. H., “Putting Health IT on the Path to Success”, vol. 309, no. 10, pp. 989-990, 2013. The promise of health information technology (HIT) is comprehensive electronic patient records when and where needed, leading to improved quality of care at reduced cost. However, physician experience and other available evidence suggest that this promise is largely unfulfilled.
Current approaches to health information exchange have serious flaws: (1) complex and expensive; (2) prone to error and insecurity; (3) increase liability; (4) not financially sustainable; (5) unable to protect privacy; (6) unable to ensure stakeholder cooperation; and (7) unable to facilitate robust data sharing. The good news is that personal health record banks pose a viable alternative that is: (a) simpler; (b) scalable; (c) less expensive; (d) more secure; (e) community oriented to ensure stakeholder participation; and (f) capable of providing the most comprehensive records. The idea of patient-controlled records is not new, but what is new is how personally controlled records can help achieve the HIT vision. S. Hooley and Sweeney, L., “Survey of Publicly Available State Health Databases”, Data Privacy Lab, IQSS, Harvard University, 2013. L. Sweeney, Abu, A., and Winn, J., “Identifying Participants in the Personal Genome Project by Name”, Data Privacy Lab, IQSS, Harvard University, 2013. L. Sweeney, “Matching Known Patients to Health Records in Washington State Data”, Data Privacy Lab, IQSS, Harvard University, 2013. J. Ullman, “Answering n^{2+o(1)} counting queries with differential privacy is hard”, in Proceedings of the 45th annual ACM symposium on Symposium on theory of computing, Palo Alto, California, USA, 2013, pp. 361-370. A central problem in differentially private data analysis is how to design efficient algorithms capable of answering large numbers of counting queries on a sensitive database. Counting queries are of the form "What fraction of individual records in the database satisfy the property q?"
We prove that if one-way functions exist, then there is no algorithm that takes as input a database D, and k = ~Θ(n^2) arbitrary efficiently computable counting queries, runs in time poly(d, n), and returns an approximate answer to each query, while satisfying differential privacy. We also consider the complexity of answering "simple" counting queries, and make some progress in this direction by showing that the above result holds even when we require that the queries are computable by constant-depth (AC^0) circuits. Our result is almost tight because it is known that ~Ω(n^2) counting queries can be answered efficiently while satisfying differential privacy. Moreover, many more than n^2 queries (even exponential in n) can be answered in exponential time. We prove our results by extending the connection between differentially private query release and cryptographic traitor-tracing schemes to the setting where the queries are given to the sanitizer as input, and by constructing a traitor-tracing scheme that is secure in this setting. J. Hsu, Roth, A., and Ullman, J., “Differential privacy for the analyst via private equilibrium computation”, in Proceedings of the 45th annual ACM symposium on Symposium on theory of computing, Palo Alto, California, USA, 2013, pp. 341-350. We give new mechanisms for answering exponentially many queries from multiple analysts on a private database, while protecting differential privacy both for the individuals in the database and for the analysts. That is, our mechanism's answer to each query is nearly insensitive to changes in the queries asked by other analysts. Our mechanism is the first to offer differential privacy on the joint distribution over analysts' answers, providing privacy for data analysts even if the other data analysts collude or register multiple accounts.
In some settings, we are able to achieve nearly optimal error rates (even compared to mechanisms which do not offer analyst privacy), and we are able to extend our techniques to handle non-linear queries. Our analysis is based on a novel view of the private query-release problem as a two-player zero-sum game, which may be of independent interest. G. N. Rothblum, Vadhan, S., and Wigderson, A., “Interactive proofs of proximity: delegating computation in sublinear time”, in Proceedings of the 45th annual ACM symposium on Symposium on theory of computing, Palo Alto, California, USA, 2013, pp. 793-802. We study interactive proofs with sublinear-time verifiers. These proof systems can be used to ensure approximate correctness for the results of computations delegated to an untrusted server. Following the literature on property testing, we seek proof systems where with high probability the verifier accepts every input in the language, and rejects every input that is far from the language. The verifier's query complexity (and computation complexity), as well as the communication, should all be sublinear. We call such a proof system an Interactive Proof of Proximity (IPP). On the positive side, our main result is that all languages in NC have Interactive Proofs of Proximity with roughly √n query and communication complexities, and polylog(n) communication rounds. This is achieved by identifying a natural language, membership in an affine subspace (for a structured class of subspaces), that is complete for constructing interactive proofs of proximity, and providing efficient protocols for it. In building an IPP for this complete language, we show a tradeoff between the query and communication complexity and the number of rounds. For example, we give a 2-round protocol with roughly n^{3/4} queries and communication.
On the negative side, we show that there exist natural languages in NC1, for which the sum of queries and communication in any constant-round interactive proof of proximity must be polynomially related to n. In particular, for any 2-round protocol, the sum of queries and communication must be at least ~Ω(√n). Finally, we construct much better IPPs for specific functions, such as bipartiteness on random or well-mixing graphs, and the majority function. The query complexities of these protocols are provably better (by exponential or polynomial factors) than what is possible in the standard property testing model, i.e. without a prover. Y. Chen, Chong, S., Kash, I. A., Moran, T., and Vadhan, S. , “ Truthful mechanisms for agents that value privacy ”, in Proceedings of the fourteenth ACM conference on Electronic commerce , Philadelphia, Pennsylvania, USA, 2013, pp. 215-232. Recent work has constructed economic mechanisms that are both truthful and differentially private. In these mechanisms, privacy is treated separately from the truthfulness; it is not incorporated in players' utility functions (and doing so has been shown to lead to non-truthfulness in some cases). In this work, we propose a new, general way of modelling privacy in players' utility functions. Specifically, we only assume that if an outcome o has the property that any report of player i would have led to o with approximately the same probability, then o has small privacy cost to player i. We give three mechanisms that are truthful with respect to our modelling of privacy: for an election between two candidates, for a discrete version of the facility location problem, and for a general social choice problem with discrete utilities (via a VCG-like mechanism). As the number n of players increases, the social welfare achieved by our mechanisms approaches optimal (as a fraction of n). L. Sweeney , “ Discrimination in online ad delivery Commun. ACM , vol. 56, no. 5, pp. 44–54, 2013. 
Google ads, black names and white names, racial discrimination, and click advertising. S. P. Kasiviswanathan, Nissim, K., Raskhodnikova, S., and Smith, A., “Analyzing Graphs with Node Differential Privacy”, in Theory of Cryptography, vol. 7785, Springer Berlin Heidelberg, 2013, pp. 457-476. Abstract: We develop algorithms for the private analysis of network data that provide accurate analysis of realistic networks while satisfying stronger privacy guarantees than those of previous work. We present several techniques for designing node differentially private algorithms, that is, algorithms whose output distribution does not change significantly when a node and all its adjacent edges are added to a graph. We also develop methodology for analyzing the accuracy of such algorithms on realistic networks. The main idea behind our techniques is to “project” (in one of several senses) the input graph onto the set of graphs with maximum degree below a certain threshold. We design projection operators, tailored to specific statistics that have low sensitivity and preserve information about the original statistic. These operators can be viewed as giving a fractional (low-degree) graph that is a solution to an optimization problem described as a maximum flow instance, linear program, or convex program. In addition, we derive a generic, efficient reduction that allows us to apply any differentially private algorithm for bounded-degree graphs to an arbitrary graph. This reduction is based on analyzing the smooth sensitivity of the “naive” truncation that simply discards nodes of high degree. A. Beimel, Nissim, K., and Stemmer, U., “Characterizing the sample complexity of private learners”, in Proceedings of the 4th conference on Innovations in Theoretical Computer Science, Berkeley, California, USA, 2013, pp. 97-110. In 2008, Kasiviswanathan et al. defined private learning as a combination of PAC learning and differential privacy [16].
Informally, a private learner is applied to a collection of labeled individual information and outputs a hypothesis while preserving the privacy of each individual. Kasiviswanathan et al. gave a generic construction of private learners for (finite) concept classes, with sample complexity logarithmic in the size of the concept class. This sample complexity is higher than what is needed for non-private learners, hence leaving open the possibility that the sample complexity of private learning may be sometimes significantly higher than that of non-private learning. We give a combinatorial characterization of the sample size sufficient and necessary to privately learn a class of concepts. This characterization is analogous to the well known characterization of the sample complexity of non-private learning in terms of the VC dimension of the concept class. We introduce the notion of probabilistic representation of a concept class, and our new complexity measure RepDim corresponds to the size of the smallest probabilistic representation of the concept class. We show that any private learning algorithm for a concept class C with sample complexity m implies RepDim(C) = O(m), and that there exists a private learning algorithm with sample complexity m = O(RepDim(C)). We further demonstrate that a similar characterization holds for the database size needed for privately computing a large class of optimization problems and also for the well studied problem of private data release. A. Gupta, Roth, A., and Ullman, J. , “ Iterative Constructions and Private Data Release ”, in Theory of Cryptography - 9th Theory of Cryptography Conference, TCC 2012 , Taormina, Sicily, Italy, 2012, Lecture Notes in Computer Science., vol. 7194, pp. 339-356. 
In this paper we study the problem of approximately releasing the cut function of a graph while preserving differential privacy, and give new algorithms (and new analyses of existing algorithms) in both the interactive and non-interactive settings. Our algorithms in the interactive setting are achieved by revisiting the problem of releasing differentially private, approximate answers to a large number of queries on a database. We show that several algorithms for this problem fall into the same basic framework, and are based on the existence of objects which we call iterative database construction (IDC) algorithms. We give a new generic framework in which new (efficient) IDC algorithms give rise to new (efficient) interactive private query release mechanisms. Our modular analysis simplifies and tightens the analysis of previous algorithms, leading to improved bounds. We then give a new IDC algorithm (and therefore a new private, interactive query release mechanism) based on the Frieze/Kannan low-rank matrix decomposition. This new release mechanism gives an improvement on prior work in a range of parameters where the size of the database is comparable to the size of the data universe (such as releasing all cut queries on dense graphs). We also give a non-interactive algorithm for efficiently releasing private synthetic data for graph cuts with error O(|V|^{1.5}). Our algorithm is based on randomized response and a non-private implementation of the SDP-based, constant-factor approximation algorithm for cut-norm due to Alon and Naor. Finally, we give a reduction based on the IDC framework showing that an efficient, private algorithm for computing sufficiently accurate rank-1 matrix approximations would lead to an improved efficient algorithm for releasing private synthetic data for graph cuts. We leave finding such an algorithm as our main open problem. C. Dwork, Naor, M., and Vadhan, S.
, “The Privacy of the Analyst and the Power of the State”, in Proceedings of the 53rd Annual IEEE Symposium on Foundations of Computer Science (FOCS '12), New Brunswick, NJ, 2012, pp. 400–409. We initiate the study of "privacy for the analyst" in differentially private data analysis. That is, not only will we be concerned with ensuring differential privacy for the data (i.e. individuals or customers), which are the usual concern of differential privacy, but we also consider (differential) privacy for the set of queries posed by each data analyst. The goal is to achieve privacy with respect to other analysts, or users of the system. This problem arises only in the context of stateful privacy mechanisms, in which the responses to queries depend on other queries posed (a recent wave of results in the area utilized cleverly coordinated noise and state in order to allow answering privately hugely many queries). We argue that the problem is real by proving an exponential gap between the number of queries that can be answered (with non-trivial error) by stateless and stateful differentially private mechanisms. We then give a stateful algorithm for differentially private data analysis that also ensures differential privacy for the analyst and can answer exponentially many queries. Y. Dodis, López-Alt, A., Mironov, I., and Vadhan, S., “Differential Privacy with Imperfect Randomness”, in Proceedings of the 32nd International Cryptology Conference (CRYPTO '12), Santa Barbara, CA, 2012, Lecture Notes in Computer Science, vol. 7417, pp. 497–516. In this work we revisit the question of basing cryptography on imperfect randomness.
Bosley and Dodis (TCC’07) showed that if a source of randomness R is “good enough” to generate a secret key capable of encrypting k bits, then one can deterministically extract nearly k almost uniform bits from R, suggesting that traditional privacy notions (namely, indistinguishability of encryption) requires an “extractable” source of randomness. Other, even stronger impossibility results are known for achieving privacy under specific “non-extractable” sources of randomness, such as the γ-Santha-Vazirani (SV) source, where each next bit has fresh entropy, but is allowed to have a small bias γ < 1 (possibly depending on prior bits). We ask whether similar negative results also hold for a more recent notion of privacy called differential privacy (Dwork et al., TCC’06), concentrating, in particular, on achieving differential privacy with the Santha-Vazirani source. We show that the answer is no. Specifically, we give a differentially private mechanism for approximating arbitrary “low sensitivity” functions that works even with randomness coming from a γ-Santha-Vazirani source, for any γ < 1. This provides a somewhat surprising “separation” between traditional privacy and differential privacy with respect to imperfect randomness. Interestingly, the design of our mechanism is quite different from the traditional “additive-noise” mechanisms (e.g., Laplace mechanism) successfully utilized to achieve differential privacy with perfect randomness. Indeed, we show that any (accurate and private) “SV-robust” mechanism for our problem requires a demanding property called consistent sampling, which is strictly stronger than differential privacy, and cannot be satisfied by any additive-noise mechanism. J. Thaler, Ullman, J., and Vadhan, S. P. , “ Faster Algorithms for Privately Releasing Marginals ”, in Automata, Languages, and Programming - 39th International Colloquium, ICALP 2012 , Warwick, UK, 2012, Lecture Notes in Computer Science., vol. 7391. 
We study the problem of releasing k-way marginals of a database D ∈ ({0, 1}^d)^n, while preserving differential privacy. The answer to a k-way marginal query is the fraction of D’s records x ∈ {0, 1}^d with a given value in each of a given set of up to k columns. Marginal queries enable a rich class of statistical analyses of a dataset, and designing efficient algorithms for privately releasing marginal queries has been identified as an important open problem in private data analysis (cf. Barak et al., PODS ’07). We give an algorithm that runs in time d^O(√k) and releases a private summary capable of answering any k-way marginal query with at most ±.01 error on every query as long as n ≥ d^O(√k). To our knowledge, ours is the first algorithm capable of privately releasing marginal queries with non-trivial worst-case accuracy guarantees in time substantially smaller than the number of k-way marginal queries, which is d^Θ(k) (for k ≪ d). M. Kearns, Pai, M., Roth, A., and Ullman, J., “Private Equilibrium Release, Large Games, and No-Regret Learning”, 2012. We give mechanisms in which each of n players in a game is given their component of an (approximate) equilibrium in a way that guarantees differential privacy---that is, the revelation of the equilibrium components does not reveal too much information about the utilities of the other players. More precisely, we show how to compute an approximate correlated equilibrium (CE) under the constraint of differential privacy (DP), provided n is large and any player's action affects any other's payoff by at most a small amount. Our results draw interesting connections between noisy generalizations of classical convergence results for no-regret learning, and the noisy mechanisms developed for differential privacy. Our results imply the ability to truthfully implement good social-welfare solutions in many games, such as games with small Price of Anarchy, even if the mechanism does not have the ability to enforce outcomes.
We give two different mechanisms for DP computation of approximate CE. The first is computationally efficient, but has a suboptimal dependence on the number of actions in the game; the second is computationally inefficient, but allows for games with exponentially many actions. We also give a matching lower bound, showing that our results are tight up to logarithmic factors.
Composition of Transformations 12.5: Composition of Transformations Created by: CK-12
Learning Objectives
• Perform a glide reflection.
• Perform a reflection over parallel lines and the axes.
• Perform a double rotation with the same center of rotation.
• Determine a single transformation that is equivalent to a composite of two transformations.
Review Queue
1. Reflect $ABCD$ over the $x-$axis to find $A'B'C'D'$.
2. Translate $A'B'C'D'$ using the rule $(x,y) \rightarrow (x+4,y)$ to find $A''B''C''D''$.
3. Now, start over. Translate $ABCD$ using the rule $(x,y) \rightarrow (x+4,y)$ to find $A'B'C'D'$.
4. Reflect $A'B'C'D'$ over the $x-$axis to find $A''B''C''D''$.
Know What? An example of a glide reflection is your own footprint. The equations to find your average footprint are in the diagram below. Determine your average footprint and write the rule for one stride. You may assume your stride starts at (0, 0).
Glide Reflections
Now that we have learned all our rigid transformations, or isometries, we can perform more than one on the same figure. In your homework last night you actually performed a composition of two reflections. And, in the Review Queue above, you performed a composition of a reflection and a translation.
Composition (of transformations): To perform more than one rigid transformation on a figure.
Glide Reflection: A composition of a reflection and a translation. The translation is in a direction parallel to the line of reflection.
So, in the Review Queue above, you performed a glide reflection on $ABCD$. Notice that the order in which you reflect or translate does not matter. It is important to note that the translation for any glide reflection will always be in one direction. So, if you reflect over a vertical line, the translation can be up or down, and if you reflect over a horizontal line, the translation will be to the left or right.
Example 1: Reflect $\triangle ABC$ over the $y-$axis and then translate the image 8 units down.
Solution: The green image below is the final answer.
$A(8,8) \rightarrow A''(-8,0)$, $B(2,4) \rightarrow B''(-2,-4)$, $C(10,2) \rightarrow C''(-10,-6)$
One of the interesting things about compositions is that they can always be written as one rule. What this means is, you don’t necessarily have to perform one transformation followed by the next. You can write a rule and perform them at the same time.
Example 2: Write a single rule for $\triangle ABC$ to $\triangle A''B''C''$ from Example 1.
Solution: Looking at the coordinates of $A$ and $A''$, the $x-$value is the opposite sign and the $y-$value is $y - 8$. Therefore the rule is $(x,y) \rightarrow (-x,y-8)$.
Notice that this follows the rules we have learned in previous sections about a reflection over the $y-$axis and translations.
Reflections over Parallel Lines
The next composition we will discuss is a double reflection over parallel lines. For this composition, we will only use horizontal or vertical lines.
Example 3: Reflect $\triangle ABC$ over $y = 3$ and then over $y = -5$.
Solution: Unlike a glide reflection, order matters. Therefore, you would reflect over $y = 3$ first, followed by $y = -5$.
Example 4: Write a single rule for $\triangle ABC$ to $\triangle A''B''C''$ from Example 3.
Solution: Looking at the graph below, we see that the two lines are 8 units apart and the figures are 16 units apart. Therefore, the double reflection is the same as a single translation that is double the distance between the two lines: $(x,y) \rightarrow (x,y-16)$.
Reflections over Parallel Lines Theorem: If you compose two reflections over parallel lines that are $h$ units apart, it is the same as a single translation of $2h$ units.
Be careful with this theorem. Notice, it does not say which direction the translation is in. So, to apply this theorem, you would still need to visualize, or even do, the reflections to see in which direction the translation would be.
Example 5: $\triangle DEF$ has vertices $D(3, -1), E(8, -3),$ and $F(6, 4)$. Reflect $\triangle DEF$ over $x = -5$ and then over $x = 1$. Determine which one translation this double reflection is the same as.
Solution: From the Reflections over Parallel Lines Theorem, we know that this double reflection is going to be the same as a single translation of $2(1 -(-5))$, or 12 units. Because we reflect first over the line farther to the left, $\triangle D''E''F''$ will be 12 units to the right of $\triangle DEF$. Had we reflected over $x = 1$ first and then $x = -5$, the image would instead be 12 units to the left.
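The Reflections over Parallel Lines Theorem is easy to sanity-check numerically. The following short sketch (my addition, not part of the text) reflects a point over the lines $y = 3$ and $y = -5$ from Example 3 and confirms that the composition matches the translation $(x,y) \rightarrow (x,y-16)$:

```python
# Sketch (not from the text): numerically checking the Reflections over
# Parallel Lines Theorem for the lines y = 3 and y = -5 used in Example 3.

def reflect_over_horizontal(point, k):
    """Reflect (x, y) over the horizontal line y = k."""
    x, y = point
    return (x, 2 * k - y)

def double_reflect(point, k1, k2):
    """Reflect over y = k1, then over y = k2."""
    return reflect_over_horizontal(reflect_over_horizontal(point, k1), k2)

# The lines y = 3 and y = -5 are h = 8 units apart, so the composition
# should be a translation of 2h = 16 units: (x, y) -> (x, y - 16).
p = (2, 4)
image = double_reflect(p, 3, -5)
print(image)  # (2, -12), i.e. (2, 4 - 16)
```

Swapping the two lines gives the translation in the opposite direction, which is why the theorem alone cannot tell you which way the figure moves.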
Reflections over the $x$ and $y$ Axes
You can also reflect over intersecting lines. First, we will reflect over the $x$ and $y$ axes.
Example 6: Reflect $\triangle DEF$ from Example 5 over the $x-$axis, followed by the $y-$axis. Find the coordinates of $\triangle D''E''F''$ and the one transformation this double reflection is the same as.
Solution: $\triangle D''E''F''$ is the final image of $\triangle DEF$ after both reflections:
$D(3,-1) \rightarrow D''(-3,1)$, $E(8,-3) \rightarrow E''(-8,3)$, $F(6,4) \rightarrow F''(-6,-4)$
If you recall the rules of rotations from the previous section, this is the same as a rotation of $180^\circ$.
Reflection over the Axes Theorem: If you compose two reflections over each axis, then the final image is a rotation of $180^\circ$ of the original.
With this particular composition, order does not matter. Let’s look at the angle of intersection for these lines. We know that the axes are perpendicular, which means they intersect at a $90^\circ$ angle; the resulting rotation of $180^\circ$ is double this angle.
Reflections over Intersecting Lines
Now, we will take the concept we were just discussing and apply it to any pair of intersecting lines. For this composition, we are going to take it out of the coordinate plane. Then, we will apply the idea to a few lines in the coordinate plane, where the point of intersection will always be the origin.
Example 7: Copy the figure below and reflect it over $l$, followed by $m$.
Solution: The easiest way to reflect the triangle is to fold your paper on each line of reflection and draw the image. It should look like this: The green triangle would be the final answer.
Investigation 12-2: Double Reflection over Intersecting Lines
Tools Needed: Example 7, protractor, ruler, pencil
1. Take your answer from Example 7 and measure the angle of intersection for lines $l$ and $m$. If you copied the figure exactly from the text, it should be about $55^\circ$.
2. Draw lines from two corresponding points on the blue triangle and the green triangle. These are the dotted lines in the diagram below.
3. Measure this angle using your protractor.
How does it relate to $55^\circ$?
Again, if you copied the image exactly from the text, the angle should be $110^\circ$.
From this investigation, we see that the double reflection over two lines that intersect at a $55^\circ$ angle is the same as a rotation of $110^\circ$, where the center of rotation is the point of intersection of $m$ and $l$. Notice that $110^\circ$ is twice $55^\circ$.
Reflection over Intersecting Lines Theorem: If you compose two reflections over lines that intersect at $x^\circ$, then the resulting image is a rotation of $2x^\circ$, where the center of rotation is the point of intersection.
Notice that the Reflection over the Axes Theorem is a specific case of this one.
Example 8: Reflect the square over $y = x$, followed by a reflection over the $x-$axis.
Solution: First, reflect the square over $y = x$. Then, reflect the image over the $x-$axis.
Example 9: Determine the one rotation that is the same as the double reflection from Example 8.
Solution: Let’s use the theorem above. First, we need to figure out what the angle of intersection is for $y = x$ and the $x-$axis. $y = x$ makes a $45^\circ$ angle with the $x-$axis, so the double reflection is the same as a rotation of $2 \cdot 45^\circ = 90^\circ$. Because the square was rotated clockwise, this is equivalent to a $270^\circ$ counterclockwise rotation.
Know What? Revisited The average 6 foot tall man has a $0.415 \times 6 = 2.5$ foot stride. Therefore, the rule for one stride is $(x,y) \rightarrow (-x,y+2.5)$.
Review Questions
1. Explain why the composition of two or more isometries must also be an isometry.
2. What one transformation is equivalent to a reflection over two parallel lines?
3. What one transformation is equivalent to a reflection over two intersecting lines?
Use the graph of the square below to answer questions 4-7.
4. Perform a glide reflection: reflect the square over the $x-$axis, then translate it 6 units to the right.
5. What is the rule for this glide reflection?
6. What glide reflection would move the image back to the preimage?
7. Start over. Would the coordinates of a glide reflection where you move the square 6 units to the right and then reflect over the $x-$axis be any different from #4?
Use the graph of the triangle below to answer questions 8-10.
8. Perform a glide reflection over the $y-$axis.
9. What is the rule for this glide reflection?
10. What glide reflection would move the image back to the preimage?
Use the graph of the triangle below to answer questions 11-15.
11. Reflect the preimage over $y = -1$, followed by $y = -7$.
12. What one transformation is this double reflection the same as?
13.
What one translation would move the image back to the preimage?
14. Start over. Reflect the preimage over $y = -7$, followed by $y = -1$.
15. Write the rules for #11 and #14. How do they differ?
Use the graph of the trapezoid below to answer questions 16-20.
16. Reflect the preimage over $y = -x$, followed by the $y-$axis.
17. What one transformation is this double reflection the same as?
18. What one transformation would move the image back to the preimage?
19. Start over. Reflect the preimage over the $y-$axis, followed by $y = -x$.
20. Write the rules for #16 and #19. How do they differ?
Fill in the blanks or answer the questions below.
21. Two parallel lines are 7 units apart. If you reflect a figure over both, how far apart will the preimage and final image be?
22. After a double reflection over parallel lines, a preimage and its image are 28 units apart. How far apart are the parallel lines?
23. A double reflection over the $x$ and $y$ axes is the same as a rotation of ____$^\circ$.
24. What is the center of rotation for #23?
25. Two lines intersect at an $83^\circ$ angle. If a figure is reflected over both lines, how many degrees will it be rotated?
26. A preimage and its image are $244^\circ$ apart. At what angle do the two lines of reflection intersect?
27. A rotation of $45^\circ$ is the same as a double reflection over lines that intersect at a ____$^\circ$ angle.
28. After a double reflection over parallel lines, a preimage and its image are 62 units apart. How far apart are the parallel lines?
29. A figure is to the left of $x = a$. If it is reflected over $x = a$, followed by $x = b$, where $b > a$, where is the final image?
30. A figure is to the left of $x = a$. If it is reflected over $x = b$, followed by $x = a$, where $b > a$, where is the final image?
Review Queue Answers
1. $A'(-2, -8), B'(4, -5), C'(-4, -1), D'(-6, -6)$
2. $A''(2, -8), B''(8, -5), C''(0, -1), D''(-2, -6)$
3. $A'(2, 8), B'(8, 5), C'(0, 1), D'(-2, 6)$
4. The coordinates are the same as #2.
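As a closing check, the Reflection over Intersecting Lines Theorem can also be verified with coordinates instead of a protractor. This sketch (my addition, assuming both lines pass through the origin, as in the coordinate-plane examples above) composes two reflections and compares the result with a single rotation of double the angle:

```python
# Sketch (assumption: both lines pass through the origin): checking the
# Reflection over Intersecting Lines Theorem numerically.
import math

def reflect_over_line(point, theta):
    """Reflect (x, y) over the line through the origin at angle theta (radians)."""
    x, y = point
    c, s = math.cos(2 * theta), math.sin(2 * theta)
    return (c * x + s * y, s * x - c * y)

def rotate(point, phi):
    """Rotate (x, y) counterclockwise about the origin by phi (radians)."""
    x, y = point
    return (x * math.cos(phi) - y * math.sin(phi),
            x * math.sin(phi) + y * math.cos(phi))

# Reflect over the x-axis (0 degrees), then over a line at 55 degrees: the
# result should match a single rotation of 2 * 55 = 110 degrees.
p = (3.0, 1.0)
double = reflect_over_line(reflect_over_line(p, 0.0), math.radians(55))
single = rotate(p, math.radians(110))
print(double, single)  # the two points agree
```

The same code with both angles equal to 45° and 90° reproduces the Reflection over the Axes Theorem as the special case of a 180° rotation.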
Langhorne Algebra 1 Tutor ...I am currently a student majoring in secondary education: mathematics. I plan on becoming a full time math teacher as soon as my bachelor's degree is completed. After my bachelor completion, I plan on working toward a PhD in particle physics.Algebra 1 is perhaps my favorite mathematics subject of all! 18 Subjects: including algebra 1, English, writing, calculus ...I would consider these three fields to be my specialties within Biology. Additionally, I studied German for 5 years in high school, and then continued studying it for 4 years in college. This included a semester abroad, at the University of Tuebingen, in Tuebingen, Germany. 12 Subjects: including algebra 1, reading, chemistry, biology ...I have had much success using this technique in my tutoring over the past five years. I can tutor general chemistry as well as organic chemistry and I would be happy to help you prepare for the chemistry portion of the MCAT, the GRE subject exam in chemistry, or the chemistry AP exam. See my subject section below for a complete list of the subjects I tutor! 8 Subjects: including algebra 1, chemistry, biology, American history ...I want to displace the student as little as possible, and want to tutor them in an environment they are comfortable and familiar with. I look forward to working with you!I currently teach Algebra I at the high school level. I am also certified to teach Math in grades K-12. 2 Subjects: including algebra 1, prealgebra ...All sessions encourage good study habits and I strive to facilitate learning, without lecturing the material. Understanding the content of a written passage can be confusing for even the best of students. It is absolutely essential that a student be able to extract the content from books and articles. 62 Subjects: including algebra 1, reading, English, calculus
September 1996, Volume 5, Number 3 M. Harakal, J. Chmurny [full-text] Image Generation by Octree This paper deals with the generation of three-dimensional (3-D) images by octree and with the application of tesseral theory in image processing. M. Brejl [full-text] Restoration of Local Degradations in Audio Signals The paper presents an algorithm for the restoration of local degradations in audio signals. The theoretical foundations and basic suggestions of this algorithm were published in [1]. A complete description of the restoration process and some improvements are presented here. K. Vlcek, J. Popelek [full-text] Analog Hardware Description Language and Its Relations to VHDL The primary motivation for an analogue hardware description language (VHDL-A) is to support the modelling of physical systems. The VHDL-A must therefore allow modelling of physical conservation laws, such as the energy conservation law, which states that energy can neither be created nor destroyed, but can only change its form. M. Hajny [full-text] Interference Fading Prediction on the Line-of-Sight Radio Links Fading on LOS links is an important factor influencing the reliability of communication. In this paper, prediction models for interference fading are described. Predictions are compared with data measured in the experiment of TESTCOM & CTU PRAGUE at a frequency of 13 GHz. O. Fiser [full-text] Prediction of Rain and Water Vapour Attenuation at Frequencies 10-30 GHz This contribution discusses the attenuation of microwaves in the atmosphere and applies the results to the conditions in the Czech Republic. The simple CCIR method for predicting rain attenuation is described, and the relevant rain intensity, as the only meteorological parameter required, is presented on a map. Five methods of evaluating the water vapour attenuation are discussed and compared.
Also included is the computation of the zenith water vapour attenuation and its distribution, based on the statistical evaluation of real four-year vertical profiles of important meteorological parameters from the Prague station. A. Urbas, B. Galinski [full-text] Unbalanced Czarnul Resistive MOS Circuit in the Symmetrical MOSFET-C Filters An analysis of the influence of mismatches in the MOS transistor array on the center frequency ω0 and Q-factor of the MOSFET-C filter poles is performed. As a useful parameter characterizing mismatches of transistors, the dispersion of the threshold voltage VT and transconductance K per unit area in the VLSI process is considered. The optimal formula for the control gate voltages VG1, VG2 of the MOS transistors, minimizing the absolute errors of the ω0 and Q parameters, is given.
Nyquist Diagram in Linear Control System
I have a lot of trouble drawing Nyquist diagrams. Here are some problems:
1. As the book mentions, the open-loop transfer function doesn't have any poles or zeros in the right half of the s-plane, and the book gives the incomplete diagram, so I completed it as you see. But my answer is different from the book's!!!! Where did I go wrong?
And another question:
2. Here are incomplete diagrams. I completed and solved them with this approach, but one of the answers is wrong... Now what is my problem?!!!!
Also, how can I find good video references about Nyquist diagrams in linear control systems?
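Since the book's actual transfer functions aren't shown in the thread, here is a small sketch with an assumed open-loop function G(s) = 1/((s + 1)(s + 2)), which, like the problem described, has no right-half-plane poles or zeros. It samples points of the Nyquist locus G(jω); the ω < 0 half of the plot is just the complex-conjugate mirror image, which is how the diagram is "completed":

```python
# Sketch with an assumed transfer function (the thread's G(s) isn't given):
# sampling the Nyquist locus G(jw) for G(s) = 1/((s + 1)(s + 2)).

def G(s):
    return 1.0 / ((s + 1.0) * (s + 2.0))

# Sweep w from 0 upward; the w < 0 half of the locus is the mirror image
# (complex conjugate) of these points.
for w in [0.0, 0.5, 1.0, 2.0, 10.0]:
    g = G(complex(0.0, w))
    print(f"w = {w:5.1f}: G(jw) = {g.real:+.4f} {g.imag:+.4f}j")
```

With no open-loop poles or zeros in the right half-plane, the closed loop is stable exactly when this locus does not encircle the -1 point, so plotting enough of these sampled points (plus the conjugate half) lets you check an answer against the book.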
Patent application title: SIGNAL PROCESSING METHOD AND APPARATUS A signal processing method that includes inputting sample values of a signal and considering the signal to have a plurality of portions. For each portion, a predetermined function is fitted to the sample values of that portion of the signal by calculating values of coefficients for that predetermined function. At least one statistical information function is evaluated for the signal to determine statistical information about the signal and the calculated coefficient values are used so that the form of the statistical information function has been determined for the predetermined function used to fit the signal portion and further includes using the statistical information obtained about the signal to process the signal. 1. A signal processing method comprising: inputting sample values of a band-limited signal; considering the signal to comprise a plurality of portions; wherein the signal processing method is characterised for each portion of the signal by: a) fitting a predetermined interpolating or approximating function to the sample values of the portion of the signal, by calculating values of coefficients for the predetermined interpolating or approximating function, the interpolating or approximating function approximating a reconstruction of the portion of the signal; and b) evaluating a statistical information function using the calculated coefficient values, to determine statistical information sought for the portion of the signal, wherein the statistical information function is an analytic solution for: (i) the statistical information; and (ii) the predetermined interpolating or approximating function; using the statistical information obtained about the signal to process the signal.
2. The method according to claim 1, wherein the statistical information is information about the probability distribution of the signal. 3. The method according to claim 1, wherein the statistical information about the signal comprises one or more selected from the group consisting of the probability density function, the cumulative distribution function, the conditional density, the mean, the variance, other moments and the entropy. 4. The method according to claim 3, further comprising: cumulatively populating the appropriate bins of a histogram, by evaluating the probability density function or cumulative distribution function for each portion of the signal. 5. The method according to claim 1, wherein the portions of the signal for processing according to the method comprise the portions between consecutive samples. 6. The method according to claim 1, further comprising interpolating additional sample values from the input sample values for at least a region of the signal, and using the additional values in the fitting and evaluating steps. 7. The method according to claim 1, wherein the predetermined interpolating function is a polynomial. 8. The method according to claim 1, wherein the predetermined interpolating function is one selected from the group consisting of a 1D linear function, a 1D quadratic function, a 2D bilinear function, a cubic spline and a sinc-based function. 9. The method according to claim 8, wherein for a one dimensional interpolating function: (i) the statistical information function is F_y(y) = (y − b)/a, for b ≦ y ≦ a + b, when the predetermined interpolating function is the one dimensional interpolating function y = ax + b, and the desired statistical information is the CDF; or (ii) the statistical information function is f_y(y) = f_x(x_r)/|2ax_r + b|, where x_r is a real root of ax^2 + bx + c = y, when the predetermined interpolating function is the one dimensional interpolating function y = ax^2 + bx + c, and the desired statistical information is the PDF.
10. The method according to claim 1, wherein the signal is one of a speech signal, a still image signal or a video signal. 11. The method according to claim 1, further comprising: inputting the sample values of a second sampled signal, and wherein the evaluating step comprises evaluating the joint probability density function between the two sampled signals. 12. The method according to claim 11, wherein: the sampled signals are medical images taken using different modalities, for example, an MRI (magnetic resonance imaging) scan and a CT (computed/computerized tomography) scan; the statistical information is the marginal and joint probability density functions (PDFs) between the images; and the signal processing further comprises transforming one image geometrically to map that image onto the other image. 13. The method according to claim 1, wherein: the signal samples are ordered and/or the signal has been sampled non-uniformly. 14. A computer-readable medium storing a computer program comprising computer-executable code that, when executed on a computer system, causes the computer system to perform a signal processing method according to claim 1. 15.
A signal processing apparatus comprising: an input that receives sample values of a band-limited signal; an analysis module for considering portions of the signal; wherein the signal processing apparatus is characterised in that, for each portion of the signal, the analysis module is adapted to: a) calculate values of coefficients for fitting a predetermined interpolating or approximating function to the sample values of that portion of the signal, the interpolating or approximating function approximating a reconstruction of the portion of the signal; and b) evaluate a statistical information function using the calculated coefficient values, to determine statistical information sought for the portion of the signal, wherein the statistical information function is an analytic solution for: (i) the statistical information; and (ii) the predetermined interpolating or approximating function; further comprising a processing module that uses the statistical information obtained about the signal by the analysis module to process the signal. The present application is a continuation patent application of International Application No. PCT/GB2005/004317 filed 9 Nov. 2005 which was published in English pursuant to Article 21(2) of the Patent Cooperation Treaty and which claims priority to GB Application No. 0424737.5 filed 9 Nov. 2004. Said applications are expressly incorporated herein by reference in their entirety. FIELD [0002] The present invention relates to a signal processing method in which statistical information about a signal is determined, and also provides a computer program and signal processing apparatus for performing signal processing in which statistical information about a signal is determined. 
BACKGROUND [0003] In many fields, for example speech signal processing, artificial intelligence, telecommunications and medical imaging, it is desired to obtain estimates of statistical information such as probability distributions from discrete, sampled signals, such as digitised speech, still images or video. Examples of probability distributions are probability density functions (PDFs) and cumulative distribution functions (CDFs). For a one-dimensional signal y=f(x), the PDF gives the density of samples of that signal having particular values. As an example, to assist understanding of the background of the present invention, for the signal y=sin(x), the PDF can be calculated analytically and is plotted in FIG. 1. FIG. 1 also plots the CDF, which is obtained by integrating the PDF and gives the probability that the signal has a value less than a particular y value. For example, for y=1, the CDF is 1 because the function sin(x) is always less than or equal to 1. For y=0, the CDF is 0.5 because there is a probability of 0.5 that a sample of the signal will be less than zero. In the case of real-world signals which are sampled and whose form is not known in an analytic representation, then a conventional method for PDF estimation is known as histogramming. The possible values of the signal are divided into a number of ranges known as bins. Then for each bin a count is kept of how many times the available samples of the signal fall within the range of that bin. A histogram can then be plotted of the number of times a sample falls within the range of a particular bin divided by the total number of samples. FIG. 2 shows an example of such a histogram for the signal y=sin(x). In this case, the signal values y, which must lie between -1 and +1, are divided into 64 bins, each with a bin-width of 1/32. It can be seen that this histogram gives an approximation to the PDF. The continuous line representing the PDF is superimposed on FIG. 2. 
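The histogramming procedure behind FIG. 2 can be sketched as follows (an illustrative reconstruction, not code from the application): it bins samples of y = sin(x) into 64 bins over [-1, 1] and compares a bin near y = 0 against the analytic PDF 1/(π√(1 − y²)) mentioned above:

```python
# Sketch of conventional histogramming, in the spirit of FIG. 2: 64 bins
# over [-1, 1] for samples of y = sin(x), compared with the analytic PDF.
import math

N, BINS = 100000, 64
width = 2.0 / BINS
counts = [0] * BINS
for i in range(N):
    y = math.sin(2 * math.pi * i / N)
    k = min(int((y + 1.0) / width), BINS - 1)  # clamp y = 1 into the top bin
    counts[k] += 1

# Normalise so the histogram integrates to 1, like a PDF.
hist = [c / (N * width) for c in counts]
y_mid = 1.0 / 64          # centre of the bin just above y = 0
analytic = 1.0 / (math.pi * math.sqrt(1.0 - y_mid ** 2))
print(hist[32], analytic)  # the two should be close near y = 0
```

Note the choices baked in here (number of bins, bin boundaries) that the text identifies as weaknesses of the method: shifting the signal slightly moves samples across bin boundaries and changes the estimate.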
The PDF has been normalized by a weighting factor, assuming that the point evaluation of the PDF is constant across the bin width. The bin values of a histogram represent a probability that the signal lies between two value points, the upper and lower bin boundaries, and therefore must be less than 1. The histogram is a piecewise constant function. On the other hand, a PDF is a continuous function and at a point represents the density of values which that function (signal) passes through (and hence can be greater than 1). In the limit of zero bin-width, the histogram equals the PDF. Probability distribution estimation techniques generally fall into one of three categories: parametric, non-parametric and semi-parametric. Parametric techniques are suitable where a particular form of function can be assumed for application-specific reasons. For example, Rician and Rayleigh functions are often used in medical signal processing applications, e.g. ultrasound imaging. However, such analytical forms for the PDF are generally either not known or not appropriate in most applications. Probably the simplest and most widely used non-parametric technique is histogramming, as explained above. However, this technique has a number of associated problems, such as the requirement to define in advance the number of bins, and to specify the arbitrary bin boundaries, both of which render the histogram sensitive to slight displacements of the signal, and also the block-like nature of the resulting PDF estimate. Furthermore, the resulting PDF estimates tend to be poor and require large numbers of samples to produce stable estimates (typically the number of samples must considerably exceed the number of bins). Conventionally, it is widely assumed that only the samples available can be used and hence for a given portion of the signal, the number of samples is fixed.
For example, if it was desired to estimate the PDF of the part of an image corresponding to a face, and this was represented by 50 pixels, then conventional methods would use only those 50 pixels. A number of techniques to repair these problems are available, such as Parzen windowing, in which the PDF is approximated as the superposition of kernels placed at domain positions corresponding to the (co-domain) value of each sample. However, they do not work well in general. For example, Parzen windowing avoids arbitrary bin assignments and leads to smoother PDFs; however, a suitable kernel shape and size must be chosen. Conventionally this choice has been somewhat arbitrary and non-systematic, so does not give stable or universally predictable results. Semi-parametric techniques, such as Gaussian mixture models, offer a compromise between the parametric and non-parametric approaches, whereby the superposition of a number of parametric densities is used to approximate the underlying density. Thus the current techniques suffer from the problem that the PDF can only be estimated from large parts of the signal (to obtain enough samples), and the resulting PDF estimates often exhibit poor stability (if the signal is shifted, the PDF estimate changes), poor accuracy, and poor resolution (limited to the bin width); conventional techniques also require the careful setting of several parameters, such as bin-widths or smoothing kernel shapes and sizes. However, according to the invention, it has been realised that some of these limitations and problems arise from the fact that these techniques do not use all of the information in the sampled signal. The Whittaker-Shannon sampling theory states that a band-limited continuous signal y(t) can be uniquely reconstructed from its samples (assumed to be periodic) as long as the sampling rate F satisfies the relationship F ≥ 2B, where B is the highest frequency present in y(t).
When F = 2B the signal is said to be critically sampled and the corresponding sampling frequency is referred to as the Nyquist rate. Since real-world signals have infinite bandwidth (in theory at least) they do not have an upper limit on their frequency. Therefore, they must be low-pass filtered prior to sampling in order to avoid corruption of the reconstruction known as aliasing. In such a case, the reconstruction is of the band-limited signal, its bandwidth chosen appropriately for the application. For example, since speech can generally be understood with ease even if the signal is cut off at 4 kHz, telephone quality speech can be sampled at a rate of 8 kHz. Similarly, for a signal representing an image, filtering (in this case spatial filtering) is performed by the optics, such as the camera lens and aperture. Essentially, band-limited signals of practical interest, such as speech or images, can be reconstructed exactly given three pieces of information: (1) the samples; (2) their order; and (3) the sampling pre-filter characteristics. Often, conventional techniques for PDF estimation, such as histogramming, Parzen windowing and Gaussian mixture models, assume that the samples are independent and identically distributed (IID) samples from some continuous underlying PDF which is to be estimated. However, this assumption is not true for band-limited signals sampled at or above the Nyquist rate. Essentially these methods just use the first piece of information, i.e. the sample values. However, this disregards information. For example, given a sample value of a signal at one point, the next consecutive sample cannot just take an arbitrary value selected from the probability distribution because of, for example, constraints such as the frequency band limitation. This loss of information because of the IID assumption leads to poor system performance in which, for example, the number of samples determines the quality of the final estimate.
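The reconstruction property referred to here can be illustrated with the Whittaker-Shannon interpolation formula; the 3 Hz tone, 8 Hz sampling rate and finite sample window below are illustrative choices (a finite window makes the reconstruction approximate rather than exact):

```python
import numpy as np

F = 8.0                      # sampling rate (Hz), above Nyquist for a 3 Hz tone
T = 1.0 / F
n = np.arange(-200, 201)     # finite window of samples (truncation approximates)
samples = np.sin(2.0 * np.pi * 3.0 * n * T)

def reconstruct(t):
    # Whittaker-Shannon: y(t) = sum_n y[n] * sinc((t - n*T) / T)
    return float(np.sum(samples * np.sinc((t - n * T) / T)))

t = 0.3                      # a point between sample instants
exact = np.sin(2.0 * np.pi * 3.0 * t)
approx = reconstruct(t)
```

Between the sample instants the interpolated value agrees closely with the underlying band-limited signal, which is exactly the property the over-sampling approach of the next section exploits.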
SUMMARY [0012] One simple approach to generating PDFs of arbitrary stability and accuracy, according to one aspect of the invention, is to generate extra samples synthetically, because sampling theory teaches that the band-limited signal can be reconstructed at any point. This corresponds to over-sampling the signal beyond the Nyquist rate. As many extra samples as desired can be generated, and these extra samples can be used to populate a histogram in the conventional manner to produce a PDF estimate of arbitrary stability and accuracy, by forming a histogram with an arbitrarily small bin size. However, there are significant practical challenges that may limit the utility of such an approach. In many cases it is computationally inefficient to generate a sufficiently large number of samples and process them; in some cases such a process would be intractable. The filter function is rarely a simple analytic function, particularly for the case of signals representing an image, so the interpolation must proceed with a reconstruction filter specified numerically, which can be quite an inefficient process in itself. Consequently, another aspect of the invention has been devised using a different approach to address these issues. It is desirable to alleviate, at least partially, the problems of conventional techniques.
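A sketch of this over-sampling approach, with plain linear interpolation standing in for the true band-limited reconstruction filter (an assumption made for brevity):

```python
import numpy as np

x = np.linspace(0.0, 2.0 * np.pi, 50, endpoint=False)
y = np.sin(x)                              # 50 original samples

# 100x over-sampled version via interpolation between the originals.
x_dense = np.linspace(0.0, 2.0 * np.pi, 5000, endpoint=False)
y_dense = np.interp(x_dense, x, y)

coarse, _ = np.histogram(y, bins=64, range=(-1.0, 1.0), density=True)
fine, _ = np.histogram(y_dense, bins=64, range=(-1.0, 1.0), density=True)
# The over-sampled histogram fills every bin the signal actually visits,
# giving a much smoother and more stable estimate than the 50-sample one.
```

With only 50 samples the raw histogram leaves many of the 64 bins empty; the synthetically enlarged sample set covers the full range of the signal.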
Accordingly, one aspect of the present invention provides a signal processing method comprising: inputting sample values of a signal; considering the signal to comprise a plurality of portions; for each portion: fitting a predetermined function to the sample values of that portion of the signal by calculating values of coefficients for that predetermined function; and evaluating at least one statistical information function for the signal to determine statistical information about the signal, using the calculated coefficient values, wherein the form of the at least one statistical information function has been determined for the predetermined function used to fit the signal portion; and further comprising using the statistical information obtained about the signal to process the signal. Another aspect of the invention provides a computer program comprising computer-executable code that when executed on a computer system causes the computer system to perform the method as defined above. The invention also provides a computer-readable medium storing a computer program of the invention. A further aspect of the invention provides a signal processing apparatus comprising: an input that receives sample values of a signal; an analysis module for considering portions of the signal; wherein, for each portion of the signal, the analysis module: calculates values of coefficients for fitting a predetermined function to the sample values of that portion of the signal; and evaluates at least one statistical information function for the signal to determine statistical information about a signal, using the calculated coefficient values, wherein the form of the at least one statistical information function has been determined for the predetermined function used to fit the signal portion; and further comprising a processing module that uses the statistical information obtained about the signal by the analysis module to process the signal. 
The invention is advantageous because it provides a non-parametric technique that enables the probability distribution to be evaluated with arbitrary resolution, independent of the number of sample points. Furthermore, no arbitrary bin widths or smoothing kernel parameters have to be set. This technique is in general more accurate and more efficient to implement than sampling, such as over-sampling and then histogramming. In fact, its accuracy is dependent only on the accuracy of the functional representation of each portion, not the number of samples, nor the number of bins. Another aspect of the invention provides a signal processing method comprising: inputting sample values of a signal; up-sampling the signal by generating additional sample values of the signal using interpolation to produce an up-sampled version of the sampled signal; calculating statistical information about the signal using the up-sampled version of the signal; and further comprising using the statistical information obtained about the signal to process the signal. An apparatus, computer program and computer-readable medium corresponding to the method of this aspect of the invention are also envisaged. BRIEF DESCRIPTION OF DRAWINGS [0019] Embodiments of the invention will now be described by way of example only, with reference to the accompanying drawings in which: FIG. 1 shows plots of the Probability Density Function and Cumulative Distribution Function for a sine function; FIG. 2 is a 64 bin probability histogram derived from the analytical form of the CDF of the sine function, together with the normalised PDF; FIG. 3 comprises illustrative graphs showing that across the span of a spline having a range of x values, a 1D quadratic in x can exhibit, as a function of y, (a) single values, (b) multiple values, or (c) a combination of both; FIG. 4 shows graphically the integration ranges for the 2D bilinear spline case; FIG.
5 illustrates the six basic configurations given by Equations (21) that determine the integration ranges; FIG. 6 gives plots of stability and accuracy of the results of PDF estimations of a sine function by different methods, both according to and not according to the invention; FIG. 7 shows examples of the PDF estimates of the sine function from 50 samples according to four different methods, both according to and not according to the invention; and FIG. 8 depicts a signal processing apparatus embodying the invention. DETAILED DESCRIPTION [0028] The present invention may be used to determine a variety of statistical information about a signal, such as information about the probability distribution of the signal, the probability density function (PDF), the cumulative distribution function (CDF), the conditional density, the mean, the variance, other moments and the entropy. Embodiments of the invention will be described below principally with reference to the example of calculating the PDF and CDF. The preferred embodiment of the invention is to represent the signal using an interpolating function from which a PDF estimate can be obtained. Preferably the interpolating function is a piecewise polynomial; splines are a particular type of piecewise polynomial. Furthermore, instead of over-sampling the interpolating function and histogramming as in an alternative technique, an exact PDF and CDF is calculated from each piecewise section using techniques from probability theory. The method of the preferred embodiment of the invention for calculating the PDF/CDF and histogram consists of three main steps: 1. Calculating the interpolating function coefficients for the signal samples; 2. Calculating the PDF and/or CDF for each piecewise section; and 3. Populating the appropriate histogram bins, using the PDF or CDF for each piecewise section. The final step 3 is optional and is only necessary if an explicit numerical representation of the PDF is required. 
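As a concrete illustration, the three steps above can be sketched for the simplest (1D linear) case, in which each span between consecutive samples contributes a constant density 1/|a| over its value range (Equation 3); the function name and bin handling here are illustrative, not part of the invention:

```python
import numpy as np

# Steps 1-3 for the 1D linear case. Each span between samples y[i] and
# y[i+1] (on a unit domain interval) is y(x) = a*x + b with a = y[i+1]-y[i]
# and b = y[i]; its PDF contribution is the constant 1/|a| between b and
# a+b (Equation 3), spread here over histogram bins (step 3).
def linear_spline_pdf(samples, bins, lo, hi):
    edges = np.linspace(lo, hi, bins + 1)
    pdf = np.zeros(bins)
    n_spans = len(samples) - 1
    for y0, y1 in zip(samples[:-1], samples[1:]):
        span_lo, span_hi = min(y0, y1), max(y0, y1)
        if span_hi == span_lo:
            continue  # a = 0: degenerate (delta) span, skipped for brevity
        for i in range(bins):
            # overlap of this span's value range [b, a+b] with bin i
            o = max(0.0, min(edges[i + 1], span_hi) - max(edges[i], span_lo))
            pdf[i] += o / (span_hi - span_lo)   # fraction of the span's mass
    pdf /= n_spans                  # each span covers 1/n_spans of the domain
    return pdf / np.diff(edges)     # convert bin mass to density

x = np.linspace(0.0, 2.0 * np.pi, 53)
pdf = linear_spline_pdf(np.sin(x), bins=64, lo=-1.0, hi=1.0)
```

Unlike a histogram of the raw samples, every span spreads its probability mass continuously over the bins it crosses, so the estimate does not depend on where the sample values happen to fall relative to the bin boundaries.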
Step 1 comprises considering the signal to comprise a plurality of portions, also referred to as piecewise sections, and using a routine fitting procedure to obtain values of coefficients for the chosen polynomial function to fit each portion in turn. The interpolating function (e.g. piecewise polynomial/spline) can be fitted to the samples using interpolation or approximation procedures. If the samples are considered to be essentially noise-free, then an interpolation fitting procedure can be used; otherwise an approximation fitting procedure can be used. In the interpolation fitting procedure, the interpolating functions pass through the sample points exactly. In the later discussion, the conventional approach is adopted whereby the piecewise spans start at zero and are of unit length. For a 1D signal, such as an audio signal, the portions can comprise sections between consecutive samples separated in time. For a 2D signal, such as an image, the portions comprise spatial areas i.e. pixels. Regarding step 2, it is first necessary to provide the form of the PDF and/or CDF resulting from the chosen interpolating function, such as a polynomial function. In general this technique can be used for any analytical function, and the invention is not limited to polynomial functions. The principle is to treat the problem of obtaining the PDF as a function of a uniform random variable representing the domain of the signal. Further information can be gleaned from Probability, Random Variables and Stochastic Processes, by A Papoulis and S U Pillai, McGraw-Hill 2002. Two known methods are available, using the distribution function or the so-called Transformation formula respectively. The examples below use the latter, but it is understood that this is not crucial for the invention, but merely a specific example. In summary, in order to be able to calculate an analytic function's PDF in closed form, one must be able to solve the function (i.e. 
invert it to obtain x as a function of y) and calculate its derivative. For the CDF, the resulting PDF must be integrated. In 2D one must be able to calculate a Jacobian, solve the function for one of the variables and perform an extra integration to remove a dummy variable. Polynomial splines, at least at low order and in one or two dimensions, fulfil these requirements. Other functions, such as sinc-based functions, might interpolate the signal more closely, but for this method polynomial splines are mathematically tractable and can be computed rapidly. Details of the form of the PDF and CDF and how they are derived are given below for several specific exemplary cases. Using the calculated coefficients from step 1, and the form of the PDF and/or CDF derived from the polynomial function, it is possible to evaluate the PDF and/or CDF for each portion of the signal. For step 3, conventional techniques can be employed to populate a histogram by evaluating for each portion the obtained PDF or CDF. Exemplary Cases Calculating the Form of the PDF and CDF [0038] Case 1. 1D-Linear. In this case, each piecewise section is represented as a polynomial of the form y(x) = ax + b, i.e. straight lines defined by coefficients a and b. According to the Transformation formula mentioned above, the PDF (f_y) is given by:

$$f_y(y) = \frac{1}{\left| \frac{\partial y}{\partial x} \right|} f_x(x) \qquad (1)$$

where f_x is the density of the random variable x, which is assumed to be uniform between the start and end of the piecewise section (by convention for the case of polynomial splines, each section is normalised to start at 0 and end at 1, so the value of the constant is 1). In this case, the derivative is given by ∂y/∂x = a, and inverting the function gives x(y) = (y − b)/a. Substituting these into (1) gives:

$$f_y(y) = \frac{1}{|a|} f_x\!\left( \frac{y-b}{a} \right) \qquad (2)$$

$$= \frac{1}{|a|}, \qquad b \leq y \leq a + b \qquad (3)$$

This has a particularly straightforward and intuitive implementation.
The PDF is simply the superposition of piecewise constant sections of magnitude 1/|a| between domain values b and a+b. This corresponds to adding all values between consecutive points in equal proportion. The CDF (F_y) is given by integrating the PDF with respect to y:

$$F_y(y) = \frac{y - b}{a}, \qquad b \leq y \leq a + b \qquad (4)$$

Case 2. 1D-Quadratic. Each span is represented by a polynomial of the form y(x) = ax² + bx + c. The derivation of the PDF is slightly complicated by the fact that quadratics are in general non-monotonic. Such cases can be handled either by detecting the points at which the curve ceases to be monotonic and modifying the PDF calculation appropriately, or by resampling the spline spans such that each section is strictly monotonic. The latter approach could provide a fast implementation of the PDF estimation stage at the expense of complexity of the spline fitting step. In the following, the former approach is used.

$$y(x) = ax^2 + bx + c, \qquad f_x(x) = 1;\ 0 \leq x \leq 1 \qquad (5)$$

$$\frac{\partial y}{\partial x}(x) = 2ax + b \qquad (6)$$

$$x(y) = \frac{-b \pm \sqrt{b^2 - 4a(c - y)}}{2a} \qquad (7)$$

Due to the non-monotonicity of quadratics, the inverse quadratic function will in general be multi-valued, as indicated by the two roots in Equation 7. However, within the spline span, 0 ≤ x ≤ 1, it may exhibit single values, multiple values or a combination of both, as illustrated in FIG. 3. Fortunately, since quadratics are symmetric about the extremum, we can treat multiple-valued sections by considering only one root (hence one side of the extremum) and multiplying the PDF by two in that domain. For each section of the spline the PDF can be calculated as

$$f_y(y) = \frac{1}{|2ax_r + b|} f_x(x_r) \qquad (8)$$

$$= \frac{1}{\left| 2a \left( \frac{-b + \sqrt{b^2 - 4a(c - y)}}{2a} \right) + b \right|}, \qquad 0 \leq \frac{-b + \sqrt{b^2 - 4a(c - y)}}{2a} \leq 1 \qquad (9)$$

$$= \frac{1}{\sqrt{b^2 - 4a(c - y)}}, \qquad c \leq y \leq a + b + c \qquad (10)$$

The CDF is given by:

$$F_y(y) = \frac{\sqrt{b^2 - 4a(c - y)}}{2a}, \qquad c \leq y \leq a + b + c \qquad (11)$$

Case 3.
2D-Bilinear. The derivation for the two dimensional case requires the introduction of a dummy function and variable which must be integrated out in the final step, denoted y₂ in the following:

$$y_1(x_1, x_2) = a x_1 x_2 + b x_1 + c x_2 + d, \qquad y_2(x_1, x_2) = x_1 \qquad (12)$$

$$x_2(y_1, y_2) = \frac{y_1 - b y_2 - d}{a y_2 + c}, \qquad x_1(y_1, y_2) = y_2 \qquad (13)$$

$$f_{x_1}(x_1),\ f_{x_2}(x_2) = 1;\ 0 \leq x_1, x_2 \leq 1 \qquad (14)$$

The derivative used in the univariate case becomes a Jacobian |J| in the multi-variate case:

$$|J| = \left| \begin{vmatrix} \frac{\partial x_1}{\partial y_1} & \frac{\partial x_1}{\partial y_2} \\ \frac{\partial x_2}{\partial y_1} & \frac{\partial x_2}{\partial y_2} \end{vmatrix} \right| \qquad (15)$$

$$= \left| \begin{vmatrix} 0 & 1 \\ \frac{1}{a y_2 + c} & \frac{-b(a y_2 + c) - a(y_1 - b y_2 - d)}{(a y_2 + c)^2} \end{vmatrix} \right| \qquad (16)$$

$$= \frac{1}{|a y_2 + c|} \qquad (17)$$

The joint PDF between y₁ and y₂ is given by:

$$f_{y_1, y_2} = f_{x_1, x_2}\!\left( y_2, \frac{y_1 - b y_2 - d}{a y_2 + c} \right) |J| \qquad (18)$$

$$= \frac{1}{|a y_2 + c|}, \qquad 0 \leq y_2 \leq 1;\ 0 \leq \frac{y_1 - b y_2 - d}{a y_2 + c} \leq 1 \qquad (19)$$

$$= \frac{1}{|a y_2 + c|}, \qquad 0 \leq y_2 \leq 1;\ b y_2 + d \leq y_1 \leq a y_2 + b y_2 + c + d \qquad (20)$$

The inequalities in Equation 20 define the range over which the dummy variable, y₂, should be integrated out. Graphically, the integration must be carried out over the range of y₂ defined by the lines:

$$y_2 = 0, \qquad y_2 = 1, \qquad y_2 = \frac{y_1 - d}{b}, \qquad y_2 = \frac{y_1 - c - d}{a + b}. \qquad (21)$$

For example, FIG. 4 shows the integration range for the case where {a, b, c, d} > 0 and b > c. In this particular case, the integration proceeds over three ranges:

$$f_{y_1}(y_1) = \begin{cases} \int_0^{\frac{y_1 - d}{b}} \frac{1}{a y_2 + c}\, dy_2 & d \leq y_1 < d + c \\ \int_{\frac{y_1 - c - d}{a + b}}^{\frac{y_1 - d}{b}} \frac{1}{a y_2 + c}\, dy_2 & d + c \leq y_1 < b + d \\ \int_{\frac{y_1 - c - d}{a + b}}^{1} \frac{1}{a y_2 + c}\, dy_2 & d + b \leq y_1 \leq a + b + c + d \end{cases} \qquad (22)$$

$$= \begin{cases} \frac{1}{a} \left( \ln\!\left( \frac{a(y_1 - d) + cb}{b} \right) - \ln(c) \right) & d \leq y_1 < d + c \\ \frac{1}{a} \left( \ln\!\left( \frac{a(y_1 - d) + cb}{b} \right) - \ln\!\left( \frac{a(y_1 - d) + cb}{a + b} \right) \right) & d + c \leq y_1 < b + d \\ \frac{1}{a} \left( \ln(a + c) - \ln\!\left( \frac{a(y_1 - d) + cb}{a + b} \right) \right) & d + b \leq y_1 \leq a + b + c + d \end{cases} \qquad (23)$$

Note that the specific integrals are determined by the values of the coefficients, or more precisely, the intersections of the lines defined by Equation 21.
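A quick deterministic check of the middle branch of Equation 23 (with illustrative coefficient values, not values from the text) compares the closed-form expression against simple midpoint-rule quadrature of the integrand from Equation 22:

```python
import math

a, b, c, d = 1.0, 2.0, 1.0, 0.0   # satisfies {a, b, c, d} > 0 and b > c
y1 = 1.5                          # inside the middle branch [d + c, b + d)

lo = (y1 - c - d) / (a + b)       # lower limit of the y2 integration
hi = (y1 - d) / b                 # upper limit of the y2 integration

# Midpoint-rule quadrature of the integrand 1/(a*y2 + c) from Equation 22.
n = 100_000
h = (hi - lo) / n
numeric = sum(h / (a * (lo + (k + 0.5) * h) + c) for k in range(n))

# Middle branch of Equation 23 in closed form.
closed = (math.log((a * (y1 - d) + c * b) / b)
          - math.log((a * (y1 - d) + c * b) / (a + b))) / a
```

For these coefficients both routes evaluate the same logarithmic expression, which is a useful sanity test when implementing the many case-specific branches.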
This complicates the implementation since there are many possible cases that can occur. To determine how many cases occur, we denote the y₁ positions of the intersection points of the four lines by integers:

$$\begin{aligned} 1 &: y_2 = 0 \ \wedge\ y_2 = \tfrac{y_1 - d}{b} \qquad & 2 &: y_2 = 1 \ \wedge\ y_2 = \tfrac{y_1 - d}{b} \\ 3 &: y_2 = 0 \ \wedge\ y_2 = \tfrac{y_1 - c - d}{a + b} \qquad & 4 &: y_2 = 1 \ \wedge\ y_2 = \tfrac{y_1 - c - d}{a + b} \end{aligned} \qquad (24)$$

The ordering of these positions describes unique configurations in the graphs and hence determines the ranges of the integration; see FIG. 5. While there are 24 configurations in total, these can be grouped into 6 basic arrangements as shown in FIG. 5, where each graph corresponds to 4 of the 24 cases. For example, orderings {2 1 4 3}, {3 4 1 2} and {4 3 2 1} all result in configurations similar to that of {1 2 3 4} shown in the top left of FIG. 5. The reason is that for all of these cases the y₂ = 0 and y₂ = 1 intersections of the y₂ = (y₁ − c − d)/(a + b) line occur together and separately from those of y₂ = (y₁ − d)/b. Such groupings can be used to simplify the implementation. Case 4. Joint PDFs. Estimation of joint PDFs can proceed in a manner similar to that of the 2D case described in the previous section. However, instead of the dummy variable and function one uses the piecewise polynomials of the second signal and does not perform the final integration step. For clarity of presentation the joint case is illustrated using the 1D linear case, from which the 2D bilinear and quadratic cases follow.
$$y_1(x_1, x_2) = a x_1 + b, \qquad y_2(x_1, x_2) = c x_2 + d \qquad (25)$$

$$x_1(y_1, y_2) = \frac{y_1 - b}{a}, \qquad x_2(y_1, y_2) = \frac{y_2 - d}{c} \qquad (26)$$

$$f_{x_1}(x_1),\ f_{x_2}(x_2) = 1;\ 0 \leq x_1, x_2 \leq 1 \qquad (27)$$

The Jacobian in this case is:

$$|J| = \left| \begin{vmatrix} \frac{1}{a} & 0 \\ 0 & \frac{1}{c} \end{vmatrix} \right| = \frac{1}{|ac|} \qquad (28)$$

The joint PDF between y₁ and y₂ is given by:

$$f_{y_1, y_2} = f_{x_1, x_2}\!\left( \frac{y_1 - b}{a}, \frac{y_2 - d}{c} \right) |J| \qquad (29)$$

$$= \frac{1}{|ac|}, \qquad 0 \leq \frac{y_1 - b}{a} \leq 1;\ 0 \leq \frac{y_2 - d}{c} \leq 1 \qquad (30)$$

$$= \frac{1}{|ac|}, \qquad b \leq y_1 \leq a + b;\ d \leq y_2 \leq c + d \qquad (31)$$

In some cases the variable of one function is a transformation of another. For example, in the maximisation of Mutual Information (MI) image registration algorithm one signal, typically an image, is assumed to be some unknown geometric transformation of the other. In this 1D example this corresponds to a shift such that x₂ = x₁ + δx. Such a relationship could be used to simplify the implementation of the joint PDF calculation in algorithms where such a relationship exists. For example, in the maximisation of Mutual Information image registration method an interpolation step is usually required to estimate the joint PDF. Using the proposed approach, and incorporating the geometric relationship between the two images being registered into the joint PDF calculations, such a step becomes unnecessary. An example of a geometric relationship appropriate for image registration is a Euclidean transformation, which can model a uniform scaling, translation and rotation of an image. Example Results [0057] A number of tests were performed, using synthetic data as the sampled signal. The stability and accuracy of the results of different methods were assessed. Stability was measured as the variation of the PDF under the geometric Euclidean group of transformations, namely translation and rotation. In one dimension, this reduces to a shift. The Total Variation (L₁) is used as the distance metric, i.e. measure of stability.
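The shift-stability measurement described here can be sketched for the conventional histogram; the set-up below (50 samples, 64 bins, a 0.05 rad shift) is an illustrative choice, not the exact test configuration:

```python
import numpy as np

# Stability of the conventional histogram under a small shift: build a
# histogram PDF estimate of 127.5*sin(x + phi) + 127.5 from 50 samples at
# two nearby phases and measure the Total Variation distance between them.
def hist_pdf(phi, n_samples=50, bins=64):
    x = np.linspace(0.0, 2.0 * np.pi, n_samples, endpoint=False)
    y = 127.5 * np.sin(x + phi) + 127.5
    h, _ = np.histogram(y, bins=bins, range=(0.0, 255.0))
    return h / h.sum()

tv = 0.5 * np.abs(hist_pdf(0.0) - hist_pdf(0.05)).sum()
# A shift-stable estimator would give tv close to 0; with few samples the
# raw histogram changes noticeably because samples hop across bin edges.
```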
In the case where the ground-truth PDF is known, for example with the synthetic data used in the tests, the accuracy of the PDF estimate is also assessed. This is done by measuring the distance between the estimate and the ground-truth; again the Total Variation (L₁) is used as the distance metric. The first test compares the PDF estimate stability and accuracy for four techniques: conventional histogram, Parzen window (Gaussian, Φ=4), linear spline (i.e. piecewise polynomial) and quadratic spline (i.e. piecewise polynomial). The objective is to estimate a 256 bin histogram from one period of the sine function, 127.5 sin(x + φ) + 127.5, using a pre-determined number of samples. FIG. 6 shows the quality results. Clearly, the spline methods outperform the conventional methods by some significant margin; however, the Parzen window method performs surprisingly well considering it is just a Gaussian-smoothed version of the histogram. The reason for this is that in this particular case smoothing across histogram bins is a good thing to do, since in a sine function adjacent locations vary smoothly. In general this is not true. Moreover, whilst the stability converges to zero with increasing sample number, the accuracy of the Parzen window method does not, in contrast to the other methods. This is because Gaussian smoothing causes a distortion in the PDF which cannot be removed despite an increase in the number of samples. This confirms the rule of thumb that the width of the Parzen kernel should be inversely related to the number of samples available. It should also be said that the sine function also favours the quadratic spline method since a quadratic can approximate a sine quite well with only a few samples. FIG. 7 shows examples of the PDFs generated by each method with 50 samples. Further Embodiments [0061] If the spline function exactly replicates the true interpolation of the sampling pre-filter then the resulting PDF is exact.
In general however, this is unlikely to be the case and the interpolation will only be an approximation to the true band-limited signal. One way in which the approximation can be improved is to use higher order polynomials such as cubic splines. However, in practice, increasing the order and/or dimension can quickly lead to unwieldy expressions and in many cases unsolvable integrals. For example, for the 2D quadratic case the final integration is not immediately solvable in closed form and an approximation must be used. Moreover, since the optimal interpolator is determined by the sampling pre-filter it is unlikely to be available in a suitable analytic form and must be specified numerically. One further approach to improve the accuracy of the interpolation is to up-sample the signal using the optimal interpolator and then use a bilinear or quadratic PDF estimator at this higher sampling rate. Non-uniform spatial sampling can be used to reduce the overhead associated with higher sampling rates; locations at which the signal is complex are sampled more finely. The PDF calculations described above require that the interpolating function is monotonic, i.e. either strictly increasing or decreasing, between the sample points. For some cases, such as the linear polynomial spline, this is always true. For higher order and higher dimension cases this may not be true. Therefore preferred implementations handle such situations explicitly. For example, an algorithm detects any inflection points, i.e. where the gradient changes sign, and treats each portion between such points independently. However, for speed and simplicity of the implementation, it is advantageous to detect all such inflection points after the interpolating functions have been fitted and treat these as if they were extra sample points. All the PDF calculations are then performed with the guarantee that the interpolating functions are monotonic. 
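The extremum-detection step just described can be sketched for a quadratic span; `split_monotonic` is an illustrative helper name, not one from the text:

```python
# Detect the interior extremum of a quadratic span y(x) = a*x^2 + b*x + c
# on the unit interval and split the span there, so that each resulting
# piece is strictly monotonic and the PDF formulas apply directly.
def split_monotonic(a, b, c):
    spans = [(0.0, 1.0)]
    if a != 0.0:
        x_star = -b / (2.0 * a)        # gradient 2*a*x + b changes sign here
        if 0.0 < x_star < 1.0:         # interior extremum: split the span
            spans = [(0.0, x_star), (x_star, 1.0)]
    return spans
```

For example, y(x) = x² − x has its minimum at x = 0.5 and is split into two monotonic halves, whereas y(x) = x² + x is already monotonic on [0, 1] and is left intact; treating the split point as an extra sample point is what the text above proposes.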
In effect, the number of samples and hence the number of polynomial functions are increased. A way to further optimise the implementation is to detect adjacent interpolating functions which can be grouped together into one interpolating function that is still monotonic. The aim is to reduce the number of samples and hence the number of interpolating functions that have to be processed. The effect is to change the signal from one that is uniformly sampled to one that is non-uniformly sampled. Alternatively, original sample positions may be moved to positions that allow a more accurate interpolation of the signal. From the previous point it can be readily appreciated that the method can be applied equally well to uniformly and non-uniformly sampled data. The basic algorithm described first fits all of the interpolating functions to the signal samples and then processes each in turn. For each interpolating function the appropriate equations must be used. For example, for the bilinear polynomial spline case it must be determined to which one of the 24 possible configurations the current polynomial corresponds and then process that using the equations specific to that configuration. An alternative to this, which may be faster in many situations, is to parse all of the polynomials in the whole signal to detect their configuration or `type`, then create a list for each type indexing the corresponding polynomials and finally process all polynomials from each `type` consecutively. This is likely to be faster in cases where there are a large number of samples. The approach outlined uses the exact closed-form solution to the necessary calculations. In some cases, such as higher order polynomial functions or other interpolating functions, the closed-form solutions may not exist or may be computationally inconvenient. In such cases, numeric integration or some other form of approximation may be used in the implementation.
As an alternative to using more accurate interpolating functions, such as high order polynomials, it may be faster and computationally more convenient to first up-sample the signal by some small factor, say two or four times, using an accurate form of the interpolating function (which can be derived from the sampling pre-filter) and then apply the proposed technique with a simple interpolating function such as a linear or bi-linear polynomial. Since sampled signals also have to be discretized in order to be represented in a digital signal processing system, such discretization has the effect of adding noise or uncertainty to the value of the samples. To further improve the accuracy of the technique of the invention, this can be taken into account in several ways. One way is to account for the uncertainty in the sample value by using an approximating spline instead of an interpolating spline. An alternative is to blur the final PDF estimate at the points corresponding to the limits of each piecewise polynomial section. As an alternative to Step 3, where a histogram is populated, one can use the closed-form solution of the distribution directly in many algorithms. For example, in many algorithms that require the estimation of a distribution function, a distance is used to measure the difference between one set of data and another. Such a distance could be designed to work with the closed-form distribution directly. Functions of the distribution can also be calculated in closed form. For example, the derivative of the distribution can be calculated in closed form for use in algorithms such as Maximisation of Mutual Information or ICA. One can also use the results of step 2 to fit another parametric probability model such as a Gaussian or mixture of Gaussians.
As an alternative to, or in addition to, calculating the full probability distribution one can calculate other statistics of the signal or signals directly, such as its conditional density, mean, variance or other moments of the signal. More generally, other functions of the signal statistics can be calculated, such as its entropy. The advantage of the method of the invention is that such calculations may be performed in closed form, i.e. analytically and not numerically, and hence can be performed more quickly and with more accuracy. Many signal processing techniques require the estimation of the probability distribution over different parts of a signal, where such parts could be overlapping. For example, in an image processing application such as the detection and recognition of human faces it might be required that the probability distributions of areas corresponding to the eyes, mouth, nose and whole face be estimated from an image. Another example of such a technique requires the calculation of the probability distribution of an image inside `windows` of varying size at all spatial positions in the image. Previous implementations of this technique attempt to do this using a histogram; however, the method of the invention offers significant advantages in terms of stability and accuracy of the resultant selected image regions. In such cases where the probability distribution must be calculated over overlapping parts of a signal or image, it can be more efficient to modify the method embodying the invention in the following way: perform Step 1 and Step 2 as before, then apply Step 3 multiple times within the specific regions of interest required. Avoiding the repetition of steps 1 and 2 by using their results in a repeated fashion gives rise to a computational complexity similar to that of histogramming. There is an alternative, and in theory equivalent, formulation of the invention to that described.
Here the signal is considered from the co-domain first, which is divided into ranges (or bins), and the aim is then to measure the fraction of the domain over which the signal lies within each range. Calculations similar to those outlined in the embodiments of the invention are performed because the signal will need to be interpolated. In the case of an image this approach has an intuitive interpretation. First the image is considered as a height map, i.e. where brightness is treated as the height of a surface. Then level lines or isocontours are found on this surface. As on a geographical map, such level lines depict locations of equal height (or intensity in the case of the image). The fraction of the total image area that is in between each consecutive pair of level lines corresponds to the probability of the image being between the two corresponding intensity values. If the level lines are calculated at a sub-pixel scale using an accurate interpolation function then the result is an accurate estimate of the probability. This formulation does have the disadvantage of requiring that the division of the co-domain into ranges or bins must be specified before the estimation of the PDF is performed and is consequently fixed at the outset. In the original formulation of the invention the resolution of the final probability distribution is not decided until the last step and can be left in continuous form if required. The method described so far has considered scalar sampled functions, that is, each sample location depicts a single scalar value such as the pixel brightness in the case of an image. Alternatively, vector sampled functions could be considered where each sample corresponds to a vector value. For example, an RGB colour image where each pixel has three values for each of the red, green and blue channels respectively. In such a case the interpolating function must be fitted in a suitable manner for the signal under consideration.
For example, an RGB image could be processed by treating each colour channel independently, or first transformed into another colour space such as YUV and the interpolating function fitted in that space [Y = aR + bG + cB; U = d(B - Y); and V = e(R - Y), where a, b, c, d and e are weighting factors].

Although the described method results in accurate, and in some cases exact, calculation of sampled signal statistics, and is primarily intended for application to such signals, it may be advantageous to apply it to signals that have not been formed in the manner described, that is, band-limited, sampled in order and quantised. In such cases the resulting PDF will only be approximate, but could still be better than conventional methods such as histogramming or Parzen window estimation. In contrast to Parzen window estimation, where the smoothing is performed in the co-domain, i.e. the data values are smoothed, the proposed method smoothes in the domain space, e.g. spatially in the case of images. However, for most accurate results the signal should be band-limited and the samples should be ordered, e.g. a time sequence for an audio signal, a spatial array for an image, and so on.

Applications

[0078] PDF estimation (and other statistical characterization of a signal) is required for a large number of applications in, for example, machine learning, image and signal processing, and communications systems. Below are three example applications of the invention.

In medical imaging it is often necessary to superimpose one image onto another taken using a different modality, such as MRI (magnetic resonance imaging) and CT (computed/computerized tomography). However, to do so an algorithm is required to estimate the geometric transformation that maps one image onto the correct position on the other. One such algorithm is called Maximisation of Mutual Information and relies heavily on the accurate calculation of marginal and joint PDFs between the images.
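Maximisation of Mutual Information scores a candidate alignment by the mutual information between the two images' intensities, computed from their marginal and joint PDFs. Below is a minimal sketch using a joint histogram, i.e. the conventional estimator that, per the text, the invention would replace with its own more accurate PDF estimate:

```python
import numpy as np

def mutual_information(sig_a, sig_b, bins=32):
    """Mutual information (in nats) of two equally sized signals,
    estimated from a joint intensity histogram."""
    joint, _, _ = np.histogram2d(np.ravel(sig_a), np.ravel(sig_b), bins=bins)
    p_ab = joint / joint.sum()              # joint PDF estimate
    p_a = p_ab.sum(axis=1, keepdims=True)   # marginal PDF of signal A
    p_b = p_ab.sum(axis=0, keepdims=True)   # marginal PDF of signal B
    nz = p_ab > 0                           # skip empty cells: avoid log(0)
    return float((p_ab[nz] * np.log(p_ab[nz] / (p_a * p_b)[nz])).sum())

# A signal compared with itself attains its own entropy (log 32 here,
# since 1024 uniform samples fall 32 per bin into 32 bins)
img = np.arange(1024.0) / 1024.0
mi_self = mutual_information(img, img)
```

A registration loop would evaluate this score for each candidate geometric transformation and keep the maximiser; the stability of the result depends directly on the quality of the underlying PDF estimates, which is where the invention is aimed.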
Currently, implementations typically use histograms for this purpose and consequently perform poorly. The invention can improve the performance of such algorithms significantly. Referring to FIG. 8, an input 10 receives a signal S, in sampled form, representing the images. An analysis module 12 applies the techniques described above to obtain statistical information W on the signal S, namely the marginal and joint PDFs between the images. The statistical information W is passed to a processing module 14, which also receives the original signal S. The processing module 14 applies signal processing to the original signal S, using the statistical information W and the Maximisation of Mutual Information algorithm, to produce an output O comprising one image geometrically transformed to map onto the other image.

Another application is in image retrieval, where the aim is to retrieve images from a database that are visually similar to a query image. Colour histograms are a simple and effective technique and are widely used. The invention can be used here to improve the reliability and accuracy of such systems. Again with reference to FIG. 8, the query image is input together with a database of images as signal S; the analysis module 12 computes colour histograms as the statistical information W; and the processing module 14 uses the information W in conjunction with the database of images (about which statistical information may already have been computed and stored) to retrieve potentially similar or matching images from the database as the output O.

Image tracking algorithms rely on accurate PDF estimation to predict the next position and state of targets. For example, one might be interested in tracking a hand or the full body of a human. Typically, such systems assume very simple probability models, such as a Gaussian. Alternatively, computationally expensive approaches such as particle trackers can be used.
The invention can be used to produce improved trackers, since the PDF estimate is accurate, robust and fast to compute. Referring to FIG. 8, a sequence of images is input as input signal S; the analysis module 12 calculates PDF estimates as the statistical information W; and the processing module 14 performs signal processing on the original signal S, using the PDF information W, to predict the next position and state of a target in the image as the output O.

The invention can be embodied in dedicated hardware or in software. For example, the invention can be embodied by a computer program executed on a computer system 16. The computer system 16 may be any type of computer system, but is typically a conventional personal computer executing a computer program written in any suitable language. The computer program may be stored on a computer-readable medium, which may be of any type, for example: a recording medium, such as a disc-shaped medium insertable into a drive of the computer system, and which may store information magnetically, optically or magneto-optically; a fixed recording medium of the computer system such as a hard drive; or a solid-state computer memory (fixed or removable). The signal to be analysed may be input into the computer system directly, for example from a microphone or camera apparatus, or the computer system may read information representing the signal from a store of previously obtained signals. The computer system may also be embodied as dedicated hardware, for example as a custom integrated circuit in a mobile telecommunication device for speech processing.

Patent applications by John Michael Brady, Oxford GB
Patent applications by Timor Kadir, Oxford GB
Patent applications in class Tomography (e.g., CAT scanner)
Current-sense transformer circuit design for average current-mode control

Here are the steps required to select the transformer, and how to design the circuit needed to meet the end use in a manner that prevents transformer saturation.

Average current-mode control (CMC) requires that the total waveform of the current be reconstructed for the control loop. This article presents the steps required to select the transformer and how to design the circuit needed to meet the end use in a manner that prevents transformer saturation. The model we used is a power factor correction (PFC) topology. A commercially available current-sense transformer is used in this analysis to identify the parameters needed and to show how to use this information to design the circuit in order to prevent saturation.

To meet the objective of reconstructing the current signal needed for PFC average CMC, both the current during the power pulse (“on” time) and the current during the freewheeling energy-recovery time (“off” time) must be included in the generated current signal. In higher-power PFCs the losses through a resistive sensing system become significantly high, so current transformers are used. In this analysis we demonstrate the design of the current transformers needed in a PFC circuit, because it is more demanding than a standard forward converter.

Figure 1 represents the model used in this discussion.

Fig. 1: Schematic representation of a power factor control converter power stage, including the detailed current-sense transformer parameters needed to explain the current-sense circuit design.

Table 1 lists specific details needed to identify the two current transformers to be used in this converter. The IinLpk current indicates that the current transformer needed has a primary current handling capability of approximately 20 A and a switching frequency of 100 kHz.
A Pulse PA1005.100 transformer, with a primary current handling capability of 20 A and a frequency range of 50 kHz to 1 MHz, meets the requirements for this design.

Table 1: Parameters needed to generate a PFC design. Table 2: Current transformer datasheet specifications.

These two tables give us the information needed to determine several quantities: the peak current and the resistance of the sense resistor, the voltage across the secondary, the total voltage across the inductor, the duration this voltage is present, the change in the magnetizing current, and the transformer value.

The peak current on the secondary is easily determined as (Eq 1):

IRsenseL = IinLpk / N = 0.183 A

The resistance of the sense resistor can be determined from (Eq 2):

Rsense = VRsense / IRsenseL = 5.464 Ω

The voltage across the secondary, assuming that the converter is operating at maximum load and minimum input voltage, can now be determined. This total voltage consists of the voltage across the current-sense resistor, Rsense, which by definition is 1 V; the voltage across the diode, which is defined as 0.7 V; and the voltage across the winding resistance, VRwinding, which can be calculated as (Eq 3):

VRwinding = Rwinding × IRsenseL = 1.007 V

The total voltage across the inductor can now be calculated as (Eq 4):

Vind = VRsense + Vfd + VRwinding = 2.707 V

The duration this voltage is present across the magnetizing inductance is (Eq 5):

TonL = DL / Fosc = 6.995 µs

The change in the current in the magnetizing inductance is (Eq 6):

∆Imagpk = (TonL × Vind) / Lmag = 9.466 mA

At this point you need to verify that the transformer is below the saturation level. The formula given for that can now be populated with the derived values (Eq 7):

Bpk = (37.59 × Vind × DL × 10^5) / (N × Fosc × 10^-3) = (37.59 × 2.707 × 0.699 × 10^5) / (100 × 10^5 × 10^-3) = 711.6

This is about a third of the maximum allowed flux level, which, according to the datasheet, is 2,000.
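Equations 1 through 6 can be replayed numerically. IinLpk and Rwinding are not stated in this excerpt, so the values below are back-computed from the article's results and should be treated as assumptions:

```python
# Values not stated in the excerpt are back-computed from the article's
# results and marked as assumptions.
N = 100           # current-sense transformer turns ratio
I_in_pk = 18.3    # A, peak primary current (assumed: yields IRsenseL = 0.183 A)
V_rsense = 1.0    # V, defined drop across the sense resistor
V_fd = 0.7        # V, defined diode forward drop
R_winding = 5.5   # ohm, secondary winding resistance (assumed)
D_L = 0.6995      # duty cycle at low line (assumed: yields TonL = 6.995 us)
F_osc = 100e3     # Hz, switching frequency
L_mag = 2e-3      # H, magnetizing inductance (datasheet)

I_rsense = I_in_pk / N                    # Eq 1: peak secondary current
R_sense = V_rsense / I_rsense             # Eq 2: sense resistor value
V_rwinding = R_winding * I_rsense         # Eq 3: winding-resistance drop
V_ind = V_rsense + V_fd + V_rwinding      # Eq 4: total secondary voltage
T_on = D_L / F_osc                        # Eq 5: on time
dI_mag = T_on * V_ind / L_mag             # Eq 6: magnetizing-current ramp

print(f"Rsense = {R_sense:.3f} ohm, Vind = {V_ind:.3f} V, "
      f"dImag = {dI_mag * 1e3:.3f} mA")
```

Running this reproduces the article's 5.464 Ω sense resistor, 2.707 V inductor voltage and 9.47 mA magnetizing-current ramp.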
Since the flux density developed under worst-case conditions for this configuration is less than half the level that would result in saturation, the magnetizing current can be allowed to increase (in this case by almost a factor of three), as long as it can be reduced sufficiently during the “off” time.

To keep the transformer from “walking” into saturation, you need to develop a volt-second integral during the time that Q1 is off that balances the volt-second integral during the “on” time. This is done by placing a resistor R1 (which can be referred to as the reset resistor) so that the magnetizing current developed during the “on” time forces a voltage to develop across this reset resistor during the “off” time. It is important to remember that the voltage across this resistor decreases as the magnetizing current decreases.

To determine the value of R1, set the peak magnetizing current to 2 × ∆Imagpk and design the circuit so that, during the “off” time, the resistor chosen reduces the magnetizing current to 0.5 × ∆Imagpk. This ensures operation where the peak current is less than 2 × ∆Imagpk. Set the initial current through the magnetizing inductor to Iinit = 20 mA and the final magnetizing current to Ifinal = 5 mA. The off time is Toff = 3.005 µs, and the magnetizing inductance, Lmag, of the chosen transformer is 2 mH (from the datasheet). This is sufficient information to obtain the value of the R1 resistor (Eq 8):

R1 = (ln(Iinit / Ifinal) × Lmag) / Toff = (ln(4) × 2 mH) / (3.005 µs) = 922.6 Ω

At this point the solution is half done. You still need to address the design of the current-transformer circuit for the boost-diode current sensor. The worst-case condition for the T2 current transformer is the peak of the maximum line voltage with the maximum load.
The “off” time of the main switch at the peak of high line is the maximum conduction time of the rectifying diode, D3, and of the primary of the T2 current transformer. This is the condition that will be used in the design. Since the same developed voltage across the current-sense resistor is needed for the same primary current, the same Rsense is used for both transformers. The conduction time for the current through the primary of T2 is (1 - D). The maximum conduction time for the transformer primary can be determined by (Eq 9):

Tondiode = (1 - DH) / Fosc = 9.369 µs

The corresponding reset time for the transformer is (Eq 10):

Toffdiode = DH / Fosc = 0.631 µs

The current through the T2 transformer primary under these conditions (maximum input voltage) is considerably less than at the low input voltage. The maximum current, IinHpk, is only 5.87 A at high line. This yields a voltage across the sense resistor under these conditions of (Eq 11):

VRsenseHigh = (IinHpk / N) × Rsense = ((5.87 A) / 100) × 5.464 Ω = 0.292 V

The voltage across the internal winding resistance is (Eq 12):

VRwindingH = (IinHpk / N) × Rwinding = 0.294 V

The voltage across the transformer magnetizing inductance is equal to (Eq 13):

VmagHigh = VRsenseHigh + Vfd + VRwindingH = 0.292 V + 0.7 V + 0.294 V = 1.285 V

Calculating the flux in the core for a single pulse gives (Eq 14):

BpkH = (37.59 × VmagHigh × (1 - DH) × 10^5) / (100 × Fosc × 10^-3) = (37.59 × 1.285 × 0.937 × 10^5) / (100 × 10^5 × 10^-3) = 452.6

The flux is about 25% of the allowable flux. The magnetizing current is determined as (Eq 15):

ImagH = (VmagHigh × Tondiode) / Lmag = (1.285 V × 9.369 µs) / 2 mH = 6.02 mA

If we now set the limits for the magnetizing current as the peak being twice ImagH and the final being half of ImagH, and the time as TresetH, where TresetH = DH / Fosc, we can determine the value of R2 by (Eq 16).
R2 = (ln(2 / 0.5) × Lmag) / TresetH = (1.386 × 2 × 10^-3) / (0.631 × 10^-6) = 4.395 kΩ

This completes the design of the current-sense circuit for the PFC circuit. Similar calculations would be required for average current-mode control of a buck converter. For peak current-mode control of a buck converter, only the above calculations are needed, with the main-switch duty-cycle limits being used at maximum load and minimum input voltage.

A PFC controller that is often used in PFC converters of over 1 kW is the UCC2817A. In the case of higher power, the current through the power FET and the output diode is measured with current-sense transformers as described in this article. ■
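Both reset-resistor values follow from the exponential decay of the magnetizing current through a resistor, i(t) = I0·exp(-R·t/L), solved for R = L·ln(I0/Ifinal)/t; a quick numerical check of Eqs 8 and 16:

```python
import math

L_mag = 2e-3   # H, magnetizing inductance from the datasheet

# T1: decay from 20 mA to 5 mA during the 3.005 us off time (Eq 8)
R1 = math.log(20e-3 / 5e-3) * L_mag / 3.005e-6

# T2: the same 4:1 decay ratio during the 0.631 us reset time (Eq 16)
R2 = math.log(2 / 0.5) * L_mag / 0.631e-6

print(f"R1 = {R1:.1f} ohm, R2 = {R2:.0f} ohm")  # ~922.7 ohm and ~4394 ohm
```

Both results match the article's 922.6 Ω and 4.395 kΩ to within rounding.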
Technical Report PHD-2008-08 Locally Decodable Codes (LDCs) are error-correcting codes in which each symbol of the message can be retrieved by probabilistically probing a limited number of symbols from an adversarially corrupted encoding of the message. This thesis covers several questions that are related to LDCs. A stronger and desirable property than local decodability is that of self-correction, allowing to efficiently recover not only symbols of the message but also arbitrary symbols of the encoded message. In contrast to the initial constructions of LDCs, the recent and most efficient constructions are not known to be self-correctable. The existence of self-correctable codes of comparable efficiency remains open. The best self-correctable codes known are based on multivariate polynomial representations over finite fields. We study the question of closing this current gap, and relate this question to a conjecture from the 1970s concerning the algebraic rank of combinatorial designs. Closely related to LDCs is the cryptographic problem of Private Information Retrieval (PIR). A PIR protocol allows a user to retrieve a data item from a database which is replicated amongst a number of servers, such that each individual server learns nothing about the identity of the item being retrieved. We use LDCs and self-correctable codes to construct better PIR protocols that offer privacy even against coalitions of several servers. We then investigate a generalization of PIR, where the user privately searches a database, e.g., for partial match or nearest neighbor. Motivated by the observation that many natural database search problems can be solved by constant-depth circuits, we present efficient distributed protocols for privately evaluating circuits from this class. To this end we extend previous techniques for representing constant-depth circuits by probabilistic low-degree polynomials. 
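The two-server PIR setting described above can be illustrated with the classic XOR scheme (an illustrative toy, not one of the protocols constructed in the thesis): the user sends a uniformly random index subset to one server and the same subset with the target index toggled to the other, then XORs the two single-bit answers.

```python
import secrets

def pir_query(n, i):
    """Two correlated query masks: a random subset, and the same subset
    with index i flipped. Each mask alone is uniformly random."""
    q1 = [secrets.randbelow(2) for _ in range(n)]
    q2 = list(q1)
    q2[i] ^= 1
    return q1, q2

def pir_answer(db_bits, query):
    """Server side: XOR of the database bits selected by the query mask."""
    acc = 0
    for bit, sel in zip(db_bits, query):
        acc ^= bit & sel
    return acc

db = [1, 0, 1, 1, 0, 0, 1, 0]
q1, q2 = pir_query(len(db), 5)
recovered = pir_answer(db, q1) ^ pir_answer(db, q2)  # equals db[5]
```

Neither server alone learns anything about the queried index; privacy fails only if the two servers collude, which is exactly the coalition setting the thesis's stronger protocols address.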
Motivated in part by the goal of improving our protocols for private database search, we study the power of d-multiplicative secret sharing. Such secret sharing schemes allow players to locally convert shares of d secrets into an additive sharing of their product. While the case d=2 is fairly well understood, in the case of d>2 secrets it was not even known how the minimal number of players should depend on the desired level of privacy. We prove a negative result, showing that known constructions are optimal with respect to the required number of players. Interestingly, the proof relies on a quantitative communication complexity argument. Finally, inspired by techniques used for constructing LDCs, we consider a setting where different players get replicated copies of the same database which are partially erased, and should still allow a referee to compute a function of the database with no interaction and with minimum communication. This is applicable in different settings, such as sensors that measure the same phenomena in parallel, or a database that is locally changed after it was distributed. We present interesting connections between this problem and secret sharing. We use these connections to obtain nontrivial upper bounds and lower bounds using results and techniques from the domain of secret sharing.
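The d = 2 case of multiplicative secret sharing discussed above can be illustrated with degree-1 Shamir sharing among the minimal three players: each player multiplies its two shares locally and scales by its Lagrange coefficient, producing an additive sharing of the product. The field modulus below is an arbitrary choice for the demo.

```python
import random

P = 2_147_483_647  # prime modulus (2^31 - 1); an arbitrary demo choice

def share(secret):
    """Degree-1 Shamir sharing among 3 players: player i holds f(i), i = 1..3."""
    r = random.randrange(P)
    return [(secret + r * x) % P for x in (1, 2, 3)]

# Lagrange coefficients interpolating a degree-2 polynomial at x = 0
# from its values at x = 1, 2, 3: (3, -3, 1)
LAMBDA = (3, P - 3, 1)

def local_mult(i, sa, sb):
    """Player i's additive share of a*b: the product of its two shares,
    pre-scaled by lambda_i (a purely local computation)."""
    return (LAMBDA[i] * sa * sb) % P

a, b = 1234, 567890
shares_a, shares_b = share(a), share(b)
additive = [local_mult(i, shares_a[i], shares_b[i]) for i in range(3)]
product = sum(additive) % P  # equals (a * b) % P
```

The product of the two degree-1 share polynomials has degree 2, so three evaluation points suffice to interpolate it at zero; with fewer than three players this local conversion is impossible, which is the kind of player-count lower bound the thesis proves for general d.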
The KRIG2D function interpolates a regularly- or irregularly-gridded set of points z = f(x, y) using kriging. It returns a two-dimensional floating-point array containing the interpolated surface, sampled at the grid points. The parameters of the data model (the range, nugget, and sill) are highly dependent upon the degree and type of spatial variation of your data, and should be determined statistically. Experimentation, or preferably rigorous analysis, is required. For n data points, a system of n+1 simultaneous equations is solved for the coefficients of the surface. For any interpolation point, the interpolated value is a weighted combination of the data values, with weights obtained from this system. The following quantities are used to model the variogram functions:

d(i,j) = the distance from point i to point j.
V = the variance of the samples.
C(i,j) = the covariance of sample i with sample j.
C(x[0], y[0], x[1], y[1]) = the covariance of point (x[0], y[0]) with point (x[1], y[1]).

NOTE: The accuracy of this function is limited by the single-precision floating-point accuracy of the machine. This routine is written in the IDL language. Its source code can be found in the file krig2d.pro in the lib subdirectory of the IDL distribution.

Z, X, Y: Arrays containing the Z, X, and Y coordinates of the data points on the surface. Points need not be regularly gridded. For regularly gridded input data, X and Y are not used: the grid spacing is specified via the XGRID and YGRID (or XVALUES and YVALUES) keywords, and Z must be a two-dimensional array. For irregular grids, all three parameters must be present and have the same number of elements.

EXPONENTIAL: Set this keyword to a two- or three-element vector of model parameters to use an exponential semivariogram model. The model parameters (A, C0, and C1) are explained below.

SPHERICAL: Set this keyword to a two- or three-element vector of model parameters to use a spherical semivariogram model. The model parameters (A, C0, and C1) are explained below.
If specified, C1 is the covariance value for a zero distance, and the variance of the random sample z variable. If only a two-element vector is supplied, C1 is set to the sample variance. (C0 + C1) = the sill, which is the variogram value for very large distances.

REGULAR: If set, the Z parameter is a two-dimensional array of dimensions (n, m), containing measurements over a regular grid. If any of XGRID, YGRID, XVALUES, or YVALUES are specified, REGULAR is implied. REGULAR is also implied if there is only one parameter, Z. If REGULAR is set, and no grid specifications are present, the grid is set to (0, 1, 2, ...).

XGRID: A two-element array, [xstart, xspacing], defining the input grid in the x direction. Do not specify both XGRID and XVALUES.

XVALUES: An n-element array defining the x locations of Z[i,j]. Do not specify both XGRID and XVALUES.

YGRID: A two-element array, [ystart, yspacing], defining the input grid in the y direction. Do not specify both YGRID and YVALUES.

YVALUES: An n-element array defining the y locations of Z[i,j]. Do not specify both YGRID and YVALUES.

GS: The output grid spacing. If present, GS must be a two-element vector [xs, ys], where xs is the horizontal spacing between grid points and ys is the vertical spacing. The default is based on the extents of x and y. If the grid starts at x value xmin and ends at xmax, then the default horizontal spacing is (xmax - xmin)/(NX-1). ys is computed in the same way. The default grid size, if neither NX nor NY is specified, is 26 by 26.

BOUNDS: If present, BOUNDS must be a four-element array containing the grid limits in x and y of the output grid: [xmin, ymin, xmax, ymax]. If not specified, the grid limits are set to the extent of x and y.

NX: The output grid size in the x direction. NX need not be specified if the size can be inferred from GS and BOUNDS. The default value is 26.

Example: make a random set of points that lie on a Gaussian (the X and Y definitions, lost from this copy, follow the standard example's random coordinates):

N = 15 ; Number of random points.
X = RANDOMU(seed, N) ; Random X coordinates.
Y = RANDOMU(seed, N) ; Random Y coordinates.
Z = EXP(-2 * ((X-.5)^2 + (Y-.5)^2)) ; The Gaussian.

Get a 26 by 26 grid over the rectangle bounding X and Y:

E = [0.25, 0.0] ; Range is 0.25 and nugget is 0. These numbers are dependent on your data model.
R = KRIG2D(Z, X, Y, EXPON = E) ; Get the surface.

Alternatively, get a surface over the unit square, with a spacing of 0.05:

R = KRIG2D(Z, X, Y, EXPON=E, GS=[0.05, 0.05], BOUNDS=[0,0,1,1])
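For comparison outside IDL, the bordered linear system at the heart of ordinary kriging can be sketched in a few lines. The exponential covariance form below (range 0.25, zero nugget, echoing the example above) is an illustrative assumption and is not claimed to match KRIG2D's exact model:

```python
import numpy as np

def cov(d, a=0.25, c0=0.0, c1=1.0):
    """Illustrative exponential covariance: C(d) = c1*exp(-3d/a),
    with the nugget c0 added at zero distance."""
    c = c1 * np.exp(-3.0 * d / a)
    return np.where(d == 0.0, c0 + c1, c)

def krig_point(x, y, z, xq, yq):
    """Ordinary-kriging estimate at (xq, yq) from scattered samples z = f(x, y)."""
    n = len(z)
    d = np.hypot(x[:, None] - x[None, :], y[:, None] - y[None, :])
    A = np.ones((n + 1, n + 1))     # bordered system: the extra row/column
    A[:n, :n] = cov(d)              # enforce sum of weights = 1
    A[n, n] = 0.0
    rhs = np.ones(n + 1)
    rhs[:n] = cov(np.hypot(x - xq, y - yq))
    w = np.linalg.solve(A, rhs)
    return w[:n] @ z                # weighted combination of the data values

x = np.array([0.0, 1.0, 0.0, 1.0, 0.5])
y = np.array([0.0, 0.0, 1.0, 1.0, 0.5])
z = np.array([1.0, 2.0, 3.0, 4.0, 2.5])
est = krig_point(x, y, z, 0.0, 0.0)
```

With a zero nugget the estimator interpolates exactly, so querying at a sample location returns that sample's value.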
The prop blade of an airplane is 2.80 ft, and rotates at 2200 r/min. What is the linear velocity of a point on the tip of the blade?

Use \[v=\omega r=\frac{\theta}{t}\,r\] with \(\theta = 2\pi\) rad per revolution, t the time for one revolution, and r = 0.5(2.8).

so them telling me the length of the blade holds no grounds? Will have to look up how to calculate linear velocity. all I could find was angular velocity, and after about 2 hours on a simple problem...

isn't linear velocity the speed of the circumference? speed over time?

I know there's a relationship between the two. I found this: the formula for finding linear velocity is v = x / t, where x is the distance traveled and t is the time it took to travel the distance x.

radius = 1.4 feet. If the prop rotates at 2,200 rev/min, that equals 36.666 rev every second. Each turn of the prop means that the tip has traveled 1.4 × 2 × π = 8.7964594301 feet. Multiplying this by 36.666 rev per second equals 322.54 feet per second.

The distance from the prop center to the tip is half of 2.8, or 1.4 ft; for each turn of the prop the tip must travel 1.4 × 2 × π feet.

so we take 2.80 and multiply by 1/2, which gives us half the blade length, 1.4 ft. Is that 1.4 × 2 × π? which equals 8.7964594301 feet. Now, is that ft per second?

Eventually it is feet per second. At this point we are only determining the distance traveled, which is 8.796 feet.

persistence pays off, I found my page. I'd like to thank ehuman and wolf1728, I couldn't have done it without you! (wait, what am I talking about, I'm not done here.)

(2200 rev/min)(2π rad/rev)(1.4 ft) = ? is the equation you need (divide the result by 60 s/min to get feet per second); compare it to what I gave you before.
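The thread's number can be checked directly; converting rev/min to rad/s supplies the divide-by-60 that the final equation leaves implicit:

```python
import math

rpm = 2200.0            # propeller speed, rev/min
radius_ft = 2.80 / 2    # tip radius: half the 2.80 ft blade span

omega = rpm * 2.0 * math.pi / 60.0   # angular velocity, rad/s
v_tip = omega * radius_ft            # v = omega * r, ft/s

print(round(v_tip, 2))  # 322.54
```

This agrees with the 322.54 ft/s obtained above by multiplying the per-revolution distance by revolutions per second.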
Math Help

hey.. real amateur here. i'm really interested in fractals and i wanna have something more to say than "they're pretty!" help...

abiy3000
Wikipedia is always a good first stopping point for such questions; here is the article on fractals. RonL

I checked out the Wiki and the Mathworld... Is there an in-between? Mathworld's too much for me now and the other's not much. Is there like a "from-scratch" tutorial?

I have an older book, but a decent one. Admittedly it's more about Chaos Theory than just fractals, but fractals have a decent section. It isn't really a text or anything, but it does give some introductory stuff. And yes, it's full of pretty pictures too! Anyway, I'd recommend it as a non-mathematician's book, but many of the concepts behind fractals and chaos are mentioned, as well as some stuff about what they mean. Depending on how much of an introduction to the subject you need, you might find it a good read. The book is "Chaos: Making a New Science," by James Gleick (published in 1987). If it helps: ISBN 0 14 00.9250 1 -Dan

Maybe you noticed that the MathHelpForum logo is a fractal!
National Formulary Elixirs. In the following pages will be found formulae from the National Formulary, for those most important. For more extensive lists the reader is referred to the National Formulary, and Elixirs, by J. U. National Formulary Elixirs.—ELIXIR AMMONII BROMIDI (N. F.), Elixir of ammonium bromide.—Formulary number, 32: "Ammonium bromide, eighty-five grammes (85 Gm.) [3 oz. av.]; citric acid, four grammes (4 Gm.) [62 grs.]; aromatic elixir (U. S. P.), a sufficient quantity to make one thousand cubic centimeters (1000 Cc.) [33 fl℥, 391♏]. Dissolve the ammonium bromide and the citric acid in about five hundred cubic centimeters (500 Cc.) [16 fl℥, 435♏] of aromatic elixir, by agitation. Then add enough aromatic elixir to make one thousand cubic centimeters (1000 Cc.) [33 fl℥, 391♏], and filter, if necessary. Each fluid drachm contains 5 grains of ammonium bromide"—(Nat. Form). ELIXIR AMMONII VALERIANATIS (N. F.), Elixir of ammonium valerianate.—Formulary number, 33: "Ammonium valerianate, thirty-five grammes (35 Gm.) [1 oz. av., 103 grs.]; chloroform, eight-tenths of a cubic centimeter (0.8 Cc.) [13♏]; tincture of vanilla (U. S. P.), sixteen cubic centimeters (16 Cc.) [260♏]; compound tincture of cudbear (F. 419), sixteen cubic centimeters (16 Cc.) [260♏]; water of ammonia (U. S. P.), aromatic elixir (U. S. P.), of each, a sufficient quantity to make one thousand cubic centimeters (1000 Cc.) [33 fl℥, 391♏]. Dissolve the ammonium valerianate in about seventy-five cubic centimeters (75 Cc.) [2 fl℥, 257♏] of aromatic elixir, in a graduated vessel, and add enough water of ammonia, in drops, until a faint excess of it is perceptible in the liquid. Then add the chloroform, tincture of vanilla and compound tincture of cudbear and finally, enough aromatic elixir to make one thousand cubic centimeters (1000 Cc.) [33 fl℥, 391♏]. Filter, if necessary. Each fluid drachm contains 2 grains of ammonium valerianate. 
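The bracketed apothecary equivalents in these formulas can be reproduced mechanically, assuming 15.4324 grains per gramme and the 29.5735-mL US fluidounce of 480 minims (the conversion helpers below are illustrative, not part of the Formulary):

```python
GRAINS_PER_GRAMME = 15.4324   # grains per gramme
ML_PER_FLOZ = 29.5735         # millilitres per fluidounce (US measure assumed)
MINIMS_PER_FLOZ = 480         # minims per fluidounce
GRAINS_PER_OZ_AV = 437.5      # grains per avoirdupois ounce

def cc_to_floz_minims(cc):
    """Split a volume in Cc (mL) into whole fluidounces plus rounded minims."""
    total = cc / ML_PER_FLOZ
    floz = int(total)
    return floz, round((total - floz) * MINIMS_PER_FLOZ)

def grammes_to_grains(gm):
    """Convert a mass in grammes to grains."""
    return gm * GRAINS_PER_GRAMME

print(cc_to_floz_minims(1000))      # (33, 391): matches "[33 fl℥, 391♏]"
print(round(grammes_to_grains(4)))  # 62: matches "[62 grs.]"
```

The same helpers reproduce the other equivalents in these entries, e.g. 62 Cc as 2 fl℥ 46♏ and 85 Gm. as roughly 3 ounces avoirdupois.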
Note.—Should the odor of valerianic acid become perceptible after the elixir has been kept for some time, it may be overcome by slightly supersaturating with water of ammonia"—(Nat. Form.).

ELIXIR APII GRAVEOLENTIS COMPOSITUM (N. F.), Compound elixir of celery.—Formulary number, 36: "Fluid extract of celery seed (F. 139), sixty-two cubic centimeters (62 Cc.) [2 fl℥, 46♏]; fluid extract of erythroxylon (U. S. P.), sixty-two cubic centimeters (62 Cc.) [2 fl℥, 46♏]; fluid extract of kola (F. 175), sixty-two cubic centimeters (62 Cc.) [2 fl℥, 46♏]; fluid extract of viburnum prunifolium (U. S. P.), sixty-two cubic centimeters (62 Cc.) [2 fl℥, 46♏]; alcohol, one hundred and twenty-five cubic centimeters (125 Cc.) [4 fl℥, 109♏]; aromatic elixir (U. S. P.), a sufficient quantity to make one thousand cubic centimeters (1000 Cc.) [33 fl℥, 391♏]. Mix the alcohol with two hundred and fifty cubic centimeters (250 Cc.) [8 fl℥, 218♏] of aromatic elixir. To this add the fluid extract of celery seed in several portions, shaking after each addition, and afterwards the other fluid extracts. Finally add enough aromatic elixir to make one thousand cubic centimeters (1000 Cc.) [33 fl℥, 391♏], allow the mixture to stand 24 hours, and filter. Note.—If this preparation is prescribed or quoted under its Latin title, it is recommended that the full title be given, so that the word 'Apii' may not be mistaken for 'Opii'"—(Nat. Form.).

ELIXIR BISMUTHI (N. F.), Elixir of bismuth.—Formulary number, 37: "Bismuth and ammonium citrate, thirty-five grammes (35 Gm.) [1 oz. av., 103 grs.]; water, hot, sixty cubic centimeters (60 Cc.) [2 fl℥, 14♏]; water of ammonia (U. S. P.), and aromatic elixir (U. S. P.), of each a sufficient quantity to make one thousand cubic centimeters (1000 Cc.) [33 fl℥, 391♏].
Dissolve the bismuth and ammonium citrate in the hot water, allow the solution to stand until any undissolved matter has subsided; then decant the clear liquid, and add to the residue just enough water of ammonia to dissolve it. Then mix it with the decanted portion and add enough aromatic elixir to make one thousand cubic centimeters (1000 Cc.) [33 fl℥, 391♏]. Filter, if necessary. Each fluid drachm represents 2 grains of bismuth and ammonium citrate"—(Nat. Form.).

ELIXIR BUCHU (N. F.), Elixir of buchu.—Formulary number, 38: "Fluid extract of buchu (U. S. P.), one hundred and twenty-five cubic centimeters (125 Cc.) [4 fl℥, 109♏]; alcohol, sixty-two cubic centimeters (62 Cc.) [2 fl℥, 46♏]; syrup (U. S. P.), sixty-two cubic centimeters (62 Cc.) [2 fl℥, 46♏]; magnesium carbonate, fifteen grammes (15 Gm.) [231 grs.]; aromatic elixir (U. S. P.), a sufficient quantity to make one thousand cubic centimeters (1000 Cc.) [33 fl℥, 391♏]. Mix the fluid extract of buchu with the alcohol, then add seven hundred and fifty cubic centimeters (750 Cc.) [25 fl℥, 173♏] of aromatic elixir, and the syrup. Incorporate with it the magnesium carbonate, and filter. Finally, pass enough aromatic elixir through the filter to make one thousand cubic centimeters (1000 Cc.) [33 fl℥, 391♏]. Each fluid drachm represents about 7 1/2 grains of buchu"—(Nat. Form.).

ELIXIR BUCHU COMPOSITUM (N. F.), Compound elixir of buchu.—Formulary number, 39: "Compound fluid extract of buchu (F. 144), two hundred and fifty cubic centimeters (250 Cc.) [8 fl℥, 218♏]; alcohol, sixty-two cubic centimeters (62 Cc.) [2 fl℥, 46♏]; syrup (U. S. P.), sixty-two cubic centimeters (62 Cc.) [2 fl℥, 46♏]; magnesium carbonate, fifteen grammes (15 Gm.) [231 grs.]; aromatic elixir (U. S. P.), a sufficient quantity to make one thousand cubic centimeters (1000 Cc.) [33 fl℥, 391♏]. Mix the compound fluid extract of buchu with the alcohol, then add five hundred cubic centimeters (500 Cc.) [16 fl℥, 435♏] of aromatic elixir, and the syrup.
Incorporate with it the magnesium carbonate, and filter. Finally, pass enough aromatic elixir through the filter to make one thousand cubic centimeters (1000 Cc.) [33 fl℥, 391♏]. Each fluid drachm represents 15 minims of compound fluid extract of buchu"—(Nat. Form.). ELIXIR BUCHU ET POTASSII ACETATIS (N. F.), Elixir of buchu and potassium acetate.—Formulary number, 40: "Potassium acetate, eighty-five grammes (85 Gm.) [3 ozs. av.]; elixir of buchu (F. 38), a sufficient quantity to make one thousand cubic centimeters (1000 Cc.) [33 fl℥, 391♏]. Dissolve the potassium acetate in about seven hundred and fifty cubic centimeters (750 Cc.) [25 fl℥, 173♏] of elixir of buchu; filter, if necessary, and add enough elixir of buchu to make one thousand cubic centimeters (1000 Cc.) [33 fl℥, 391♏]. Each fluid drachm represents 5 grains of potassium acetate and about 7 grains of buchu"—(Nat. Form.). ELIXIR CAFFEINAE (N. F.), Elixir of caffeine.—Formulary number, 41: "Caffeine, seventeen and one-half grammes (17.5 Gm.) [270 grs.]; diluted hydrobromic acid (U. S. P.), four cubic centimeters (4 Cc.) [65♏]; syrup of coffee (F. 367), two hundred and fifty cubic centimeters (250 Cc.) [8 fl℥, 218♏]; aromatic elixir (U. S. P.), a sufficient quantity to make one thousand cubic centimeters (1000 Cc.) [33 fl℥, 391♏]. Rub the caffeine, in a mortar, with the diluted hydrobromic acid, and about one hundred and twenty-five cubic centimeters (125 Cc.) [4 fl℥, 109♏] of aromatic elixir, until solution is effected. Then add the syrup of coffee, and lastly, enough aromatic elixir to make one thousand cubic centimeters (1000 Cc.) [33 fl℥, 391♏]. Filter, if necessary. Each fluid drachm contains 1 grain of caffeine"—(Nat. Form.). ELIXIR CALCII HYPOPHOSPHITIS (N. F.), Elixir of calcium hypophosphite.—Formulary number, 43: "Calcium hypophosphite, thirty-five grammes (35 Gm.) [1 oz. av., 103 grs.]; citric acid, four grammes (4 Gm.) [62 grs.]; aromatic elixir (U. S.
P.), a sufficient quantity to make one thousand cubic centimeters (1000 Cc.) [33 fl℥, 391♏]. Dissolve the calcium hypophosphite in nine hundred cubic centimeters (900 Cc.) [30 fl℥, 208♏] of aromatic elixir, and filter. Dissolve the citric acid in the filtrate, and pass enough aromatic elixir through the filter to make one thousand cubic centimeters (1000 Cc.) [33 fl℥, 391♏]. Each fluid drachm contains 2 grains of calcium hypophosphite"—(Nat. Form.). ELIXIR CALCII LACTOPHOSPHATIS (N. F.), Elixir of calcium lactophosphate.—Formulary number, 44: "Calcium lactate, seventeen and one-half grammes (17.5 Gm.) [270 grs.]; phosphoric acid (U. S. P., 85 per cent), eight cubic centimeters (8 Cc.) [130♏]; water, sixty cubic centimeters (60 Cc.) [2 fl℥, 14♏]; syrup (U. S. P.), sixty cubic centimeters (60 Cc.) [2 fl℥, 14♏]; aromatic elixir (U. S. P.), a sufficient quantity to make one thousand cubic centimeters (1000 Cc.) [33 fl℥, 391♏]. Triturate the calcium lactate with the phosphoric acid, the water, and the syrup, until the salt is dissolved. Then add enough aromatic elixir to make one thousand cubic centimeters (1000 Cc.) [33 fl℥, 391♏], and filter. Each fluid drachm represents 1 grain of calcium lactate, or about 1 1/2 grains of so-called calcium lactophosphate"—(Nat. Form.). ELIXIR CATHARTICUM COMPOSITUM (N. F.), Compound cathartic elixir.—Formulary number, 45: "Fluid extract of senna (U. S. P.), one hundred and twenty-five cubic centimeters (125 Cc.) [4 fl℥, 109♏]; fluid extract of podophyllum (U. S. P.), sixty-two cubic centimeters (62 Cc.) [2 fl℥, 46♏]; fluid extract of leptandra (U. S. P.), fifty cubic centimeters (50 Cc.) [1 fl℥, 332♏]; fluid extract of jalap (F. 162), fifty cubic centimeters (50 Cc.) [1 fl℥, 332♏]; potassium and sodium tartrate, one hundred and twenty-five grammes (125 Gm.) [4 ozs. av., 179 grs.]; sodium bicarbonate, sixteen grammes (16 Gm.) [247 grs.]; compound elixir of taraxacum (F. 111), two hundred and fifty cubic centimeters (250 Cc.)
[8 fl℥, 218♏]; elixir of glycyrrhiza (F. 76), a sufficient quantity to make one thousand cubic centimeters (1000 Cc.) [33 fl℥, 391♏]. Mix the fluid extracts with the compound elixir of taraxacum; in the mixture dissolve the salts by agitation, and add enough elixir of glycyrrhiza to make one thousand cubic centimeters (1000 Cc.) [33 fl℥, 391♏]. The product should not be filtered, and should be shaken up whenever any of it is dispensed. The average dose for an adult is 2 fluid drachms"—(Nat. Form.). ELIXIR CHLOROFORMI COMPOSITUM (N. F.), Compound elixir of chloroform.—Formulary number, 46: "Chloroform, one hundred and ninety cubic centimeters (190 Cc.) [6 fl℥, 204♏]; tincture of opium (U. S. P.), one hundred and ninety cubic centimeters (190 Cc.) [6 fl℥, 204♏]; spirit of camphor (U. S. P.), one hundred and ninety cubic centimeters (190 Cc.) [6 fl℥, 204♏]; aromatic spirit of ammonia (U. S. P.), one hundred and ninety cubic centimeters (190 Cc.) [6 fl℥, 204♏]; alcohol, two hundred and thirty-five cubic centimeters (235 Cc.) [7 fl℥, 454♏]; oil of cinnamon (cassia), five cubic centimeters (5 Cc.) [81♏]. Mix the chloroform with the alcohol, then add the oil of cinnamon, aromatic spirit of ammonia, spirit of camphor, and tincture of opium. Allow the mixture to stand a few hours, and filter in a well-covered funnel. Each fluid drachm represents about 1 grain of opium and 11 minims of chloroform. Note.—This preparation is called chloroform paregoric in some sections of the country. It is recommended that this title be abandoned, to prevent confusion with the official paregoric or Tinctura Opii Camphorata"—(Nat. Form.). ELIXIR CINCHONAE (N. F.), Elixir of cinchona, Elixir of calisaya.—Formulary number, 47: "Tincture of cinchona (U. S. P.), one hundred and fifty cubic centimeters (150 Cc.) [5 fl℥, 35♏]; syrup (U. S. P.), one hundred and twenty-five cubic centimeters (125 Cc.) [4 fl℥, 109♏]; glycerin, one hundred and twenty-five cubic centimeters (125 Cc.)
[4 fl℥, 109♏]; aromatic elixir (U. S. P.), six hundred cubic centimeters (600 Cc.) [20 fl℥, 138♏]. Mix the liquids, allow to stand as long as convenient, and filter through a wetted filter. Each fluid ounce represents about 14 grains of yellow cinchona"—(Nat. Form.). ELIXIR CINCHONAE DETANNATUM (N. F.), Detannated elixir of cinchona, Detannated elixir of calisaya.—Formulary number, 48: "Detannated tincture of cinchona (F. 403), one hundred and fifty cubic centimeters (150 Cc.) [5 fl℥, 35♏]; syrup (U. S. P.), one hundred and twenty-five cubic centimeters (125 Cc.) [4 fl℥, 109♏]; glycerin, one hundred and twenty-five cubic centimeters (125 Cc.) [4 fl℥, 109♏]; aromatic elixir (U. S. P.), six hundred cubic centimeters (600 Cc.) [20 fl℥, 138♏]. Mix the liquids, and filter, if necessary. Each fluid ounce represents about fourteen grains (14 grains) of yellow cinchona. Note.—This preparation may be used when elixir cinchonae is directed in combination with preparations of iron, but may be replaced by compound elixir of quinine (F. 98), colored by the addition of fifteen cubic centimeters (15 Cc.) [242♏] of compound tincture of cudbear (F. 419) to one thousand cubic centimeters (1000 Cc.) [33 fl℥, 391♏]"—(Nat. Form.). ELIXIR CINCHONAE ET FERRI (N. F.), Elixir of cinchona and iron, Elixir of calisaya and iron, Ferrated elixir of calisaya.—Formulary number, 50: "Phosphate of iron (U. S. P.), thirty-five grammes (35 Gm.) [1 oz. av., 103 grs.]; water, boiling, sixty cubic centimeters (60 Cc.) [2 fl℥, 14♏]; compound elixir of quinine (F. 98), a sufficient quantity to make one thousand cubic centimeters (1000 Cc.) [33 fl℥, 391♏]. Dissolve the phosphate of iron in the boiling water, then add enough compound elixir of quinine to make one thousand cubic centimeters (1000 Cc.) [33 fl℥, 391♏], and filter. Each fluid drachm contains 2 grains of phosphate of iron"—(Nat. Form.). ELIXIR CINCHONAE, FERRI ET STRYCHNINAE (N.
F.), Elixir of cinchona, iron and strychnine, Elixir of calisaya, iron and strychnine.—Formulary number, 55: "Strychnine sulphate, one hundred and seventy-five milligrammes (0.175 Gm.) [2.7 grs.]; water, fifteen cubic centimeters (15 Cc.) [242♏]; elixir of cinchona and iron (F. 50), a sufficient quantity to make one thousand cubic centimeters (1000 Cc.) [33 fl℥, 391♏]. Dissolve the strychnine sulphate in the water and add enough elixir of cinchona and iron to make one thousand cubic centimeters (1000 Cc.) [33 fl℥, 391♏]. Each fluid drachm contains 1/100 grain of strychnine sulphate, and about 2 grains of phosphate of iron"—(Nat. Form.). ELIXIR CURASSAO (N. F.), Elixir of curacao, Curacao cordial.—Formulary number, 58: "Spirit of curacao (F. 348), sixteen cubic centimeters (16 Cc.) [260♏]; orris root, in fine powder, four grammes (4 Gm.) [62 grs.]; deodorized alcohol, two hundred and fifty cubic centimeters (250 Cc.) [8 fl℥, 218♏]; citric acid, seven grammes (7 Gm.) [108 grs.]; syrup (U. S. P.), five hundred cubic centimeters (500 Cc.) [16 fl℥, 435♏]; magnesium carbonate, fifteen grammes (15 Gm.) [231 grains]; water, a sufficient quantity to make one thousand cubic centimeters (1000 Cc.) [33 fl℥, 391♏]. Mix the spirit of curacao with the alcohol, add the orris root, the magnesium carbonate, and one hundred and eighty-five cubic centimeters (185 Cc.) [6 fl℥, 123♏] of water. Allow the mixture to stand 12 hours, occasionally agitating; then pour it on a wetted filter, returning the first portions of the filtrate until it runs through clear, and pass enough water through the filter to make the filtrate measure five hundred cubic centimeters (500 Cc.) [16 fl℥, 435♏]. In this dissolve the citric acid, and finally add the syrup"—(Nat. Form.). ELIXIR ERYTHROXYLI (N. F.), Elixir of erythroxylon, Elixir of coca.—Formulary number, 61: "Fluid extract of erythroxylon (U. S. P.), one hundred and twenty-five cubic centimeters (125 Cc.) 
[4 fl℥, 109♏]; alcohol, sixty-two and one-half cubic centimeters (62.5 Cc.) [2 fl℥, 54♏]; syrup (U. S. P.), one hundred and twenty-five cubic centimeters (125 Cc.) [4 fl℥, 109♏]; tincture of vanilla (U. S. P.), sixteen cubic centimeters (16 Cc.) [260♏]; purified talcum (F. 395), fifteen grammes (15 Gm.) [231 grs.]; aromatic elixir (U. S. P.), a sufficient quantity to make one thousand cubic centimeters (1000 Cc.) [33 fl℥, 391♏]. Mix the fluid extract with the alcohol, the syrup and six hundred and fifty cubic centimeters (650 Cc.) [21 fl℥, 470♏] of aromatic elixir, add the purified talcum and incorporate the latter thoroughly. Let the mixture stand during 48 hours, if convenient, shaking occasionally; then filter, add the tincture of vanilla to the filtrate, and pass enough aromatic elixir through the filter to make the product measure one thousand cubic centimeters (1000 Cc.) [33 fl℥, 391♏]. Each fluid drachm represents 7 1/2 grains of erythroxylon (coca)"—(Nat. Form.). ELIXIR FERRI HYPOPHOSPHITIS (N. F.), Elixir of hypophosphite of iron.—Formulary number, 65: "Solution of hypophosphite of iron (F. 219), one hundred cubic centimeters (100 Cc.) [3 fl℥, 183♏]; aromatic elixir (U. S. P.), nine hundred cubic centimeters (900 Cc.) [30 fl℥, 208♏]. Mix, allow the mixture to stand a few days in a cool place, and filter, if necessary. Each fluid drachm contains 1 grain of hypophosphite of iron (ferric)"—(Nat. Form.). ELIXIR FERRI PHOSPHATIS (N. F.), Elixir of phosphate of iron.—Formulary number, 67: "Phosphate of iron (U. S. P.), thirty-five grammes (35 Gm.) [1 oz. av., 103 grs.]; water, sixty cubic centimeters (60 Cc.) [2 fl℥, 14♏]; aromatic elixir (U. S. P.), a sufficient quantity to make one thousand cubic centimeters (1000 Cc.) [33 fl℥, 391♏]. Dissolve the phosphate of iron in the water with the aid of heat; then mix this solution with a sufficient quantity of aromatic elixir to make one thousand cubic centimeters (1000 Cc.) [33 fl℥, 391♏]. Filter, if necessary.
Each fluid drachm contains 2 grains of phosphate of iron"—(Nat. Form.). ELIXIR FERRI PHOSPHATIS, QUININAE ET STRYCHNINAE (N. F.), Elixir of phosphate of iron, quinine and strychnine.—Formulary number, 69: "Phosphate of iron (U. S. P.), seventeen and one-half grammes (17.5 Gm.) [270 grs.]; quinine (alkaloid), eight and three-fourths grammes (8.75 Gm.) [135 grs.]; strychnine (alkaloid), two hundred and seventy-five milligrammes (0.275 Gm.) [4.24 grs.]; alcohol, one hundred and thirty cubic centimeters (130 Cc.) [4 fl℥, 190♏]; water, fifty cubic centimeters (50 Cc.) [1 fl℥, 332♏]; aromatic elixir (U. S. P.), a sufficient quantity to make one thousand cubic centimeters (1000 Cc.) [33 fl℥, 391♏]. Dissolve the alkaloids in the alcohol and add seven hundred and fifty cubic centimeters (750 Cc.) [25 fl℥, 173♏] of aromatic elixir, then dissolve the phosphate of iron in the water, using heat, if necessary, and add to the previous mixture. Finally, add enough aromatic elixir to make one thousand cubic centimeters (1000 Cc.) [33 fl℥, 391♏]. Each fluid drachm contains 1 grain of phosphate of iron, 1/2 grain of quinine, and 1/64 grain of strychnine. Note.—When this elixir is mixed with water, it may become cloudy or opaque through the separation of some of its constituents"—(Nat. Form.). ELIXIR FERRI PYROPHOSPHATIS (N. F.), Elixir of pyrophosphate of iron.—Formulary number, 70: "Pyrophosphate of iron (U. S. P.), thirty-five grammes (35 Gm.) [1 oz. av., 103 grs.]; water, sixty cubic centimeters (60 Cc.) [2 fl℥, 14♏]; aromatic elixir (U. S. P.), a sufficient quantity to make one thousand cubic centimeters (1000 Cc.) [33 fl℥, 391♏]. Dissolve the pyrophosphate of iron in the water, and add enough aromatic elixir to make one thousand cubic centimeters (1000 Cc.) [33 fl℥, 391♏]. Filter, if necessary. Each fluid drachm contains 2 grains of pyrophosphate of iron"—(Nat. Form.). ELIXIR FERRI, QUININAE ET STRYCHNINAE (N. 
F.), Elixir of iron, quinine and strychnine.—Formulary number, 71: "Tincture of citro-chloride of iron (F. 407), one hundred and twenty-five cubic centimeters (125 Cc.) [4 fl℥, 109♏]; quinine hydrochlorate, eight and one-half grammes (8.5 Gm.) [131 grs.]; strychnine sulphate, one hundred and seventy-five milligrammes (0.175 Gm.) [2.7 grs.]; alcohol, thirty cubic centimeters (30 Cc.) [1 fl℥, 7♏]; aromatic elixir (U. S. P.), a sufficient quantity to make one thousand cubic centimeters (1000 Cc.) [33 fl℥, 391♏]. Dissolve the alkaloidal salts in about seven hundred and fifty cubic centimeters (750 Cc.) [25 fl℥, 173♏] of aromatic elixir, then add the tincture and the alcohol, and finally, enough aromatic elixir to make one thousand cubic centimeters (1000 Cc.) [33 fl℥, 391♏]. Filter, if necessary. Each fluid drachm represents about 1 grain of ferric chloride, 1 grain of quinine hydrochlorate, and 1/100 grain of strychnine sulphate"—(Nat. Form.). ELIXIR FRANGULAE (N. F.), Elixir of frangula, Elixir of buckthorn.—Formulary number, 72: "Fluid extract of frangula (U. S. P.), two hundred and fifty cubic centimeters (250 Cc.) [8 fl℥, 218♏]; alcohol, sixty cubic centimeters (60 Cc.) [2 fl℥, 14♏]; compound elixir of taraxacum (F. 111), two hundred and fifty cubic centimeters (250 Cc.) [8 fl℥, 218♏]; aromatic elixir (U. S. P.), four hundred and forty cubic centimeters (440 Cc.) [14 fl℥, 422♏]. Mix them, allow the mixture to stand during 48 hours, if convenient, and filter. Each fluid drachm represents 15 grains of frangula"—(Nat. Form.). ELIXIR GENTIANAE (N. F.), Elixir of gentian.—Formulary number, 73: "Fluid extract of gentian (U. S. P.), thirty-five cubic centimeters (35 Cc.) [1 fl℥, 88♏]; compound spirit of cardamom (F. 347), twenty-five cubic centimeters (25 Cc.) [406♏]; solution of tersulphate of iron (U. S. P.), twenty-five cubic centimeters (25 Cc.) [406♏]; water of ammonia (U. S. P.), twenty-eight cubic centimeters (28 Cc.) [455♏]; alcohol, water, aromatic elixir (U. S.
P.), of each, a sufficient quantity to make one thousand cubic centimeters (1000 Cc.) [33 fl℥, 391♏]. Dilute the solution of tersulphate of iron with two hundred and fifty cubic centimeters (250 Cc.) [8 fl℥, 218♏] of cold water, and add it, constantly stirring, to the water of ammonia, previously diluted with an equal volume of cold water. Collect the precipitate on a well wetted muslin strainer, allow it to drain completely, return it to the vessel, mix it intimately with two hundred and fifty cubic centimeters (250 Cc.) [8 fl℥, 218♏] of water, and again drain. Repeat this operation once more with the same quantity of water. When the precipitate has been completely drained for the third time, fold the strainer, and press it gently so as to remove the water as completely as possible without loss of magma; then remove the magma into a tared bottle, and ascertain its weight. Now, add to the magma one-fifth (1/5) of its weight of alcohol, the fluid extract, the compound spirit, and seven hundred and fifty cubic centimeters (750 Cc.) [25 fl℥, 173♏] of aromatic elixir, and shake the mixture occasionally during 24 hours. Filter through paper, and pass enough aromatic elixir through the filter to make the product measure one thousand cubic centimeters (1000 Cc.) [33 fl℥, 391♏]. Each fluid drachm represents about 2 grains of gentian"—(Nat. Form.). ELIXIR GENTIANAE CUM TINCTURA FERRI CHLORIDI (N. F.), Elixir of gentian with tincture of chloride of iron.—Formulary number, 74: "Tincture of citro-chloride of iron (F. 407), one hundred cubic centimeters (100 Cc.) [3 fl℥, 183♏]; elixir of gentian (F. 73), nine hundred cubic centimeters (900 Cc.) [30 fl℥, 208♏]. Mix and filter, if necessary. Each fluid drachm represents about 3/4 grain of ferric chloride, and nearly 2 grains of gentian"—(Nat. Form.). ELIXIR GLYCYRRHIZAE AROMATICUM (N. F.), Aromatic elixir of glycyrrhiza, Aromatic elixir of liquorice.—Formulary number, 77: "Fluid extract of glycyrrhiza (U. S.
P.), one hundred and twenty-five cubic centimeters (125 Cc.) [4 fl℥, 109♏]; oil of cloves, four-tenths of a cubic centimeter (0.4 Cc.) [6.5♏]; oil of cinnamon (Ceylon), four-tenths of a cubic centimeter (0.4 Cc.) [6.5♏]; oil of nutmegs, one-fourth of a cubic centimeter (0.25 Cc.) [4♏]; oil of fennel, three-fourths of a cubic centimeter (0.75 Cc.) [12♏]; magnesium carbonate, fifteen grammes (15 Gm.) [231 grs.]; aromatic elixir (U. S. P.), a sufficient quantity to make one thousand cubic centimeters (1000 Cc.) [33 fl℥, 391♏]. Triturate the oils with magnesium carbonate, and gradually add eight hundred and seventy-five cubic centimeters (875 Cc.) [29 fl℥, 282♏] of aromatic elixir. Shake occasionally during an hour, filter, and pass enough aromatic elixir through the filter to make eight hundred and seventy-five cubic centimeters (875 Cc.) [29 fl℥, 282♏] of filtrate. Add the fluid extract to the filtrate, mix, and filter, if necessary"—(Nat. Form.). Employed to mask the bitterness of quinine. ELIXIR GUARANAE (N. F.), Elixir of guarana.—Formulary number, 79: "Fluid extract of guarana (U. S. P.), two hundred cubic centimeters (200 Cc.) [6 fl℥, 366♏]; aromatic elixir (U. S. P.), two hundred cubic centimeters (200 Cc.) [6 fl℥, 366♏]; compound elixir of taraxacum (F. 111), six hundred cubic centimeters (600 Cc.) [20 fl℥, 138♏]. Mix them; allow the mixture to stand during 48 hours, if convenient, and filter. Each fluid drachm represents about 12 grains of guarana"—(Nat. Form.). ELIXIR HYPOPHOSPHITUM (N. F.), Elixir of hypophosphites.—Formulary number, 81: "Calcium hypophosphite, fifty-two and one-half grammes (52.5 Gm.) [1 oz. av., 373 grs.]; sodium hypophosphite, seventeen and one-half grammes (17.5 Gm.) [270 grs.]; potassium hypophosphite, seventeen and one-half grammes (17.5 Gm.) [270 grs.]; citric acid, four grammes (4 Gm.) [62 grs.]; water, two hundred and fifty cubic centimeters (250 Cc.) [8 fl℥, 218♏]; glycerin, thirty cubic centimeters (30 Cc.)
[1 fl℥, 7♏]; compound spirit of cardamom (F. 347), thirty cubic centimeters (30 Cc.) [1 fl℥, 7♏]; aromatic elixir (U. S. P.), a sufficient quantity to make one thousand cubic centimeters (1000 Cc.) [33 fl℥, 391♏]. Dissolve the hypophosphites and the citric acid in the water; then add the glycerin, compound spirit of cardamom, and enough aromatic elixir to make one thousand cubic centimeters (1000 Cc.) [33 fl℥, 391♏]. Filter, if necessary. Each fluid drachm contains 3 grains of calcium hypophosphite and 1 grain each of sodium and potassium hypophosphite"—(Nat. Form.). ELIXIR HYPOPHOSPHITUM CUM FERRO (N. F.), Elixir of hypophosphites with iron.—Formulary number, 82: "Calcium hypophosphite, twenty-five grammes (25 Gm.) [386 grs.]; sodium hypophosphite, seventeen and one-half grammes (17.5 Gm.) [270 grs.]; potassium hypophosphite, eight and one-half grammes (8.5 Gm.) [131 grs.]; sulphate of iron, in clear crystals, thirteen grammes (13 Gm.) [201 grs.]; citric acid, four grammes (4 Gm.) [62 grs.]; water, two hundred and fifty cubic centimeters (250 Cc.) [8 fl℥, 218♏]; syrup (U. S. P.), two hundred and fifty cubic centimeters (250 Cc.) [8 fl℥, 218♏]; aromatic elixir (U. S. P.), a sufficient quantity to make one thousand cubic centimeters (1000 Cc.) [33 fl℥, 391♏]. Dissolve the hypophosphites in one hundred and seventy-five cubic centimeters (175 Cc.) [5 fl℥, 440♏] of water, and add the syrup. Dissolve the sulphate of iron in the remainder of the water, and mix this with the other solution. Then add three hundred and fifty cubic centimeters (350 Cc.) [11 fl℥, 401♏] of aromatic elixir, set the mixture aside in a cold place for 12 hours, and filter from the deposited calcium sulphate. Finally, dissolve the citric acid in the filtrate, and pass enough aromatic elixir through the filter to make one thousand cubic centimeters (1000 Cc.) [33 fl℥, 391♏].
Each fluid drachm contains about 1/2 grain of hypophosphite of iron (ferrous), about 1 grain each, of the hypophosphites of calcium and sodium, and 1/2 grain of potassium hypophosphite"—(Nat. Form.). ELIXIR LITHII CITRATIS (N. F.), Elixir of lithium citrate.—Formulary number, 84: "Lithium citrate, eighty-five grammes (85 Gm.) [3 ozs. av.]; aromatic elixir (U. S. P.), a sufficient quantity to make one thousand cubic centimeters (1000 Cc.) [33 fl℥, 391♏]. Dissolve the lithium citrate in about nine hundred cubic centimeters (900 Cc.) [30 fl℥, 208♏] of aromatic elixir, by agitation. Then add enough aromatic elixir to make one thousand cubic centimeters (1000 Cc.) [33 fl℥, 391♏], and filter. Each fluid drachm contains 5 grains of lithium citrate"—(Nat. Form.). ELIXIR MALTI ET FERRI (N. F.), Elixir of malt and iron.—Formulary number, 86: "Extract of malt, two hundred and fifty cubic centimeters (250 Cc.) [8 fl℥, 218♏]; phosphate of iron (U. S. P.), seventeen and one-half grammes (17.5 Gm.) [270 grs.]; water, thirty cubic centimeters (30 Cc.) [1 fl℥, 7♏]; aromatic elixir (U. S. P.), a sufficient quantity to make one thousand cubic centimeters (1000 Cc.) [33 fl℥, 391♏]. Dissolve the phosphate of iron in the water by the aid of heat, mix the solution with the extract of malt previously introduced into a graduated bottle, and add enough aromatic elixir to make one thousand cubic centimeters (1000 Cc.) [33 fl℥, 391♏]. Set the mixture aside for 24 hours, and filter. Each fluid drachm represents 1 grain of phosphate of iron, and 15 minims of extract of malt. Note.—Extract of malt, most suitable for this preparation, should have about the consistence of Balsam of Peru, at a temperature of about 15° C. (59° F.)"—(Nat. Form.). ELIXIR PEPSINI (N. F.), Elixir of pepsin.—Formulary number, 88: "Pepsin (U. S. P.), seventeen and one-half grammes (17.5 Gm.) [270 grs.]; hydrochloric acid (U. S. P.), four cubic centimeters (4 Cc.)
[65♏]; glycerin, one hundred and twenty-five cubic centimeters (125 Cc.) [4 fl℥, 109♏]; compound elixir of taraxacum (F. 111), sixty-five cubic centimeters (65 Cc.) [2 fl℥, 95♏]; alcohol, one hundred and seventy-five cubic centimeters (175 Cc.) [5 fl℥, 441♏]; purified talcum (F. 395), fifteen grammes (15 Gm.) [231 grs.]; sugar, two hundred and fifty grammes (250 Gm.) [8 ozs. av., 358 grs.]; water, a sufficient quantity, to make one thousand cubic centimeters (1000 Cc.) [33 fl℥, 391♏]. Mix the pepsin with three hundred and fifty cubic centimeters (350 Cc.) [11 fl℥, 401♏] of water, add the glycerin and acid, and agitate until solution has been effected. Then add the compound elixir of taraxacum, alcohol, and the purified talcum, and mix thoroughly. Set the mixture aside for a few hours, occasionally agitating. Then filter it through a wetted filter, dissolve the sugar in the filtrate, and pass enough water through the filter to make the whole product measure one thousand cubic centimeters (1000 Cc.) [33 fl℥, 391♏]. Each fluid drachm represents 1 grain of pepsin"—(Nat. Form.). ELIXIR PEPSINI, BISMUTHI ET STRYCHNINAE (N. F.), Elixir of pepsin, bismuth and strychnine.—Formulary number, 89: "Strychnine sulphate, one hundred and seventy-five milligrammes (0.175 Gm.) [2.7 grs.]; elixir of pepsin and bismuth (F. 90), one thousand cubic centimeters (1000 Cc.) [33 fl℥, 391♏]. Dissolve the strychnine sulphate in the elixir. Each fluid drachm represents 1/100 grain of strychnine sulphate, 1 grain of pepsin, and 2 grains of bismuth and ammonium citrate"—(Nat. Form.). This execrable combination, in a pharmaceutical sense, has been one of the most popular American elixirs. Possibly, its chief advantage rests in the minute amount of strychnine present, and the fact that the incompatible nature of the pepsin and bismuth precipitates the former and destroys the ELIXIR PEPSINI ET BISMUTHI (N. F.), Elixir of pepsin and bismuth.—Formulary number, 90: Pepsin (U. S. 
P.), seventeen and one-half grammes (17.5 Gm.) [270 grs.]; bismuth and ammonium citrate, thirty-five grammes (35 Gm.) [1 oz. av., 103 grs.]; water of ammonia (U. S. P.), a sufficient quantity; glycerin, one hundred and twenty-five cubic centimeters (125 Cc.) [4 fl℥, 109♏]; alcohol, one hundred and seventy-five cubic centimeters (175 Cc.) [5 fl℥, 441♏]; syrup (U. S. P.), two hundred and fifty cubic centimeters (250 Cc.) [8 fl℥, 218♏]; compound elixir of taraxacum (F. 111), sixty-five cubic centimeters (65 Cc.) [2 fl℥, 95♏]; purified talcum (F. 395), fifteen grammes (15 Gm.) [231 grs.]; water, a sufficient quantity to make one thousand cubic centimeters (1000 Cc.) [33 fl℥, 391♏]. Dissolve the pepsin in two hundred and fifty cubic centimeters (250 Cc.) [8 fl℥, 218♏] of water. Dissolve the bismuth and ammonium citrate in sixty cubic centimeters (60 Cc.) [2 fl℥, 14♏] of warm water, allow the solution to stand until clear, if necessary; then decant the clear liquid, and add to the residue just enough water of ammonia to dissolve it, carefully avoiding an excess. Then mix the two solutions, and add the glycerin, compound elixir of taraxacum, and alcohol. Thoroughly incorporate the purified talcum with the mixture, filter it through a wetted filter, and pass enough water through the filter to make the filtrate measure seven hundred and fifty cubic centimeters (750 Cc.) [25 fl℥, 173♏]. To this add the syrup. Each fluid drachm represents 1 grain of pepsin, and 2 grains of bismuth and ammonium citrate"—(Nat. Form.). ELIXIR POTASSII ACETATIS ET JUNIPERI (N. F.), Elixir of potassium acetate and juniper.—Formulary number, 96: "Potassium acetate, eighty-five grammes (85 Gm.) [3 ozs. av.]; fluid extract of juniper (F. 164), one hundred and twenty-five cubic centimeters (125 Cc.) [4 fl℥, 109♏]; magnesium carbonate, fifteen grammes (15 Gm.) [231 grs.]; aromatic elixir (U. S. P.), a sufficient quantity to make one thousand cubic centimeters (1000 Cc.) [33 fl℥, 391♏].
Triturate the fluid extract of juniper with the magnesium carbonate, then add seven hundred and fifty cubic centimeters (750 Cc.) [25 fl℥, 173♏] of aromatic elixir in which the potassium acetate has previously been dissolved. Filter, and pass enough aromatic elixir through the filter to make one thousand cubic centimeters (1000 Cc.) [33 fl℥, 391♏]. Each fluid drachm represents 5 grains of potassium acetate and 7 1/2 grains of juniper"—(Nat. Form.). ELIXIR RHAMNI PURSHIANAE (N. F.), Elixir of rhamnus purshiana, Elixir of cascara sagrada.—Formulary number, 101: "Fluid extract of rhamnus purshiana (U. S. P.), two hundred and fifty cubic centimeters (250 Cc.) [8 fl℥, 218♏]; compound elixir of taraxacum (F. 111), seven hundred and fifty cubic centimeters (750 Cc.) [25 fl℥, 173♏]. Mix them. Allow the mixture to stand a few days, if convenient, and filter. Each fluid drachm represents 15 grains of rhamnus purshiana"—(Nat. Form.). ELIXIR RHEI (N. F.), Elixir of rhubarb.—Formulary number, 103: "Sweet tincture of rhubarb (U. S. P.), five hundred cubic centimeters (500 Cc.) [16 fl℥, 435♏]; deodorized alcohol, sixty-five cubic centimeters (65 Cc.) [2 fl℥, 95♏]; water, one hundred and eighty-five cubic centimeters (185 Cc.) [6 fl℥, 123♏]; glycerin, one hundred and twenty-five cubic centimeters (125 Cc.) [4 fl℥, 109♏]; syrup (U. S. P.), one hundred and twenty-five cubic centimeters (125 Cc.) [4 fl℥, 109♏]. Mix them and filter. Each fluid drachm represents about 2 1/2 grains of rhubarb"—(Nat. Form.). ELIXIR RUBI COMPOSITUM (N. F.), Compound elixir of blackberry.—Formulary number, 105: "Blackberry root, one hundred and sixty grammes (160 Gm.) [5 ozs. av., 282 grs.]; galls, one hundred and sixty grammes (160 Gm.) [5 ozs. av., 282 grs.]; cinnamon, Saigon, one hundred and sixty grammes (160 Gm.) [5 ozs. av., 282 grs.]; cloves, forty grammes (40 Gm.) [1 oz. av., 180 grs.]; mace, twenty grammes (20 Gm.) [309 grs.]; ginger, twenty grammes (20 Gm.)
[309 grs.]; blackberry juice, recently expressed, thirty-seven hundred and fifty cubic centimeters (3750 Cc.) [126 fl℥, 385♏]; syrup (U. S. P.), eighteen hundred and seventy-five cubic centimeters (1875 Cc.) [63 fl℥, 192♏]; glycerin, eighteen hundred and seventy-five cubic centimeters (1875 Cc.) [63 fl℥, 192♏]; diluted alcohol (U. S. P.), a sufficient quantity to make ten thousand cubic centimeters (10,000 Cc.) [16 O, 82 fl℥, 66♏]. Reduce the solids to a moderately coarse (No. 40) powder, moisten it with diluted alcohol, and percolate it with this menstruum in the usual manner, until twenty-five hundred cubic centimeters (2500 Cc.) [84 fl℥, 257♏] of percolate are obtained. To this add the blackberry juice, syrup, and glycerin, and mix thoroughly"—(Nat. Form.). ELIXIR SODII SALICYLATIS (N. F.), Elixir of sodium salicylate.—Formulary number, 108: "Sodium salicylate, eighty-five grammes (85 Gm.) [3 ozs. av.]; aromatic elixir (U. S. P.), a sufficient quantity to make one thousand cubic centimeters (1000 Cc.) [33 fl℥, 391♏]. Dissolve the sodium salicylate in about eight hundred cubic centimeters (800 Cc.) [27 fl℥, 25♏] of aromatic elixir, by agitation. Then add enough aromatic elixir to make one thousand cubic centimeters (1000 Cc.) [33 fl℥, 391♏], and filter, if necessary. This preparation should be freshly prepared, when required for use. Each fluid drachm contains 5 grains of sodium salicylate"—(Nat. Form.). King's American Dispensatory, 1898, was written by Harvey Wickes Felter, M.D., and John Uri Lloyd, Phr. M., Ph. D.
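Many of the entries above close with a dose statement of the form "each fluid drachm contains N grains." Assuming the usual apothecaries'/metric equivalents (1 fluid drachm ≈ 3.697 Cc and 1 gramme ≈ 15.432 grains; the exact factors the Formulary compilers used are an assumption here), those statements can be spot-checked with a short sketch:

```python
# Spot-check "each fluid drachm contains N grains" dose claims.
# Conversion factors are the standard apothecaries'/metric equivalents (assumed).
GRAINS_PER_GRAMME = 15.432
CC_PER_FLUID_DRACHM = 3.697

def grains_per_drachm(grammes_per_1000_cc):
    """Grains of drug delivered by one fluid drachm of a 1000-Cc batch."""
    return grammes_per_1000_cc * GRAINS_PER_GRAMME * CC_PER_FLUID_DRACHM / 1000

# Elixir of lithium citrate: 85 Gm per 1000 Cc, stated as 5 grains per drachm.
print(round(grains_per_drachm(85), 2))  # ~4.85, i.e. about 5 grains
# Elixir of pyrophosphate of iron: 35 Gm per 1000 Cc, stated as 2 grains.
print(round(grains_per_drachm(35), 2))  # ~2.0
```

The stated doses are rounded to convenient figures, so a result within a few percent of the printed number is agreement.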
Kevin and Conner tracked their exercise last week. Kevin ran 14 laps around the school track, 8 laps around his neighborhood, and 3 miles on the treadmill. Conner ran 7 laps around the school track and 11 miles on the treadmill. Write and simplify an expression for the total combined distance that Kevin and Conner ran. Show your work. Use t for the distance in laps around their school track and n for the distance in laps around Kevin's neighborhood.

can someone help me on this?

Ok! Once again, let's break this down. Let's start with Kevin. Kevin ran 14 laps around the school track, 8 laps around his neighborhood, and 3 miles on the treadmill. So let's make an expression for the distance that Kevin ran first. In the expression I am making for Kevin: K = Kevin, s = school track, n = neighborhood, t = treadmill. Since we are trying to find the whole distance for Kevin, this expression will only have addition. Kevin ran 14 laps around the school track → 14s; 8 laps around his neighborhood → 8n; 3 miles on the treadmill → 3t. Now let's combine them all: K = 14s + 8n + 3t. *Note: this was only for Kevin. There is another expression for Conner.

wow thank you

Now let's get on with Conner! Conner ran 7 laps around the school track and 11 miles on the treadmill. So let's make an expression for the distance that Conner ran. In the expression I am making for Conner:
C = Conner, s = school track, t = treadmill.
*Note: Conner seems not to have run in his neighborhood at all. This will be important when combining Kevin's and Conner's distances.
So, let's get on with the equation! Since we are trying to find the whole distance for Conner, this equation will only have addition.
Conner ran 7 laps around the school track... 7s
and 11 miles on the treadmill... 11t
Now let's combine it all: C = 7s + 11t

Now that we know Kevin's and Conner's distances, we need to combine them together for the full distance! Once again, look at the two equations I put together: K = 14s + 8n + 3t and C = 7s + 11t. Now let's add them together, using like terms! I will be using D for the total distance in our final answer.

  14s + 8n +  3t
+  7s       + 11t
-----------------
  21s + 8n + 14t

Therefore, D = 21s + 8n + 14t. In other words, the total distance that Conner and Kevin ran is 21s + 8n + 14t.
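The like-term bookkeeping above can be checked mechanically. A small sketch (the variable names s, n, t follow the poster's convention):

```python
from collections import Counter

def combine(*expressions):
    """Add expressions given as {variable: coefficient} dicts."""
    total = Counter()
    for expr in expressions:
        total.update(expr)  # Counter.update adds counts for matching keys
    return dict(total)

kevin = {"s": 14, "n": 8, "t": 3}   # 14s + 8n + 3t
conner = {"s": 7, "t": 11}          # 7s + 11t
print(combine(kevin, conner))       # {'s': 21, 'n': 8, 't': 14}
```

Each key is a "like term", so adding the counters is exactly the column addition shown above.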
{"url":"http://openstudy.com/updates/500dd58ae4b0ed432e104898","timestamp":"2014-04-18T03:30:08Z","content_type":null,"content_length":"39353","record_id":"<urn:uuid:fb165289-5737-482b-befc-2f2a5d49ea13>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00468-ip-10-147-4-33.ec2.internal.warc.gz"}
question need help
June 18th 2007, 10:28 AM #1

I have a prob question that I need help with, thanks a lot:

"Suppose people attending a party pour drinks from a bottle containing 63 ounces of vodka. Suppose also that the expected size of each drink is 2 oz, the standard deviation of each drink is 1/2 oz, and all drinks are poured independently. What's the probability that the bottle will not be empty after 36 drinks have been poured?"

June 20th 2007, 10:21 AM #2
Grand Panjandrum, Nov 2005

36 drinks total 72 oz on average, with a standard deviation of (1/2)*sqrt(36) = 3 oz. The total is normally distributed with mean 72 oz and sd 3 oz, so the z-score corresponding to an exactly empty bottle is:

z = (63 - 72)/3 = -3

The bottle is not empty exactly when the total poured is less than 63 oz, so the probability that the bottle is not empty is:

p(not empty) = 1 - P(3)

where P is the cumulative standard normal distribution.
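The reply above can be checked numerically. Here is a quick sketch under the same normal approximation, with the standard normal CDF built from math.erf so no external packages are needed:

```python
import math

def phi(z):
    """Standard normal CDF, via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

mu = 2.0 * 36                 # expected total poured: 72 oz
sigma = 0.5 * math.sqrt(36)   # sd of the total: 3 oz
z = (63 - mu) / sigma         # z-score of an exactly empty bottle: -3

p_not_empty = phi(z)          # same as 1 - phi(3)
print(round(p_not_empty, 5))  # 0.00135
```

So the bottle is almost certainly empty: the chance it survives 36 drinks is only about 0.135%.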
{"url":"http://mathhelpforum.com/advanced-statistics/16040-question-need-help.html","timestamp":"2014-04-19T11:30:44Z","content_type":null,"content_length":"28697","record_id":"<urn:uuid:538804c2-2df8-4ab8-8a6e-38866d6f2181>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00476-ip-10-147-4-33.ec2.internal.warc.gz"}
Summary: REFILLING MERIDIANS IN A GENUS 2 HANDLEBODY
Dedicated to the memory of Heiner Zieschang, who first noticed that genus two handlebodies could be interesting.
ABSTRACT. Suppose a genus two handlebody is removed from a 3-manifold M and then a single meridian of the handlebody is restored. The result is a knot or link complement in M, and it is natural to ask whether geometric properties of the link complement say something about the meridian that was restored. Here we consider what the relation must be between two not necessarily disjoint meridians so that restoring each of them gives a trivial knot or a split link.
1. BACKGROUND
For a knot or link in a 3-manifold, here are some natural geometric questions that arise, in roughly ascending order of geometric sophistication: is the knot the unknot? is the link split? is the link or knot a connected sum? are there companion tori? beyond connected sums, are there essential annuli in the link complement? beyond connected sums, are there essential meridional planar surfaces? One well-established context for such questions is that of Dehn surgery (cf [Go]), where one imagines filling in the
{"url":"http://www.osti.gov/eprints/topicpages/documents/record/722/1939541.html","timestamp":"2014-04-21T07:32:15Z","content_type":null,"content_length":"8274","record_id":"<urn:uuid:36c291f1-ff97-4f24-a27e-8a71fb3866da>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00148-ip-10-147-4-33.ec2.internal.warc.gz"}
Hubbardston, MA Math Tutor
Find a Hubbardston, MA Math Tutor

...I will also be covering material up to Algebra II and Pre-Calculus. I am currently teaching two homeschooled teenagers (14 and 16) in Elementary Math. I meet with them on Tuesdays and Thursdays for 2 hours each day instructing them in math concepts such as fractions, percents, adding and subtracting, multiplying and dividing, absolute values, etc.
26 Subjects: including statistics, precalculus, ACT Math, algebra 1

...SAT reading tests for vocabulary, sentence structure, grammar, and logic. There are also analytical skills you can use to score better -- even when you don't know the answer. I can help you master all of these skills, as well as guide you to improving your everyday reading for more fun and understanding.
55 Subjects: including trigonometry, ACT Math, reading, algebra 1

...I also have more than 6 years of experience tutoring mathematics to students ranging from 7 years old through adult learners. I have taught and/or tutored mathematics from basic addition and subtraction through calculus. I have also helped students prepare for the GED, SAT and MCAS tests in mathematics.
14 Subjects: including discrete math, differential equations, C, linear algebra

...History and AP U.S. Government and Politics as well. I taught Social Studies in a self-contained classroom environment in Los Angeles for several years.
53 Subjects: including algebra 1, algebra 2, linear algebra, biology

...By working with students on how to develop, design, and review study guides for upcoming tests and quizzes, I am able to help them reduce their anxiety. Certain students may excel in certain formats, and have difficulties with others. The study guides I help them create are designed to accent their strengths, and present information in the most accessible manner for them.
4 Subjects: including algebra 1, algebra 2, study skills, special needs
{"url":"http://www.purplemath.com/hubbardston_ma_math_tutors.php","timestamp":"2014-04-20T13:30:40Z","content_type":null,"content_length":"24049","record_id":"<urn:uuid:82a83220-e460-461e-9eac-87e9dbda4507>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00004-ip-10-147-4-33.ec2.internal.warc.gz"}
North Miami, FL Math Tutor Find a North Miami, FL Math Tutor ...My composite score places me in the top 94% of all MCAT test takers. I have taken multiple statistics courses in college, and have received A's in all those courses. I was an Economics major. 32 Subjects: including algebra 1, algebra 2, econometrics, precalculus ...I earned a ThD degree from Grace Theological Seminary in 2007, and I am currently pursuing a PhD in interdisciplinary studies with a concentration in Instructional Design and Curriculum Development. For over thirty years I have taught at the Church at Montreal Prayer Palace, Restoration Ministri... 24 Subjects: including linear algebra, differential equations, algebra 1, algebra 2 ...When students tell me "I'm not good at math" or state that they don't have "math in their DNA", my response is that they just never had someone teach them the way they want/need to learn. People aren't smart or dumb, we all just learn differently. Everyone can be really good at math, they just need people to explain things to them in a way they understand. 23 Subjects: including trigonometry, ACT Math, algebra 1, algebra 2 ...I am a graduate of University of Florida Levin College of Law and a member of the Florida Bar, with active status in good standing. My curriculum in law school included criminal law, appellate advocacy, trial practice, and legal counseling. I passed the Florida Bar Exam, which included the subjects of Criminal Law, Criminal Procedure, and Constitutional Rights of the Accused. 36 Subjects: including algebra 1, ACT Math, GED, GRE ...Also, I have a Bachelor's plus an MBA in International Business with 327 credit hours in undergraduate and graduate courses. I currently teach Mathematics at Broward College with 9 years teaching and tutoring experience. At the same time I provide support and tutor college students in the Math Lab. 
22 Subjects: including discrete math, differential equations, ACT Math, linear algebra
{"url":"http://www.purplemath.com/North_Miami_FL_Math_tutors.php","timestamp":"2014-04-16T07:27:51Z","content_type":null,"content_length":"24242","record_id":"<urn:uuid:fb20f12b-3da4-4fab-a2c1-3e9d1aaaea50>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00421-ip-10-147-4-33.ec2.internal.warc.gz"}
C, P, T (And Their Combinations)

During the 20th century, particle physicists learned that it is important to consider all the possible symmetries that the laws of nature governing elementary particles might exhibit. The presence or absence of symmetries can reveal aspects of nature that aren’t otherwise obvious. Of the many possible symmetries to consider, there are three simple ones that play a unique role: charge conjugation (C), parity (P), and time-reversal (T). These three transformations, which affect particles, space, and time, involve

• C: changing all particles to particles of opposite charge (including electric charge, but also other less familiar charges; even some neutral particles get switched. For instance, neutrinos switch to anti-neutrinos, and neutrons switch to anti-neutrons.)
• P: putting the world in a mirror (more precisely, flipping the orientation of the three directions of space)
• T: running the world backwards in time (more precisely, flipping the direction in which time evolves.)

Each of these transformations has the feature that if you do it twice, you get back to where you started. In jargon, we say P² = P × P = 1 (i.e., if you put a mirror in a mirror, what you see looks the same as if there were no mirrors at all), and similarly C² = 1 and T² = 1. Also, you can do two of these transformations together. For instance you can do C and then P, which we simply write “CP” (or you can do PC, which is the same — for these transformations, the order doesn’t matter), in which you put the world in a mirror and flip particles’ charges. You can also consider CT, PT, or even CPT. Like C, P and T themselves, any one of these combined transformations, performed twice, gives back the world you started with. Now what should you do, now that you’re thinking about these transformations?
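The little algebra just described — each transformation squares to the identity, and combinations like CP are order-independent — can be sketched concretely. The three-number "world state" below is a toy invented for this illustration, not a physical model:

```python
def C(state):
    charge, orientation, time_dir = state
    return (-charge, orientation, time_dir)   # conjugate charges

def P(state):
    charge, orientation, time_dir = state
    return (charge, -orientation, time_dir)   # mirror space

def T(state):
    charge, orientation, time_dir = state
    return (charge, orientation, -time_dir)   # reverse time

s = (+1, +1, +1)  # an arbitrary starting state

# Each transformation, done twice, is the identity: C^2 = P^2 = T^2 = 1.
assert C(C(s)) == s and P(P(s)) == s and T(T(s)) == s

# Combinations commute: CP = PC, and CPT can be built in any order.
assert C(P(s)) == P(C(s))
assert C(P(T(s))) == T(P(C(s)))
```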
The question which you should ask is this: if I imagine a world which is created from ours by making one of these transformations, do the laws of nature that govern elementary particles and forces work the same way in the transformed world as in our own? If the answer is “yes”, then everything that can happen in the new, transformed world can also happen in our own; and in this case we say this transformation is a symmetry of our world. More precisely, it is a symmetry of our world’s laws of nature. If not, well, then you can still do the transformation, but it’s not a symmetry of our world, because the world you get after the transformation differs from our own.

It’s not hard to get a feel for how parity (P) works. A particular object may or may not have symmetry under parity. As shown in Figure 1, reflecting a simple triangle in a mirror gives back a triangle which looks identical to the first one, so the triangle is symmetric under P. But the more complicated shape shown at the bottom of Figure 1 does not look the same in a mirror, so it is not symmetric under P.

Obviously the world around us is not symmetric in a mirror, as you can see in any natural photograph (see Figure 2, top.) However, we have to distinguish between the symmetry of an object and the symmetry of the laws of nature that govern all possible objects. The underlying processes of particle physics could be symmetric, which would mean that for any process that can happen in nature, the mirror image of that process could also happen (Figure 2, bottom). But in fact, the underlying processes of nature are not symmetric under P! The remarkable thing is that neither C, nor P, nor T, nor CP, nor CT, nor PT, is a symmetry of nature.
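The mirror test of Figure 1 can be phrased as a tiny algorithm: reflect every vertex of a shape across the mirror line and ask whether the same set of points comes back. A minimal sketch, with the mirror fixed along the vertical axis for simplicity:

```python
def mirror_symmetric(points):
    """True if the point set maps to itself under reflection x -> -x."""
    pts = set(points)
    return {(-x, y) for (x, y) in pts} == pts

triangle = [(-1, 0), (1, 0), (0, 1)]  # isosceles: looks the same mirrored
crooked = [(-1, 0), (1, 0), (2, 1)]   # lopsided: its mirror image differs
print(mirror_symmetric(triangle), mirror_symmetric(crooked))  # True False
```

An object can pass or fail this test; the deeper question in the article is whether the laws governing all objects pass it.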
The basic processes that physicists knew about up through the early 1900s — in particular, those involving the gravitational and electromagnetic forces, and therefore those that hold the earth together and in orbit round the sun, and those that govern the physics of atoms and molecules and all of chemistry — are in fact C, P and T symmetric. So it was quite surprising to physicists when, in the 1950s and 1960s, it was learned that the weak nuclear force violates all of these symmetries. The only one of these transformations that is still widely believed to be a symmetry of nature (for profound theoretical reasons) is CPT.

Note that if CPT is a symmetry, then CP and T must have the same effect. Since it is a symmetry, doing CPT gives you back the world you started with, but we also know that doing T twice gives you back the world you started with, so CP must do the same thing as T. The same is true for CT and P, and for PT and C.

CPT transforms particles and their interactions of our world to anti-particles and their interactions of the transformed world, and vice versa. And since in our world, every type of particle has an anti-particle [possibly itself again], and since every interaction involving various particles has an anti-interaction involving their anti-particles (so to speak), this is believed to be an exact symmetry. More specifically, in any world whose particles are governed by quantum field theory (the math used in the equations of the Standard Model, which describes all the known particles and forces), one can prove that CPT must be a symmetry. (Whether this is true of a fully unified theory [such as string theory] that combines a quantum theory of gravity with the non-gravitational forces isn’t clear; but experimentally no violations of CPT are known.)
C and P Aren’t Symmetries, Because of the Weak Nuclear Force Up to around 1950, everything physicists knew — all of chemistry and atomic physics, all the effects of gravitational and electromagnetic forces, light waves and the basics of atomic nuclei — was consistent with the world being symmetric under P. But it turns out that C and P aren’t even close to being symmetries of the laws of nature. They are violated about as much as they possibly could be, by the weak nuclear force. The simplest (but by no means only) example of this involves neutrinos. When a neutrino is created in a particle physics process, it is always produced via the weak nuclear force. And when it is produced, it always spins counter-clockwise, seen from the point of view of someone at its departure point. (Neutrinos, like electrons and protons and many other types of particles, are always, in some sense, spinning; more precisely they have angular momentum that is always present.) In other words, it spins like a left-handed screw (see Figure 3). [The jargon is that it has negative helicity --- helicity as in "helix", appropriate for a screw.] But a neutrino produced via the weak nuclear force never spins like a right-handed screw. Since P would exchange right-handed and left-handed (as you’d expect for a mirror), this means that the weak nuclear force violates P. As a more specific example (Figure 3), when a positively-charged pion (a hadron made from an up quark, an anti-down quark, and many gluons and quark/anti-quark pairs) decays to an anti-muon and a neutrino, the neutrino is always left-handed and never right-handed. That violates P. And meanwhile, when a negatively-charged pion decays to a muon and an anti-neutrino, the anti-neutrino is always right-handed. This difference between the processes involving negatively and positively charged pions violates C. This type of P and C violation is now very well-understood. 
The Standard Model (the equations we use to describe all the known particles and forces) incorporates it very naturally (see here for some discussion), and the details of its equations have been tested very thoroughly in experiments. So while the violation of P and C was a big surprise in the 1950s, today it is now a standard part of particle physics. However, if we simply look at the particles themselves (and not in detail at how they interact with one another), CP (which is the same as PC) does at first appear to be a symmetry. That’s because P flips the spin of the neutrino from left-handed to right-handed, but C flips the charge of the pion particle, turns the anti-muon into a muon, and replaces the neutrino with an anti-neutrino; and the resulting process does occur in our world (see Figure 4). So for a brief period, physicists thought the weak nuclear force would preserve CP, even though it maximally violates C and P separately. [Another way to see this is to look at my article on what the particles would be like if the Higgs field were zero. There you see that there are, for instance, electron-left and neutrino-left particles which come together in a pair, and are affected by the weak isospin force, while the electron-right particle comes separately from the neutrino-right particle, and neither is affected by the weak isospin force. Meanwhile what is true for the electron-left is true for the positron-right, and what is true for the positron-right is true for the electron-left. But P exchanges the electron-left and the electron-right, so clearly it is not a symmetry; C exchanges the electron-left and the positron-left, and since the positron-left is not affected by the weak-force, C is also not a symmetry. Note CP, however, exchanges the electron-left and the positron-right, both of which are affected by the weak nuclear force.] 
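The pattern just described — P and C each turn an observed pion decay into an unobserved one, while CP turns it into another observed one — can be written out as a toy check. The process labels below are informal shorthand invented for this sketch:

```python
# A decay is labeled (parent charge, outgoing neutrino kind, helicity).
OBSERVED = {
    ("pi+", "neutrino", "left"),        # pi+ -> anti-muon + left-handed nu
    ("pi-", "anti-neutrino", "right"),  # pi- -> muon + right-handed anti-nu
}

def P(proc):
    parent, nu, hel = proc
    flip = {"left": "right", "right": "left"}
    return (parent, nu, flip[hel])      # mirror flips helicity only

def C(proc):
    parent, nu, hel = proc
    conj = {"pi+": "pi-", "pi-": "pi+",
            "neutrino": "anti-neutrino", "anti-neutrino": "neutrino"}
    return (conj[parent], conj[nu], hel)  # conjugate charges only

decay = ("pi+", "neutrino", "left")
assert P(decay) not in OBSERVED     # P violated: no right-handed neutrino
assert C(decay) not in OBSERVED     # C violated: no left-handed anti-neutrino
assert C(P(decay)) in OBSERVED      # CP maps an observed process to another
```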
CP Also is Not a Symmetry But it turned out, as was learned in the 1960s, that CP is also violated in the weak nuclear interactions. Again this was a surprise, one that we understand well but are still studying today. Here’s the basic story. Most hadrons [particles made from quarks, anti-quarks and gluons] decay almost instantaneously via the strong nuclear force, in times shorter than a trillionth of a trillionth of a second. One hadron, the proton, is stable; the neutron, on its own, lives about 15 minutes. (Atomic nuclei, made from protons and neutrons, are themselves sometimes called hadrons, but I personally prefer to call them “collections of hadrons.”) But a number of hadrons, historically and even practically of great importance, have short but not so short lifetimes — anywhere from a billionth of a trillionth of a second to a billionth of a second — and for most of them, their decay is induced by the weak nuclear force (while a few others decay via the electromagnetic force.) As I’ll describe elsewhere, some of them – especially mesons that contain one bottom quark or one bottom anti-quark — have one or more decays that have been measured to violate CP. (There are other signs of CP violation in oscillations between two hadrons, similar to oscillations that happen in neutrinos.) This type of CP violation is very interesting because it occurs naturally if there are three or more “flavors” or “generations” of up-type quarks (up, charm and top) and three flavors of down-type quarks (down, strange and bottom). As Kobayashi and Maskawa pointed out, a version of the Standard Model with only two generations could not have this type of CP violation; there would need to be some entirely new source for it. 
Since they observed this back before any particles from the third generation were discovered, they essentially predicted there should be a third generation, and for this they were consequently awarded the 2008 Nobel Prize in Physics (along with Nambu, for his extensive work on other subjects.) So far, there are no signs of CP violation from other sources than the one Kobayashi and Maskawa identified. But if there are particles and forces beyond those we know in the Standard Model, there may well be more places to see effects of CP violation. However, even within the Standard Model, there’s still one very big puzzle…

The Strong Nuclear Force and CP

Very surprisingly, CP is not significantly violated by the strong nuclear force, and no one knows why. We know the strong nuclear force does not violate CP symmetry very much because of a certain property of the neutron, called an “electric dipole moment”. The neutron is an electrically neutral hadron, very similar to a proton. The quarks, anti-quarks and gluons which make up the neutron are held together by the strong nuclear force. Now, an interesting question you can ask about any electrically neutral object is whether it has an electric dipole. A magnet such as you played with as a child is a magnetic dipole, with a north pole and a south pole (see Figure 5.) A magnetic monopole would be either a north pole or a south pole, on its own; you’ve never seen one, and neither has anyone else. Meanwhile, an electric dipole has total electric charge zero but has a positive charge on one side and a negative charge on the other. This could be as simple as a hydrogen atom, with an electron as a negative charge and a proton as a positive charge. For a simple electric dipole consisting of two charges a distance D apart, one with charge q and one with charge -q, the electric dipole moment is simply defined to be q × D.
Notice that if the positive and negative charges sit right on top of each other, then this object has no dipole moment; the charges have to be separated in space to be “polarized”. A hydrogen atom normally isn’t polarized. But many molecules have a dipole moment, even though they are electrically neutral. For example, a water molecule H2O has a dipole moment equal to 3.9 × 10^-8 e cm, where “e” is the charge of a proton (-e the charge of an electron), and “cm” is 1 centimeter. For comparison, this is just a little bit smaller than you’d get if you separated an electron and a proton by a distance that is about the size of a water molecule. (If you did that, the resulting dipole would have a dipole moment of about 9 × 10^-8 e cm.) This is telling you that the electrons on the two hydrogen atoms in H2O are spending a lot of their time over with the oxygen atom.

Now, how big would you expect the dipole moment of a neutron to be? Well, the neutron has a radius of about 10^-13 cm, so you’d expect D should be about that size. And it consists of quarks, anti-quarks and gluons; the gluons are electrically neutral, but the quarks and anti-quarks have electric charges: 2/3 e (up quarks), -1/3 e (down quarks), -2/3 e (up anti-quarks) and +1/3 e (down anti-quarks). So you might expect q to be about that size. So you’d expect the neutron to have an electric dipole moment with a size in the vicinity of 10^-13 e cm. That’s about a million times smaller than the dipole moment of a water molecule, mainly since the radius of a neutron is a million times smaller. Actually there are some subtle effects which make a more accurate estimate a little smaller. The real expectation is about 10^-15 e cm.

But if the neutron had an electric dipole moment, this would violate T, and therefore CP, if CPT is even an approximate symmetry. (It also violates P.) So if CP and CPT were exact symmetries, then the electric dipole of the neutron would have to be exactly zero.
Of course we already know that CP is not an exact symmetry; it’s violated by the weak nuclear force. But the weak force is so weak (at least as far as it affects neutrons, anyway) that it can only give the neutron an electric dipole moment of about 10^-32 e cm. That’s far smaller than anyone can measure! So it might as well, for current purposes, be zero. But if the strong nuclear force, which holds the neutron together, violates CP, then we’d expect to see an electric dipole moment of 10^-15 e cm or so. Yet experiment shows that the neutron’s electric dipole moment is less than 3 × 10^-26 e cm!! That’s over ten thousand million times smaller than expected. And so the strong nuclear force does not violate CP as much as naively expected.

Why is it so much smaller than expected? No one knows, though there have been various speculations. This puzzle is called the strong CP problem, and it is one of the three greatest problems plaguing the general realm of particle physics, the others being the hierarchy problem and the cosmological constant problem. Specifically, the problem is this. When one writes down the theory of the strong nuclear force — the equations for gluons, quarks and anti-quarks called “QCD” — the equations have various parameters:

• the overall coupling strength of the strong nuclear force
• the masses of the various quarks
• the theta angle, which does not affect any Feynman diagrams, but nevertheless determines the effects of certain subtle processes [quantum tunneling, with the buzzwords "instantons" or "pseudoparticles"] in the physics of gluons

Huh?! What’s that last one?? Well, this additional parameter of QCD was discovered in the 1970s (and is one of the contexts in which Polyakov, who won a pri$e recently, is famous.) The issue is too technical to explain here, but suffice it to say that if the theta angle isn’t equal to 0 or π, then the strong nuclear force violates CP.
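The size of the discrepancy quoted above is easy to verify. A quick sketch with the order-of-magnitude numbers from this section:

```python
# Orders of magnitude quoted in the text, in units of e*cm.
naive_estimate = 1e-15      # expected neutron EDM if strong CP were violated
experimental_bound = 3e-26  # measured upper limit on the neutron EDM
weak_prediction = 1e-32     # tiny contribution from the weak interactions

ratio = naive_estimate / experimental_bound
print(f"{ratio:.1e}")  # 3.3e+10 -- over ten thousand million, as stated

# The weak-force contribution is far below what experiments can see.
assert weak_prediction < experimental_bound
```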
More precisely, and more disturbingly, it is a certain combination of the theta angle and the masses of the various quarks [specifically, the product of the complex phases of the masses] that violates CP. And these two things (the theta angle and the quark masses) are not obviously related to each other — so how can they combine to cancel perfectly? Yet for some reason this combination is zero, or at least ten billion times smaller than it could have been. There’s no obvious reason why. Solutions to this 35-year-old puzzle include the following:

• maybe the up quark is massless (this is a very difficult thing to check, because there’s no direct way to measure its mass; indirect methods have long suggested it has a mass a few times larger than that of the electron, but there are subtleties that make these methods difficult to interpret with complete confidence.)
• maybe there is a field called the axion field, which removes this effect; a prediction of this idea is the existence of an axion particle, which has been sought for over 30 years but hasn’t been found up to now. The axion could also serve as the universe’s dark matter, by the way.

There are a couple of other possible known solutions, but I won’t cover them here; generally they don’t have a near-term experimental consequence, unfortunately.

What does CP not being a symmetry have to do with the fact that there is more matter than anti-matter in the universe? Do right-handed neutrinos and left-handed anti-neutrinos exist at all? Coming soon…

4 responses to “C, P, T (And Their Combinations)”

1. Thanks for this. Concepts around C, P, and T are somewhat more accessible for me now. As an aside, I was struck by the diagram discussing magnetic and electric dipoles and the fact that magnetic monopoles do not exist (maybe). Is there an analogy here with quark confinement, given that if you break a magnet to separate the N and S poles you just get two separate dipoles? Do quarks behave similarly?

2.
According to Relativity, there is a symmetry between space and time. Therefore, C and P violations cannot be distinguished from a T violation. So maybe we truly have a time reversal asymmetry instead of a CP violation, even if these violations are relatively well understood. CP seems to be violated because T is not supposed to be violated… :o) A T violation will produce a C, a P or a CP violation. At least it is a possibility!
PS.: Your blog is one of the best in terms of clarity, reliability and interest among physics blogs.
PS2: (you can erase “one of”)

3. Fantastic post, very informative. I’m wondering why the opposite specialists of this sector don’t realize this. You should continue your writing. I’m confident you have a great readers’ base already!

4. What is a gauge symmetry? I have a disused PhD in astrophysics, but my specialty was computational MHD. My courses never had time for particle physics, general-audience material is mostly useless — when you’re asked to take absolutely everything on faith, it’s hard to believe any of it — and the only semi-technical introduction I ever found was Feynman’s QED. But obviously that’s 30 years out of date, and doesn’t have much to say about QCD or the Higgs field to begin with. I’m very glad to have found your site. It’s occupying an important void, I think.
{"url":"http://profmattstrassler.com/articles-and-posts/particle-physics-basics/c-p-t-and-their-combinations/","timestamp":"2014-04-19T14:47:19Z","content_type":null,"content_length":"122185","record_id":"<urn:uuid:3b9fb9a8-fc3b-4fa3-a3f9-90dd622ac2db>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00004-ip-10-147-4-33.ec2.internal.warc.gz"}
Sierra Madre Trigonometry Tutor Find a Sierra Madre Trigonometry Tutor ...I have come up with many different tricks for helping students remember key ideas in math and the biggest compliment I received was when one client told me she was showing her class my "wedding cake" trick (one that I use for factoring) because they were all coming to her for help. I am not some... 11 Subjects: including trigonometry, physics, geometry, algebra 1 ...My interests range from the sciences to humanities, focusing especially on math, physics, and art history. I am currently focusing on research, and planning to attend a graduate program for a PhD in art history. For the past three summers, I have worked as the lead teaching assistant for a math course for incoming freshmen at Caltech. 51 Subjects: including trigonometry, reading, chemistry, calculus ...As a young female, I believe I can connect to my students in ways that other people cannot. I can develop a friendship and maintain a professional position at the same time. I believe in all different types of learning, including visual and practical. 19 Subjects: including trigonometry, Spanish, geometry, biology ...The most important thing in the process of solving a exercise is to understand concepts instead of memorizing equations. Students of math must know what they are doing, how to do the exercise, and why they succeeded or didn't quite understand what to do. I am willing to help those struggling wi... 14 Subjects: including trigonometry, Spanish, geometry, piano Hello! My name is David, and I hope to be the tutor you are looking for. I have over 5 years of tutoring experience in all math subjects, including Algebra, Geometry, Trigonometry, Pre-Calculus, Calculus, Probability and Statistics. 14 Subjects: including trigonometry, calculus, physics, geometry
{"url":"http://www.purplemath.com/Sierra_Madre_Trigonometry_tutors.php","timestamp":"2014-04-19T15:03:18Z","content_type":null,"content_length":"24206","record_id":"<urn:uuid:e4b98ec2-aeb3-40e9-97e7-378b64bcab11>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00662-ip-10-147-4-33.ec2.internal.warc.gz"}