content
stringlengths
86
994k
meta
stringlengths
288
619
This Article Bibliographic References Add to: ASCII Text x M.Y. Chan, F.Y.L. Chin, "Optimal Resilient Distributed Algorithms for Ring Election," IEEE Transactions on Parallel and Distributed Systems, vol. 4, no. 4, pp. 475-480, April, 1993. BibTex x @article{ 10.1109/71.219762, author = {M.Y. Chan and F.Y.L. Chin}, title = {Optimal Resilient Distributed Algorithms for Ring Election}, journal ={IEEE Transactions on Parallel and Distributed Systems}, volume = {4}, number = {4}, issn = {1045-9219}, year = {1993}, pages = {475-480}, doi = {http://doi.ieeecomputersociety.org/10.1109/71.219762}, publisher = {IEEE Computer Society}, address = {Los Alamitos, CA, USA}, RefWorks Procite/RefMan/Endnote x TY - JOUR JO - IEEE Transactions on Parallel and Distributed Systems TI - Optimal Resilient Distributed Algorithms for Ring Election IS - 4 SN - 1045-9219 EPD - 475-480 A1 - M.Y. Chan, A1 - F.Y.L. Chin, PY - 1993 KW - Index Termsmessage complexity; distributed algorithms; ring election; dynamic ring; processorrecoveries; computational complexity; multiprocessing systems; system recovery VL - 4 JA - IEEE Transactions on Parallel and Distributed Systems ER - The problem of electing a leader in a dynamic ring in which processors are permitted tofail and recover during election is discussed. It is shown that theta (n log n+k/sub r/)messages, counting only messages sent by functional processors, are necessary andsufficient for dynamic ring election, where k/sub r/ is the number of processor recoveriesexperienced. [1] H. L. Bodlaender and J. van Leeuwen, "New upperbounds for decentralized extrema-finding in a ring of processors," Tech. Rep. RUU-CS-85-15, Comput. Sci. Dep., Rijksuniversiteit Utrecht, Netherlands, 1985. [2] J. E. Burns, "A formal model for message passing systems," Tech. Rep. 91, Comput. Sci. Dep., Indiana Univ., Bloomington, IN, 1980. [3] E. Chang and R. Roberts, "An improved algorithm for decentralized extrema-finding in circular configurations of processes,"Commun. ACM, vol. 22, no. 5, pp. 281-283, 1979. [4] D. Dolev, M. Klawe, and M. Rodeh, "An O(n log n) unidirectional distributed algorithm for extrema finding in a circle,"J. Algorithms, vol. 3, pp. 245-260, 1982. [5] R.E. Filman and D.P. Friedman,Coordinated Computing: Tools and Techniques for Distributed Software. New York: McGraw-Hill, 1984, pp. 79-81. [6] W. R. Franklin, "On an improved algorithm for decentralized extrema finding in circular configurations of processors,"Commun. ACM, vol. 25, pp. 336-337, 1982. [7] G. N. Frederickson and N. A. Lynch, "The impact of synchronous communication on the problem of electing a leader in a ring," inProc. 16th Annu. ACM Symp. Theory Comput., Washington, DC, 1984, pp. [8] O. Goldreich and L. Shrira, "The effect of link failures on computations in asynchronous rings," inProc. 5th ACM Symp. Principles Distributed Comput., Calgary, Alta., Canada, Aug. 1986, pp. 174- [9] D. S. Hirschberg and J. B. Sinclair, "Decentralized extrema-finding in circular configurations of processors,"Commun. ACM, vol. 23, no. 11, pp. 627-628, 1980. [10] A. Itai and M. Rodeh, "Symmetry breaking in distributive networks," inProc. 22nd IEEE Symp. Foundations Comput. Sci., Oct. 1981, pp. 150-158. [11] E. Korach, D. Rotem, and N. Santoro, "Distributed election in a circle without a global sense of orientation,"Int. J. Comput. Math., vol. 14, 1984. [12] G. LeLann, "Distributed systems - Towards a formal approach,"Information Processing 77. New York: Elsevier Science, 1977, pp. 155-160. [13] J. Martin,Local Area Networks - Architectures and Implementations. Englewood Cliffs, NJ: Prentice-Hall, 1989. [14] S. Moran, M. Shalom, and S. Zaks, "A 1.44...nlogn algorithm for distributed leader finding in bidirectional rings of processors," Tech. Rep. 389, Comput. Sci. Dep., Technion, Nov. 1985. [15] J. Pachl, E. Korach, and D. Rotem, "Lower bounds for distributed maximum-finding algorithms,"J. ACM, vol. 31, pp. 905-918, 1984. [16] G. L. Peterson, "An O(n log n) unidirectional algorithm for the circular extrema problem,"ACM Trans. Programming Languages Syst., vol. 4, pp. 758-762, 1982. [17] D. Rotem, E. Korach, and N. Santoro, "Analysis of a distributed algorithm for extrema finding in a ring," Tech. Rep. SCS-TR-61, School of Comput. Sci., Carleton Univ., Aug. 1984. [18] P. M. B. Vitanyi, "Distributed election in an Archimedean ring of processors," inProc. 16th Annu. ACM Symp. Theory Comput., Washington, DC, 1984, pp. 542-547. Index Terms: Index Termsmessage complexity; distributed algorithms; ring election; dynamic ring; processorrecoveries; computational complexity; multiprocessing systems; system recovery M.Y. Chan, F.Y.L. Chin, "Optimal Resilient Distributed Algorithms for Ring Election," IEEE Transactions on Parallel and Distributed Systems, vol. 4, no. 4, pp. 475-480, April 1993, doi:10.1109/ Usage of this product signifies your acceptance of the Terms of Use
{"url":"http://www.computer.org/csdl/trans/td/1993/04/l0475-abs.html","timestamp":"2014-04-18T16:30:20Z","content_type":null,"content_length":"52604","record_id":"<urn:uuid:db1a2904-c68c-400a-aad3-e275406dd917>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00011-ip-10-147-4-33.ec2.internal.warc.gz"}
Word Problem Database Percent Word Problems - Level 1 1. Cameron bought ice skates that were on sale for 15% off the usual price. If the ice skates usually cost $75, what is the sale price? 2. Scott bought lunch for his friends at the Happy Hamburger. The total bill was $24. Scott decided to leave a 15% tip for the waiter. How much was the tip? 3. The Flying Eagles football team won 18 out of 24 games Assuming no ties, what percent of the games did they lose? 4. There are 540 students at Sunny Acres Middle School. 432 students ride the bus to school. What percentage of the students do not ride the bus? 5. 60 students participated in a survey about pets. 27 have a dog, 15 have a pet cat, and 18 students have no pets at all. What percentage of the students have a pet?
{"url":"http://www.mathplayground.com/wpdatabase/Percent1_2.htm","timestamp":"2014-04-17T01:13:37Z","content_type":null,"content_length":"50467","record_id":"<urn:uuid:9ef9dadb-87e3-4c1a-a15d-66ccf1a81e0f>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00484-ip-10-147-4-33.ec2.internal.warc.gz"}
Number Operations and Number Sense Approximately 20%-30% of the questions on the GED Mathematics Test assess knowledge of basic number operations and number sense. This includes such things as: • Representing and using whole numbers, decimals, fractions, percents, ratios, proportions, exponents, roots, and scientific notation. • Recognizing equivalencies and order relations (for example, largest to smallest). • electing the appropriate operations to solve problems (for example, When should I divide?). • Calculating with mental math, pencil and paper, and a scientific calculator. • Using estimation to solve problems and assess the reasonableness of an answer. Mental math and estimation skills are important in not only number operations and number sense, but also in the other areas of the GED Mathematics Test. When you estimate, you use your number sense to find an answer or to check the reasonableness of an answer. This will require that you teach students to round numbers up and down, as well as to identify acceptable ranges when estimating. How can you incorporate problem-solving and estimation skills? Let’s take a look.
{"url":"http://www.ket.org/gedtestinfo/math/math_23.htm","timestamp":"2014-04-17T01:57:55Z","content_type":null,"content_length":"12742","record_id":"<urn:uuid:303647ae-7473-42b3-b001-02d2e51ffa2a>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00532-ip-10-147-4-33.ec2.internal.warc.gz"}
Generalized refractive tunable-focus lens and its imaging characteristics « journal navigation Generalized refractive tunable-focus lens and its imaging characteristics Optics Express, Vol. 18, Issue 9, pp. 9034-9047 (2010) Conventional lenses made from optical glass or plastics have fixed properties (e.g. focal length) that depend on the index of refraction and geometrical parameters of the lens. We present an approach to the problem of calculation of basic paraxial parameters and the third order aberration coefficients of compound optical elements analogical to classical lenses which are based on refractive tunable-focus lenses. A detailed theoretical analysis is performed for a simple tunable-focus lens, a generalized tunable-focus lens, a generalized tunable-focus lens with minimum spherical aberration, and three-element tunable-focus lens (a tunable-focus doublet). © 2010 OSA 1. Introduction Recently the first types of tunable-focus lenses with variable optical parameters appeared on the market [ ] that give a possibility to design optical systems, which have no analogy in classical systems. The advantage of these active lenses is their capability to change continuously the focal length within a certain range. Using several tunable-focus lenses one can build optical systems which change their parameters (focal length, magnification, etc.) in a continuous way without a need for changing their mutual position. Such lenses with a tunable focal length in a wide range and lens type convertibility make possible to design optical systems with functions that are difficult to combine using conventional approaches. A novel design of lens systems with tunable-focus lenses is promising for future, especially due to a possibility for size reduction, a lower complexity and costs, better robustness, and a faster adjustment of optical parameters of such systems. Different types of either refractive or diffractive tunable lenses with variable focal lengths were developed in recent years [ 18. R. Marks, D. L. Mathine, G. Peyman, J. Schwiegerling, and N. Peyghambarian, “Adjustable fluidic lenses for ophthalmic corrections,” Opt. Lett. 34(4), 515–517 (2009). [CrossRef] [PubMed] ] and some of them are offered commercially nowadays [ ]. The technology of tunable-focus lenses is inspired with an active change of optical parameters of the human eye. Several different technical approaches were developed for controlling the focal length of lenses. Tunable-focus lenses can use the principle of electrowetting [ 3. B. Berge and J. Peseux, “Variable focal lens controlled by an external voltage: An application of electrowetting,” Eur. Phys. J. E 3(2), 159–163 (2000). [CrossRef] 8. R. Peng, J. Chen, and S. Zhuang, “Electrowetting-actuated zoom lens with spherical-interface liquid lenses,” J. Opt. Soc. Am. A 25(11), 2644–2650 (2008). [CrossRef] ], the controlled injection of fluid into chambers with deformable membranes [ 9. D. Y. Zhang, N. Justis, and Y. H. Lo, “Fluidic adaptive zoom lens with high zoom ratio and widely tunable field of view,” Opt. Commun. 249(1-3), 175–182 (2005). [CrossRef] 11. H. W. Ren and S. T. Wu, “Variable-focus liquid lens,” Opt. Express 15(10), 5931–5936 (2007). [CrossRef] [PubMed] ], thermooptical or electroactive polymers [ 12. G. Beadie, M. L. Sandrock, M. J. Wiggins, R. S. Lepkowicz, J. S. Shirk, M. Ponting, Y. Yang, T. Kazmierczak, A. Hiltner, and E. Baer, “Tunable polymer lens,” Opt. Express 16(16), 11847–11857 (2008). [CrossRef] [PubMed] ], or voltage-controlled liquid crystals as active optical elements [ 13. A. F. Naumov, G. D. Love, M. Y. Loktev, and F. L. Vladimirov, “Control optimization of spherical modal liquid crystal lenses,” Opt. Express 4(9), 344–352 (1999). [CrossRef] [PubMed] 17. P. Valley, D. L. Mathine, M. R. Dodge, J. Schwiegerling, G. Peyman, and N. Peyghambarian, “Tunable-focus flat liquid-crystal diffractive lens,” Opt. Lett. 35(3), 336–338 (2010). [CrossRef] ]. The development of tunable-focus lenses is of great importance for a number of practical applications, ranging from adaptive eyeglasses for vision correction [ 18. R. Marks, D. L. Mathine, G. Peyman, J. Schwiegerling, and N. Peyghambarian, “Adjustable fluidic lenses for ophthalmic corrections,” Opt. Lett. 34(4), 515–517 (2009). [CrossRef] [PubMed] ] to fast and miniaturized zooming devices in various cameras, camcorders, and mobile phones [ 19. F. C. Wippermann, P. Schreiber, A. Bräuer, and P. Craen, “Bifocal liquid lens zoom objective for mobile phone applications,” SPIE Proc. 6501, 650109 (2007). [CrossRef] 21. B. H. W. Hendriks, S. Kuiper, M. A. J. van As, C. A. Renders, and T. W. Tukker, “Variable liquid lenses for electronic products,” Proc. SPIE 6034, 603402 (2006). [CrossRef] In this work we focused on analysis of refractive tunable-focus lenses that can be fabricated, for example, using two liquids and electrowetting phenomena, in which an electrically induced change in surface-tension changes the surface curvature of liquid [ 3. B. Berge and J. Peseux, “Variable focal lens controlled by an external voltage: An application of electrowetting,” Eur. Phys. J. E 3(2), 159–163 (2000). [CrossRef] ]. Adjusting the shape of the surface between two immiscible liquids can be used for forming a positive or negative lens. Optical power, shape and material are fundamental optical parameters of the lens which affects its imaging properties [ ]. Aberrations are essential factors which affect the image quality of the lens. Thus, it is important for designing optical systems composed of tunable-focus lenses to analyze paraxial imaging properties and aberrations of such lenses. Only a few papers address imaging properties and aberration analysis of tunable-focus lenses and their systems [ 27. S. Reichelt and H. Zappe, “Design of spherically corrected, achromatic variable-focus liquid lenses,” Opt. Express 15(21), 14146–14154 (2007). [CrossRef] [PubMed] 29. Z. Wang, Y. Xu, and Y. Zhao, “Aberration analyses of liquid zooming lenses without moving parts,” Opt. Commun. 275(1), 22–26 (2007). [CrossRef] The purpose of this work is to show a possible approach for the calculation of fundamental paraxial properties and the third order aberration coefficients of refractive tunable-focus lenses and their combinations into more complex optical systems analogical to classical lenses. We perform a detailed theoretical analysis of different optical elements based on refractive tunable-focus lenses composed of two immiscible liquids with an interface of a variable curvature. The calculation of aberrations and parameters of these elements is presented on several examples. The provided analysis may serve for the initial design of non-conventional optical systems using refractive tunable-focus lenses. 2. Basic formulas for calculation of parameters of refractive tunable-focus lenses From the optical and technological point of view a simple refractive tunable-focus lens can be most easily designed as an optical system consisting of three optical surfaces, whereas the first and the last surface is planar, and the inner surface has a spherical shape with an adjustable curvature. Such tunable-focus lenses can be fabricated, for example, using two immiscible liquids and electrowetting phenomena, in which an electrically induced change in surface-tension changes the surface curvature of liquid [ 3. B. Berge and J. Peseux, “Variable focal lens controlled by an external voltage: An application of electrowetting,” Eur. Phys. J. E 3(2), 159–163 (2000). [CrossRef] ]. Adjusting the shape of the surface between two immiscible liquids can be used for forming an optical lens. A change in curvature of this inner surface between two liquids by electrowetting leads to a change in the focal distance of the lens. Further, we will not concern with a detail technical realization of tunable-focus lenses. Several variants of refractive tunable-focus lenses were described in literature [ 16. M. Ye, M. Noguchi, B. Wang, and S. Sato, “Zoom lens system without moving elements realised using liquid crystal lenses,” Electron. Lett. 45(12), 646–648 (2009). [CrossRef] ] and some of them are being fabricated commercially [ ]. Such lenses can be used in various interesting applications in practice. We will focus our analysis on a model of fluidic tunable-focus lenses. We do not consider a thickness and material of thin covering planparallel plates which are used in fluidic lenses for separation of liquids from the surrounding media. In the following analysis performed in this work we will deal mostly with an optical design using a thin lens approximation, where we can neglect the influence of thin covering plates and the thickness of lenses. The problem of replacing a thin lens by a thick lens is treated in Ref [ 30. M. Herzberger, “Replacing a thin lens by a thick lens,” J. Opt. Soc. Am. 34(2), 114–115 (1944). [CrossRef] ]. An optical scheme of the simple tunable-focus lens is shown in Fig. 1 The following relations hold for raytracing the paraxial aperture ray through the optical system having surfaces [ is the paraxial angle of the aperture ray incident at -th surface of the optical system, is the paraxial angle of the aperture ray refracted at -th surface, is the index of refraction in front of the -th surface, is the index of refraction behind the -th surface, is the incident height of paraxial aperture ray at -th surface, is the radius of curvature of -th surface, is the axial distance of the vertex of the -th surface and the vertex of ( +1)-st surface, is the distance of the axial point of the object, which is formed by the part of the optical system in front of the -th surface, from -th surface, is the image distance of the axial point of the object, which is formed by first surfaces, from the -th surface of the optical system. The transverse magnification is given by the formulaNow, consider imaging of the object at infinity ( ). We obtain using Eq. (1) where φ is the optical power of the tunable-focus lens. We can derive for the focal length and positions of the image and object focal points the following formulas Equations (4) make possible to calculate fundamental paraxial parameters of the tunable-focus lens. Consider imaging of the point A in the distance from the first surface of the tunable-focus lens, then the image A' is situated in the distance from the last surface of the lens. We obtain using Eq. (1) is the transverse magnification. Equations (5) enable to calculate ' for a given value of the transverse magnification . Further, it holds the following image equation [ is the distance of point A from the object focal point F, and ' is the distance of point A' from the image focal point F'. 3. Third order aberrations of tunable-focus lenses Aberrations are essential factors which affect the image quality of the lens. Thus, it is very important for designing optical systems composed of tunable-focus lenses to know aberrations of such lenses. Consider that the system of refractive tunable-focus lenses is a rotationally symmetric ( Fig. 2 ) consisting of spherical surfaces. In case we know radii of curvature of lenses, their thicknesses, indices of refraction and distances between individual lenses we can simply calculate aberration coefficients of the third order [ ]. Firstly, we calculate two paraxial (auxiliary) rays through the optical system, namely a paraxial aperture ray and paraxial principal ray. The following relations are valid for raytracing the paraxial principal ray through the optical system having is the paraxial angle of the principal ray incident at -th surface, is the paraxial angle of the principal ray refracted at -th surface, is the incident height of paraxial principal ray at i-th surface, is the distance of the image of the entrance pupil, which is formed by the part of the optical system in front of the -th surface, from -th surface, is the distance of the image of the entrance pupil, which is formed by first surfaces, from -th surface of the optical system. The meaning of other symbols is the same as in the case of the paraxial aperture ray. The angular magnification in pupils of the optical system can be expressed as as coordinates of the intersection of the ray with the entrance pupil plane, s as the distance of the object plane from the first surface of the optical system, as the distance of the entrance pupil from the first surface of the optical system, and as the object height, then transversal ray aberrations of the third order of the rotationally symmetric optical system can be calculated from [ where aberration coefficients of the third order can be expressed for the centered optical system of spherical surfaces as [ (h¯1σ1−h1σ¯1)=n′K(h¯Kσ′K−hKσ¯′K)=konst . In previous relations we denoted and similarly for other differences. Furthermore, it holds Individual aberration coefficients of the third order have the following meaning: S [I] is the coefficient of spherical aberration, S [II] is the coefficient of coma, S [III] is the coefficient of astigmatism, S [IV] is Petzval coefficient, and S [V] is the coefficient of distortion. The quantity H is the Lagrange-Helmholtz invariant. It is evident from previous equations that one can use an arbitrary choice of input parameters (h1,σ1=h1/s1,h¯1,σ¯1=h¯1/s¯1) for the calculation of the third order aberration coefficients for a given object distance s1 and a position s¯1 of the entrance We can apply above-mentioned formulas on an optical element (lens) with a variable focal length which consists of three surfaces. The outer surfaces are planar ( ) and the inner surface is a spherical surface with the radius which can be changed in a continuous way. Figure 1 presents an optical scheme of such lens. Using Eq. (9) we obtain after a time-consuming calculation for aberration coefficients of the third order of thin tunable-focus lens ( d [1] = 0, d [2] = 0) in air ( n [1] n [4] =1) the following formulaswhereFunctions , and can be expressed aswhere . As one can see from Eqs. (10 ) we expressed the third order aberration coefficients using three parameters, which depend only on refractive indices of fluids forming the tunable-focus lens and do not depend on the optical power φ of the lens. The graph which presents the dependence of functions ) and ) on parameters (refractive indices) is shown in Fig. 3 for the value = 1.38. Assume now a refractive rotationally symmetric aspheric surface of the second order. The formula of the meridian of a general surface of the second order is within the scope of the accuracy of the third-order aberration theory given by [ are the coordinates of an arbitrary point of the lens surface meridian, is the radius of curvature on the optical axis, and is the aspheric coefficient that characterizes the shape of the aspheric surface. We can determine the type of the curve by the value of the coefficient . The curve represents hyperbola, if , parabola, if , ellipse, if ≠ 0 or circle, if = 0. If the inner surface tunable-focus lens is aspheric then we must replace the variable Eq. (11) by the following expressionWe can determine aberration coefficients of the third order of a tunable-focus lens using Eqs. (10 ) for an arbitrary value of optical power φ and position of the object plane . We can write for the entering paraxial aperture angle and the exiting paraxial aperture angle Fig. 2 ), where is the power of the lens. We obtain for a system of K tunable-focus lenseswhere = I, II, III, IV, V) denotes the aberration coefficient of the -th element of the optical system. We can provide calculations of the third-order aberration coefficients of an arbitrary optical system of thin tunable-focus lenses using Eq. (8) Eqs. (10 ). The provided analysis may serve for the initial design of optical systems, and the calculated parameters can be used for its further optimization using optical design software. Chromatic aberrations can be simply calculated by substitution of refractive indices for corresponding wavelengths into Eqs. (8) . The error due to neglecting the finite thickness of lenses is relatively small because the change of aberration coefficients with respect to the lens thickness is approximately few percents. 4. Imaging properties of generalized tunable-focus lens We derived formulas for a simple refractive tunable-focus lens in the previous text. Now we focus on optical systems designed using several tunable-focus lenses. Optical systems in practice are always composed of several lenses. Every spherical lens is characterized by its radii of curvature of optical surfaces and index of refraction of the optical material. We will deal with an analogy of a classical lens using tunable-focus lenses. One has to use two simple tunable-focus lenses in air (a generalized tunable-focus lens to obtain the element similar to a classical simple spherical lens. As we can see form Fig. 4 it holds that . We obtain using Eq. (4) for powers and the distance of the object principal plane of the second lens from the image principal plane of the first lensWe can derive for the optical power φ, the position of the image focal point and the position of the object focal point It holds for imaging of the point A by a thin ( ) generalized tunable-focus lens ( Fig. 5 )where we set . Generally, we can write for a system of thin simple tunable-focus lenses in contact in air ( )We will focus on aberration properties of a generalized thin tunable-focus lens. The first tunable-focus lens is described by parametersThe second lens is described by parametersA generalized tunable-focus lens ( Fig. 4 ) can be practically realized from two simple tunable-focus lenses ( Fig. 1 ) which are appropriately mutually oriented. Two ways exist of orientation of the second simple tunable-focus lens with respect to the first tunable-focus lens:As one can see the relations for calculation of aberration coefficients will be simpler in the first case. The second case is similar to “a classical lens”. We obtain for the object at infinity ( ) using Eq. (11) Eq. (12) If we choose a position of the entrance pupil identical with a generalized thin tunable-focus lens ( ), we can express aberration coefficients using Eq. (10) asNow, we require a minimization of spherical aberration of a generalized tunable-focus lens. From necessary condition for the extremum ( ) we obtain using Eq. (19) for the power the following equationBy the previous quadratic equation we can calculate the power of the first lens for the case where the generalized thin tunable-focus lens has minimum spherical aberration. 6. Imaging properties of three thin tunable-focus lenses A cemented doublet appears either as an individual optical system (telescopes, collimators, etc.) or as a part of complex optical systems very frequently in practice. The cemented doublet has three surfaces of different curvature. In case we want to construct its analogy using tunable-focus lenses we have to use three simple tunable-focus lenses (a tunable-focus doublet). We obtain for the object at infinityUsing Eq. (11) Eq. (12) we can writeThe coefficients in Eq. (23) Eq. (24) are given by the following formulasIf we choose a position of the entrance pupil identical with a generalized thin tunable-focus lens ( ), we obtain using Eq. (10) If we want to remove spherical and coma aberration of the system of three thin tunable-focus lenses in contact, then conditions must be fulfilled. We obtain using Eq. (23) Eq. (24) the following equationswhere Equations (26) represent a system of two non-linear equations and values of power are the solutions of these equations. If both Eq. (26) Eq. (27) have the identical solution , then their resultant must be equal to zero [ ]. We can derive one equation for the unknown value , which can be solved. The power can be calculated by the backward procedure. The resultant Eqs. (26) can be expressed asWe can calculate the resultant (28) from the following formula We obtain the power by solving Eq. (29) , which can be substituted into Eqs. (26) and the power can be calculated. In case we require the optical system with specific values of aberration coefficients , then previous relations are still valid, only the coefficients k [10] p [6] are changed in the following wayIt can be noted that we can proceed in a similar way even in the case of more complicated optical systems which consist of a larger number of thin tunable-focus lenses. For example, an equivalent of a traditional non-cemented doublet must be composed of four tunable-focus lenses, a triplet must be composed of six tunable-focus lenses (i.e. three generalized tunable-focus lenses), the Petzval lens must be composed of six tunable-focus lenses (i.e. two tunable-focus doublets), etc. It is possible to combine tunable-focus lenses, traditional lenses and optical systems and design hybrid optical systems with variable optical characteristics (e.g. focal length, magnification). The fundamental advantage of optical systems with tunable-focus lenses over classical optical systems is the possibility to continuously change properties of these systems without the need for changing position of individual elements of the optical system. 7. Examples of tunable-focus lenses We will show several examples of thin tunable-focus lenses in air and we provide a comparison of imaging properties to traditional lens systems. We have chosen two cases of values of refractive indices of fluids in tunable-focus lenses: (n2=1.38,n3=1.55) and (n2=1.38,n3=1.99). Furthermore, we consider imaging of the object at infinity (σ1=0) and the entrance pupil is identical with the plane of lenses (s¯1=0). Linear dimensions in the following examples are given in mm. Example 1 We consider a simple thin tunable-focus lens with the optical scheme shown in Fig. 1 . Parameters of the lens and the coefficient of spherical aberration is given in Table 1 for both cases of refractive indices.We obtain for the transverse spherical aberration and longitudinal spherical aberration the following formulaswhere is the image aperture angle of the optical system. Now, we compare parameters of the thin simple tunable-focus lens with parameters of the classical thin lens in air, which is made from optical glass with the refractive index and the minimum spherical aberration for the object at infinity. We have for the longitudinal spherical aberration of the classical lenswhereUsing as optical glass Schott BK7 with = 1.516 (λ = 589 nm) we can calculate for the focal length f′=100 mm the aberration coefficient . If we use optical glass Schott N-LASF46A with = 1.904 (λ = 589 nm) we obtain for the value f′=100 mm . One can see by comparison to the thin tunable-focus lens with the same focal length f′=100 mm that the traditional thin lens has approximately fourteen times lower residual spherical aberration than the first case of the simple thin tunable-focus lens ( n [2] = 1.38 and n [3] = 1.55). In the second case ( n [2] = 1.38 and n [3] = 1.99) the tunable-focus lens has almost the same residual spherical aberration as the classical lens from the glass BK7. It is clear from the presented example that the difference ( n [3] – n [2] ) between indices of refraction must be relatively large for achieving small residual aberration of the simple tunable-focus lens. It is also evident from Fig. 3 Example 2 We consider a generalized thin tunable-focus lens with minimum spherical aberration. The optical scheme of this lens is shown in Fig. 4 Fig. 5 . We can calculate parameters of the lens using Eq. (21) . These parameters together with the coefficient of spherical aberration ( are presented in Table 2 . As we can see from Table 2 the first solution gives always lower value of spherical aberration. By comparison with a classical lens we can see that the generalized tunable-focus lens with minimum spherical aberration has 2.2 times lower residual spherical aberration than the classical thin lens made from the glass BK7 and approximately the same residual aberration as the lens from the glass N-LASF46A. Example 3 Now, we consider an optical system of three simple tunable-focus lenses and we choose the focal length equal to 1 mm ( f′=1/φ=1 mm ). By solving Eq. (29) Eq. (27) , and Eq. (26) we obtain parameters of the optical system which are shown in Table 3 Table 4 ).We provided only four combinations of tunable-focus lenses. Other combinations can be obtained by different orientation of simple tunable-focus lenses, but it is not a goal of this work. As one can see from Table 3 Table 4 it is possible to design an optical system, which has corrected spherical aberration and coma ( ), using three simple tunable-focus lenses. Such optical system is analogical to the classical cemented doublet which has also corrected spherical aberration and coma. We also provided a verification of presented calculations using the Zemax software that gives the same results of aberration coefficients. The high refractive index = 1.99 in the previous examples was chosen intentionally in order to show from the theoretical point of view that we need the difference of refractive indices as large as possible for obtaining small values of Seidel coefficients. 8. Summary The work presents a possible approach to a general solution of the problem of calculation of fundamental paraxial parameters and the third order aberration coefficients of thin tunable-focus lenses and their combinations into more complex optical systems. It is shown that aberration coefficients of the third order of the thin tunable-focus lens are completely characterized by three functions A, B and C that depend only on refractive indices of fluids forming the tunable-focus lens and do not depend on the position and size of the object and the position of the entrance pupil. These functions are constant for a given type of the tunable-focus lens. A detailed theoretical analysis was performed for a simple tunable-focus lens, a generalized tunable-focus lens, a generalized tunable-focus lens with minimum spherical aberration, and three-element tunable-focus lens (a tunable-focus doublet), which is the equivalent of the classical cemented doublet. The derived equations enable to carry out calculations of all parameters of above-mentioned optical systems and are also fundamental for solving of more complex optical systems using tunable-focus lenses. For example, an analogy of a traditional non-cemented doublet is composed of four tunable-focus lenses, a triplet must is composed of six tunable-focus lenses (i.e. three generalized tunable-focus lenses), the Petzval lens is composed of six tunable-focus lenses (i.e. two tunable-focus doublets), etc. The calculation of parameters of the optical systems with tunable-focus lenses was presented on several examples. The provided analysis may serve for better understanding aberration and imaging properties of the refractive tunable-focus lenses and for the initial design of optical systems using such non-conventional lens systems. Tunable-focus lenses start to be used in various practical applications and in near future these lenses will impact considerably the design of modern non-conventional optical systems, e.g. zoom lenses. This work has been supported by Ministry of Education of Czech Republic by the grant MSM6840770022 and GA 202/09/P553 from Czech Science Foundation. References and links 1. http://www.varioptic.com 2. http://www.optotune.com/ 3. B. Berge and J. Peseux, “Variable focal lens controlled by an external voltage: An application of electrowetting,” Eur. Phys. J. E 3(2), 159–163 (2000). [CrossRef] 4. C. Gabay, B. Berge, G. Dovillaire, and S. Bucourt, “Dynamic study of a Varioptic variable focal lens,” SPIE Proc. 4767, 159–165 (2002). [CrossRef] 5. B. Berge, “Liquid lens technology: Principle of electrowetting based lenses and applications to imaging”, Proc. IEEE MEMS, 227–230 (2004). 6. B. H. W. Hendriks, S. Kuiper, M. A. J. Van As, C. A. Renders, and T. W. Tukker, “Electrowetting-based variable-focus lens for miniature systems,” Opt. Rev. 12(3), 255–259 (2005). [CrossRef] 7. S. Kuiper and B. H. W. Hendriks, “Variable-focus liquid lens for miniature cameras,” Appl. Phys. Lett. 85(7), 1128–1130 (2004). [CrossRef] 8. R. Peng, J. Chen, and S. Zhuang, “Electrowetting-actuated zoom lens with spherical-interface liquid lenses,” J. Opt. Soc. Am. A 25(11), 2644–2650 (2008). [CrossRef] 9. D. Y. Zhang, N. Justis, and Y. H. Lo, “Fluidic adaptive zoom lens with high zoom ratio and widely tunable field of view,” Opt. Commun. 249(1-3), 175–182 (2005). [CrossRef] 10. H. Ren, D. Fox, P. A. Anderson, B. Wu, and S. T. Wu, “Tunable-focus liquid lens controlled using a servo motor,” Opt. Express 14(18), 8031–8036 (2006). [CrossRef] [PubMed] 11. H. W. Ren and S. T. Wu, “Variable-focus liquid lens,” Opt. Express 15(10), 5931–5936 (2007). [CrossRef] [PubMed] 12. G. Beadie, M. L. Sandrock, M. J. Wiggins, R. S. Lepkowicz, J. S. Shirk, M. Ponting, Y. Yang, T. Kazmierczak, A. Hiltner, and E. Baer, “Tunable polymer lens,” Opt. Express 16(16), 11847–11857 (2008). [CrossRef] [PubMed] 13. A. F. Naumov, G. D. Love, M. Y. Loktev, and F. L. Vladimirov, “Control optimization of spherical modal liquid crystal lenses,” Opt. Express 4(9), 344–352 (1999). [CrossRef] [PubMed] 14. M. Ye and S. Sato, “Optical properties of liquid crystal lens of any size,” Jpn. J. Appl. Phys. 41(Part 2, No. 5B), L571–L573 (2002). [CrossRef] 15. H. W. Ren, Y. H. Fan, S. Gauza, and S. T. Wu, “Tunable-focus flat liquid crystal spherical lens,” Appl. Phys. Lett. 84(23), 4789–4791 (2004). [CrossRef] 16. M. Ye, M. Noguchi, B. Wang, and S. Sato, “Zoom lens system without moving elements realised using liquid crystal lenses,” Electron. Lett. 45(12), 646–648 (2009). [CrossRef] 17. P. Valley, D. L. Mathine, M. R. Dodge, J. Schwiegerling, G. Peyman, and N. Peyghambarian, “Tunable-focus flat liquid-crystal diffractive lens,” Opt. Lett. 35(3), 336–338 (2010). [CrossRef] 18. R. Marks, D. L. Mathine, G. Peyman, J. Schwiegerling, and N. Peyghambarian, “Adjustable fluidic lenses for ophthalmic corrections,” Opt. Lett. 34(4), 515–517 (2009). [CrossRef] [PubMed] 19. F. C. Wippermann, P. Schreiber, A. Bräuer, and P. Craen, “Bifocal liquid lens zoom objective for mobile phone applications,” SPIE Proc. 6501, 650109 (2007). [CrossRef] 20. F. S. Tsai, S. H. Cho, Y. H. Lo, B. Vasko, and J. Vasko, “Miniaturized universal imaging device using fluidic lens,” Opt. Lett. 33(3), 291–293 (2008). [CrossRef] [PubMed] 21. B. H. W. Hendriks, S. Kuiper, M. A. J. van As, C. A. Renders, and T. W. Tukker, “Variable liquid lenses for electronic products,” Proc. SPIE 6034, 603402 (2006). [CrossRef] 22. A. Miks, Applied Optics (Czech Technical University Press, Prague 2009). [PubMed] 23. W. Smith, Modern Optical Engineering, 4th Ed. (McGraw-Hill, New York 2007). 24. M. Born, and E. Wolf, Principles of Optics, (Oxford University Press, New York 1964). 25. P. Mouroulis, and J. Macdonald, Geometrical Optics and Optical Design (Oxford University Press, New York 1997). 26. M. Herzberger, Modern Geometrical Optics (Interscience Publishers, Inc., New York 1958). 27. S. Reichelt and H. Zappe, “Design of spherically corrected, achromatic variable-focus liquid lenses,” Opt. Express 15(21), 14146–14154 (2007). [CrossRef] [PubMed] 28. R. Peng, J. Chen, Ch. Zhu, and S. Zhuang, “Design of a zoom lens without motorized optical elements,” Opt. Express 15(11), 6664–6669 (2007). [CrossRef] [PubMed] 29. Z. Wang, Y. Xu, and Y. Zhao, “Aberration analyses of liquid zooming lenses without moving parts,” Opt. Commun. 275(1), 22–26 (2007). [CrossRef] 30. M. Herzberger, “Replacing a thin lens by a thick lens,” J. Opt. Soc. Am. 34(2), 114–115 (1944). [CrossRef] 31. K. Rektorys, Survey of Applicable Mathematics. (Kluwer Academic Publisher, Dodrecht 1994) OCIS Codes (080.3620) Geometric optics : Lens system design (110.0110) Imaging systems : Imaging systems (220.3630) Optical design and fabrication : Lenses (110.1080) Imaging systems : Active or adaptive optics Original Manuscript: March 1, 2010 Revised Manuscript: March 31, 2010 Manuscript Accepted: March 31, 2010 Published: April 14, 2010 Antonin Miks, Jiri Novak, and Pavel Novak, "Generalized refractive tunable-focus lens and its imaging characteristics," Opt. Express 18, 9034-9047 (2010) Sort: Year | Journal | Reset 1. http://www.varioptic.com 2. http://www.optotune.com/ 3. B. Berge and J. Peseux, “Variable focal lens controlled by an external voltage: An application of electrowetting,” Eur. Phys. J. E 3(2), 159–163 (2000). [CrossRef] 4. C. Gabay, B. Berge, G. Dovillaire, and S. Bucourt, “Dynamic study of a Varioptic variable focal lens,” SPIE Proc. 4767, 159–165 (2002). [CrossRef] 5. B. Berge, “Liquid lens technology: Principle of electrowetting based lenses and applications to imaging”, Proc. IEEE MEMS, 227–230 (2004). 6. B. H. W. Hendriks, S. Kuiper, M. A. J. Van As, C. A. Renders, and T. W. Tukker, “Electrowetting-based variable-focus lens for miniature systems,” Opt. Rev. 12(3), 255–259 (2005). [CrossRef] 7. S. Kuiper and B. H. W. Hendriks, “Variable-focus liquid lens for miniature cameras,” Appl. Phys. Lett. 85(7), 1128–1130 (2004). [CrossRef] 8. R. Peng, J. Chen, and S. Zhuang, “Electrowetting-actuated zoom lens with spherical-interface liquid lenses,” J. Opt. Soc. Am. A 25(11), 2644–2650 (2008). [CrossRef] 9. D. Y. Zhang, N. Justis, and Y. H. Lo, “Fluidic adaptive zoom lens with high zoom ratio and widely tunable field of view,” Opt. Commun. 249(1-3), 175–182 (2005). [CrossRef] 10. H. Ren, D. Fox, P. A. Anderson, B. Wu, and S. T. Wu, “Tunable-focus liquid lens controlled using a servo motor,” Opt. Express 14(18), 8031–8036 (2006). [CrossRef] [PubMed] 11. H. W. Ren and S. T. Wu, “Variable-focus liquid lens,” Opt. Express 15(10), 5931–5936 (2007). [CrossRef] [PubMed] 12. G. Beadie, M. L. Sandrock, M. J. Wiggins, R. S. Lepkowicz, J. S. Shirk, M. Ponting, Y. Yang, T. Kazmierczak, A. Hiltner, and E. Baer, “Tunable polymer lens,” Opt. Express 16(16), 11847–11857 (2008). [CrossRef] [PubMed] 13. A. F. Naumov, G. D. Love, M. Y. Loktev, and F. L. Vladimirov, “Control optimization of spherical modal liquid crystal lenses,” Opt. Express 4(9), 344–352 (1999). [CrossRef] [PubMed] 14. M. Ye and S. Sato, “Optical properties of liquid crystal lens of any size,” Jpn. J. Appl. Phys. 41(Part 2, No. 5B), L571–L573 (2002). [CrossRef] 15. H. W. Ren, Y. H. Fan, S. Gauza, and S. T. Wu, “Tunable-focus flat liquid crystal spherical lens,” Appl. Phys. Lett. 84(23), 4789–4791 (2004). [CrossRef] 16. M. Ye, M. Noguchi, B. Wang, and S. Sato, “Zoom lens system without moving elements realised using liquid crystal lenses,” Electron. Lett. 45(12), 646–648 (2009). [CrossRef] 17. P. Valley, D. L. Mathine, M. R. Dodge, J. Schwiegerling, G. Peyman, and N. Peyghambarian, “Tunable-focus flat liquid-crystal diffractive lens,” Opt. Lett. 35(3), 336–338 (2010). [CrossRef] 18. R. Marks, D. L. Mathine, G. Peyman, J. Schwiegerling, and N. Peyghambarian, “Adjustable fluidic lenses for ophthalmic corrections,” Opt. Lett. 34(4), 515–517 (2009). [CrossRef] [PubMed] 19. F. C. Wippermann, P. Schreiber, A. Bräuer, and P. Craen, “Bifocal liquid lens zoom objective for mobile phone applications,” SPIE Proc. 6501, 650109 (2007). [CrossRef] 20. F. S. Tsai, S. H. Cho, Y. H. Lo, B. Vasko, and J. Vasko, “Miniaturized universal imaging device using fluidic lens,” Opt. Lett. 33(3), 291–293 (2008). [CrossRef] [PubMed] 21. B. H. W. Hendriks, S. Kuiper, M. A. J. van As, C. A. Renders, and T. W. Tukker, “Variable liquid lenses for electronic products,” Proc. SPIE 6034, 603402 (2006). [CrossRef] 22. A. Miks, Applied Optics (Czech Technical University Press, Prague 2009). [PubMed] 23. W. Smith, Modern Optical Engineering, 4th Ed. (McGraw-Hill, New York 2007). 24. M. Born, and E. Wolf, Principles of Optics, (Oxford University Press, New York 1964). 25. P. Mouroulis, and J. Macdonald, Geometrical Optics and Optical Design (Oxford University Press, New York 1997). 26. M. Herzberger, Modern Geometrical Optics (Interscience Publishers, Inc., New York 1958). 27. S. Reichelt and H. Zappe, “Design of spherically corrected, achromatic variable-focus liquid lenses,” Opt. Express 15(21), 14146–14154 (2007). [CrossRef] [PubMed] 28. R. Peng, J. Chen, Ch. Zhu, and S. Zhuang, “Design of a zoom lens without motorized optical elements,” Opt. Express 15(11), 6664–6669 (2007). [CrossRef] [PubMed] 29. Z. Wang, Y. Xu, and Y. Zhao, “Aberration analyses of liquid zooming lenses without moving parts,” Opt. Commun. 275(1), 22–26 (2007). [CrossRef] 30. M. Herzberger, “Replacing a thin lens by a thick lens,” J. Opt. Soc. Am. 34(2), 114–115 (1944). [CrossRef] 31. K. Rektorys, Survey of Applicable Mathematics. (Kluwer Academic Publisher, Dodrecht 1994) OSA is able to provide readers links to articles that cite this paper by participating in CrossRef's Cited-By Linking service. CrossRef includes content from more than 3000 publishers and societies. In addition to listing OSA journal articles that cite this paper, citing articles from other participating publishers will also be listed. « Previous Article | Next Article »
{"url":"http://www.opticsinfobase.org/oe/fulltext.cfm?uri=oe-18-9-9034&id=198223","timestamp":"2014-04-17T18:32:24Z","content_type":null,"content_length":"446618","record_id":"<urn:uuid:0c4c5498-4b58-4fac-8787-d234718e9187>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00653-ip-10-147-4-33.ec2.internal.warc.gz"}
intergration issues February 26th 2009, 12:01 PM intergration issues I have this equation: ImageShack - Image Hosting :: maths.jpg I have equations for a, b and c. The equation for c has x in it and I'm meant to try a range of x values. I have no idea how to do that. I'm using R to do this, so I know I use the "integrate" command, but I don't know any more than that... Thank you! February 26th 2009, 03:20 PM I have this equation: ImageShack - Image Hosting :: maths.jpg That is not an equation, assuming that I am seeing the correct image ( $ab+(1-a)\!\int\!c(x)\,dx$). Also, note that you can upload images directly to the forum; scroll down below the text box on the reply page for attachment options. I have equations for a, b and c. And they are...? The equation for c has x in it and I'm meant to try a range of x values. I have no idea how to do that. For what purpose? What are you trying to do?
{"url":"http://mathhelpforum.com/calculus/75908-intergration-issues-print.html","timestamp":"2014-04-18T21:40:55Z","content_type":null,"content_length":"5250","record_id":"<urn:uuid:948e6a66-9189-450f-81a4-47c57be33be8>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00586-ip-10-147-4-33.ec2.internal.warc.gz"}
ODE Solver using Euler Method First and Second Order Ordinary Differential Equation (ODE) Solver using Euler Method. Enter the coefficients for the Ax2 + Bx + C = 0 equation and Quadratic Equation will output the solutions and plot(if they are not imaginary). % This Function solves a bilateral matrix quadratic equation % of the form AX+XB+XCX+D = 0 for X % Inputs : Matrices A,B,C, D of appropriate dimensions % Output : The Matrix X - if a solution exists Our scripts are intended for educators but can be useful to anyone. They include site search, a quiz, mailing list manager, web page creator, crossword puzzle generator, quadratic equation solver, and more. Computes the Gauss hypergeometric function 2F1(a,b;c;z) and its derivative for real z, z<1 by integrating the defining differential equation using the Matlab differential equation solver ode15i. If 2F1 is to be evaluated for many... This is a very simple, short Sudoku solver using a classic brute-force approach. What makes it nice is the purely arithmetic one-liner computing the constraint c (the sequence of already used digits on the same row, same column, same... Just a little bit of hack: a linear equations solver using eval and built-in complex numbers: >>> solve("x - 2*x + 5*x - 46*(235-24) = x + 2") QUADPROG2 - Convex Quadratic Programming Solver Featuring the SOLVOPT freeware optimizer New for version 1.1: * Significant speed improvement * Geometric Preconditioning * Improved Error Checking This program solves quadratic equations. Type in the coefficients for x^2 and x and the constant to get your quadratic equation and you will then be given the roots of the equation. If "b2 - 4ac" is negative, you'll be given the results... This zip file contains three separate guis. One solves and plots a system of functions given the inputs of 2 or more functions. The second one solves a 2 system differential equation model using the 4th order runga-kutta method. The third one is... Real Controls contains 12 native VCL's including shareware protection, form sizer, multi-mask edit, and more. Other included demos for Real Database, a non-bde system using pascal records and Realforms, a paper form design and filling system. The attachment contains: 1. Related matlab files. 2. Picture files of possible outputs. 3. A read me text file. 4. A report containing detailed explanations about the basics and about coding algorithm used herein. Solves a common algebraic Riccati equation using Schur decomposition. This function solves an algebraic Riccati equation of the form: A'*X + X*A' - X*G*X + Q = 0, where A, G, and Q are given and X is the symmetric... Solves simultaneous linear equations of any order using Crammer's rule. Required input is two lists..one for coefficients, and other for constants eg. 2x+3y=8 x+4y=6 will be written as Solves the Liouville Master Equation (LME) for a kind of Piecewise deterministic process (PDP). === Immediate command line examples === - dichot_markov_gui.m (dichot_markov_gui.fig) A graphical user interface for the... solvePoissonSOR.m is an efficient, lightweight function that solves the Poisson equation using Successive Overrelaxation (SOR) with Chebyshev acceleration to speed-up convergence. Dirichlet boundary conditions are used to provide a unique solution. MATLAB fsolve is frequently used to solve nonlinear algebraic equation systems. This utility makes it simple. User enters initial guesses and equations. The equations will be solved automatically after user clicks the Run button. Residual values... The non-linear regression problem (univariate or multivariate) is easily posed using a graphical user interface (GUI) that solves the problem using one of the following solvers: - nlinfit: only univariate problems. - lsqnonlin: can... This code simulates commodity spot prices using the Clewlow and Strickland one factor daily spot model using a Monte Carlo approach. The derived stochastic differential equations (SDEs) are solved... This submission facilitates working with quadratic curves (ellipse, parabola, hyperbola, etc.) and quadric surfaces (ellipsoid, elliptic paraboloid, hyperbolic paraboloid, hyperboloid, cone, elliptic cylinder, hyperbolic cylinder, parabolic...
{"url":"http://www.sourcecodeonline.com/list?q=quadratic_equation_solver_using_pascal","timestamp":"2014-04-17T16:33:23Z","content_type":null,"content_length":"50521","record_id":"<urn:uuid:0777f299-96da-4874-95f8-49ea0292b048>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00027-ip-10-147-4-33.ec2.internal.warc.gz"}
Research Projects • Mathematics Teaching and Learning to Teach Project (MTLT) » The Mathematics Teaching and Learning to Teach project is the research project Ball developed upon her arrival to the U-M in 1996. Serving as the conceptual foundation of subsequent research projects, MTLT investigates the mathematical knowledge, sensibilities, and skills entailed in the work of teaching. The research team continues to develop a practice-based theory of mathematical knowledge for teaching. Instead of analyzing curriculum, or starting with what they think teachers should know, they begin by examining and analyzing practice. The MTLT project studies the interplay of mathematics and pedagogy in the teaching of elementary school mathematics. By looking closely at the mathematical and pedagogical work of teaching - for instance, managing discussions, asking questions, interpreting students' thinking - the project aims to identify mathematical insight, appreciation, and knowledge that matter for teaching and to analyze and articulate ways in which it might be entailed in practice. The MTLT project believes that such analysis is needed to extend what we currently know about the mathematical resources required for teaching, the role of such resources in practice, and, by implication, what learning opportunities teachers and prospective teachers need to develop in order to teach mathematics. Other analyses explore the role of talk in mathematics teaching and learning, the nature of mathematical reasoning, and the relational work involved in teaching mathematics. The MTLT project draws on two primary data sources: • extensive multimedia records of one full year of third grade mathematics teaching and learning, including videos, audiotapes, transcripts, the teacher's daily journal, each student's class work, homework, quizzes, and standardized tests, student interviews. • records of practice drawn from other elementary and middle school classrooms, to extend and supplement the data in the primary third grade records. • Learning Mathematics for Teaching Project (LMT) » The Learning Mathematics for Teaching Project (LMT) investigates the mathematical knowledge needed for teaching, and how such knowledge develops as a result of experience and professional learning. This is done through the writing, piloting, and analysis of problems that reflect real mathematics tasks teachers face in classrooms - for instance, assessing student work, representing numbers and operations, and explaining common mathematical rules or procedures. Assessments composed of these problems are often used to measure the effectiveness of professional development intended to improve teachers' mathematical knowledge. • Study of Instructional Improvement (SII) » The Study of Instructional Improvement (SII) is a program of comprehensive research that seeks to understand the impact of school improvement programs on instruction and student performance in elementary schools. Over a six-year period, researchers at the University of Michigan followed schools involved with one of three leading school improvement programs -- Accelerated Schools, America's Choice, and Success For All. The study is tracking the implementation of these improvement efforts in schools, and investigating the impact on teachers, students, and schools. • Center for Proficiency in Teaching Mathematics (CPTM) » CPTM, the Center for Proficiency in Teaching Mathematics, aims to strengthen the system of professional education that supports teachers of mathematics throughout their careers. In order to do this, CPTM works on ideas, materials, and approaches to improve: • Professional development for mathematics teacher educators and professional developers • Doctoral programs for future mathematics teacher educators • Professional development for teachers of mathematics • Knowledge about the unique nature of mathematics as it is used in teaching • mod4: Materials Development Project » mod4 is a materials development project based at the University of Michigan and funded by the National Science Foundation’s Teacher Professional Continuum program. Its aim is to produce practice-based materials for teacher education and professional development that focus on helping teachers learn mathematical knowledge and skills for the work of teaching. • Elementary Mathematics Laboratory (EML) » The Elementary Mathematics Laboratory (EML) is a teaching and research project at the University of Michigan School of Education. It features a two-week summer mathematics program for incoming fifth-graders that is taught by mathematics educator and School of Education dean Dr. Deborah Ball. This program provides local schoolchildren with an opportunity to work with expert researchers and teachers to improve their mathematical knowledge and skill. At the same time the EML creates a space for diverse professionals representing a range of expertise and perspectives to work together to solve complex problems of learning and teaching. • Developing Teaching Expertise @ Mathematics (Dev-TE@M) » The Dev-TE@M project at the U-M School of Education has partnered with Cisco Learning Institute (CLI) to build practice-focused professional development modules for elementary teachers who teach mathematics and may play leadership roles in mathematics education. Using innovative technologies, the project aims to provide high-quality learning experiences and assessments that are accessible, coherent, and usable at scale. Over two years, the Dev-TE@M will lead a group of mathematics teacher educators from across the country to develop a coherent set of professional learning modules that will serve as the foundation for the Cisco Learning Institute (CLI)’s national K-5 Mathematics Specialist Academy and their vision for an Elementary Mathematics Specialist (EMS) endorsement program. • Teacher Education Initiative (TEI) » The Teacher Education Initiative is a comprehensive project to redesign how teachers are prepared for practice at the University of Michigan, and to build knowledge and tools that will inform teacher education more broadly. Recognizing that teachers play a pivotal role in improving p-12 education in the United States, TEI aims to develop professional education that will prepare novices to do the complex relational, psychological, social, and intellectual work of teaching. The project also intends to study these efforts and to gather and disseminate systematic evidence of and about effective teacher education. The Initiative is under Deborah Ball’s leadership as dean of the School of Education.
{"url":"http://www-personal.umich.edu/~dball/projects/index.html","timestamp":"2014-04-17T10:02:06Z","content_type":null,"content_length":"11027","record_id":"<urn:uuid:bbf90c9c-6650-489b-8dda-584ce8d99af6>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00538-ip-10-147-4-33.ec2.internal.warc.gz"}
Thermodynamics? number crunching thermal conductivity I am working on some basic calcs for heat transfer from polyethylene pipe. My numbers are not working out right so I need a little refresher. The PE pipe would have a TC of about .46 W/(m.*C). to get to BTU/(hr.ft.*F), I mult by .5779 to get .266. Assuming 10sf of PE pipe, and lets say a dT of 10*F, how do I arrive at my BTU/hr? Wall thickness of piping is .120" but I am told that does not matter. IIRC, the unit is actualy per sf PER ft so I might actually divide by my thickness which gets me closer at around 2.22 BTU/hr/sf*F of pipe? The formula for the heat load Q (BTU/hr) is: where d is the wall thickness.
{"url":"http://www.physicsforums.com/showthread.php?s=52fe01b66b412debcb1c640e9a9fa0ac&p=4652378","timestamp":"2014-04-24T06:27:45Z","content_type":null,"content_length":"30535","record_id":"<urn:uuid:c9ec21dd-05f3-42d7-a83f-b0d2a2adc0a6>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00650-ip-10-147-4-33.ec2.internal.warc.gz"}
Finding Rhombi in a Rhombus Date: 9/25/95 at 10:51:26 From: Anonymous How can I work out a formula for finding how many rhombuses there are in a rhombus? (say 2cm*2cm or 3cm*3cm and so on, etc.) Date: 9/25/95 at 11:36:55 From: Doctor Ken Subject: Re: (no subject) I'll assume you are asking how many rhombi of 1cm side length there are in a rhombus of side length 2, 3, 4, and so on. Well, try this: squish your big rhombus / / / / / / into a rectangle: | | | | | | Now is it easier to see? -Doctor Ken, The Geometry Forum Date: 9/26/95 at 22:25:42 From: Arthur Smith Subject: Re: Rhombuses Hi Doctors, I do not think I made myself clear. I should have said all rhombuses that could be found. So, a 2cm*2cm would have 9 rhombuses, a 3cm * 3cm 26 rhombuses, a 4cm*4cm would have 60 rhombuses and a 5cm*5cm would have 111 rhombuses and so on. This includes all rhombuses that can be found.. Sizes of rhombus in larger rhombus in cm Size of rhombus in cm 1*1 2*2 3*3 4*4 5*5 Total 1 by 1cm 1 0 0 0 0 1 2 by 2 8 1 0 0 0 9 3 by 3 21 4 1 0 0 26 4 by 4 40 5 4 1 0 60 5 by 5 65 32 9 4 1 111 (we are certain this one is right) And so on... We hope to find a formula for working out all the rhombuses whether they are 10cm*10cm or 1000cm*1000cm;whatever the size. Thanks in advance, Date: 9/27/95 at 12:46:34 From: Doctor Andrew Subject: Re: Rhombuses First, this problem could be done with squares, which are simpler cases of rhombi. This should make it easier to visualize. I think we've figured out that you are only counting rhombi with sides of a length that is a multiple of 1cm. We're having trouble recreating your table, though. We count only 4 1x1 rhombi in a 2x2 rhombus. Perhaps you are also counting 1x2 parallelograms as well. In a rhombus all sides must have equal length. Here's the solution to the rhombus problem (as we undertstand it) as well as a solution for the number of parallelograms in a rhombus. Feel free to just read part of it to get you started in the right direction: First, let's find how many segments of length x are on a side of length n: This diagram shows a side divided into n 1cm segments: How many segments of length 1 are there? n. Right, they're already marked for you. How many segments of length 2? If you start at the left and count them, there will be a segment beginning at each mark, except for the last one where it will go off the end. So there n - 1. How many of length 3? You can count n -2 before you go off the edge. So how many of length x? You'll can see that the pattern is that there are n - (x+1) segments of length x in the segment of length n. If you are going to create a rhombus of side length x, you need to pick a segment of length x from one side of the larger rhombus and another segment of length x from the adjacent side. How many such rhombi are there? Well, how many choices do you have? You've go (n - (x + 1)) choices on each side. So you have (n - (x + 1)) * (n - (x + 1)) = (n - (x + 1))^2 [note: x^2 means x squared] total choices. The total number of rhombi in the larger rhombus is then the sum of the number of each size of rhombus: total = (n - 0 + 1)^2 + (n - 1 + 1)^2 + (n - 2 + 1)^2 + ... (n - n + 1)^2 = (n+1)^2 + n^2 + (n-1)^2 + .. 1^2 = (n+1)((n+1)+1)(2(n+1)+1) / 6 [1] using a known formula that states: 1^2 + 2^2 + ... + x^2 = x(x+1)(2x+1)/6. You can reduce [1] to: total rhombi = (n+1)(n+2)(2n+3)/6. So, if you are looking for rhombi (parallelograms with equal side lengths) this is the solution. If you are looking for parallelograms in general the solution can also be found: The total number of choices of segments on one side is the sum of the number of choices for segment length: choices = (n - 0 + 1) + (n - 1 + 1) + (n - 2 + 1) + ... (n - n + 1) = n+1 + n + n-1 + ... 1 = n(n+1)/2 (the sum of the averages) There are this many choices on the adjacent side as well so the total number of choices is the number of choices squared: [n(n+1)/2] * [n(n+1)/2] = n^2(n+1)^2/4 So this is the number of parallelograms with sides that are multiples of 1cm in a rhombus. Hope this helps! -Doctor Andrew, The Geometry Forum Date: 01/18/2000 at 10:09:47 From: Oli Hickman Subject: How many rhombi are in this rhombus? I need a formula to find the total number of rhombi in this rhombus. It is the same format of rhombus as in the "Finding Rhombi in a Rhombus" (above). The only difference is that in the 1x1 rhombi there are diagonal lines from the top left to the bottom right of each 1x1 rhombus. Because of this, the results in the table that was sent to you in 1995 are correct. Here is the table again. Size of rhombus in cm 1*1 2*2 3*3 4*4 5*5 Total 1 by 1 cm 1 0 0 0 0 1 2 by 2 8 1 0 0 0 9 3 by 3 21 4 1 0 0 26 4 by 4 40 5 4 1 0 60 5 by 5 65 32 9 4 1 111 /\ /\ / / \ / \ / /\ /\ / / \ / \ / Here is a rough diagram of the rhombus type we are investigating. This is a 2x2 rhombus and as you can see it is divided into equilateral triangles as opposed to just smaller 1x1 rhombi. If you draw it out on paper more accurately with the diagonals going directly into the corner you will easily see 9 rhombi. We have investigated this type of rhombus and we know there are three types of rhombi to be found within these rhombi. They are roughly drawn below. _ __ /_/ /\ \_\ The first rhombus would have a diagonal running from top left to bottom right, the second has a horizontal line splitting it into 2 equilateral triangles, and the third has a diagonal line running from top right to bottom left. We need one formula to find the total possible number of rhombuses in any sized rhombus (we know rhombi are always 1x1, 2x2, etc. and anything else is a parallelogram). Thanks very much. Date: 01/18/2000 at 21:07:25 From: Doctor Peterson Subject: Re: How many rhombi are in this rhombus? Hi, Oli. It intrigues me that your question produces the table that was "wrong" in the archived answer; the original question must have been the same as yours, but the questioner never corrected Dr. Ken's assumptions about the lines within the rhombus. As a result, his answer doesn't cover all the rhombuses you want. Yet it is a very good beginning. Just for completeness, I'll draw the correct diagram the best way I can, for a 4x4 rhombus, in which I've marked one 2x2 rhombus in each /2\3/3\3/3\ / \ / /2\2/2\3/3\3/3\ / / \2/2\2/1\1/1\1/ / \ / \2/1\1/1\1/ You've started out just the way we like you to: looking in our archives for an answer that applies to your question. As you are probably aware, our purpose is not to give you the answer to your homework -- especially not to give you a formula that you are supposed to find by investigation, and take away the exercise that work is meant to give you. What we want you to do is to use the hints we give (such as the example in the archived problem) to get ideas for your investigation. Having said that, I can give you some more hints. When I look for a formula, I don't care much for tables of numbers. Sometimes you might be able to guess a pattern by looking at the numbers; but how could you ever be sure the pattern wouldn't change when you added the next row? In order to be sure of a pattern, you have to see how it forms: what is it about the circumstances of the problem that cause the pattern to arise? For that purpose, I like to make a table, but to concentrate not on the table itself, but on the process of building it. What thought process do you use to figure out each number? That's where the pattern shows up. In this case, you've pointed out that there are three orientations for the rhombuses; so each number in your table is really the sum of three numbers that you probably counted separately. I would make my table that way: under "1x1" I would make three columns, one for each type of rhombus. If you look closely how you form each number, you'll be able to see a formula for that number. For example, one kind of rhombus will be just what Drs. Ken and Andrew were talking about, and their answers will apply to them. Read carefully - both the reason for each of these cells in our table being a square, and the formula for summing squares will be useful to you. The other two orientations will produce different formulas, but you can arrive at them by the same kind of reasoning. Have fun - this is a good problem to investigate. - Doctor Peterson, The Math Forum
{"url":"http://mathforum.org/library/drmath/view/57699.html","timestamp":"2014-04-20T03:29:35Z","content_type":null,"content_length":"14741","record_id":"<urn:uuid:80339440-a1b8-496e-ac17-a6153fc8e08c>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00387-ip-10-147-4-33.ec2.internal.warc.gz"}
Post a reply hi abhishek.ciindore Welcome to the forum. If the equation has the form then you can make three equations by substituting for (x,y) as follows: (i) (0,0) (ii) (1,2) (iii) (6,0) That should be enough to work out a, b, and c. Check by substituting each 'x' value into your finished equation to see if it gives the right 'y'.
{"url":"http://www.mathisfunforum.com/post.php?tid=18374&qid=239082","timestamp":"2014-04-18T19:16:05Z","content_type":null,"content_length":"16305","record_id":"<urn:uuid:ef3ca6ab-fd40-4ed9-8de0-821b3ca65589>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00195-ip-10-147-4-33.ec2.internal.warc.gz"}
The Mermin-Wagner theorem and quantum criticality in physical quasi-1d systems Peter Richard Crompton Hamburg University We identify the leading corrections to Finite-Size Scaling relations for the Correlation length and Twist Order parameter of three mixed-spin chain systems via QMC analysis, arguing that our results imply these systems can be described by the same underlying conformal picture in correspondence with recent pictures of deconfined quantum criticality at the vacuum angle, $\theta=\pi$, both in zero and finite temperature quantum spin chains. We propose a new effective theory for the critical region of quantum fluctuation driven transitions and derive a renormalization group equation, suitable for numerical evaluation via Quantum Monte Carlo analysis, that rigorously defines the mapping between critical features in zero temperature and finite temperature chains. We comment on the Mermin-Wagner theorem, and its rotational symmetry breaking extensions, in this context.
{"url":"http://web.mit.edu/physics/cmt/informalseminar_abstracts/crompton.html","timestamp":"2014-04-20T06:00:22Z","content_type":null,"content_length":"1795","record_id":"<urn:uuid:2a51f9e9-6099-46b9-9a52-89a36307d5d0>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00050-ip-10-147-4-33.ec2.internal.warc.gz"}
• Login • Register • Forget Challenger of the Day Time: 00:00:37 Placed User Comments (More) Lekshmi Narasimman MN 5 Days ago Thanks ton for this site . This site is my main reason for clearing cts written which happend on 5/4/2014 in chennai . Tommorrw i have my interview. Hope i will tel u all a good news :) Thanks to almighty too :) !! abhinay yadav 11 Days ago thank you M4maths for such awesome collection of questions. last month i got placed in techMahindra. i prepared for written from this site, many question were exactly same as given here. bcz of practice i finished my written test 15 minutes before and got it. thanx allot for such noble work... 15 Days ago coz of this site i cud clear IBM's apti nd finally got placed in tcs thanx m4maths...u r a wonderful site :) 17 Days ago thank u m4maths and all its user for posting gud and sensible answers. Nilesh singh 20 Days ago finally selected in TCS. thanks m4maths 22 Days ago Thank you team m4maths.Successfully placed in TCS. Deepika Maurya 22 Days ago Thank you so much m4maths.. I cleared the written of IBM.. :) very good site.. thumps up !! Rimi Das 1 Month ago Thanks to m4maths I got selected in Tech Mahindra.I was preparing for TCS 1st round since last month.Got interview call letter from there also...Really m4maths is the best site for placement Stephen raj 1 Month ago prepare from r.s.aggarwal verbal and non verbal reasoning and previous year questions from m4maths,indiabix and chetanas forum.u can crack it. Stephen raj 1 Month ago Thanks to m4maths:) cracked infosys:) 1 Month ago i have been Selected in Tech Mahindra. All the quanti & reasoning questions are common from the placement papers of m4maths. So a big thanks to m4maths team & the people who shares the placement papers. Amit Das 1 Month ago I got selected for interview in TCS.Thank you very much m4maths.com. 1 Month ago I got placed in TCS :) Thanks a lot m4maths :) Syed Ishtiaq 1 Month ago An Awesome site for TCS. Cleared the aptitude. 1 Month ago I successfully cleared TCS aptitude test held on 8th march 2014.Thanks a lot m4maths.com plz guide for the technical round. mounika devi mamidibathula 1 Month ago got placed in IBM.. this site is very useful, many questions repeated.. thanks alot to m4maths.com Anisha Lakhmani 1 Month ago I got placed at infosys.......thanx to m4maths.com.......a awesum site...... Anisha Lakhmani 1 Month ago I got placed at infosys.......thanx to m4maths.com.......a awesum site...... Kusuma Saddala 1 Month ago Thanks to m4maths, i have place at IBM on feb 8th of this month 2 Months ago thanks to m4 maths because of this i clear csc written test mahima srivastava 2 Months ago Placed at IBM. Thanks to m4maths. This site is really very helpful. 95% questions were from this site. Surya Narayana K 2 Months ago I successfully cleared TCS aptitude test.Thanks a lot m4maths.com. Surya Narayana K 2 Months ago I successfully cleared TCS aptitude test.Thanks a lot m4maths.com. prashant gaurav 2 Months ago Got Placed In Infosys... Thanks of m4maths.... 3 Months ago iam not placed in TCS...........bt still m4maths is a good site. 4 Months ago Thanx to m4 maths, because of that i able to crack aptitude test and now i am a part of TCS. This site is best for the preparation of placement papers.Thanks a lotttttt............ 4 Months ago THANKS a lot m4maths. Me and my 2 other roomies cleared the tcs aptitude with the help of this site.Some of the questions in apti are exactly same which i answered without even reading the whole question completely.. gr8 work m4maths.. keep it up. 5 Months ago m4maths is one of the main reason I cleared TCS aptitude. In TCS few questions will be repeated from previous year aptis and few questions will be repeated from the latest campus drives that happened in various other colleges. So to crack TCS apti its enough to learn some basic concepts from famous apti books and follow all the TCS questions posted in m4maths. This is not only for TCS but for all other companies too. According to me m4maths is best site for clearing apti. Kuddos to the creator of m4maths :) 5 Months ago THANKS A LOT TO M4MATHS.due to m4maths today i am the part of TCS now.got offer letter now. 5 Months ago Hai friends, I got placed in L&T INFOTECH and i m visiting this website for the past 4 months.Solving placemetn puzzles from this website helped me a lot and 1000000000000s of thanks to this website.this website also encouraged me to solve puzzles.follw the updates to clear maths aps ,its very easy yar, surely v can crack it if v follow this website. 5 Months ago 2 days before i cleared written test just because of m4maths.com.thanks a lot for this community. 6 Months ago thanks for m4maths!!! bcz of which i cleared apti of infosys today. 6 Months ago Today my written test of TCS was completed.I answered many of the questions without reading entire question.Because i am one of the member in the m4maths. No words to praise m4maths.so i simply said thanks a lot. 7 Months ago I am very grateful to m4maths. It is a great site i have accidentally logged on when i was searching for an answer for a tricky maths puzzle. It heped me greatly and i am very proud to say that I have cracked the written test of tech-mahindra with the help of this site. Thankyou sooo much to the admins of this site and also to all members who solve any tricky puzzle very easily making people like us to be successful. Thanks a lotttt Abhishek Ranjan 7 Months ago me & my rooom-mate have practiced alot frm dis site TO QUALIFY TCS written test.both of us got placed in TCS :) do practice n u'll surely succeed :) Sandhya Pallapu 1 year ago Hai friends! this site is very helpful....i prepared for TCS campus placements from this site...and today I m proud to say that I m part of TCS family now.....dis site helped me a lot in achieving this...thanks to M4MATHS! vivek singh 2 years ago I cracked my first campus TCS in November 2011...i convey my heartly thanks to all the members of m4maths community who directly or indirectly helped me to get through TCS......special thanks to admin for creating such a superb community Manish Raj 2 years ago this is important site for any one ,it changes my life...today i am part of tcs only because of M4ATHS.PUZZLE Asif Neyaz 2 years ago Thanku M4maths..due to u only, imade to TCS :D test on sep 15. Harini Reddy 2 years ago Big thanks to m4maths.com. I cracked TCS..The solutions given were very helpful!!! 2 years ago HI everyone , me and my friends vish,sube,shaf placed in TCS... its becoz of m4maths only .. thanks a lot..this is the wonderful website.. unless your help we might not have been able to place in TCS... and thanks to all the users who clearly solved the problems.. im very greatful to you :) 2 years ago Really thanks to m4maths I learned a lot... If you were not there I might not have been able to crack TCS.. love this site hope it's reputation grows exponentially... 2 years ago Hello friends .I was selected in TCS. Thanx to M4Maths to crack apti. and my hearthly wishes that the success rate of M4Math grow exponentially. Again Thanx for all support given by M4Math during my preparation for TCS. and Best of LUCK for all students for their preparation. 2 years ago thanks to M4MATHS..got selected in TCS..thanks for providing solutions to TCS puzzles :) 2 years ago thousands of thnx to m4maths... got selected in tcs for u only... u were the only guide n i hv nvr done group study for TCS really feeling great... thnx to all the users n team of m4maths... 3 cheers for m4maths 2 years ago thousands of thnx to m4maths... got selected in tcs for u only... u were the only guide n i hv nvr done group study for TCS really feeling great... thnx to all the users n team of m4maths... 3 cheers for m4maths 2 years ago Thank U ...I'm placed in TCS..... Continue this g8 work 2 years ago thank you m4maths.com for providing a web portal like this.Because of you only i got placed in TCS,driven on 26/8/2011 in oncampus raghu nandan 2 years ago thanks a lot m4maths cracked TCS written n results are to be announced...is only coz of u... :) V.V.Ravi Teja 3 years ago thank u m4maths because of you and my co people who solved some complex problems for me...why because due to this only i got placed in tcs and hcl also........ Veer Bahadur Gupta 3 years ago got placed in TCS ... thanku m4maths... Amulya Punjabi 3 years ago Hi All, Today my result for TCS apti was declared nd i cleared it successfully...It was only due to m4maths...not only me my all frnds are able to crack it only wid the help of m4maths.......it's just an osum site as well as a sure shot guide to TCS apti......Pls let me know wt can be asked in the interview by MBA students. Anusha Alva 3 years ago a big thnks to this site...got placed in TCS!!!!!! Oindrila Majumder 3 years ago thanks a lot m4math.. placed in TCS Pushpesh Kashyap 3 years ago superb site, i cracked tcs Saurabh Bamnia 3 years ago Great site..........got Placed in TCS...........thanx a lot............do not mug up the sol'n try to understand.....its AWESOME......... Gautam Kumar 3 years ago it was really useful 4 me.................n finally i managed to get through TCS Karthik Sr Sr 3 years ago i like to thank m4maths, it was very useful and i got placed in tcs Lekshmi Narasimman MN 5 Days ago Thanks ton for this site . This site is my main reason for clearing cts written which happend on 5/4/2014 in chennai . Tommorrw i have my interview. Hope i will tel u all a good news :) Thanks to almighty too :) !! abhinay yadav 11 Days ago thank you M4maths for such awesome collection of questions. last month i got placed in techMahindra. i prepared for written from this site, many question were exactly same as given here. bcz of practice i finished my written test 15 minutes before and got it. thanx allot for such noble work... manasi 15 Days ago coz of this site i cud clear IBM's apti nd finally got placed in tcs thanx m4maths...u r a wonderful site :) arnold 17 Days ago thank u m4maths and all its user for posting gud and sensible answers. Nilesh singh 20 Days ago finally selected in TCS. thanks m4maths MUDIT 22 Days ago Thank you team m4maths.Successfully placed in TCS. Deepika Maurya 22 Days ago Thank you so much m4maths.. I cleared the written of IBM.. :) very good site.. thumps up !! Rimi Das 1 Month ago Thanks to m4maths I got selected in Tech Mahindra.I was preparing for TCS 1st round since last month.Got interview call letter from there also...Really m4maths is the best site for placement preparation... Stephen raj 1 Month ago prepare from r.s.aggarwal verbal and non verbal reasoning and previous year questions from m4maths,indiabix and chetanas forum.u can crack it. Stephen raj 1 Month ago Thanks to m4maths:) cracked infosys:) Ranadip 1 Month ago i have been Selected in Tech Mahindra. All the quanti & reasoning questions are common from the placement papers of m4maths. So a big thanks to m4maths team & the people who shares the placement papers. Amit Das 1 Month ago I got selected for interview in TCS.Thank you very much m4maths.com. PRAVEEN K H 1 Month ago I got placed in TCS :) Thanks a lot m4maths :) Syed Ishtiaq 1 Month ago An Awesome site for TCS. Cleared the aptitude. sara 1 Month ago I successfully cleared TCS aptitude test held on 8th march 2014.Thanks a lot m4maths.com plz guide for the technical round. mounika devi mamidibathula 1 Month ago got placed in IBM.. this site is very useful, many questions repeated.. thanks alot to m4maths.com Anisha Lakhmani 1 Month ago I got placed at infosys.......thanx to m4maths.com.......a awesum site...... Kusuma Saddala 1 Month ago Thanks to m4maths, i have place at IBM on feb 8th of this month sangeetha 2 Months ago thanks to m4 maths because of this i clear csc written test mahima srivastava 2 Months ago Placed at IBM. Thanks to m4maths. This site is really very helpful. 95% questions were from this site. Surya Narayana K 2 Months ago I successfully cleared TCS aptitude test.Thanks a lot m4maths.com. prashant gaurav 2 Months ago Got Placed In Infosys... Thanks of m4maths.... vishal 3 Months ago iam not placed in TCS...........bt still m4maths is a good site. sameer 4 Months ago Thanx to m4 maths, because of that i able to crack aptitude test and now i am a part of TCS. This site is best for the preparation of placement papers.Thanks a Sonali 4 Months ago THANKS a lot m4maths. Me and my 2 other roomies cleared the tcs aptitude with the help of this site.Some of the questions in apti are exactly same which i answered without even reading the whole question completely.. gr8 work m4maths.. keep it up. Kumar 5 Months ago m4maths is one of the main reason I cleared TCS aptitude. In TCS few questions will be repeated from previous year aptis and few questions will be repeated from the latest campus drives that happened in various other colleges. So to crack TCS apti its enough to learn some basic concepts from famous apti books and follow all the TCS questions posted in m4maths. This is not only for TCS but for all other companies too. According to me m4maths is best site for clearing apti. Kuddos to the creator of m4maths :) YASWANT KUMAR CHAUDHARY 5 Months ago THANKS A LOT TO M4MATHS.due to m4maths today i am the part of TCS now.got offer letter now. ANGELIN ALFRED 5 Months ago Hai friends, I got placed in L&T INFOTECH and i m visiting this website for the past 4 months.Solving placemetn puzzles from this website helped me a lot and 1000000000000s of thanks to this website.this website also encouraged me to solve puzzles.follw the updates to clear maths aps ,its very easy yar, surely v can crack it if v follow this website. MALLIKARJUN ULCHALA 5 Months ago 2 days before i cleared written test just because of m4maths.com.thanks a lot for this community. Madhuri 6 Months ago thanks for m4maths!!! bcz of which i cleared apti of infosys today. DEVARAJU 6 Months ago Today my written test of TCS was completed.I answered many of the questions without reading entire question.Because i am one of the member in the m4maths. No words to praise m4maths.so i simply said thanks a lot. PRATHYUSHA BSN 7 Months ago I am very grateful to m4maths. It is a great site i have accidentally logged on when i was searching for an answer for a tricky maths puzzle. It heped me greatly and i am very proud to say that I have cracked the written test of tech-mahindra with the help of this site. Thankyou sooo much to the admins of this site and also to all members who solve any tricky puzzle very easily making people like us to be successful. Thanks a lotttt Abhishek Ranjan 7 Months ago me & my rooom-mate have practiced alot frm dis site TO QUALIFY TCS written test.both of us got placed in TCS :) IT'S VERY VERY VERY HELPFUL N IMPORTANT SITE. do practice n u'll surely succeed :) Sandhya Pallapu 1 year ago Hai friends! this site is very helpful....i prepared for TCS campus placements from this site...and today I m proud to say that I m part of TCS family now.....dis site helped me a lot in achieving this...thanks to M4MATHS! vivek singh 2 years ago I cracked my first campus TCS in November 2011...i convey my heartly thanks to all the members of m4maths community who directly or indirectly helped me to get through TCS......special thanks to admin for creating such a superb community Manish Raj 2 years ago this is important site for any one ,it changes my life...today i am part of tcs only because of M4ATHS.PUZZLE Asif Neyaz 2 years ago Thanku M4maths..due to u only, imade to TCS :D test on sep 15. Harini Reddy 2 years ago Big thanks to m4maths.com. I cracked TCS..The solutions given were very helpful!!! portia 2 years ago HI everyone , me and my friends vish,sube,shaf placed in TCS... its becoz of m4maths only .. thanks a lot..this is the wonderful website.. unless your help we might not have been able to place in TCS... and thanks to all the users who clearly solved the problems.. im very greatful to you :) vasanthi 2 years ago Really thanks to m4maths I learned a lot... If you were not there I might not have been able to crack TCS.. love this site hope it's reputation grows exponentially... vijay 2 years ago Hello friends .I was selected in TCS. Thanx to M4Maths to crack apti. and my hearthly wishes that the success rate of M4Math grow exponentially. Again Thanx for all support given by M4Math during my preparation for TCS. and Best of LUCK for all students for their preparation. maheswari 2 years ago thanks to M4MATHS..got selected in TCS..thanks for providing solutions to TCS puzzles :) GIRISH 2 years ago thousands of thnx to m4maths... got selected in tcs for u only... u were the only guide n i hv nvr done group study for TCS really feeling great... thnx to all the users n team of m4maths... 3 cheers for m4maths girish 2 years ago thousands of thnx to m4maths... got selected in tcs for u only... u were the only guide n i hv nvr done group study for TCS really feeling great... thnx to all the users n team of m4maths... 3 cheers for m4maths Aswath 2 years ago Thank U ...I'm placed in TCS..... Continue this g8 work JYOTHI 2 years ago thank you m4maths.com for providing a web portal like this.Because of you only i got placed in TCS,driven on 26/8/2011 in oncampus raghu nandan 2 years ago thanks a lot m4maths cracked TCS written n results are to be announced...is only coz of u... :) V.V.Ravi Teja 3 years ago thank u m4maths because of you and my co people who solved some complex problems for me...why because due to this only i got placed in tcs and hcl also........ Veer Bahadur Gupta 3 years ago got placed in TCS ... thanku m4maths... Amulya Punjabi 3 years ago Hi All, Today my result for TCS apti was declared nd i cleared it successfully...It was only due to m4maths...not only me my all frnds are able to crack it only wid the help of m4maths.......it's just an osum site as well as a sure shot guide to TCS apti......Pls let me know wt can be asked in the interview by MBA students. Anusha Alva 3 years ago a big thnks to this site...got placed in TCS!!!!!! Oindrila Majumder 3 years ago thanks a lot m4math.. placed in TCS Pushpesh Kashyap 3 years ago superb site, i cracked tcs Saurabh Bamnia 3 years ago Great site..........got Placed in TCS...........thanx a lot............do not mug up the sol'n try to understand.....its AWESOME......... Gautam Kumar 3 years ago it was really useful 4 me.................n finally i managed to get through TCS Karthik Sr Sr 3 years ago i like to thank m4maths, it was very useful and i got placed in tcs Latest User posts (More) Maths Quotes (More) "The more you know, the less sure you are" Voltaire "Well done is better than well said." Ben Franklin "But mathematics is the sister, as well as the servant, of the arts and is touched with the same madness and genius." Harold Marston Morse "Arithmetic is where numbers fly like pigeons in and out of your head" Carl Sandburg "Learn mathematics better and solve your own problems." K. Anandakumar "If two wrongs don't make a right, try three." Unknown "MATHEMATICS is a great motivator for all humans.. Because its career starts with "ZERO" but it never end(INFINITY).." Vignesh R Latest Placement Puzzle (More) "If 73+46=42, 95+87=57, than UnsolvedAsked In: SSC "Abhishek purchased 140 shirts and 250 trousers @ Rs. 450/- and @ Rs. 550/respectively. What should be the overall average selling price of shirts and trousers so that 40% profit is earned? (Rounded off to next integer). a) Rs. 725/- b) Rs. 710/- c) Rs. 720/- d) Rs. 700/- e) None of these" UnsolvedAsked In: Bank Exam "PIGEON : PEACE a)Olive Oil : Enmity b) Eagle : Friendship c) Whiteflag : Surrender d)Roses : Garden e)Ring : Engagement UnsolvedAsked In: Tech Mahindra "The more you know, the less sure you are" Voltaire "But mathematics is the sister, as well as the servant, of the arts and is touched with the same madness and genius." Harold Marston Morse "Arithmetic is where numbers fly like pigeons in and out of your head" Carl Sandburg "Learn mathematics better and solve your own problems." K. Anandakumar "If two wrongs don't make a right, try three." Unknown "MATHEMATICS is a great motivator for all humans.. Because its career starts with "ZERO" but it never end(INFINITY).." Vignesh R "Abhishek purchased 140 shirts and 250 trousers @ Rs. 450/- and @ Rs. 550/respectively. What should be the overall average selling price of shirts and trousers so that 40% profit is earned? (Rounded off to next integer). a) Rs. 725/- b) Rs. 710/- c) Rs. 720/- d) Rs. 700/- e) None of these" UnsolvedAsked In: Bank Exam "PIGEON : PEACE a)Olive Oil : Enmity b) Eagle : Friendship c) Whiteflag : Surrender d)Roses : Garden e)Ring : Engagement " UnsolvedAsked In: Tech Mahindra 3i-infotech (285) Accenture (258) ADITI (46) Athenahealth (38) CADENCE (30) Capgemini (227) CMC (29) Cognizant (42) CSC (462) CTS (811) Dell (41) GENPACT (503) Google (29) HCL (119) Hexaware (67) Huawei (39) IBM (1160) Infosys (1612) L&T (58) Microsoft (41) Miscellaneous C (149) Oracle (38) Patni (193) Sasken (25) Self (26) Syntel (433) TCS (6579) Tech Mahindra (143) Wipro (1073) ACIO (100) AIEEE (285) AMCAT (430) CAT (715) CMAT (82) Elitmus (883) Gate (388) GMAT (62) Gmate (24) GRE (127) IIT-JEE (459) ITC (24) Maths olympiad (129) MBA (3436) MCA (24) R-SAT & I-SAT (67) Self (51) Government Jobs Exams Bank Exam (193) CDS (52) IBPS (710) IES EC (42) KVPY (364) NDA (479) NTSE (36) REVENUE OFFICER (41) RRB (887) SSC (1058) UPSC (357) HR Interview (162) Ibm (22) Infosys (28) Tcs (21) Maths Puzzle A website (79) Book (653) Campus (62) CMAT (49) Exam (293) General (286) M4maths (91) Maths (102) Orkut (27) Others (1212) Reasoning (101) Self (2956) Programming and Technical ASP.NET (58) C Programming (303) C++ Programming (433) DATA STRUCTURE (36) DBMS (66) ELECTRONICS (29) Java Programmin (205) OOPs Concepts (115) Operating Syste (99) RDBMS (107) UNIX (63) Here you can share maths puzzles, comments and their answers, which helps you to learn and understand each puzzle's answer in detail. If you have any specific puzzle which is not on the site, use "Ask Puzzle" (2nd tab on the left side). KEEP AN EYE: By this feature you can bookmark your Favorite Puzzles and trace these puzzles easily in your next visit. Click here to go your Keep an EYE (0) puzzles. If you face any interview then please submit your interview experience here and also you can read others sucess story and experience InterView Experience(19).
{"url":"http://www.nowiseet.com/m4maths/placement-puzzles.php?ISSOLVED=N&page=2&LPP=10&SOURCE=&MYPUZZLE=","timestamp":"2014-04-17T21:23:12Z","content_type":null,"content_length":"140469","record_id":"<urn:uuid:de66ea41-8176-4db3-a5da-d748f9d4048c>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00216-ip-10-147-4-33.ec2.internal.warc.gz"}
188 helpers are online right now 75% of questions are answered within 5 minutes. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/users/vishal_kothari/asked/1","timestamp":"2014-04-21T12:44:23Z","content_type":null,"content_length":"132992","record_id":"<urn:uuid:ad64c44f-6bac-4a64-9de6-362b4ddb6b3d>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00300-ip-10-147-4-33.ec2.internal.warc.gz"}
MATH 201: Calculus 3, Section ? MATH 302: Real Analysis II Mathematics Department, Bryn Mawr College, Spring 2008 │ Professor: Victor Donnay │ Lecture: Mon, Wed, Fri 11- 12 │ │Office: Park Science Building #330 │ │ │ Phone: 526-5352, E-mail: vdonnay │Office Hours: Mon 1-3, Wed 2-3,│ │ │ │ │ │ Fri 1:30-2:30 │ Pre-requisites: We will be studying material that you (might) have seen in Math 102 (Calculus 2), Math 201 (Multi-variable calculus, Math 203 (Linear Algebra). We will build on the material taught in Math 301 (Real Analysis I Texts: Introduction to Real Analysis, Bartle & Sherbert, Wiley 2000, 3^rd edition. (B) Introductory Functional Analysis with Applications, E. Kreyszig, Wiley Classics, 1989. (K) Course Web Site: accessible from Prof. Donnay's homepage All materials for the course will be found on the web site or at the course Blackboard site. TA: Sherry Teti (steti@brynmawr.edu) will be the TA for the course and will run several help sessions per week: Mondays 3 pm to 5 pm in Park 337; Tuesdays 3:30 pm to 4:30 pm in Park 349; Thursdays 1 pm to 2 pm in Park 349 Goals of the Course: In this course, you will: Communicate your mathematical reasoning in writing and verbally, both via informal arguments and via more formal proofs. Develop your ability to work as an independent and self-sufficient learner: What to do when you do not know what to do How to take what you have learned in one situation and apply it to a new and different situation (transfer of knowledge) Get comfortable with not knowing the answer immediately Learn material we have not covered in class by reading the book and applying this newly learned information to solve problems. Decide for yourself whether you understand material and learn how to ask yourself questions to check your understanding. Become part of a community of learners who support, encourage and learn from one another. You will demonstrate your progress in these areas by undertaking a final project on a topic of your own choosing. Metric spaces (Ch. 11 B, Ch. 1 K). We will extend the notions of analysis that you have learned for R (sequences, limits, continuity) to the more general setting of metric spaces. Sequences of functions: (Ch. 8 B). Series (Ch. 3.7 K, Ch. 9 K). Differentiation (Ch. 6 B) Integration (Ch. 7 B) Normed spaces, Banach Spaces (Ch 2.1-4, K) Inner Product Spaces, Hilbert Spaces (Ch. 3.1-6). Computer Assignments: We will have occasional computer assignments and will sometimes use computers during the course. We will use Mathematica; but no previous experience is assumed. There will be a mid-term exam, a final exam (both take home exams) and a final project. The tentative schedule for the exams is: 1st exam: probably in the 6^th week (Feb 25- Feb 29). 2nd exam: probably in the 12^th week (April 14 -18). Final Project: Due during exam period. Students will work in two person teams on a project of their choosing. The project might involve using material from the course to study an applied situation, examining a theoretical issue in more depth or studying a topic that extends the material from the course. Projects will be written up in the form of a paper (10 - 15 pages). During the last two weeks of the term, teams will give short (10 – 15 minute) presentations about their projects to the class (providing the class does not get too big!). Homework will be assigned each week. The homework related to the Monday – Friday classes will be due the following Wednesday. Late work will not be accepted unless there is a special situation (ex. serious medical problem) and you get my permission ahead of time. The best way to learn mathematics is by doing. At this level of more theoretical mathematics, problems can take a lot of thought and experimentation to complete. Part of the goal of the course is to help you develop strategies to attack these hard problems (draw pictures, make simpler mini-problems, read the text very carefully, discuss with your classmates). Much learning happens by trying, doing as much and as well as you can, then getting feedback and trying again. So there will be some HW problems where you will be asked to revise and resubmit. Quizzes: We may have occasional mini-quizzes to give you and me a chance to gauge your understanding of key concepts in the course. During class, there will be a mixture of lecturing by the professor and time spent by the students working out problems, discussing their results in groups and having whole class discussions. Research has shown that this type of active participation leads to improved learning. The group work does not go well when members of the group are absent. Therefore it is important that you attend to class. Please be respectful of your fellow students. If you decide to take this course, you must commit to attending class regularly. Attendance will be taken and substandard attendance will be taken into account in deciding your grade. Final grades will be determined using the following percentages: │Homework, quizzes, class participation │25% │ │Test 1 │25% │ │Test 2 │25% │ │Final Project │25% │ │Total │100%│
{"url":"http://www.brynmawr.edu/math/people/donnay/vjdwebpage/Teaching/vjdmath302webS08/302syllabusS08.htm","timestamp":"2014-04-20T03:38:11Z","content_type":null,"content_length":"17031","record_id":"<urn:uuid:d31adaec-a7d0-489f-b7d3-14f77d804997>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00097-ip-10-147-4-33.ec2.internal.warc.gz"}
OP-SF WEB Extract from OP-SF NET Topic #8 --------------- OP-SF NET --------------- September 7, 1995 From: OP-SF Net editor Subject: Death of S. Chandrasekhar (With thanks to Dick Askey, Willard Miller and Ernie Kalnins for providing Prof. S. Chandrasekhar died on August 22, 1995 at the age of 84. He was born in Lahore, India and he studied at the University of Madras, India and at Trinity College, Cambridge, England. He worked at the University of Chicago. He is well known for his work in theoretical astronomy, see for instance his book "The mathematical theory of black holes", Oxford University Press, 1983. This book left unanswered questions about the intrinsic characterisation of the explicit solutions of the spinor equations of mathematical physics in the curved background of a rotating black hole (Kerr space time). In this problem the so called Teukolsky functions play a crucial role. These functions are confluent forms of the solutions of the most general linear homogeneous ordinary differential equation of second order with four regular singularities, viz. Heun's equation. Significant progress in the solution of this problem was made by E.G. Kalnins, W. Miller, Jr., and G.C. Williams. This summer Chandrasekhar's last book appeared, "Newton's `Principia' for the common reader", Oxford University Press, 1995. Further information on his life can be found in the biography "Chandra" by Kameshwar C. Wali, University of Chicago Press, 1991. Back to Home Page of SIAM AG on Orthogonal Polynomials and Special Functions Page maintained by Martin Muldoon
{"url":"http://math.nist.gov/opsf/personal/chandra.html","timestamp":"2014-04-17T15:27:15Z","content_type":null,"content_length":"2296","record_id":"<urn:uuid:bfd1aa62-9429-42d9-8551-91bd6ddc6e08>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00210-ip-10-147-4-33.ec2.internal.warc.gz"}
Burr Ridge, IL Algebra 2 Tutor Find a Burr Ridge, IL Algebra 2 Tutor ...I was a French teacher, but I also did official district homebound instruction (for students that cannot attend regular high school for health related reasons) for 2 years as well. I also tutored officially after school for students on campus, again in various subjects. I have been tutoring for WyzAnt for over 4 years now and very much enjoy it! 16 Subjects: including algebra 2, English, chemistry, French ...During my two and half years of teaching high school math, I have had the opportunity to teach various levels of Algebra 1 and Algebra 2. I have a teaching certificate in mathematics issued by the South Carolina Department of Education. During my two and a half years of teaching high school, I have taught various levels of Algebra 1 and Algebra 2. 12 Subjects: including algebra 2, calculus, geometry, algebra 1 ...Wilcox scholarship). In high school, I scored a 2400 on the SAT, and earned a 5 on the AP Calculus BC exam from self study. I also received a 5 on the AP Statistics exam. I have teaching experience, as well. 13 Subjects: including algebra 2, calculus, geometry, statistics My tutoring experience ranges from grade school to college levels, up to and including Calculus II and College Physics. I've tutored at Penn State's Learning Center as well as students at home. My passion for education comes through in my teaching methods, as I believe that all students have the a... 34 Subjects: including algebra 2, reading, writing, statistics ...I love teaching, and gain great enjoyment from doing so. I have a Bachelor's degree in Math and believe that I can proficiently tutor that subject. I am also a native French speaker and can tutor that subject as well. 4 Subjects: including algebra 2, French, algebra 1, precalculus Related Burr Ridge, IL Tutors Burr Ridge, IL Accounting Tutors Burr Ridge, IL ACT Tutors Burr Ridge, IL Algebra Tutors Burr Ridge, IL Algebra 2 Tutors Burr Ridge, IL Calculus Tutors Burr Ridge, IL Geometry Tutors Burr Ridge, IL Math Tutors Burr Ridge, IL Prealgebra Tutors Burr Ridge, IL Precalculus Tutors Burr Ridge, IL SAT Tutors Burr Ridge, IL SAT Math Tutors Burr Ridge, IL Science Tutors Burr Ridge, IL Statistics Tutors Burr Ridge, IL Trigonometry Tutors Nearby Cities With algebra 2 Tutor Brookfield, IL algebra 2 Tutors Burbank, IL algebra 2 Tutors Burridge, IL algebra 2 Tutors Countryside, IL algebra 2 Tutors Darien, IL algebra 2 Tutors Forest View, IL algebra 2 Tutors Hinsdale, IL algebra 2 Tutors Indianhead Park, IL algebra 2 Tutors Lemont, IL algebra 2 Tutors Oak Brook algebra 2 Tutors Stickney, IL algebra 2 Tutors Westmont algebra 2 Tutors Willow Springs, IL algebra 2 Tutors Willowbrook algebra 2 Tutors Woodridge, IL algebra 2 Tutors
{"url":"http://www.purplemath.com/burr_ridge_il_algebra_2_tutors.php","timestamp":"2014-04-21T04:44:22Z","content_type":null,"content_length":"24222","record_id":"<urn:uuid:1689316c-66cc-42ce-af89-50bab3da7f54>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00551-ip-10-147-4-33.ec2.internal.warc.gz"}
Efficient inference of bacterial strain trees from genome-scale multilocus data • We are sorry, but NCBI web applications do not support your browser and may not function properly. More information Bioinformatics. Jul 1, 2008; 24(13): i123–i131. Efficient inference of bacterial strain trees from genome-scale multilocus data Motivation: In bacterial evolution, inferring a strain tree, which is the evolutionary history of different strains of the same bacterium, plays a major role in analyzing and understanding the evolution of strongly isolated populations, population divergence and various evolutionary events, such as horizontal gene transfer and homologous recombination. Inferring a strain tree from multilocus data of these strains is exceptionally hard since, at this scale of evolution, processes such as homologous recombination result in a very high degree of gene tree incongruence. Results: In this article we present a novel computational method for inferring the strain tree despite massive gene tree incongruence caused by homologous recombination. Our method operates in three phases, where in phase I a set of candidate strain-tree topologies is computed using the maximal cliques concept, in phase II divergence times for each of the topologies are estimated using mixed integer linear programming (MILP) and in phase III the optimal tree (or trees) is selected based on an optimality criterion. We have analyzed 1898 genes from nine strains of the Staphylococcus aureus bacteria, and identified a fully resolved (binary) strain tree with estimated divergence times, despite the high degrees of sequence identity at the nucleotide level and gene tree incongruence. Our method's efficiency makes it particularly suitable for analysis of genome-scale datasets, including those of strongly isolated populations which are usually very challenging to analyze. Availability: We have implemented the algorithms in the PhyloNet software package, which is available publicly at http://bioinfo.cs.rice.edu/phylonet/ Contact: nakhleh/at/cs.rice.edu Genome sequencing technologies are amassing large amounts of data from various organisms that span the Tree of Life, and in the case of bacteria, genomes of several strains of the same bacterium are becoming available (e.g. see the Microbial Genome Project of the US Department of Energy at http://microbialgenomics.energy.gov/). These data are enabling biologists to analyze the relationships among populations and species, as well as understand speciation and population divergence. To elucidate these relationships and understand these processes among different strains of the same bacterium, an accurate reconstruction of the evolutionary history of these strains—the strain tree—is essential, since it serves as the backbone against which events such as horizontal gene transfer and homologous recombination can be identified and assessed. In a sequence of papers, Roger Milkman and co-workers pioneered some of the work in this area, mainly focusing on mapping the ‘clonal ancestry’ in several strains of Escherichia coli (e.g. Milkman and Stoltzfus, 1988; Stoltzfus et al., 1988). In this article, we focus on the problem of inferring the strain tree from a genome-scale set of gene trees whose incongruence is mainly due to homologous recombination. In bacteria, homologous recombination through transformation or conjugation allows for the integration of homologous alien DNA into a host genome (Errington et al., 2001). This process plays an important role in DNA repair as well as bacterial genome diversification. From an evolutionary perspective, and barring any recombination, the evolutionary history of a set of genomes would be depicted by a tree that is the same tree that models the evolution of each gene in these genomes. However, homologous recombination among bacteria decouples the evolution of different genes in their genomes, thus resulting in incongruent (or, discordant) gene trees—a scenario that is illustrated in Figure 1. Three different gene trees within the branches of a strain tree. (a) Coalescent times coincide with divergence times, and strain/gene tree topologies are concordant. (b) Coalescent times do not coincide with divergence times, and strain/gene-tree topologies ... For example, in Figure 1c, looking backwards in time, the gene lineage from strain A and the gene lineage from B persist deep enough into the past that they have not coalesced by the time of the ancestral strain to A, B and C. Thus, the lineage from B may coalesce with the lineage from C more recently than with the lineage from A. As the ancestries of different parts of the genome may take different paths through the phylogeny, e.g. due to homologous recombination, gene trees may differ in topology from the strain tree topology, and an individual gene history might not reflect the shape of the strain tree. Even if this gene history is correctly estimated, the strain-tree estimate based on a single locus may be incorrect. As genome-scale sequence data from thousands of loci in different strains of bacteria become available, it is now critical that appropriate methods and tools be developed for understanding and overcoming the problem of gene-tree discordance in strain-tree A few methods have been introduced recently for analyzing gene trees, reconciling their incongruities and inferring species trees despite these incongruities. To the best of our knowledge, none of these methods have been applied to bacterial genomes, particularly different strains of the same bacterium, with massive gene tree incongruence due to homologous recombination. Generally speaking, each of these methods follows one of two approaches: the combined analysis approach or the separate analysis approach. In the combined analysis approach, the sequences from multiple loci are concatenated, and the resulting ‘supergene’ dataset is analyzed using traditional phylogenetic methods, such as maximum parsimony and maximum likelihood (e.g. Rokas et al., 2003.) In the separate analysis approach, the sequence data from each locus is first analyzed individually, and a reconciliation of the gene trees is then sought so as to optimize certain criterion (e.g. Edwards et al., 2007). Shortcomings of both approaches have been recently reported by various researchers (e.g. Degnan and Rosenberg, 2006; Kubatko and Degnan, 2007). A particular challenge that was not addressed in these recent studies concerns the analysis of very closely related groups of genomes (strains of the same bacterium, for example). In this case, sequence identity at the nucleotide level is very high, which gives rise to gene trees with low resolution, a fact that further complicates the task of inferring the strain tree. Last but not least, the genomic scale of the available data necessitates the development of efficient tools for tackling the task of strain tree inference. In this article, we address the problem of strain-tree inference from genome-scale multilocus data, where gene-tree incongruence is due to homologous recombination. Our proposed model of the optimal strain tree (topology and divergence times) is one that minimizes the amount of deep coalescent events, which is similar to that used in Maddison and Knowles (2006), and our proposed method to infer the optimal tree under this model is based on two widely encountered optimization problems: maximal cliques and mixed integer linear programming. Our method operates in three phases, where in phase I a set of candidate tree topologies is computed using the maximal cliques concept, in phase II divergence times for each of the topologies are estimated using mixed integer linear programming (MILP) and in phase III one tree, or a set of trees, is selected based on an optimality criterion. To assess our method's performance, we have analyzed 1898 genes from nine strains of the Staphylococcus aureus bacteria. A compatibility graph of all different clusters in 1898 corresponding gene trees was built, whose maximal cliques were then computed to reconstruct candidate tree topologies. The compatibility graph has 36 vertices and 304 edges, which correspond to 304 pairs of compatible clusters, and all its maximal cliques were identified in about 0.046 s. For the 24 trees that corresponded to the maximal cliques, we computed divergence times using a novel MILP formulation, which we solved using the CPLEX tool from ILOG. It took CPLEX approximately 1 h to compute the optimal divergence time assignment of a strain-tree topology, given a set of 1898 gene trees, on a 3.2 GHz Intel Pentium 4 machine, running Linux, with 1GB of RAM. The optimal strain tree that our method identified is fully resolved (binary) despite the high degree of sequence identity at the nucleotide level, which further affirms the suitability of the method to analysis of very closely related organisms. 2 METHODS 2.1 Definitions and notations Let T=(V,E) be a tree, where V(T) and E(T) are the tree nodes and tree edges (or, tree branches), respectively, and let T) denote its leaf set. Further, let T is a phylogenetic tree over T). A tree T is said to be rooted if the edges in E are directed and there is a single internal node x with in-degree 0. In this article, we assume only rooted trees, unless stated otherwise. Let T=(V,E) be a rooted tree, and u be a node in V. Given a tree T=(V,E) leaf-labeled by set vV, an edge e=(u,v) and a set Xp^T(v)=u; T[v] is the clade, or subtree, rooted at node v; T[v]; and, MRCA[T](X) is the most recent common ancestor of X—i.e. the node vV(T) such that e=(u,v)E(T). Tree T induces the set T naturally defines a partial order [T] on C. In this article, we assume that any strain tree T always has a special node r with a special edge re=(r,x), where x is the MRCA of all leaves in the tree (e.g. see the strain tree in Fig. 4a). A strain tree (a) and a gene tree (b) on six taxa, used for illustrating the strain-tree branch labeling. Let τ :V(T)→(^+u)=τ(v) for u,vT) and (2) τ(u)>τ(v) for (u,v)E(V). 2.2 Strain-tree inference and gene-tree reconciliation As indicated above, our proposed model of the optimal strain tree (topology and divergence times) is one that minimizes the amount of deep coalescent events. The input to our problem is a set of gene trees (topologies and coalescent times), and the output is a strain tree (topology and divergence times) that minimizes the amount of deep coalescent events and incongruence of the gene trees when reconciled within the branches of the inferred strain tree. The strain tree is built in three phases. First, a set of topology candidates is computed based on the set of clades in the input gene trees. Second, the times for nodes in each of the candidate trees are inferred based on the coalescent times of the input gene trees. Third, the gene trees are reconciled within the branches of each of the tree candidates, and the tree (topology and times) that optimizes a certain criterion (a weighted sum of deep coalescent events, gene/strain-tree incongruence and shallow coalescent events) is selected as the strain tree. 2.2.1 Phase I: inferring strain-tree topology candidates Given a set of gene-tree topologies {T[1],…,T[k]}, it may be that the tree topology that represents the most frequent coalescent history does not reflect the true divergence patterns (Degnan and Rosenberg, 2006). Further, the tree built from the concatenated ‘supergene’ may also not reflect the true speciation patterns (Kubatko and Degnan, 2007). Our working hypothesis is that the strain-tree topology is most probably formed from a set of clusters, each of which appears in at least one of the gene trees. For a set of clusters to define a (rooted) tree, they have to be pairwise compatible. Two clusters (sets of taxa) c[1], c[2]c[1]∩c[2], c). A classical result in phylogenetics states that a set of pairwise compatible clusters defines a unique tree (Semple and Steel, 2003). Based on our working hypothesis and the relationship between clusters and trees, we formulate our heuristic algorithm for finding candidate strain-tree topologies from the set of gene-tree topologies, as outlined in Figure 2. The algorithm first computes C, the set of all clusters appearing in any of the gene trees. It then builds the compatibility graph H=(V[H],E[H]), where V[H]=C, and E[H]V[H]V[H] where E[H]c[i],c[j]) :c[i] is compatible with c[j]}. Based on the aforementioned relationship between clusters and trees, our next task entails computing all maximal sets of pairwise compatible clusters, which amounts to computing the set K of all maximal cliques in the compatibility graph H. Finally, strain tree topology candidates are constructed in a straightforward manner from the set K, where each maximal clique corresponds to a unique tree. Figure 3 illustrates the algorithm on three input gene trees. The set C contains seven distinct clusters, and the compatibility graph H is shown. There are six maximal cliques in H, which implies that the clusters of the input gene trees give rise to six different strain tree topology candidates. The algorithm for estimating a set C of all clusters that appear in any of the gene trees is computed. In Step 2, the compatibility graph ... Example illustrating algorithm ESTIMATESTTOPOLOGY. At the top are three gene trees, which are the input to the algorithm. The set of all clusters occurring in these gene trees is then computed, and their compatibility graph is built. Finally, the set ... 2.2.2 Phase II: estimating strain-tree divergence times Our next task entails estimating the divergence times at internal nodes of each of the strain-tree topology candidates that we computed so as to optimize the weighted sum criterion, as described above. We present a novel optimization based on solving an MILP formulation. The MILP formulation involves a special labeling of the strain-tree topology branches, formulation of temporal constraints based on information from the gene trees, linking coalescence and temporal information and finally putting together all steps into one MILP program. We now describe in details each of these four (1) Labeling the strain-tree branches. In order to model the coalescent of genes on the strain-tree branches, we need to label these branches. As we seek to minimize deep coalescent events (genes that coalesce deeper than their MRCA), we seek a labeling that reflects the ‘depth’ of the coalescent event, i.e. how far the coalescent event of a set X occurred away from the MRCA of X. For each internal node x in the strain tree ST, let P(x)=x[1],x[2],…,x[p]x[1]=x, (2) x[p]=r(ST) and (3) (x[i],x[i−1])E(ST), for all 2≤i≤p. Further, E[P(x)] denotes the list of edges defined by P(x); i.e. E[P(x)]=x[i],x[i−1]):2≤i≤pP(x[2])=x[2],x[3],x[5],rFigure 4a. Given these sequences, a clade rooted at node y in a gene tree may coalesce only on any edge in E[P(x)], where x=MRCA[ST](y). For example, the clade (C,D) in the gene tree in Figure 4b may coalesce only on one of the edges in x[2] is the node in the strain tree in Figure 4a. Given E[P(x)]x[2],x[1]),(x[3],x[2]),…,(x[p],x[p−1])x in a strain-tree topology, we label the edges in E[P(x)] by the numbers 1,2,…,p such that x[s],x[s−1]))=s−2, for 2≤s≤p. For example, for x[2] is the node in the strain tree in Figure 4a, we have the labels: x[3],x[2]))=1, x[5],x[3]))=2 and r,x[5]))=3. This labeling is essential for our MILP formulation, since it will be used to reflect the ‘depth’ of the coalesce events. For example, if clade (C,D) from the gene tree in Figure 4b coalesces on branch (r,x[5]) in the strain tree, then the depth of that coalescence event is r,x[5]))−1, which is 2 (the reason we choose a label that is larger by 1 than the actual depth value is to accommodate shallow coalescence events, as we discuss below). Indeed, in this scenario, (C,D) coalesced two branches deeper than it could have coalesced [which is branch (x[3],x[2])]. We denote by LABELTREE the procedure that computes the lists P(x) and E[P(x)], as well as the labeling of each edge in (2) Temporal constraints. The topology of the strain tree defines a partial order on the times of the internal nodes. This can be represented using linear constraints as τ[u]>τ[v] for every branch (u ,v) in the strain tree. For example, in the strain tree in Figure 4a, we have the constraint Further, each clade in a gene tree may coalesce on any branch in the strain tree on the path from the MRCA of the clade to the branch re. Temporally, this imposes the (linear) constraint τ[x]≤τ[y]≤τ[ r], where y is a clade (equivalently in this case, the set of leaves in that clade) in a gene tree, x=MRCA[ST](y), and r is the special root of the strain tree. For example, in Figure 4, we have the for every clade y in a gene tree and its MRCA x in the strain tree. The binary variable g[y] here takes the value 1 when the coalescence time of y is lower than that of its MRCA in the strain tree and 0 otherwise. Defining T^max to be the maximum time of the root of any of the gene trees in In this case, we add a small value ^−8) to emulate the (3) Associating times with branches through their labels. Let y be a node in the gene tree, x=MRCA[ST](y), and (u,v)E[P(x)] such that u,v))=m. If node y coalesces on branch (u,v) in the strain tree, this introduces a constraint of the form τ[u]≥τ[y]≥τ[v], which translates into the constraint Notice that if f[y]m, then τ[y] is not constrained, which we emulate by constraining τ[y] from above by T^max and from below by 0. In other words, we have the constraint Let M[y]y)}, where κ(y)=|P(x)|−1 for x=MRCA[ST](y). For branch e=(u,v)E[P(x)], where e)=m, we denote s^y(m)=u and t^y(m)=v. For each clade y in a gene tree, we convert the conjunction of constraints (1) and (2) into linear constraints by introducing κ(y) binary variables α[i] for 1≤i≤κ(y), and then writing the following Constraints (A) and (B) connect the branch assignment with the times of that branch, as they guarantee that α[i]=1 if [i]=0 otherwise. Constraint (C) guarantees that either g[y]=1 and all the α values are 0, thus resulting in f[y]=0 based on constraint (D), which corresponds to the case where the coalescence times of clade y in the gene tree is lower than that of its MRCA in the strain tree, or g[y]=0 and exactly one of the α values is 1, which corresponds to the case where y coalesces, under the time assignment to the strain tree, on a unique branch on the path from the MRCA of y to the root. Constraint (D) guarantees that the unique value is chosen from the set M[y]. Constraint (E) states that all the α variables are binary. (4) Putting it all together: the MILP formulation. Now that we have described the constraints and how to write them as linear constraints for CPLEX, we are in a position to introduce the complete MILP formulation for solving the problem of estimating divergence times in a strain tree ST, given a set I(T) by the set of all internal nodes of tree T, and by [GT]I(GT). We seek τ[x], for every internal node x in the strain tree, and f[y] for every internal node y in all gene trees so as to minimize the amount of deep coalescence events and the amount of shallow coalescence. A MILP formulation of this problem, which we refer to as ESTIMATESTTIMES, is given in Figure 5. Algorithm ESTIMATESTTIMES, which is an MILP formulation for estimating the divergence times of a strain-tree topology ST given a set [u], for every ... Notice that since g[y]=1 (which indicates ‘shallow coalescence’) if and only if f[y]=0, the objective function correctly captures the amount of deep coalescence events, and chooses the solution that minimizes it. 2.2.3 Phase III: strain/gene-tree reconciliation and optimality. Given a strain tree (ST,τ[ST]) and a gene tree (GT,τ[GT]), we seek the coalescence history of the gene, given its tree, on the branches of the strain tree. Because both the strain tree and gene trees have times at internal nodes at this stage of the method, this problem is trivial: the coalescence event of a set c of taxa at time t in gene tree GT must occur at time t on the path between the root of ST and the MRCA of c in ST. There is exactly one such point in ST, so this mapping is unique for each cluster in a gene tree. Considering trees with times at internal nodes is very important since temporal constrains implied by divergence and coalescent times render certain coalescent histories invalid. Therefore, whenever such temporal information is available, it must be used, not only for accuracy reasons, but also to achieve further reductions in the size of the space of strain/gene-tree reconciliations, which in turn affects the computational efficiency of existing and newly developed reconciliation methods. Let cST that c is a clade in ST. Our optimality criterion, η(ST,w[il](∑[ccST}] 1), (2) weighted number of deep coalescence events w[dc](∑[ccST, f[c]>0}] (f[c]−1)) and (3) weighted number of shallow coalescence events w[sc](∑[{ccST, f[c]=0}] g[c]). The first term is the number of clades in the gene trees that do not occur in the strain tree. For a clade c in gene tree GT and which also appears in the strain tree ST, the quantity f[c] captures how far (in terms of the number of branches) c coalesced away from its MRCA in ST and g[c] captures if it may not coalesce, given the time assignment in the strain tree. Note that if all gene trees have the same topology as the strain tree, and each cluster coalesces on the branch immediately above its MRCA, then η(ST,w[il], w[dc] and w[sc] can be set in a way to reflect the significance given to each of the three terms in the criterion. For example, if only topological difference among the gene trees and strain tree matters, w[dc] and w[sc] can be set to 0. 2.3 The algorithm Now that we have defined our optimality criterion, the complete algorithm for inferring an optimal strain tree (topology and times) is described in Figure 6. The algorithm for computing the strain-tree topology and divergence times (ST) from an input set of gene trees with coalescence times at internal nodes ( ... 3.1 Sequence data In our experimental study, we used the S.aureus bacteria, which infect humans in the community and hospitals and cause a variety of diseases. We obtained all the sequence data from the site ftp:// ftp.ncbi.nih.gov/genomes/. Table 1 summarizes the nine strains we used. {"type":"entrez-nucleotide","attrs":{"text":"NC_002745","term_id":"29165615","term_text":"NC_002745"}}NC_002745 is S.aureus subsp. aureus N315, which is a prototype of methicillin-resitant S.aureus (MRSA; Kuroda et al., 2001). {"type":"entrez-nucleotide","attrs": {"text":"NC_002758","term_id":"57634611","term_text":"NC_002758"}}NC_002758 is S.aureus subsp. aureus Mu50, which has a moderate resistance to vancomycin by the thickened cell wall. {"type":"entrez-nucleotide","attrs":{"text":"NC_002951","term_id":"57650036","term_text":"NC_002951"}}NC_002951 is S.aureus subsp. aureus COL,which is an early methicillin-resistant isolate. The first isolation was found in a British hospital in 1961 (Gill et al., 2005). {"type":"entrez-nucleotide","attrs":{"text":"NC_002952","term_id":"49482253","term_text":"NC_002952"}}NC_002952 is S.aureus subsp. aureus MRSA252. {"type":"entrez-nucleotide","attrs":{"text":"NC_002953","term_id":"49484912","term_text":"NC_002953"}}NC_002953 is S.aureus subsp. aureus MSSA476. These strains were isolated from hospital and community (Holden et al., 2004). MRSA252 belongs to the clinically important EMRSA-16 clone that is responsible for half of the MRSA infections in the United Kingdom and is one of the major MRSA clones found in the USA (USA200). MSSA476 causes severe invasive diseases in immunocompetent children in the community and belongs to a major clone associated with community-acquired disease. {"type":"entrez-nucleotide","attrs":{"text":"NC_003923","term_id":"21281729","term_text":"NC_003923"}}NC_003923 is S.aureus subsp. aureus MW2 (Baba et al., 2002). This strain was isolated from the community, and caused fatal septicaemia. This strain was reported in mid-west USA. {"type":"entrez-nucleotide","attrs": {"text":"NC_007622","term_id":"82749777","term_text":"NC_007622"}}NC_007622 is S.aureus subsp. aureus RF122 (Herron-Olson et al., 2007). {"type":"entrez-nucleotide","attrs": {"text":"NC_007793","term_id":"87159884","term_text":"NC_007793"}}NC_007793 is S.aureus subsp. aureus USA300 (Diep et al., 2006). USA300 is one of the major strains in the USA, Canada and Europe. {"type":"entrez-nucleotide","attrs":{"text":"NC_007795","term_id":"88193823","term_text":"NC_007795"}}NC_007795 is S.aureus subsp. aureus NCTC 8325 (Gillaspy et al., 2006). 3.2 Identifying orthologous genes To identify orthologous genes, we used the information of both DNA sequence identity and synteny (gene order) as follows. All-against-all BLASTN search with default parameters (Altschul et al., 1997) was performed for the genes in {"type":"entrez-nucleotide","attrs":{"text":"NC_002745","term_id":"29165615","term_text":"NC_002745"}}NC_002745 versus all others. Then, we produced a list of BLASTN hits of the 2669 genes in {"type":"entrez-nucleotide","attrs":{"text":"NC_002745","term_id":"29165615","term_text":"NC_002745"}}NC_002745 for each of the other strains. The lists include genes that have at least 90% sequence identity to the reference gene in {"type":"entrez-nucleotide","attrs":{"text":"NC_002745","term_id":"29165615","term_text":"NC_002745"}}NC_002745 and the length of the BLASTN hit region covers In order to identify orthologous genes conservatively, we considered that orthologous genes should be in a large block of a region in which the gene order is well conserved for all investigated strains. A block is defined such that genes from all strains are continuously located on their genomes with less than three gene skips, which could be created by small indels and annotation errors. To detect such blocks, we performed a synteny survey from the first gene in {"type":"entrez-nucleotide","attrs":{"text":"NC_002745","term_id":"29165615","term_text":"NC_002745"}}NC_002745 (NC_002745_1) to downstream genes. Then, we identified 222 such blocks, which covered in total 1898 3.3 Gene- and strain-tree analysis For each gene, we built a maximum parsimony (MP) tree from its DNA sequences by using PAUP* 4.0 (Swofford, 2003), and rooted the tree using the midpoint method. When the MP heuristic identified more than one tree for a given gene, we used the strict consensus of these trees. We inferred coalescence times at internal nodes in the gene trees using the formula for coalescence time of node y in a gene tree, where B(y)={(a,b) :MRCA(a,b)=y}, d[s] is the number of synonymous substitutions per synonymous sites and r[s] is the rate of synonymous substitutions. In other words, τ[y] is the average of all coalescence times of every pair of genes whose MRCA is node y. Given that the rate of synonymous substitutions is similar across genes (Nei and Kumar, 2000 ), this allowed us to compare the coalescence times across gene trees and use them to infer divergence times in the strain tree. We used r[s]=10^−8, following the findings of Ochman and Wilson (1987 It has been suggested that d[s] may not be constant across the genome due to different codon bias among genes (Retchless and Lawrence, 2007). We found that d[s] and the codon adaptation index (CAI) are in a negative correlation, therefore, we used a linear regression method to correct d[s] for bias caused by non-random usage of codons. The correction is made such that a corrected d[s] corresponds to that with the mean CAI. However, the corrected d[s] measure did not change the relative times we obtained for the strain trees and results are not shown. To get the strain-tree candidates, we used the algorithm ESTIMATESTTOPOLOGY described in Figure 2. The compatibility graph H contained 36 nodes and 304 edges. We used the MaxClique tool of Kevin O'Neill to compute the maximal cliques. The tool identified all maximal cliques in 0.046 s. Additionally, we considered five other candidate tree topologies: (1) T[conc]: the tree topology obtained by the maximum parsimony heuristic, as implemented in PAUP*, on the concatenation of all 1898 gene data sets; (2) T[hf] : the topology of the gene tree that is compatible with the largest number of other gene trees (this tree, shown in Figure 11, is compatible with 1645 of the gene trees); (3) T[avgds]: a tree topology built using the neighbor joining (Saitou and Nei, 1987) method from the average d[s] distances among nine strains; (4) T[avghd]: a tree topology built using the neighbor joining method from the average Hamming distances among nine strains and (5) T[majcons]: the topology of the majority consensus tree of all 1898 gene trees. In total, we have 29 candidate strain-tree topologies. Strain trees with times assigned by MILP, where T[mc] is the best maximal clique tree and the rest are as defined in Section 3.3. The lengths of the ‘shortened’ branches were divided by 10^5, so that the resolution of the trees can be shown ... We then estimated the divergence times of each of the strain tree topology candidates, using the CPLEX tool (from ILOG) to solve the algorithm (MILP program) described in Figure 5. We have implemented a software tool for generating the MILP program from a set of gene trees with coalescence times, following the formulation in Figure 5, in the PhyloNet software package, which is available publicly at http://bioinfo.cs.rice.edu/phylonet/. In the nine-genome dataset that we considered in this study, each MILP program had ~4000 variables and 30000 constraints. Nonetheless, CPLEX solved each program in about 1h. Our first task was to measure the ‘heterogeneity’ in the data, which consisted of the 9Figure 7 shows the topological differences between every pair of the 1898 gene trees, as computed by the Robinson–Foulds (RF; Robinson and Foulds, 1981) distance measure. The RF measure quantifies, for a given pair of trees, the average number of clades that appears in one, but not both, of the trees. Hence, if two trees are identical, the RF distance between them is 0; if they do not share any clades, then the RF distance is 1; and, trees with varying degrees of shared clades have RF distance values between 0 and 1. The RF distances between every pair of the 1898 gene trees. RF distance of 0 indicates the two trees are identical, and RF distance of 1 indicates that the two trees do not share any clades in As shown in Figure 7, while blue (low RF values) is the dominating color, there are many pairs of trees that have RF distance of at least 0.3. In fact, among the 1898 gene trees, there were over 400 different topologies. Given our conservative selection of the orthology groups, which almost eliminates the possibility of gene tree discordance due to events such as horizontal gene transfer and gene duplication/loss, this result indicates massive gene-tree discordance due to stochastic effects of the coalescent (incomplete lineage sorting). Furthermore, it is important to point out that the majority of the gene trees were not binary, since the percent identity among the orthologous sequences was very high. This lack of resolution of the gene tree topologies may give a false indication of high concordance (low RF values) among the gene trees, even though this may not be the case in reality. Alternatively, one may quantify the ‘compatibility’, rather than ‘similarity’ (as measured by the RF distance), among gene trees. However, this suffers from the fact that compatibility measures are not true metrics, and in particular do not satisfy the triangle inequality property, which may distort the picture emerging from such an analysis. As indicated in the Section 1 and illustrated in Figure 1, it may be the case the gene trees have the same topology, yet they disagree in their coalescence times (times at their internal nodes). Therefore, what we studied next was the distribution of coalescence times of each cluster of taxa across all gene trees in which the cluster occurs (recall that a cluster occurs in a tree if the tree contains a clade whose leaves are the only members of that cluster); the results are shown in Figure 8. The distributions of coalescence times of all 36 clusters of taxa in the 1898 gene trees, as calculated by Formula (3.3), yet without division by r[s]≈10^−8. The figure shows that, even with the exclusion of possible outliers, each cluster of taxa has a wide distribution of coalescence times across all gene trees in which it occurs. Further, what makes the computational analysis of such a dataset particularly challenging is that large extent of overlap of distributions of the different clusters. Dealing with this overalp is where most of the computational time of solving our MILP formulation is spent. After we characterized the heterogeneity in the data, we turned to the main issue, namely estimating the strain-tree topology and divergence times from the set of 1898 gene trees. As described in the previous section, we considered 29 strain-tree topology candidates. For each of these 29 topology candidates, we solved the MILP formulation as outlined in Figure 5, once with w[dc]=w[sc]=1, and another with w[sc]=5w[dc]. In both cases, the same tree topology candidate of all 24 maximal cliques emerged as the optimal one, yet with differing times. Therefore, we report the results of only the optimal solution under w[dc]=w[sc]=1. For a clearer presentation, we show each of the three terms in the optimality criterion described in Section 2.2.3 individually, with Figure 9 showing the number of missing (or, discordant) clades, and the stacked bars in Figure 10 showing the sum of the depths of deep coalescence events (the blue bars) and the number of shallow coalescence events (the red bars). The number of gene tree clades that do not appear in the strain tree. Trees 1 to 24 are built from maximal cliques. Trees 25, 26, 27, 28 and 29 are T[conc], T[hf], T[avgds], T[avghd] and T[majcons], The number of deep coalescences, sf=∑[{cf[c]>0] [c ST]f[c]−1), and the number of shallow coalescences, sg=∑[{c f[c]=0] [c ... Figure 9 shows that the first tree out of the 24 maximal clique trees has the least disagreements with the set of 1898 gene trees, with trees 8 and 9 differing from it by about 70 clades. The other 21 maximal clique trees are much less optimal in this context, with the best of them disagreeing with the gene trees in at least 400 more clades. We denote by T[mc] the first tree, which is the best in this context among all 24 maximal clique trees. Out of the additional five trees, T[hf] is clearly the best in this context, and the only one that is better than T[mc]. Both trees, T[mc] and T[hf] are shown in Figure 11. The tree T[mc] is a refinement of the tree T[hf]; that is, T[mc] contains all the clades in T[hf], plus additional ones. In this case, T[hf] has the clade (USA300, NCTC8325, COL) unresolved, while T[mc] has it resolved as (NCTC8325, (USA300, COL)). When considering the optimality of both trees, T[mc] and T[hf], as measured by the amount of deep coalescence and shallow coalescence events, as shown in Figure 10, they are identical. The significance of this result comes from the fact that, while the unresolved clade (USA300, NCTC8325, COL) has three possible refinements (1) (NCTC8325, (USA300, COL)), (2) ((NCTC8325, USA300), COL) and (3) ((NCTC8325, COL) USA300), the MILP formulation led to a fully binary strain tree that has exactly the same coalescence scenarios among all gene trees. Notice that the majority consensus tree T[majcons] is the optimal among all 29 trees in terms of the coalescence scenarios. However, this tree has two problems. First, in terms of missing clades, it is one of the least optimal, as shown in Figure 9. Further, it is highly unresolved, containing only two internal branches, as shown in Figure 11. The concatenation tree, T[conc] is the best of all trees in terms of minimizing the number of shallow coalescence events, yet is the worst in terms of the sum of the depth of all deep coalescence events. Further, it is the only tree that had the wrong outgroup. This indicates that concatenation of gene sequences and reconstructing a strain tree from the resulting ‘supergene’ may result in very inaccurate trees, particularly when there is a massive extent of discordance among gene trees, a fact that has already been established through extensive experimental studies (Kubatko and Degnan, 2007). While it seems from Figure 11 that T[conc] indicates very large divergence time between N315 and Mu50, this is but a reflection of time estimation given that these two strains did not form a single clade in the concatenation tree. To solve this problem, we will consider in future development of our tool all possible refinements of any non-binary strain-tree topology candidate. The other two trees, T[avgds] and T[avghd] are very similar in terms of topology, as shown in Figure 11, and both fall ‘in the middle’ in terms of optimality, as shown in Figures 9 and and10.10. Therefore, our proposed evolutionary history of all nine strains of S.aureus is the tree T[mc], shown in Figure 11. In this article, we introduced a three-phase method for efficient inference of an optimal strain tree from genome-scale multilocus data. We have implemented all phases of our method and analyzed nine strains of S.aureus. Our hypothesis for the ‘vertical’ evolutionary history of these nine strains is the tree T[mc], shown in Figure 11. It is very important to note that even though the closely related set of strains has a very high degree of sequence identity at the nucleotide level, our method was able to infer a fully resolved evolutionary tree for them. Further, the method computed and evaluated each of 24 possible strain trees within an hour, which is efficient, considering that we used about 1900 loci from nine strains in this analysis. Two immediate future directions that we will pursue are (1) studying the performance of our method in extensive simulations and (2) investigating the evolutionary diameter of a dataset within which the method reliably returns good strain, or even species, trees. It is worth mentioning that our method can be adapted in a straightforward manner to handle multiallelic loci in the data. The authors wish to thank the three anonymous reviewers for very helpful comments on the manuscript. Funding: L.N. and C.T. were supported in part by DOE grant DE-FG02-06ER25734 and NSF grant CCF-0622037. H.I. and R.S. were supported in part by NSF grant CCF-0622037. Conflict of Interest: none declared. • Altschul S, et al. Gapped BLAST and PSI-BLAST: a new generation of protein database search programs. Nucleic Acids Res. 1997;25:3389–3402. [PMC free article] [PubMed] • Baba T, et al. Genome and virulence determinants of high virulence community-acquired MRSA. Lancet. 2002;359:1819–1827. [PubMed] • Degnan J, Rosenberg N. Discordance of species trees with their most likely gene trees. PLoS Genetics. 2006;2:762–768. [PMC free article] [PubMed] • Diep B, et al. Complete genome sequence of USA300, an epidemic clone of community-acquired meticillin-resistant Staphylococcus aureus. Lancet. 2006;367:731–739. [PubMed] • Edwards S, et al. High-resolution species trees without concatenation. Proc. Natl Acad. Sci. USA. 2007;104:5936–5941. [PMC free article] [PubMed] • Errington J, et al. DNA transport in bacteria. Nat. Rev. Mol. Cell Biol. 2001;2:538–544. [PubMed] • Gill S, et al. Insights on evolution of virulence and resistance from the complete genome analysis of an early methicillin-resistant Staphylococcus aureus strain and a biofilm-producing methicillin-resistant Staphylococcus epidermidis strain. J.n Bacteriol. 2005;187:2426–2438. [PMC free article] [PubMed] • Herron-Olson L, et al. Molecular correlates of host specialization in staphylococcus aureus. PLoS ONE. 2007;2:e1120. [PMC free article] [PubMed] • Holden M, et al. Complete genomes of two clinical Staphylococcus aureus strains: evidence for the rapid evolution of virulence and drug resistance. Proc. Natl Acad. Sci. USA. 2004;101:9786–9791. [PMC free article] [PubMed] • Kubatko L, Degnan J. Inconsistency of phylogenetic estimates from concatenated data under coalescence. Syst. Biol. 2007;56:17–24. [PubMed] • Kuroda M, et al. Whole genome sequencing of meticillin-resistant Staphylococcus aureus. Lancet. 2001;357:1225–1240. [PubMed] • Maddison W, Knowles L. Inferring phylogeny despite incomplete lineage sorting. Syst. Biol. 2006;55:21–30. [PubMed] • Milkman R, Stoltzfus A. Molecular evolution of the Eschericia coli chromosome. II. Clonal segments. Genetics. 1988;120:359–266. [PMC free article] [PubMed] • Nei M, Kumar S. Molecular Evolution and Phylogenetics. Oxford: Oxford University Press; 2000. • Ochman H, Wilson A. Evolution in bacteria: evidence for a universal substitution rate in cellular genomes. J. Mol. Evol. 1987;26:74–86. [PubMed] • Retchless A, Lawrence J. Temporal fragmentation of speciation in bacteria. Science. 2007;317:1093–1096. [PubMed] • Robinson D, Foulds L. Comparison of phylogenetic trees. Math. Biosciences. 1981;53:131–147. • Rokas A, et al. Genome-scale approaches to resolving incongruence in molecular phylogenies. Nature. 2003;425:798–804. [PubMed] • Saitou N, Nei M. The neighbor-joining method: a new method for reconstructing phylogenetic trees. Mol. Biol. Evol. 1987;4:406–425. [PubMed] • Semple C, Steel MA. Phylogenetics. Vol. 24. Oxford: Oxford University Press; 2003. • Stoltzfus A, et al. Molecular evolution of the Escherichia coli chromosome. I. Analysis of structure and natural variation in a previously uncharacterized region between trp and tonB. Genetics. 1988;120:345–358. [PMC free article] [PubMed] • Swofford DL. PAUP*: Phylogenetic Analysis using Parsimony. Sunderland, MA: Sinauer Associates; 2003. (* and others methods), Version 4. Articles from Bioinformatics are provided here courtesy of Oxford University Press Your browsing activity is empty. Activity recording is turned off. See more...
{"url":"http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2718627/?tool=pubmed","timestamp":"2014-04-17T16:07:09Z","content_type":null,"content_length":"131854","record_id":"<urn:uuid:e7631de8-9403-45f9-bf6e-1a1a7034dcf7>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00589-ip-10-147-4-33.ec2.internal.warc.gz"}
Maximal dimension of abelian ideals of a Lie algebra and extensions of the ground field up vote 5 down vote favorite For a Lie algebra $L$ of dimension $n$ over a field ${\mathbb F}$ we denote by $\beta(L)$ the maximal dimension of abelian ideals of $L$. In general, $\beta(L)$ is not preserved under extensions of the ground field (see e.g. Example 2.7 in http://homepage.univie.ac.at/dietrich.burde/papers/burde_39_max_ab.pdf). Do you know any example in which $\beta(L)<\beta(L\otimes_{\mathbb F} \bar{{\mathbb F}})=n-1$, where $\bar{\mathbb F}$ is the algebraic closure of ${\mathbb F}$? (In other words, is it possible that $L\otimes_{\mathbb F} \bar{{\mathbb F}}$ contains an abelian ideal of codimension 1 and $L$ has no abelian ideal of codimension 1?) I am mainly interested in the case where $L$ is a restricted Lie algebra over a field of characteristic $p>0$. lie-algebras restricted-lie-algebras It seems risky to jump into prime characteristic here (including finite fields) when the existing literature is mostly oriented toward characteristic 0. But the question asked does make sense. – Jim Humphreys Mar 10 '12 at 16:59 Professor Humphreys: I agree that jumping in positive characteristic could seem risky, but I faced with this problem when I was dealing with restricted enveloping algebras satisfying certain polynomial identities. This is the real motivation of my question. – Salvatore Siciliano Mar 10 '12 at 17:34 What I find strange is that in the paper you linked, Burde and Ceballos formulate Proposition 3.1 only for the case when the field has characteristic zero. In my opinion, his proof (modulo one trivial typo: "Rescaling $e_1$" should be "Rescaling $e_2$") works equally well in every characteristic. – darij grinberg May 5 '12 at 0:19 darij: I agree with you. – Salvatore Siciliano May 5 '12 at 12:11 add comment 1 Answer active oldest votes There's no such example. Since this is convenient, I denote by $L$ the Lie algebra over the algebraic closure. Let $A$ be a codimension 1 abelian ideal and let us show that some (possibly other) abelian ideal $A'$ is defined over the ground field, i.e. is a hyperplane that can be defined by a linear equation with coefficients in $K$. Since $L/A$ is abelian, we have $[L,L]$ contained in $A$. In particular, $A$ is contained in the centralizer of $[L,L]$. In case $A$ is equal to the centralizer of $[L,L]$, this is defined over the ground field and thus we are done. So I now assume that the centralizer of $[L,L]$ is all of $L$ (so $L$ is nilpotent of step 2). up vote 4 The case when $L$ is abelian is trivial. If the derived subalgebra of $L$ is 1-dimensional, then the Lie algebra law can be viewed as an alternate form. Since $A$ is a codimension 1 down vote isotropic subspace for this form, it is easy to check that the kernel of this alternate form has codimension 2 (and is defined over the ground field) and contains $[L,L]$ because $L$ is accepted nilpotent of step 2. Every hyperplane $A'$ containing this kernel is an abelian ideal; we can pick it to be defined over the ground field. If the derived subalgebra of $L$ is at least 2-dimensional, there exist two linear forms $f_1,f_2$ on $L$ such that the alternate bilinear forms $(x,y)\mapsto b_i(x,y)=f_i([x,y])$, $i= 1,2$ are not proportional. They can be chosen to be defined over the ground field. Let $K_i$ be the kernel of $b_i$. Then $K_i$ is contained in $A$ (otherwise $b_i$ would be zero). Besides, $K_1$ and $K_2$ have codimension 2 (because $A$ is an isotropic subspace for $b_i$) and are not equal, because otherwise $b_1$ and $b_2$ would be alternate forms on the plane $L/ K_1$ and would thus be proportional as the set of antisymmetric matrices of size 2 is 1-dimensional. So the codimension of $K_1+K_2$ is at most 1. Since it's contained in $A$, we deduce that $A=K_1+K_2$. So $A$ is defined over the ground field and the proof is complete. Very nice! Even better, it seems that this argument works in the infinite dimensional case, as well! – Salvatore Siciliano May 11 '12 at 22:43 add comment Not the answer you're looking for? Browse other questions tagged lie-algebras restricted-lie-algebras or ask your own question.
{"url":"http://mathoverflow.net/questions/90831/maximal-dimension-of-abelian-ideals-of-a-lie-algebra-and-extensions-of-the-groun","timestamp":"2014-04-24T12:10:58Z","content_type":null,"content_length":"58815","record_id":"<urn:uuid:bf197d49-02ab-4cc8-9b77-069e71f5e210>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00540-ip-10-147-4-33.ec2.internal.warc.gz"}
Summary: Dynamic Systems and Applications 12:1-2 (2003) 9-22 Abstract. Concordia College, Department of Mathematics and Computer Science, Moorhead, MN 56562 USA. E-mail: andersod@cord.edu University of Georgia, Department of Mathematics, Athens, GA 30602 USA. E-mail: johoff@math.uga.edu ABSTRACT. Green's function for an even-order focal problem, where the derivatives alternate between nabla and delta derivatives, is found, and several examples are given for standard time scales. The signs of the function and its derivatives are determined, and whether a symmetry condition holds for an arbitrary time scale is also discussed. The results are then applied to give existence criteria for a positive solution to a nonlinear boundary-value problem. AMS (MOS) Subject Classification. 39A10, 34B10. 1. Preliminaries A time scale T is an arbitrary nonempty closed subset of the reals R [5], [6]. We define the sets T and T by
{"url":"http://www.osti.gov/eprints/topicpages/documents/record/940/1300432.html","timestamp":"2014-04-18T21:54:40Z","content_type":null,"content_length":"8157","record_id":"<urn:uuid:2b883d01-05cd-4f5c-89db-79068cb0f9f8>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00617-ip-10-147-4-33.ec2.internal.warc.gz"}
DRUMMERWORLD OFFICIAL DISCUSSION FORUM - View Single Post - Do maple sticks reduce shock? Re: Do maple sticks reduce shock? ***WARNING: SCIENCE AHEAD*** In order to determine whether or not one stick transfers more shock than another, we must look at both the sticks density (weight, if you will), as well as the sticks hardness. We shall assume that all uncontrollable factors (velocity of hit, dryness of wood, straightness of grain, etc.) to be equal for our purposes. So here we go: Take your favorite stick, I'll pick a 2B. Now envision the same stick in maple, hickory, and oak. Maple is the lightest, and oak is the heaviest. Therefore, by definition, maple is the least dense, and oak is the most dense, with hickory somewhere in between. This is not the only factor in shock absorption. We must factor in the hardness of each different wood type. According to the Janka Hardness Rating of wood (widely accepted hardness test, I'll explain at bottom), hickory is harder than maple, and maple is harder than oak. So now we know that the heaviest stick is also the softest, the least dense stick is middle ground in hardness, and the hardest stick is middle ground in density. We know that the harder something is the more energy it transfers (think bricks vs foam), and the heavier it is the more energy it produces (again, bricks vs foam). So if each stick were to hit the same head at the same velocity, we can conclude that: 1. The oak stick, being the heaviest, will produce the most energy upon impact, but being the softest it will absorb the most energy of the three. 2. The hickory stick, being the middle weight, will produce more energy than maple but less than oak upon impact. It will transfer the most amount of energy being the hardest. 3. The maple stick being the lightest will produce the least amount of energy of the three upon impact. Being in the middle in terms of hardness, it will transfer less energy than hickory but more than oak. In conclusion, the harder stick transfers more energy, but the heaviest stick produces more. So do maple sticks reduce shock? Sure, compared to hickory. But lets not forget how hard you play and the weight of the stick most definitely come into play. And lets not forget that ALL sticks transfer energy to your hands as long as you are holding on to them, no grip exceptions to this either. If it vibrates and you touch it, energy will be transferred. As I promised, the Janka Hardness Rating is measured by determining how much pound force (lbf) is required to embed a .444" (11.28mm) diameter steel ball halfway into a piece of wood (half the ball, not halfway through the wood). The harder the wood is, the more force is required. Hickory, hard rock maple, and white oak have hardness ratings of 1820lbf, 1450lbf, and 1325lbf respectively. Okay, I promise, no more science for today :)
{"url":"http://www.drummerworld.com/forums/showpost.php?p=1155628&postcount=9","timestamp":"2014-04-17T13:44:45Z","content_type":null,"content_length":"15267","record_id":"<urn:uuid:5114c6cc-f15b-4f12-9129-9e8fc74f7cfa>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00467-ip-10-147-4-33.ec2.internal.warc.gz"}
Solving det(A) = 0 [ Follow Ups ] [ Post Followup ] [ Netlib Discussion Forum ] [ FAQ ] Posted by Chris on June 17, 1998 at 10:43:27: As a part of a system of nonlinear equations, one function to be solved is det(A) = 0, where the elements of the matrix A are indirectly dependent on the vector of unknowns. My problem is that for poor initial guesses det(A) is quite large(10^30) and continuing a numerical method until there is no change in the unknowns (i.e., below machine precision) reduces det(A) only to 10^15 (which is still quite far from zero). Does anybody know of a way to scale A (or know of an equivalent solution to det(A) = 0) that will give a more accurate solution? Currently, different initial guesses and numerical methods will continue until there is no change in the unknowns, but the "solution" is different for each case (within approx. 20% of each other, though).
{"url":"http://www.netlib.org/utk/forums/netlib/messages/413.html","timestamp":"2014-04-17T15:27:11Z","content_type":null,"content_length":"1862","record_id":"<urn:uuid:f7992936-9459-44ce-b0d3-9c6652e5d199>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00448-ip-10-147-4-33.ec2.internal.warc.gz"}
A combinatorial dichotomy I have just read about the following very nice dichotomy: suppose we have an infinite set X, and a collection of subsets C in X; suppose further we look at all subsets F of X of finite size n, and at the subsets of F which can be defined by intersection of F with an element of C. How rich can this collection of subsets of F be, as F varies over all subsets of fixed (but growing) size? For example, X could be the real line, and C the collection of half-lines ]-∞,a]. Then if we enumerate the finite subset F in increasing order $x_1\lt x_2 \lt \ldots \lt x_n$ the subsets we obtain by intersecting with C are rather special: apart from the emptyset, we have only the n subsets of elements in increasing order up to some a: $x_1\lt x_2 \lt \ldots \lt x_r \le a$ with r≤n. In particular, there are only n+1 subsets of F which can be obtained in this manner, much less than the 2^n subsets of F. As a second example, if C is the collection of all open subsets of the real line, we can clearly obtain all subsets of F as intersection of an element of C and F. The dichotomy in question is that, in fact, those two examples are typical in the following sense: either one can, for all n, find an n element set F and recover all its subsets as intersection with C (as in the second example); or for any n and any F in X of size n, the number of subsets obtained from F by intersecting with C is bounded by a polynomial function of n. So it is not possible to have intermediate behavior (subexponential growth which is not polynomial), and this is certainly surprising at first (at least for me). This very nice fact is due to Vapnik-Chernovenkis and Shelah, independently (published in 1969 and 1971, respectively). What is quite remarkable is that the first authors were interested in basic probability theory (they found conditions for the “empirical” probability of an event in the standard Bernoulli model to converge uniformly to the mathematical probability over a collection of events, generalizing the weak law of large numbers), while Shelah was dealing with model-theoretic properties of various first-order theories (in particular, stability). In fact, these references are given by L. van den Dries (Notes to Chapter 5 of “Tame topology and O-minimal structures”, which is where I’ve read about this, and which is available in the Google preview), but whereas it’s easy to find the result in the paper of Vapnik-Chernovenkis, I would be hard put to give a precise location in Shelah’s paper where he actually states this dichotomy! This is a striking illustratation both of the unity and divergence of mathematics… The proof of the dichotomy, as one can maybe expect (given the information that it is true), is clever but fairly simple, and gives rather more precise information than what I stated. Let’s say that C is a rich uncle of a finite set F if any subset of F is the intersection of F with a subset in C. We must show that either C is a rich uncle for at least one finite subset of every order, or else C only represents relatively few subsets of any finite subset. First, a lemma states that: Given a finite set of size n and a collection D of subsets of F, which contains (strictly) more subsets than there are subsets of F of size up to (but strictly less than) some d, one can always find in F a subset E of size d such that D is a rich uncle of E. Note that this is best possible, because if we take D to be those substes F of size up to (and excluding) d, it certainly can not be a rich uncle of a set of size d. If we grant this lemma, the proof of the dichotomy proceeds as follows: assume we are not in the first case, so for some d, C is a rich uncle for no subset of order d. Let n>d be given (to get polynomial behavior, we can restrict to this case), and let F be a subset of order n. The lemma (applied with D the intersections of C with F), and the definition of d, imply by contraposition that the number of subsets of F which are obtained by intersection from C is less than the number of subsets of a set of order n which are of order d at most. But this is a polynomial function of n, of degre d. As for the lemma, I leave the proof as an exercise (see page 80 in the book of van den Dries, which is also in the preview), with a hint: proceed by induction on n. (One is tempted, in view of the statement, to use the pigeon-hole principle to say that D must contain one subset at least of order d, but the proof by induction doesn’t use that). Now I am going to try and think if I can find some other application of this fact… One Response to “A combinatorial dichotomy” 1. There’s a discussion of a different (probably better) proof of this dichotomoy in a post by Tim Gowers (see Example 3 there). Post a Comment
{"url":"http://blogs.ethz.ch/kowalski/2008/05/23/a-combinatorial-dichotomy/","timestamp":"2014-04-20T21:11:58Z","content_type":null,"content_length":"22322","record_id":"<urn:uuid:ecdf975f-6f80-49df-a5e6-b527a6df5578>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00048-ip-10-147-4-33.ec2.internal.warc.gz"}
Partial derivative of (n+1) dimensional integral - Evans PDE September 22nd 2012, 02:19 AM #1 Sep 2012 Partial derivative of (n+1) dimensional integral - Evans PDE Hi I'm just wondering what steps Evans takes in his PDE book on page 50. He defines $u(x,t)=\int_0^t \int_{\mathbb{R}^n}\Phi(y,s)f(x-y,t-s)\,dy\,ds$ And then goes on to say: $u_t(x,t)=\int_0^t\int_{\mathbb{R}^n}\Phi(y,s)f_t(x-y,t-s)\,dy\,ds + \int_{\mathbb{R}^n}\Phi(y,t)f(x-y,0)\,dy$. I do not know how he got here: I think I can get this far: $\frac{u(x,t+h)-u(x,t)}{h} = \frac{1}{h}\left(\int_0^t \int_{\mathbb{R}^n}\Phi(y,s)[f(x-y,t+h-s)-f(x-y,t-s)]\,dy\,ds+\int_t^{t+h}\int_{\mathbb{R}^n}\Phi(y,s)f (x-y,t+h-s)\,dy\,ds\right)$ Taking the limit as $h\rightarrow 0$, I can kind of see how the left term on the RHS can go to $\Phi(y,s)f_t(x-y,t-s)$, but even this I'm not sure of, I don't really understand, I can't really see why it shouldn't it be $f_{t-s}(x-y,t-s)$ since isn't this the 'variables' that is having an infinitesimally small h added on to? I have no idea how the right term of the RHS of the above equation transforms as well. I think the thing that is most troubling me is that the variable t is in the integral and the integrand and what we are taking a limit of, h, is in the integral... and in the integrand once we take the 1/h back inside! Can someone please help?! Thanks! Last edited by benb89; September 22nd 2012 at 02:37 AM. Re: Partial derivative of (n+1) dimensional integral - Evans PDE The general approach to $\frac{d}{dt}\int_{a}^{t} \phi(t, x) dx$ Let $I(t) = \int_{a}^{t} \phi(t, x) dx$. Then: $I(t+h)-I(t) = \int_{a}^{t+h} \phi(t+h, x) dx - \int_{a}^{t} \phi(t, x) dx$ $= \int_{a}^{t+h} \phi(t+h, x) dx - \int_{a}^{t} \phi(t+h, x) dx + \int_{a}^{t} \phi(t+h, x) dx - \int_{a}^{t} \phi(t, x) dx$ (yes, that's the common "add and subtract the same thing" trick) $= \int_{t}^{t+h} \phi(t+h, x) dx + \int_{a}^{t} (\phi(t+h, x) - \phi(t, x)) dx$. Then $\frac{I(t+h)-I(t)}{h} = \frac{1}{h}\int_{t}^{t+h} \phi(t+h, x) dx + \int_{a}^{t} \frac{\phi(t+h, x) - \phi(t, x)}{h} dx$. The 2nd integral gives you the derivative. The first is the kind that shows up in the fundemental theorem of calculus. There are several ways to do it, but maybe use the mean value theorem for For each $h$, there's an $x_h \in [t, t+h]$ (thinking of h as positive) such that $\int_{t}^{t+h} \phi(t+h, x) dx = \phi(t+h, x_h)h$. Thus as $h$ goes to zero, $x_h$ goes to $t$, and if $\phi$ is continuous, then: $\lim_{h \rightarrow 0} \frac{1}{h} \int_{t}^{t+h} \phi(t+h, x) dx = \lim_{h \rightarrow 0} \phi(t+h, x_h) = \phi(t, t)$. For the 2nd integral, switching limits and integrals is switching a limiting process, so in general isn't allowed. However, in most practical cases it will be permitted, I just want to warn you that this really needs a theorem to be justified (the ready availability of such theorems for the Lebesque integral is why it's preferred over the Riemann integral). Ignore that here, just for the sake of the derivation: $\lim_{h \rightarrow 0} \int_{a}^{t} \frac{\phi(t+h, x) dx - \phi(t, x)}{h} dx = \int_{a}^{t} \left\{ \lim_{h \rightarrow 0} \frac{\phi(t+h, x) dx - \phi(t, x)}{h} \right\} dx$ $= \int_{a}^{t} \frac{\partial \phi(t, x)}{\partial t} dx$. Putting it together: Then $I'(t) = \lim_{h \rightarrow 0} \frac{I(t+h)-I(t)}{h}$ $= \lim_{h \rightarrow 0} \frac{1}{h}\int_{t}^{t+h} \phi(t+h, x) dx + \lim_{h \rightarrow 0} \int_{a}^{t} \frac{\phi(t+h, x) dx - \phi(t, x)}{h} dx$. Therefore $\frac{d}{dt}\int_{a}^{t} \phi(t, x) dx = \phi(t, t) + \int_{a}^{t} \frac{\partial \phi(t, x)}{\partial t} dx$. It's sloppy, but that's the gist of what's going on. In your case, that becomes (note that x is considered a constant for these purposes, so, for clarity, I won't even include it in the function notation): $u(t) = \int_{0}^{t} \phi(t, s) ds$, where $\phi(t, s) = \int_{\mathbb{R}^n} \Phi(y,s)f(x-y, t-s) dy$. Thus $u_t(t) = \phi(t, t) + \int_{0}^{t} \frac{\partial \phi(t, s)}{\partial t} ds$. The first term is $\phi(t, t) = \int_{\mathbb{R}^n} \Phi(y,(t))f(x-y, t-(t)) dy = \int_{\mathbb{R}^n} \Phi(y,t)f(x-y, 0) dy$. The 2nd term gives the $\int_0^t \left\{ \int_{\mathbb{R}^n} \Phi(y,s)f_t(x-y, t-s) dy \right\} ds$. Thus $u_t(x, t) = \int_{\mathbb{R}^n} \Phi(y,t)f(x-y, 0) dy + \int_0^t \int_{\mathbb{R}^n} \Phi(y,s)f_t(x-y, t-s) dy ds$. (Again, that's not going to always hold - you'd need to add some conditions to know in which cases it holds. It's a derivation, not a proof.) Last edited by johnsomeone; September 22nd 2012 at 06:25 AM. Re: Partial derivative of (n+1) dimensional integral - Evans PDE You, Sir, are quite possibly my favourite person on this planet right now. That is wonderful. Thankyou so much. Are you a math PhD? I hope you're not an undergrad - or you have really made me feel like an IDIOT at the moment. A happy and relieved idiot. I am doing a course in measure theory at the moment. I have seen first hand how simple the Lebesgue integral makes things. It's a wonderful construction. Again, I am very very appreciative. I have another question about this section in Evans as well. If you are a PDE expert - you may be the one to help me Thanks from Australia! (My parents are visiting Washington DC soon I should tell them to go find you and thank you personally... but that would be weird...) Re: Partial derivative of (n+1) dimensional integral - Evans PDE You, Sir, are quite possibly my favourite person on this planet right now. That is wonderful. Thankyou so much. Are you a math PhD? I hope you're not an undergrad - or you have really made me feel like an IDIOT at the moment. A happy and relieved idiot. I am doing a course in measure theory at the moment. I have seen first hand how simple the Lebesgue integral makes things. It's a wonderful construction. Again, I am very very appreciative. I have another question about this section in Evans as well. If you are a PDE expert - you may be the one to help me . Thanks from Australia! (My parents are visiting Washington DC soon I should tell them to go find you and thank you personally... but that would be weird...) September 22nd 2012, 03:27 AM #2 Super Member Sep 2012 Washington DC USA September 22nd 2012, 04:15 PM #3 Sep 2012 September 22nd 2012, 04:16 PM #4 Sep 2012
{"url":"http://mathhelpforum.com/calculus/203852-partial-derivative-n-1-dimensional-integral-evans-pde.html","timestamp":"2014-04-17T14:57:14Z","content_type":null,"content_length":"51346","record_id":"<urn:uuid:ff99ea7a-0421-422a-819e-dc4b7f002b37>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00397-ip-10-147-4-33.ec2.internal.warc.gz"}
[SciPy-user] Solving an equation using scipy.optimize.newton Anne Archibald peridot.faceted@gmail.... Wed Sep 5 21:21:50 CDT 2007 On 05/09/07, fdu.xiaojf@gmail.com <fdu.xiaojf@gmail.com> wrote: > I'm trying to solve an equation f(x) = 0 with scipy.optimize.newton. > However the problem isn't so simple. There are bound constraints for my > equation: the equation cannot be evaluated when x is out of [Min, Max], but > the root is always in the interval of [Min, Max] > When newton() iterates to find a root, it sometimes try to evaluate the > equation with a x out of [Min, Max], and then error occurs. > How to solve this problem ? > I couldn't easily find two points with different signs every time, so methods > like brentq don't work here. Are you sure your function has a zero at all? If it's something like a polynomial, you may find that sometimes it fails to have a root, which will of course be a problem for a root-finding algorithm. It's probably a good idea to look at this as two problems: * Find points of opposite sign in your interval. * Narrow this down to an actual root. Once you've done the first, the second can be done using (say) brentq without worrying that you're going to leave the interval of interest. So how do you find a place where your function crosses the y-axis? Ideally you'd know something about it analytically. But it sounds like you've tried that, to no avail. You could blindly evaluate the function, perhaps on a grid, a pseudorandom or subrandom sequence of points, hoping to find one that gave a negative value. You could run a one-dimensional minimizer, with a wrapper around your function that raises an exception as soon as it sees a negative value, but here too to get started you need three points where the middle one is the lowest. If you're really stuck, you can try one of the constrained multidimensional minimizers (but be warned some of them evaluate at points that violate the constraints!), again with the exception-raising trick to bail out as soon as you've found a point with a negative value. Anne M. Archibald More information about the SciPy-user mailing list
{"url":"http://mail.scipy.org/pipermail/scipy-user/2007-September/013599.html","timestamp":"2014-04-18T14:04:03Z","content_type":null,"content_length":"4892","record_id":"<urn:uuid:916660b8-aa95-4c2c-8f4a-34cd8401cfae>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00616-ip-10-147-4-33.ec2.internal.warc.gz"}
IntroductionSoil moisture retrieval techniqueSensitivity of soil moisture retrieval to RMS height and correlation lengthSensitivity of soil moisture retrieval to roughness parameterization techniquesGeneration of synthetical 1-dimensional surface profilesProfile lengthExperimental setupErrors on roughness parameterizationErrors on soil moisture retrievalNumber of profile measurementsExperimental setupErrors on roughness parameterizationErrors on soil moisture retrievalSpacing between height pointsExperimental setupErrors on roughness parameterizationErrors on soil moisture retrievalInstrument accuracyExperimental setupErrors on roughness parameterizationErrors on soil moisture retrievalTrend removalExperimental setupErrors on roughness parameterizationErrors on soil moisture retrievalConclusionsReferences and NotesFigures and Tables Sensors Sensors 1424-8220 Molecular Diversity Preservation International (MDPI) 10.3390/s90201067 sensors-09-01067 Article Error in Radar-Derived Soil Moisture due to Roughness Parameterization: An Analysis Based on Synthetical Surface Profiles LievensHans^1^* VernieuweHilde^2 Álvarez-MozosJesús^3 De BaetsBernard^2 VerhoestNiko E.C.^1 Laboratory of Hydrology and Water Management, Ghent University, Coupure Links 653, Ghent, Belgium Department of Applied Mathematics, Biometrics and Process Control, Ghent University, Coupure Links 653, Ghent, Belgium Department of Projects and Rural Engineering, Public University of Navarre, Pamplona, Spain E-mails: Hans.Lievens@UGent.be; Hilde.Vernieuwe@UGent.be; Jesus.Alvarez@unavarra.es; Bernard.DeBaets@UGent.be; Niko.Verhoest@UGent.be Author to whom correspondence should be addressed 2009 17 2 2009 9 02 1067 1093 15 12 2008 14 1 2009 © 2009 by the authors; licensee Molecular Diversity Preservation International, Basel, Switzerland. 2009 This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/3.0/). In the past decades, many studies on soil moisture retrieval from SAR demonstrated a poor correlation between the top layer soil moisture content and observed backscatter coefficients, which mainly has been attributed to difficulties involved in the parameterization of surface roughness. The present paper describes a theoretical study, performed on synthetical surface profiles, which investigates how errors on roughness parameters are introduced by standard measurement techniques, and how they will propagate through the commonly used Integral Equation Model (IEM) into a corresponding soil moisture retrieval error for some of the currently most used SAR configurations. Key aspects influencing the error on the roughness parameterization and consequently on soil moisture retrieval are: the length of the surface profile, the number of profile measurements, the horizontal and vertical accuracy of profile measurements and the removal of trends along profiles. Moreover, it is found that soil moisture retrieval with C-band configuration generally is less sensitive to inaccuracies in roughness parameterization than retrieval with L-band configuration. SAR soil moisture soil roughness parameterization Surface soil moisture plays a crucial role in various hydrological and agronomical processes: the top layer moisture content controls the infiltration rate during precipitation events and therefore largely influences the amount of surface runoff, it drives the crop development, and finally, affects the evapo-transpiration rate and thus the micro-climate and -meteorology. The retrieval of soil moisture content from Synthetic Aperture Radar (SAR) relies on the dependency of the backscattered radar signal on the surface reflection coefficients of the sensed target [1]. These reflection coefficients describe the partitioning of the incoming radar signal into reflected and transmitted energy, and are function of the signal incidence angle, the polarizations of both the incoming and reflected signal and the dielectric constant of the surface target. The high discrepancy between the dielectric constants of respectively dry soil and water allows for assessing the volumetric water content of a wet soil. Dielectric constant values can be converted to soil moisture using several models [e.g., 2, 3]. Apart from soil moisture, the backscattered radar signal shows to be extremely dependent on the roughness state of the sensed surface, in most backscatter models described by the surface root mean square (RMS) height s, the correlation length l and an autocorrelation function (ACF) [4]. The ACF is mostly chosen to be of exponential or Gaussian type [4], restricting the problem to the derivation of only s and l. Although the latter roughness parameters are more precisely derived from two-dimensional surface height measurements, e.g. using terrestrial laser or photogrammetric instruments [5], most radar remote sensing studies make use of 1-dimensional surface height measurements for the parameterization of s and l, for which the current standard procedure is as follows: A series of surface height points (roughness profile) is defined along a 1-dimensional surface transect, mostly sampled by means of meshboard, pin profilometer or laser techniques [6]. Generally, profiles used in practice have a length between 1 m and 4 m [6–9], and the horizontal spacing between height points usually lies between 1 mm [5] and 2 cm [7]. From this profile a linear trend is removed to compensate for the possibility that the measurement device was not aligned perfectly parallel to a horizontal reference surface [8]. The RMS height s can then be calculated as the standard deviation of the series of height points [10], while the correlation length l is defined as the horizontal distance over which the correlation between surface height points is larger than 1/e [10]. As s and l are extremely variable between different measurements within one agricultural field, both roughness parameters are commonly averaged over a number of profiles, mostly ranging from 3 to 20 [7–9, 11, 12]. This standard parameterization procedure is not absolute: vertical accuracies and horizontal spacings of measured surface points differ for various instruments, causing diverging roughness parameterizations [4, 9]. Moreover, s and l are subject to a scaling problem, as they generally both increase with increasing profile length [8, 9, 11, 13]. The choice of profile length therefore has a determining influence on the parameterization results. Besides, the assumption of a planar reference surface, justifying the removal of a linear trend from the profile, may only be valid when using short 1-m profiles. Longer profiles, e.g. 4 m in length, often dispose of topographic undulations along the transect and may therefore require the removal of a low-frequency roughness spectrum using a higher-order polynomial. As briefly summarized above, the parameterization of roughness from profile measurements is characterized by several problems. An extensive literature review on these surface roughness problems is provided by Verhoest et al [14]. The present paper focuses on the influence of standard measurement techniques on the parameterization of roughness and its impact on soil moisture retrieval. The remainder of this paper is organized as follows: Section 2. elaborates on the applied soil moisture retrieval technique and its input parameters, Section 3. discusses the sensitivity of this soil moisture retrieval to RMS height and correlation length, further, in Section 4., the generation of synthetical 1-dimensional roughness profiles is explained, and subsequently, a theoretical study on these synthetical profiles is performed in order to assess the influence of roughness parameterization techniques on soil moisture retrieval. The sensitivity of soil moisture retrieval to roughness parameters and the influence of standard roughness parameterization aspects are merely demonstrated on theoretical data, since working with actual SAR data would not allow for a quantitative assessment. The commonly used Integral Equation Model (IEM) [15, 16] is chosen as backscatter model in order to yield similar errors in soil moisture as can be expected in many practical hydrological applications. Finally, conclusions are formulated in Section 5. Many empirical, semi-empirical and theoretical models have been developed to retrieve soil moisture content from the backscattered radar signal. A large number of studies proposed a simple linear empirical relationship between the backscatter coefficient and soil moisture content. Such relationship is easy to apply, however, only valid for a single study site, under the condition that surface roughness remains constant over successive radar acquisitions [e.g., 17, 18]. The mostly used semi-empirical models, developed by Oh et al. [19] and Dubois et al. [20, 21], are based on a theoretical foundation, however, they still contain model parameters that are derived from experimental data. Conversely, theoretical models present an approximate physical description of wave scattering on rough surfaces. Amongst the mostly used physical approximations are the Small Perturbation Model (SPM) [22], the Kirchhoff Approximations [23] and the IEM [15, 16]. Despite their theoretical foundation, many of these models cannot be applied operationally because of their narrow validity ranges for the majority of natural surfaces. The model with the largest validity range concerning roughness parameters is probably the IEM. Because of this, the IEM has become the most widely used scattering model for bare soil surfaces [24], which gives a sound justification for use in the present theoretical study. The single scattering approximation of the IEM calculates backscatter coefficients σ V V 0 and σ H H 0, given the dielectric constant ε of a bare soil, the radar frequency f (GHz), the incidence angle θ (°) and roughness parameters: s (cm), l (cm) and an ACF. Since many authors [e.g., 7, 8, 11, 25] found that for agricultural soils the ACF is well approximated by an exponential function, this type of ACF will be adopted in all further simulations. Based on several experiments, the validity condition of the single scattering approximation of the IEM is often expressed by ks < 3 [e.g., 16], with k the wave number equal to 2π/λ and λ the wavelength. In many problems, soil moisture (dielectric constant) needs to be modelled based on observed backscatter coefficients, i.e. the IEM should be applied inversely. Several inversion algorithms have been developed, including Look-Up Tables (LUT) [e.g., 26], neural networks [e.g., 27], and the method of least squares [e.g., 28, 29]. Alternatively, the inversion problem can be solved iteratively [e.g., 30], which is preferred in this theoretical study because of its simplicity. To translate the dielectric constant into soil moisture, the four-component dielectric mixing model of Dobson et al. [2] is used. Table 1 lists the input parameters for the IEM and the dielectric mixing model used in the remainder of this work. As was also applied by Verhoest et al. [31, 32], retrieved moisture contents above 45 vol% are set equal to 45 vol%, whereas moisture contents below 2 vol% are set to 2 vol%, in order to limit the retrieval results to plausible soil moisture contents of real soils. As soil roughness largely influences the backscattered signal, one can expect that a correct roughness parameterization is indispensable in order to ensure accurate soil moisture retrieval. To assess the impact of roughness parameterization errors on the soil moisture retrieval, a profound sensitivity analysis is performed using the following experimental setup: for different values of soil moisture (5, 15, 25 and 35 vol%), backscatter coefficients are calculated for s &isin; [0.3 cm, 2.5 cm], l &isin; [1 cm, 50 cm] and predefined radar configurations, given in Table 1. Figure 1 illustrates the calculated backscatter coefficients for ENVISAT ASAR VV and ALOS PALSAR HH for 25vol% moisture content. As can be seen from this figure, soil roughness may cause a wide range of backscatter coefficients for a soil with a given moisture content. Based on these calculated backscatter coefficients, the sensitivity of soil moisture retrieval to s and l, respectively, expressed as the gradient of the retrieval surface along the s and l direction are numerically approximated as: ∂ Mv ∂ s ≈ Mv IEM − 1 ( σ Mv , s , l 0 , s + Δ s , l ) − Mv IEM − 1 ( σ Mv , s , l 0 , s − Δ s , l ) 2 Δ s , ∂ Mv ∂ l ≈ Mv IEM − 1 ( σ Mv , s , l 0 , s , l + Δ l ) − Mv IEM − 1 ( σ Mv , s , l 0 , s , l − Δ l ) 2 Δ l ,with σ Mv , s , l 0 the backscatter coefficient calculated with IEM for a given soil moisture content Mv, s and l, and Δs and Δ/ the discretization steps of 0.01 cm and 0.1 cm, respectively. Finally, it must be stressed that a source of error may be introduced in the calculated backscatter coefficients, presented in Figure 1, due to the use of the single scattering approximation of the IEM. This approximation may cause an underestimation of backscatter coefficients for very rough surfaces that cause a multiple scattering of the SAR signal. Moreover, an underestimation of backscatter coefficients may introduce errors in the presented sensitivity plots in Figures 2 to 5. Therefore, a more cautious interpretation of the sensitivity figures for C-band configuration is adviced in case of large RMS heights, particularly in combination with very small correlation lengths. Figures 2 and 3 show the sensitivity to s and l, respectively, for an ASAR VV configuration for the different moisture contents considered. As can be seen from these figures, a small error on the parameterization of s influences the soil moisture retrieval much more than a ten times larger error on l, which implies that the parameterization of s requires a higher accuracy than the parameterization of l. Fortunately, Baghdadi et al. [33], amongst several others, have shown that generally higher accuracies are obtained for RMS height parameterization than for correlation length As is revealed from Figure 2, the impact of small RMS height errors on soil moisture retrieval increases with increasing moisture content, as was already demonstrated in several publications [e.g., 34]. Also, gradients in the s direction are generally negative for small RMS heights together with large correlation lengths, whereas positive gradients are found with large RMS heights and small correlation lengths. Figure 3 shows that the impact of small correlation length errors on soil moisture retrieval only slightly depends on soil moisture content. An increase in moisture content only causes a substantial increase in sensitivity for very small correlation lengths. However, this effect may be neglected as such small correlation lengths are very unusual for natural surfaces. Conversely to what is found for the sensitivity to s in Figure 2, positive gradients generally occur with small RMS heights and large correlation lengths, whereas large RMS heights and small correlation lengths give rise to negative gradients. Sensitivity plots to s and l for a PALSAR HH configuration, respectively presented in Figures 4 and 5, show similar trends as found for an ASAR VV configuration. However, comparison of Figures 2 and 4 demonstrates that negative gradients in the s direction are more pronounced for a PALSAR HH configuration than for an ASAR VV configuration, whereas the opposite is true for positive gradients. Moreover, a comparison between Figures 3 and 5 reveals that the sensitivity to l is less subject to variations in RMS height for a PALSAR HH configuration than for an ASAR VV configuration. Similar sensitivity plots as presented in Figures 2 to 5 are obtained for different combinations of polarization, incidence angle and frequency, and generally show an increased sensitivity for HH polarization compared to VV polarization, and for a higher incidence angle and lower frequency (data not shown). The sign of the gradients expressed in Figures 2 to 5 allows for assessing whether a given error on the parameterization of s or l will cause an over- or underestimation of the soil moisture retrieval. If s or l is underestimated (overestimated), a negative gradient will lead to an overestimation (underestimation) of the soil moisture content, whereas a positive gradient will result in an opposite error. Besides, the magnitude of this gradient provides information on the magnitude of the retrieval error. For example, a gradient of 100 vol%/cm means that a 0.01 cm error on s causes a 1 vol% error on soil moisture. However, as the deviation of the parameterized s or l value from its actual value becomes larger, a first order approximation of the error will be insufficient to describe the actual error made. Moreover, the presented sensitivity figures do not account for simultaneous errors on s and l. Based on the results of Section 3., one can deduce that even small errors in roughness parameterization, particularly of RMS height, may have a large impact on the soil moisture retrieval. This section further elaborates on the roughness parameterization errors that are associated with standard profile measurement techniques and on the influence of these errors on soil moisture retrieval. To be able to individually address the errors involved with each standard parameterization aspect, synthetical surface profiles are generated. Subsection 4.1. demonstrates the method used to generate such profiles. The main advantage of synthetical profiles is that they can be designed with a predefined RMS height, correlation length, profile length and spacing between series of height points. Moreover, inaccuracies of the instrument measuring a profile may be easily simulated by adding white noise to each point of the series, and topographic height variations along the profile, which in reality may be caused by either an oblique positioning of the instrument or microrelief, may be introduced by adding a linear or undulating trend. On the designed profiles, standard parameterization techniques can be applied and evaluated quantitatively. In Subsections 4.2. to 4.6., different experiments are set up to demonstrate respectively the influences of the following parameterization aspects: the profile length, the number of profiles over which s and l are averaged, the spacing between height points, instrument accuracy and the trend removal technique applied. All experiments are conducted on synthetical profiles with (s,l) = (1 cm, 5 cm), (1 cm,40 cm), (2 cm,5 cm) and (2 cm,40 cm), for soil moisture contents of 5, 15, 25 and 35 vol%, and for ASAR and PALSAR HH and VV polarized configurations (see Table 1). Only the most relevant results will be shown. In order to assess the influence of roughness parameterization techniques on the soil moisture retrieval, a series of synthetical rough surface profiles is generated. The synthetical 1-dimensional profiles are identified using a first order autoregressive model for an exponential ACF: z t = ϕ ⋅ z t − 1 + a t ,with z[t] the height at coordinate t, a[t] white noise and φ a weight factor which can be found from the Yule-Walker equations as [35]: ϕ = e − Δ x / l ,with Δx the horizontal spacing between height points (cm). Using Equations (3) and (4), a surface profile can be generated with a desired l. In order to obtain the desired RMS height, the series is standardized, by subtracting the mean and dividing by the standard deviation, and subsequently multiplied by the desired RMS The dependency of roughness parameters on profile length has already been described in depth [e.g., 6, 8, 11, 36]: short profiles generally result in an underestimation of both s and l, which is more severe for smooth surfaces than for rough surfaces. According to Oh and Kay [11], profile lengths should at least be 40 times the correlation length to obtain the RMS height with a ±10% precision of the mean value, whereas the same accuracy for correlation length only becomes feasible for profile lengths of at least 200 times the correlation length. Although these scaling properties are well known, the impact of using different profile lengths on the soil moisture retrieval has not been reported yet [14]. To assess the influence of profile length on the parameterization of roughness and soil moisture retrieval, extremely long profiles are generated with a Δx of 0.1 cm and predefined (s,l) parameters. From such a profile, 500 non-overlapping profiles of different lengths, ranging from 1 m up to 20 m, are sampled, after which the standard roughness parameterization procedure is applied on each of these profiles. As a result, 500 (s,l) couples are derived per profile length. Backscatter coefficients are then calculated for a soil having a specific moisture content and roughness parameters equal to the ones used to generate the extremely long profile. Subsequently, these backscatter coefficients are inverted with IEM^−1 into soil moisture content, using the roughness parameters from the sampled profiles. Figure 6 shows the mean and standard deviations of s and l, derived from 500 sampled profiles, for different profile lengths. The sampled profiles originate from long profiles with (s,l) = (1 cm,5 cm) and (s,l) = (1 cm,40 cm). This figure shows for both s and l a similar scaling behavior as was already reported by Oh and Kay [11], Mattia et al. [6] and Callens et al. [8]: an increase in profile length leads to an increase of the roughness parameters, which is more pronounced for surfaces characterized by a large correlation length. Similar tests with synthetical profiles of (s,l) = (2 cm,5 cm) and (2 cm,40 cm) lead to exactly the same scaling relations as shown in Figure 6 (data not shown), which implies that the scaling behavior of s and l merely depends on the magnitude of the surface correlation length. Figure 7 presents the mean and standard deviations of inverted soil moisture contents for different profile lengths and sensor configurations. This figure demonstrates that the inverted soil moisture content may be largely over- or underestimated when using short profiles, particularly for high moisture contents. Moreover, a different behavior is found depending on the roughness state of the surface: the parameterization of s and l on a rough soil, e.g. (s,l) = (2 cm,5 cm), results in overestimated soil moisture contents, whereas opposite errors may be found on smooth surfaces, e.g. characterized by (s,l) = (1 cm,40 cm). Overestimations are generally more severe for a PALSAR configuration and HH polarization, whereas underestimations are more pronounced for an ASAR configuration and VV polarization. Finally, for the roughness parameter sets used in the example demonstrated, average retrieval errors for 4-m profiles, which are still feasible to perform in the field [8], are at most about 5 vol%. However, for soil moisture retrievals based on one roughness parameter set, extremely large errors are found, as illustrated by the standard deviations shown in Figure 7. A rough estimate of the soil moisture retrieval error for a surface with given moisture content, roughness state and a certain sensor configuration may also be deduced from Figures 2 to 5. As an example, Figure 6 shows that a 4-m profile, used on a surface with (s,l) = (1 cm,40 cm), on average results in (s,l) = (0.8 cm,20 cm). From Figures 2 and 3, it can be seen that, for a moisture content of 25 vol% and an ASAR VV configuration, gradients in the s and l direction are respectively -50 vol% and 0.7 vol%. Based on these gradients and the errors on s and l, a retrieval error of -4 vol% is found, which approximately resembles the error demonstrated in Figure 7 (h) at 4-m profile length. In this experimental setup, it was assumed that an optimal parameterization of roughness requires an extremely large profile, resulting in precise asymptotic roughness parameters (see Figure 6). However, if a shorter initial profile would have been used for sampling, the scaling behavior of s and l would have been similar, however, resulting in different soil moisture contents. Unfortunately, it is currently not yet known on which spatial scale roughness parameters need to be defined in order to be most appropriate for describing scattering on rough surfaces. The above example may therefore only be seen as an illustration of the errors that can be expected when using different profile lengths for the same problem. According to Bryant et al. [9], the surface RMS height needs to be averaged over at least twenty 3-m profiles in order to be representative. Using 2-m profiles with correlation lengths between 2 and 20 cm, Baghdadi et al. [12] demonstrated that by averaging roughness values over 10 profiles, the RMS height can be derived with a precision better than ±5%, whereas the precision for correlation length ranges from ±5% to ±15%. According to Davidson et al. [7], Callens et al. [8] and Oh and Kay [11], this variability decreases with increasing profile length. In this section, a theoretical experiment is set up to assess the minimum number of profiles that is needed to obtain roughness parameters with a precision of ±10% of their mean value, and subsequently, to investigate the effect of using averaged roughness parameters for soil moisture retrieval. A field experiment is simulated in which roughness parameters are determined by averaging n profiles of a certain length (n ranging from 1 to 20). In order to assess the variability of the determined average roughness parameters, the same procedure is repeated 1000 times, i.e. if n = 4, in total 4000 profiles of a certain length are sampled from an extremely large synthetical roughness profile. Based on the obtained series consisting of 1000 averaged roughness parameters for different numbers of profiles n, one can find the number n for which the standard deviation of s or l is less than 10% of the mean. The experiment is performed for sampled profiles with lengths ranging from 1m up to 20 m. Next, backscatter coefficients are calculated for given moisture contents, sensor configurations and the roughness parameters used to generate the extremely large profiles, and are subsequently inverted with IEM^−1 into soil moisture content, using the series of averaged roughness parameters from sampled profiles. Only 4-m profiles are considered in this part of the experiment, as these are frequently used in practice. Figure 8 shows the number of profiles that is required to obtain a standard deviation on RMS height or correlation length less than 10% of the mean, for different sampled profile lengths from large profiles with (s,l) = (1 cm,5 cm) and (s,l) = (1 cm,40 cm). As can be seen in this figure, the required number of profiles decreases with increasing profile length. Moreover, as surfaces with a larger correlation length show more variability in roughness parameterization (see Figure 6), the required number of profiles consequently increases with an increase of l. Similar tests on surfaces with (s,l) = (2 cm,5 cm) and (s,l) = (2 cm,40 cm) reveal that this number is not influenced by s, as the same plots as those presented in Figure 8 are obtained (data not shown). Analysis of the soil moisture retrieval error for sampled 4-m profiles from surfaces with (s,l) = (1 cm,5 cm) and (s,l) = (1cm,40 cm) and ASAR VV and PALSAR HH configurations (see Table 1), as illustrated in Figure 9, reveals that an increase in the number of profiles used to average roughness parameters only causes a moderate decrease of the standard deviation of inverted soil moisture contents. Conversely, the sensor configuration, soil moisture content and surface roughness state have a much higher impact on this standard deviation. In general, larger standard deviations are obtained for higher moisture contents, PALSAR HH configuration and surfaces with larger correlation length. As can be seen in Figure 9, the mean inverted moisture content can be strongly biased, particularly for high moisture contents and roughness parameters (s,l) = (1 cm,40 cm), which is due to the scaling problem of roughness as discussed in Section 4.2. Similar tests for an ASAR HH and a PALSAR VV configuration show intermediate results compared to ASAR VV and PALSAR HH configurations (data not shown). The horizontal spacing between discrete height observations along the profile is mostly defined by the instrument used. For laser devices, this spacing commonly ranges between 1 mm [5] and 5 mm [7], whereas for pin profilometers, horizontal distances between adjacent height measurements ranging from 2 mm [5] up to 2 cm [37] have been reported. According to Ulaby et al. [38], a spacing of 1/10 of the wavelength of the SAR signal is recommended. However, according to Ogilvy [4], the horizontal spacing should not exceed 0.1 times the correlation length for accurate parameterization of roughness. Larger spacings cause a change in slope of the ACF around zero, as the high-frequency component (height deviations over very small horizontal distances) is lost. Therefore, larger spacings may cause the ACF to resemble a Gaussian function, whereas in reality the function has an exponential shape. Such false interpretation may lead to large retrieval errors. To assess the impact of the horizontal spacing on the roughness parameterization and soil moisture retrieval, an experiment is set up in which ten 4-m profiles with given set of (s,l) parameters are generated with a spacing of 1 mm. These profiles are then resampled to spacings of 2, 5, 10 and 15 mm, after which RMS heights and correlation lengths are calculated. Subsequently, these roughness parameters are used for soil moisture retrieval from backscatter coefficients, obtained for given soil moisture contents and the roughness parameters defined at 1-mm spacing. Finally, the effect of a misinterpretation of the ACF on the soil moisture retrieval is investigated using the calculated roughness data from resampled profiles with 15-mm spacing. Mean and standard deviations of roughness parameters calculated from ten resampled profiles are shown in Table 2. In general, it can be found that an increase in horizontal spacing causes a decrease in RMS height and an increase in correlation length, which are more pronounced for surfaces with small correlation lengths. Mean and standard deviations of inverted soil moisture contents for different horizontal spacings and sensor configurations (Table 1) and a surface with (s,l) = (1 cm,5 cm) are shown in Figure 10. This figure illustrates that larger spacings give rise to larger retrieval errors, being more pronounced for higher moisture contents. Smaller retrieval errors are found for a smooth surface with (s,l) = (1 cm,40 cm), whereas larger errors are obtained for a rough surface with (s,l) = (2 cm,5 cm) (data not shown). Finally, errors involved with a PALSAR configuration are generally larger than errors obtained for an ASAR configuration. The latter results may also be derived using the data presented in Table 2 and the sensitivity plots shown in Figures 2 to 5. The effect of the horizontal spacing on the ACF is demonstrated in Figure 11 for profiles with a spacing of 1 mm and 15 mm. It is clear that a steeper slope at the origin is encountered for the profile with 1-mm spacing than for the profile with 15-mm spacing, illustrating the loss of the high-frequency roughness component with an increase in distance between measurement points. If the ACF is therefore assumed to be Gaussian, whereas in reality the profile is characterized by an exponential ACF, extreme retrieval errors covering the entire range of soil moisture content may be found, as illustrated by the boxplots in Figure 12. As is revealed from these boxplots, soil moisture retrieval using a Gaussian ACF leads to severe underestimations in case of a rough surface, and conversely, severe overestimations in case of a smooth surface. Finally, standard deviations of the retrieved soil moisture contents are much larger for rough surfaces than for smooth surfaces, as for rough surfaces the retrieval results are more diverging for the different sensor configurations considered. The accuracy of instruments that measure discrete surface height points varies from less than 1 mm for non-contact techniques, such as laser profilometers, up to 2.5 mm for instruments that require a destructive contact with the surface, e.g. meshboards and pin profilometers [5]. In case a meshboard is used, the accuracy may additionally decay because of the digitization process needed to outline the surface [14]. D'Haese et al. [39] digitized the same profile ten times and found a coefficient of variation of 1.7% on RMS height and 6.5% on correlation length, with an average (s,l) = (0.96 cm,10.2 cm). The same profile, digitized by 12 different people, leads to a coefficient of variation of 4.52% and 4.51% for respectively RMS height and correlation length. In case a pin profilometer is used, the digitization can be processed electronically and its influence on RMS height becomes negligible [40]. To assess the effect of the instrument accuracy on the roughness parameterization and consequently soil moisture retrieval, the following experiment is carried out: a noisy signal, uniformly distributed in [−a,a], with a the accuracy assumed (1mm, 2mm or 5 mm), is added to ten 4-m wide synthetical roughness profiles with a predefined s and l, after which the roughness parameters are The calculated roughness parameters may then be used for inversion of backscatter coefficients obtained with given moisture contents and the predefined roughness parameters from profiles without added noise signal. The Root Mean Square Errors (RMSE) between roughness parameters calculated on the noisy profiles and predefined roughness parameters are presented in Table 3. As is revealed from this table, errors are marginal for inaccuracies up to 2 mm, typically involved with laser and pin profilometer measurements. Conversely, large inaccuracies up to 5 mm, which are possible in case of using meshboards and manual digitization, may result in an RMSE up to about 0.04 cm and 3.00 cm, respectively for s and l, as found for a smooth surface with (s,l) = (1 cm,40 cm). Soil moisture retrieval errors due to an inaccurate roughness parameterization are presented in Table 4 for a PALSAR HH configuration (ASAR and VV configurations both lead to smaller errors). In general, retrieval errors are less than 2 vol% for instruments with a noise level smaller than 2 mm. Moreover, the retrieval error increases with an increase in moisture content and is dependent on the surface roughness, with the largest errors for the surface of (s,l) = (1 cm,40 cm). Instruments with a noise level of 5 mm may result in errors ranging from ±0.5 vol% for dry and rough fields up to ±8 vol% for wet and smooth fields. Given these results, a cautious use of low resolution instruments is advised, since their roughness parameterization may introduce large retrieval errors. The standard procedure for roughness parameterization includes the removal of a linear trend from a profile to compensate for the fact that the profile transect may be slightly tilted with respect to a horizontal reference surface. However, in case a field shows a slightly undulating surface, corresponding to roughness at very low frequency, it is currently not known whether or not such low-frequency component should be removed from the profile in order to precisely measure the roughness spectrum as it is sensed by a radar signal, particularly for high-frequency radar (e.g. at C-band). As argued by Ulaby et al. [10], only the high-frequency component should be maintained for the parameterization of roughness, whereas a low-frequency roughness component should be included directly in the backscatter model. According to Callens et al. [8], the use of long profiles from 4 m onwards on slightly undulating fields requires the removal of a second- or third-order polynomial. Alternatively, Bryant et al. [9] introduced piecewise 1-m linear trend removal over longer profiles. The present section is not aiming at investigating which type of trend should be removed from long surface profiles in order to characterize the roughness spectrum as encountered by the radar signal, but rather, it intends to assess the influence of using different trend removal techniques on the parameterization of roughness and the soil moisture retrieval. To assess the impact of commonly used detrending techniques on roughness parameterization and soil moisture retrieval, ten 4-m profiles are generated with predefined (s,l). Furthermore, two trend surfaces are generated: (1) a planar surface with a slope of 0.025 m/m, and (2) a slightly undulating surface simulated by a cosine function with wavelength of 5 m and amplitude of 5 cm, which in reality will not be seen as a major deviation from a planar surface. Next, these trend surfaces are added to the ten 4-m profiles. Figure 13 shows examples of an original 4-m profile, a linear trended profile and a cosine trended profile. Subsequently, trends are removed by subtracting a 4-m linear function, piecewise 1-m linear functions, and second- and third-order polynomials, after which the roughness parameters are calculated. The derived roughness parameter sets are then used to invert backscatter coefficients obtained for the predefined (s,l) parameters from the original 4-m profiles and predefined soil moisture Table 5 shows the mean and standard deviations (between brackets) of the calculated roughness parameters after detrending of ten linear and cosine trended profiles using the techniques considered. Analysis of this table reveals that even 4-m linear detrending of linear trended profiles may lead to large errors in the parameterization of s and l, with both parameters being underestimated. Note that the linear de-trended profile may differ from the initial non-trended profile. This can be explained by the fact that the initially generated 4-m profile may show a non-flat first order regression, caused by the autocorrelated nature and randomness of the profile generation process. Higher-order polynomials clearly delete some roughness on linear profiles, resulting in lower s values, however, the largest errors are found with piecewise linear detrending over 1-m subprofiles. On cosine trended profiles, most precise roughness parameters are obtained using higher-order polynomials. Linear detrending causes s and l to be overestimated, whereas piecewise detrending still leads to a large underestimation of both roughness parameters. Figures 14 and 15 respectively demonstrate inverted soil moisture contents for linear and cosine trended surfaces with (s,l) = (1 cm,5 cm). Based on these figures, 4-m linear detrending is found to be superior in case the profile is characterized by a linear trend, however, if applied on cosine trended profiles, retrieval errors up to ±25 vol% may be expected. Moreover, in case of 4-m linear detrending, retrieved soil moisture contents are largely diverging for the different sensor configurations considered. Conversely, soil moisture errors involved with piecewise linear detrending and higher-order polynomial detrending are relatively low, despite the large errors found in the roughness parameterization. These low errors may probably be attributed to the fact that s and l are generally biased in the same direction, whereas gradients in the s and l directions have opposite signs for most parameter combinations, as illustrated in Figures 2 to 5. Similar tests on profiles with (s,l) = (2 cm,5 cm) show analogous results. However, tests on profiles with (s,l) = (1 cm,40 cm) reveal that, using a second-order polynomial, inverted soil moisture contents may be underestimated up to approximately 15 vol%, whereas third-order polynomial detrending results in less severe underestimations of only 7.5 vol% (data not shown) and therefore appears to be the most appropriate simple technique for the removal of undulating trends. Based on this experiment, it cannot be decided whether or not undulations along profiles should be removed prior to roughness parameterization. However, it can be concluded that soil moisture retrieval results may be very different in case undulations are removed through non-linear detrending than in case they are maintained, i.e. with linear detrending. Moreover, third-order polynomial detrending shows to be more appropriate for the removal of such undulations than second-order detrending and piecewise 1-m linear detrending. Nevertheless, as even this technique may still lead to substantial errors, future research definitely needs to explore more advanced trend removal techniques, such as techniques based on spectral analysis. Correct surface roughness parameters are of extreme importance in order to accurately retrieve soil moisture from SAR. A sensitivity study of the soil moisture retrieval to RMS height and correlation length reveals that small errors on RMS height generally more affect the soil moisture retrieval than ten times larger errors on correlation length. Therefore, RMS height parameterization requires a higher accuracy than the parameterization of the correlation length. Besides, sensitivity surfaces of respectively s and l generally show opposite trends, causing soil moisture retrieval errors to be partially cancelled out if both roughness parameters are biased in the same direction. Finally, sensing with an L-band HH configuration increases the sensitivity to roughness compared to a C-band VV The profile length used during in situ roughness measurements has an important influence on the RMS height and correlation length parameterization. Shorter profiles result in lower RMS heights and correlation lengths and lead to over- or underestimation of the moisture content, depending on the roughness of the surface and the sensor configuration. On the other hand, longer profiles give rise to higher roughness parameters with reduced variability, and consequently, result in more stable retrieval results. However, the exact spatial scale at which roughness needs to be measured in order to describe the scattering on rough surfaces is not yet known. The number of profiles over which RMS height and correlation length are averaged only has a moderate impact on the final roughness parameters and soil moisture retrieval results. Generally, a higher number of profiles is required for shorter profiles, surfaces with higher correlation lengths, higher soil moisture contents, and L-band HH configuration. The horizontal spacing between height points measured along a profile is of low influence on the soil moisture retrieval, yet may cause confusion in the determination of the appropriate ACF. A misinterpretation of the slope of the ACF can lead to errors covering the complete range of moisture content. Instrument inaccuracies up to ±2 mm, typically found for most current instruments, have a negligible impact on the soil moisture retrieval result. However, inaccuracies of ±5 mm may lead to errors ranging from ±0.5 vol% up to ±8 vol%. Such inaccuracies are possible for roughness parameterized using a meshboard and manual digitization. Probably the most prevailing aspect in the parameterization of roughness is the removal of surface trends. In case the surface is characterized by an undulating trend, a linear removal may lead to retrieval errors up to 25 vol%, as was found on 4-m profiles of (s,l) = (1 cm,5 cm) with added cosine function. More precise retrieval results are obtained through the removal of a third-order polynomial, with errors less than 7.5 vol% irrespective of the type of trend and sensor configuration used. Nevertheless, further research needs to explore more complex detrending techniques and evaluate the retrieval errors involved. As shown in this paper, the parameterization of surface roughness is not obvious. In the demonstrated experiments, various aspects were treated individually. However, in practice, different parameterization problems show up simultaneously, through which roughness errors may add up or cancel out. Therefore, future research definitely needs to clarify the errors involved in the entire parameterization and soil moisture retrieval process. Moreover, optimal standard parameterization procedures should be developed in function of the sensor configuration and target properties under study. Finally, being a theoretical study, the obtained results should be corraborated with experimental SAR observations. The research presented in this paper is funded by the Belgian Science Policy Office in the frame of the Stereo II programme - project SR/00/100. KozlovA.I.LigthartL.P.LogvinA.I.Kluwer Academic PublishersDordrecht, The Netherlands2001 DobsonM.C.UlabyF.T.HallikainenM.T.El-RayesM.S.Microwave dielectric behavior of wet soils: Part II - Dielectric mixing models198523354610.1109/TGRS.1985.289498 HallikainenM.T.UlabyF.T.DobsonM.C.El-RayesM.A.WuL.K.Microwave dielectric behavior of wet soils: Part I - Empirical models and experimental observations198523253410.1109/TGRS.1985.289497 OgilvyJ.A.IOP Publishing LtdRedcliffe Way, Bristol, UK1991 JesterW.KlikA.Soil surface roughness measurement-methods, applicability, and surface representation20056417419210.1016/j.catena.2005.08.005 MattiaF.DavidsonM.W.J.Le ToanT.D'HaeseC.M.F.VerhoestN.E.C.GattiA.M.BorgeaudM.A comparison between soil roughness statistics used in surface scattering models derived from mechanical and laser profilers2003411659167110.1109/TGRS.2003.813359 DavidsonM.W.J.Le ToanT.MattiaF.SatalinoG.ManninenT.BorgeaudM.On the characterization of agricultural soil roughness for radar remote sensing studies20003863064010.1109/36.841993 CallensM.VerhoestN.E.C.DavidsonM.W.J.Parameterization of tillage-induced single-scale soil roughness from 4-m profiles20064487888810.1109/TGRS.2005.860488 BryantR.MoranM.S.ThomaD.P.Holifield CollinsC.D.SkirvinS.RahmanM.SlocumK.StarksP.BoschD.Gonzlez DugoM.P.Measuring surface roughness height to parameterize radar backscatter models for retrieval of surface soil moisture2007413714110.1109/LGRS.2006.887146 UlabyF.T.MooreR.K.FungA.K.IIArtech HouseBoston, MA1982 OhY.KayY.C.Condition for precise measurement of soil surface roughness19983669169510.1109/36.662751 BaghdadiN.CerdanO.ZribiM.AuzetV.DarbouxF.El HajjM.KheirR.B.Operational performance of current synthetic aperture radar sensors in mapping soil surface characteristics in agricultural environments: application to hydrological and erosion modelling20082292010.1002/hyp.6609 BaghdadiN.PaillouP.GrandjeanG.DuboisP.DavidsonM.W.J.Relationship between profile length and roughness variables for natural surfaces2000213375338110.1080/014311600750019994 VerhoestN.E.C.LievensH.WagnerW.Álvarez-MozosJ.MoranM.S.MattiaF.On the soil roughness parameterization problem in soil moisture retrieval of bare surfaces from synthetic aperture radar200884213424810.3390/s8074213 FungA.K.LiZ.ChenK.S.Backscattering from a randomly rough dielectric surface19923035636910.1109/36.134085 FungA.K.Artech HouseBoston, MA1994 WeimannA.Von SchönemarkM.SchumannA.JormP.GunterR.Soil moisture estimation with ERS-1 SAR in the East German loess soil area19981923724310.1080/014311698216224 ZribiM.BaghdadiN.HolahN.FafinO.New methodology for soil surface moisture estimation and its application to envisat-asar multi-incidence data inversion20059648549610.1016/j.rse.2005.04.005 OhY.SarabandiK.UlabyF.T.An empirical model and an inversion technique for radar scattering from bare soil surfaces19923037038110.1109/36.134086 DuboisP.C.van ZylJ.EngmanE.T.Measuring soil moisture with imaging radars1995a3391592610.1109/36.406677 DuboisP.C.van ZylJ.EngmanE.T.Corrections to ‘measuring soil moisture with imaging radars’1995b331340 RiceS.O.Reflection of electromagnetic waves from slightly rough surfaces19514361378 BeckmanP.SpizzichinoA.PergamonNew York, USA1963 MoranM.S.Peters-LidardC.D.WattsJ.M.McElroyS.Estimating soil moisture at the watershed scale with satellite-based radar and land surface models20043080582610.5589/m04-043 OgilvyA.FosterJ.M.Rough surfaces: Gaussian or exponential statistics?19982212431251 RahmanM.MoranM.S.ThomaD.P.BryantR.SanoE.E.Holifield CollinsC.D.SkirvinS.KershnerC.OrrB.J.A derivation of roughness correlation length for parameterizing radar backscatter models20072839944012 SatalinoG.MattiaF.DavidsonM.W.J.Le ToanT.PasquarielloG.BorgeaudM.On current limits of soil moisture retrieval from ERS-SAR data2002402438244710.1109/TGRS.2002.803790 OhY.Quantitative retrieval of soil moisture content and surface roughness from mulitipolarized radar observations of bare soil surfaces20044259660110.1109/ TGRS.2003.821065 BaghdadiN.HolahN.ZribiM.Soil moisture estimation using multi-incidence and multi-polarization ASAR SAR data2006271907192010.1080/01431160500239032 Alvarez-MozosJ.CasalíJ.González-AudícanaM.VerhoestN.E.C.Assessment of the operational applicability of RADARSAT-1 data for surface soil moisture estimation20064491392410.1109/TGRS.2005.862248 VerhoestN.E.C.De BaetsB.MattiaF.SatalinoG.LucauC.DefournyP.A possibilistic approach to soil moisture retrieval from ERS SAR backscattering under soil roughness uncertainty200743W0743510.1029/ 2006WR005295 VerhoestN.E.C.De BaetsB.VernieuweH.A Takagi-Sugeno fuzzy rule-based model for soil moisture retrieval from SAR under soil roughness uncertainty2007451351136010.1109/TGRS.2007.894930 BaghdadiN.KingC.BourguignonA.RemondA.Potential of ERS and RADARSAT data for surface roughness monitoring over bare agricultural fields: application to catchments in Northern France2002233427344210.1080/01431160110110974 AlteseE.BolognaniO.ManciniM.TrochP.A.Retrieving soil moisture over bare soils from ERS-1 synthetic aperture radar data: Sensitivity analysis based on a theoretical surface scattering model and field data19963265366210.1029/95WR03638 BoxG.E.P.JenkinsG.M.ReinselG.C.Prentice Hall, Inc.1994 OhY.HongJ.Y.Effect of surface profile length on the backscattering coefficients of bare surfaces20074563263810.1109/TGRS.2006.888137 BaghdadiN.GherboudjI.ZribiM.SahebiM.KingC.BonnF.Semi-empirical calibration of the IEM backscattering model using radar images and moisture and roughness field measurements2004253593362310.1080/01431160310001654392 UlabyF.T.MooreR.K.FungA.K.Artech HouseBoston, MA19863 D'HaeseC.VerhoestN.E.C.De TrochF.Technical report, Laboratory of Hydrology and Water Management - Ghent UniversityGhent, Belgium2000 ArcherD.J.WadgeG.On the use of theoretical models for the retrieval of surface roughness from playa surfacesProc. of the 2nd International Workshop on Retrieval of Bio- and Geo-physical Parameters from SAR Data for Land ApplicationsNoordwijk, The Netherlands1998 Backscatter coefficients calculated for different values of RMS height and correlation length and a moisture content of 25 vol% for (a) an ASAR VV configuration and (b) a PALSAR HH configuration. Sensitivity of the soil moisture retrieval to RMS height (vol%/cm) for an ASAR VV configuration and a moisture content of (a) 5 vol%, (b) 15 vol%, (c) 25 vol%, and (d) 35 vol%. Sensitivity of the soil moisture retrieval to correlation length (vol%/cm) for an ASAR VV configuration and a moisture content of (a) 5vol%, (b) 15vol%, (c) 25vol%, and (d) 35vol%. Sensitivity of the soil moisture retrieval to RMS height (vol%/cm) for a PALSAR HH configuration and a moisture content of (a) 5 vol%, (b) 15 vol%, (c) 25 vol%, and (d) 35 vol%. Sensitivity of the soil moisture retrieval to correlation length (vol%/cm) for a PALSAR HH configuration and a moisture content of (a) 5vol%, (b) 15vol%, (c) 25 vol%, and (d) 35 vol%. Mean and standard deviations of RMS height and correlation length for different profile lengths, sampled from large profiles with (a) (s,l) = (1 cm,5 cm) and (b) (s,l) = (1 cm,40 cm). Mean and standard deviations of inverted soil moisture contents for different profile lengths. Inverted soil moisture contents are derived using roughness parameters from sampled profiles, originating from large profiles with (s,l) equal to (a), (b) (2 cm,5 cm), (c), (d) (1 cm,5 cm), (e), (f) (2 cm,40 cm) and (g), (h) (1 cm,40 cm), and initial moisture contents of (a), (c), (e), (g) 5 vol% and (b), (d), (f), (h) 25 vol%. Number of profiles required to obtain a standard deviation of RMS height or correlation length less than 10% of the mean for different profile lengths. Sampled profiles originate from large profiles with (s,l) equal to (a) (1 cm,5 cm) and (b) (1 cm,40 cm). Mean and standard deviations of inverted soil moisture contents for different numbers of profiles used. Inverted soil moisture contents are derived using roughness parameter series from sampled 4-m profiles, originating from large profiles with (s,l) equal to (a), (b) (1 cm,5 cm) and (c), (d) (1cm,40 cm), and for (a), (c) ASAR VV and (b), (d) PALSAR HH. Considered initial moisture contents are 5 vol% (crosses), 15 vol% (circles), 25 vol% (stars) and 35 vol% (squares). Mean and standard deviations of inverted soil moisture contents for different horizontal spacings used in the parameterization of roughness from 4-m profiles with (s,l) = (1 cm,5 cm). Considered spacings are (a) 2mm, (b) 5mm, (c) 10mm and (d) 15mm. Autocorrelation functions derived for the same roughness profile with (s,l) = (1 cm,5 cm), sampled with a spacing of respectively 1mm and 15 mm. Boxplots of inverted soil moisture contents, calculated using roughness parameters from resampled profiles with 15-mm spacing, and exponential (E) and Gaussian (G) autocorrelation functions, for initial moisture contents of 5, 15, 25 and 35 vol%, and all defined sensor configurations, for profiles with (s,l) equal to (a) (2 cm,5 cm), (b) (1 cm,5 cm), (c) (2cm,40 cm) and (d) (1cm,40 cm). (a) Original simulated 4-m roughness profile with (s,l) = (1 cm,5 cm) and a horizontal spacing of 1 mm, added to (b) a linear trend with slope of 0.025 m/m, and (c) a cosine trend with a wavelength of 5m and an amplitude of 5 cm Mean inverted soil moisture contents for different radar configurations, using roughness parameters obtained after (a) linear detrending over the 4-m profile, (b) piecewise 1-m detrending, (c) second-order polynomial detrending and (d) third-order polynomial de-trending of 10 synthetical roughness profiles of (s,l) = (1 cm,5 cm) with added linear trend. Mean inverted soil moisture contents for different radar configurations, using roughness parameters obtained after (a) linear detrending over the 4-m profile, (b) piecewise 1-m detrending, (c) second-order polynomial detrending and (d) third-order polynomial de-trending of 10 synthetical roughness profiles of (s,l) = (1 cm,5 cm) with added cosine trend. Input parameters used for the IEM and the four-component dielectric mixing model. Model Parameter Value Integral Equation Model ENVISAT ASAR configuration Frequency 5.3 GHz (C-band) Polarization HH or VV Incidence angle 23° ALOS PALSAR configuration Frequency 1.27 GHz (L-band) Polarization HH or VV Incidence angle 34.3° 4-Component Dielectric Mixing Model Bulk density 1.2 g/cm^3 Specific density 2.65 g/cm^3 Sand content 15% Clay content 11.4% Temperature 15°C Average values of s and l obtained for different horizontal spacings. Standard deviations are added between brackets. Sample spacing ‘Truth’ (1 mm) 2mm 5mm 10mm 15mm s (cm) 1 0.99 (0.00) 0.98 (0.00) 0.97 (0.01) 0.96 (0.02) l (cm) 5 5.02 (0.04) 5.15 (0.11) 5.34 (0.22) 5.47 (0.40) s (cm) 1 1.00 (0.00) 1.00 (0.00) 1.00 (0.00) 0.99 (0.00) l (cm) 40 40.02 (0.04) 40.07 (0.21) 40.20 (0.41) 40.23 (0.37) s (cm) 2 1.99 (0.01) 1.97 (0.01) 1.93 (0.02) 1.91 (0.01) l (cm) 5 5.07 (0.05) 5.17 (0.08) 5.43 (0.19) 5.76 (0.52) s (cm) 2 2.00 (0.00) 2.00 (0.00) 2.00 (0.01) 1.99 (0.01) l (cm) 40 40.03 (0.05) 40.16 (0.28) 40.30 (0.45) 40.26 (0.58) RMSE values of s and l for different values of instrument accuracy. Correct (s,l) RMSE on s or l due to 1mm noise 2mm noise 5mm noise s (cm) l (cm) s (cm) l (cm) s (cm) l (cm) (1 cm,5 cm) 0.0019 0.0316 0.0074 0.0775 0.0443 0.3782 (1 cm,40 cm) 0.0017 0.1673 0.0068 0.4764 0.0421 2.9972 (2 cm,5 cm) 0.0015 0.0447 0.0043 0.0316 0.0215 0.1265 (2 cm,40 cm) 0.0009 0.0707 0.0035 0.1871 0.0219 0.9301 RMSE values of the retrieved soil moisture contents due to roughness parameterization errors, introduced by instrument noise. (s,l) of original profile Soil moisture content (vol%) RMSE (vol%) of retrieved soil moisture due to 1 mm noise 2 mm noise 5 mm noise (1 cm,5 cm) 5 0.04 0.11 0.54 (1 cm,5 cm) 15 0.11 0.30 1.39 (1 cm,5 cm) 25 0.21 0.57 2.61 (1 cm,5 cm) 35 0.34 0.92 4.15 (1 cm,40 cm) 35 0.39 2.08 8.31 (2 cm,5 cm) 35 0.24 0.26 0.51 (2 cm,40 cm) 35 0.23 0.46 2.76 Average values of s and l, calculated after detrending of 4-m profiles. Standard deviations are added between brackets. Correct roughness values s (cm) l (cm) s (cm) l (cm) s (cm) l (cm) s (cm) l (cm) Detrending type Linear trended surface Linear 4-m 0.99 (0.01) 4.93 (0.13) 0.91 (0.07) 30.09 (7.48) 1.97 (0.03) 4.76 (0.25) 1.80 (0.18) 31.57 (6.00) Piecewise 1-m 0.92 (0.03) 3.80 (0.45) 0.51 (0.04) 8.31 (1.34) 1.85 (0.06) 3.77 (0.40) 0.98 (0.12) 7.45 (1.12) Second-order 0.98 (0.02) 4.67 (0.29) 0.81 (0.10) 24.22 (7.40) 1.94 (0.04) 4.51 (0.26) 1.57 (0.19) 22.00 (7.58) Third-order 0.96 (0.02) 4.36 (0.30) 0.64 (0.05) 15.02 (3.96) 1.93 (0.03) 4.37 (0.26) 1.41 (0.16) 17.84 (6.18) Cosine trended surface Linear 4-m 2.64 (0.13) 50.07 (2.25) 2.78 (0.42) 54.30 (4.68) 3.07 (0.26) 28.13 (8.31) 3.01 (0.74) 44.81 (7.06) Piecewise 1-m 0.95 (0.04) 4.13 (0.49) 0.58 (0.06) 9.40 (1.47) 1.85 (0.05) 3.79 (0.36) 1.00 (0.10) 7.79 (1.47) Second-order 1.32 (0.13) 13.48 (6.67) 1.10 (0.35) 25.47 (8.16) 2.17 (0.13) 7.19 (2.03) 1.88 (0.27) 28.38 (8.27) Third-order 1.04 (0.04) 5.33 (0.53) 0.79 (0.12) 18.16 (4.72) 1.95 (0.05) 4.53 (0.30) 1.47 (0.12) 18.81 (5.13)
{"url":"http://www.mdpi.com/1424-8220/9/2/1067/xml","timestamp":"2014-04-16T19:12:46Z","content_type":null,"content_length":"115962","record_id":"<urn:uuid:51f5e587-ebfe-4ca1-b1c6-859820f9dc2f>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00366-ip-10-147-4-33.ec2.internal.warc.gz"}
College Algebra Tutors Corona Del Mar, CA 92625 Professional math tutor, College Algebra, SAT, Spanish, K-6 program ...Susan is extremely experienced and has been tutoring full-time for over 8 years and has logged over 9,840 hours tutoring. She tutors all levels of math, pre-algebra, algebra I,II, geometry, pre-calculus, trigonometry, statistics, and college algebra . She also... Offering 10+ subjects including algebra 2
{"url":"http://www.wyzant.com/Irvine_CA_college_algebra_tutors.aspx","timestamp":"2014-04-19T23:11:00Z","content_type":null,"content_length":"60609","record_id":"<urn:uuid:baf601d5-e2cf-4064-be37-331a32d7a163>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00266-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: The figure PQRS is a parallelogram. Point M is the midpoint of PS. What is the length of QY? 12 units 14 units 16 units 18 units • one year ago • one year ago Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/4fcd5bf5e4b0c6963ad855a5","timestamp":"2014-04-19T22:35:35Z","content_type":null,"content_length":"88494","record_id":"<urn:uuid:03bda05a-c90b-4a79-b599-efbc4f15820d>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00285-ip-10-147-4-33.ec2.internal.warc.gz"}
Interior Point Approach to Linear , 2002 "... QP is the optimization of a quadratic function subject to linear equality and inequality constraints. It arises in multiple objective decision making where the departure of the actual decisions from their corresponding ideal, or bliss, value can be evaluated using a weighted quadratic norm as a meas ..." Add to MetaCart QP is the optimization of a quadratic function subject to linear equality and inequality constraints. It arises in multiple objective decision making where the departure of the actual decisions from their corresponding ideal, or bliss, value can be evaluated using a weighted quadratic norm as a measure of deviation. The formulation of mean-variance optimization of uncertain systems also leads to QP. An important application of mean-variance is in simple optimal portfolio problems where the constraints are linear and the objective function is quadratic (Markowitz, 1959). The decision maker has to reconcile the conflicting desires of maximizing expected portfolio return, represented by the linear term, and minimizing the portfolio risk, represented by the quadratic (variance) term, in the objective function. Sequential QP algorithms require the solution of QP subproblems to generate descent directions for general nonlinear optimization and minimax. "... Abstract. In sernidefinite programming, one minimizes a linear function subject to the constraint that an affine combination of synunetric matrices is positive semidefinite. Such a constraint is nonlinear and nonsmooth, but convex, so semidefinite programs are convex optimization problems. Semidefin ..." Add to MetaCart Abstract. In sernidefinite programming, one minimizes a linear function subject to the constraint that an affine combination of synunetric matrices is positive semidefinite. Such a constraint is nonlinear and nonsmooth, but convex, so semidefinite programs are convex optimization problems. Semidefinite programming unifies several standard problems (e.g., linear and quadratic programming) and finds many applications in engineering and combinatorial optimization. Although semidefinite programs are much more general than linear programs, they are not much harder to solve. Most interior-point methods for linear programming have been generalized to semidefinite programs. As in linear programming, these methods have polynomial worst-case complexity and perform very well in practice. This paper gives a survey of the theory and applications of semidefinite programs and an introduction to primaldual interior-point methods for their solution. Key words, semidefinite programming, convex optimization, interior-point methods, eigenvalue optimization, combinatorial optimization, system and control theory AMS subject classifications. 65K05, 49M45, 93B51, 90C25, 90C27, 90C90, 15A18 1. Introduction. 1.1. Semidefinite programming. We consider the problem of minimizing a linear function
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=2799099","timestamp":"2014-04-16T16:34:22Z","content_type":null,"content_length":"18176","record_id":"<urn:uuid:1311aca0-e963-4f81-9282-8bf42dfa6052>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00492-ip-10-147-4-33.ec2.internal.warc.gz"}
2D Laser Range Finder Ever since I read Ken Maxon’s article: ‘A Real-time Laser Range Finding Vision System’, I’ve wanted to experiment with a similar system. This past Sunday afternoon, I finally got around to it. I don’t have any CLPD chops, so I used a regular computer. Above you can see my initial rough results. The line laser registers lower in the image field for the block than for the background. The software properly converted that lower registration to a distance of about 26.5 inches. The idea behind the concept is fairly straight forward. By angling and elevating the camera relative to the laser we can establish a trigonometric relationship between the vertical location of the laser and the distance to the object that the laser is striking. Since the laser is a line on the screen, each pixel column in the image field becomes it’s own independent laser range finder. In the picture above four possible angles are shown to demonstrate that each angle corresponds to a different obstruction distance. Each possible height in the image field corresponds to a specific angle of light into the camera’s roughly conical field of view. The line from the laser’s point of impact to the camera, the vertical line from the camera to the laser generator and the horizontal laser line itself form a right triangle that allows us to compute distance. By dividing the angular field of vision by the number of rows in the the camera we come up with a formula to compute the angle of entry for the reflected laser light for each of the rows in the camera. We also know the distance from the camera to the laser generator. (Its fixed). Pluging the angle and the vertical distance in the trigonmetry equation: tangent of theta equals the opposite leg length divided by the adjacent leg length, and re-arranging, we compute the opposite leg length and determine how far away the object is. You can see my software doing just that in the first picture. The software doesn’t compensate for the fact that the horizontal pixels are each their own angle as well, but in this first rough pass I felt it wasn’t necessary. You’ll note the the software makes no attempt to measure the left/right distances. With a sub 5mW power level and a popular frequency (red), something like is probably restricted to a small indoor bot with somewhat short range needs. Still in complement to other scanning systems like a spinning ultra sonic range finder, I can see this adding mechanism to the reliability of local environment mapping system. If I ever build my home security patrol bot, I’ll definitely think about this technique. Below you can see the rig I used to test this idea out. It’s pretty basic but it got the job done. To the left you see the paper back drop and the paper obstruction. To the right you can see the line laser generator sitting at the bottom of the dremel drill press with the webcam perched on top.
{"url":"http://milwaukeemakerspace.org/2010/03/2d-laser-range-finder/","timestamp":"2014-04-18T23:15:24Z","content_type":null,"content_length":"28420","record_id":"<urn:uuid:191d359c-123f-4487-8ff5-421ec7d3efec>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00206-ip-10-147-4-33.ec2.internal.warc.gz"}
Some Iterative Methods for Solving Nonconvex Bifunction Equilibrium Variational Inequalities Journal of Applied Mathematics Volume 2012 (2012), Article ID 280451, 10 pages Research Article Some Iterative Methods for Solving Nonconvex Bifunction Equilibrium Variational Inequalities ^1Mathematics Department, COMSATS Institute of Information Technology, Park Road, Islamabad, Pakistan ^2Mathematics Department, HITEC University, Texila Cantt, Islamabad, Pakistan Received 29 March 2012; Accepted 21 April 2012 Academic Editor: Rudong Chen Copyright © 2012 Muhammad Aslam Noor et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. We introduce and consider a new class of equilibrium problems and variational inequalities involving bifunction, which is called the nonconvex bifunction equilibrium variational inequality. We suggest and analyze some iterative methods for solving the nonconvex bifunction equilibrium variational inequalities using the auxiliary principle technique. We prove that the convergence of implicit method requires only monotonicity. Some special cases are also considered. Our proof of convergence is very simple. Results proved in this paper may stimulate further research in this dynamic field. 1. Introduction Variational inequalities theory, which was introduced by Stampacchia [1], can be viewed as an important and significant extension of the variational principles. This theory combines the theory of extremal problems and monotone operators under a unified viewpoint. It is well known that the variational inequalities represent the optimality condition of the convex function. For the directional differentiable convex functions, we have another class of variational inequalities, which is known as the bifunction variational inequalities. Let be a closed and convex set in the real Hilbert space . For a given bifunction , we consider the problem of finding such that which is called the bifunction variational inequality. Crespi et al. [2–4], Fang and Hu [5], Lalitha and Mehta [6], and Noor [7 ] have studied various aspects of the bifunction variational inequalities. We would like to mention that that the bifunction variational inequality is quite different than the variational inequality. For a given bifunction , Blum and Oettli [8] considered the problem of finding such that which is known as the equilibrium problem. It has been shown that the variational inequalities and fixed point problems are special cases of the equilibrium problems. We would like to emphasize that the bifunctions and are distinctly different from each other. Their properties are different from each other. It is natural to consider the unification of these problems. This fact has motivated Noor et al. [9, 10] to consider a general and unified class, which is called the bifunction equilibrium variational inequality. They considered the problem of finding such that Obviously, problem (1.3) includes the problems (1.1) and (1.2) as special cases. They have also discussed the numerical methods for solving such type of bifunction equilibrium variational inequalities using the auxiliary principle technique. For the applications and numerical methods for the bifunction equilibrium variational inequalities, see [2–30] and the references therein. These problems have been studied in the convexity setting. This means that the underlying set is a convex set. Naturally a question arises as to whether or not these problems are well defined on the nonconvex sets. The answer to this question is positive. It is possible to consider these problems on the prox-regular sets. The prox-regular sets are nonconvex sets, see [11, 12, 24, 29]. Several authors have studied properties of these nonconvex sets related to a good behaviour of their boundary. See Sebbah and Thibault [30] and Noor [23] for the applications and projection characterization of the prox-regular sets. In recent years, Noor [7, 20–24] and Bounkhel et al. [11] have considered variational inequality in the context of uniformly prox-regular sets. In this paper, we introduce and consider the bifunction equilibrium variational inequalities on the prox-regular sets, which is called the nonconvex bifunction equilibrium variational inequality. This class is quite general and unifying one. One can easily show that the several classes of equilibrium problems and variational inequalities are special cases of this new class. There are a substantial number of numerical methods including projection technique and its variant forms, Wiener-Hopf equations, auxiliary principle and resolvent equations methods for solving variational inequalities. However, it is known that projection, Wiener-Hopf equations, and proximal and resolvent equations techniques cannot be extended and generalized to suggest and analyze similar iterative methods for solving bifunction variational inequalities due to the nature of the problem. This fact has motivated the use of the auxiliary principle technique, which is mainly due to mainly due to Glowinski et al. [13]. This technique deals with finding the auxiliary problem and proving that the solution of the auxiliary problem is a solution of the original problem by using the fixed point problem. This technique is very useful and can be used to find the equivalent differentiable optimization problem. Glowinski et al. [13] used this technique to study the existence of a solution of the mixed variational inequality. Noor [18, 19] has used this technique to develop some iterative schemes for solving various classes of variational inequalities. We point out that this technique does not involve the projection of the operator and is flexible. It is well known that a substantial number of numerical methods can be obtained as special cases from this technique. In this paper, we show that the auxiliary principle technique can be used to suggest and analyze a class of iterative methods for solving the nonconvex bifunction equilibrium variational inequalities. We also prove that the convergence of the implicit method requires only the monotonicity, which is a weaker condition than monotonicity. Since the nonconvex bifunction equilibrium variational inequalities included (nonconvex) bifunction variational inequalities and (nonconvex) equilibrium problems as special cases, results obtained in this paper continue to hold for these and related problems. Our method of proof is very simple as compared with other techniques. 2. Preliminaries Let be a real Hilbert space whose inner product and norm are denoted by and , respectively. Let be a nonempty and convex set in . We, first of all, recall the following well-known concepts from nonlinear convex analysis and nonsmooth analysis [12, 29]. Poliquin et al. [29] and Clarke et al. [12] have introduced and studied a new class of nonconvex sets, which are called uniformly prox-regular sets. Definition 2.1. The proximal normal cone of at is given by where is a constant and Here is the usual distance function to the subset , that is, The proximal normal cone has the following Lemma 2.2. Let be a nonempty, closed, and convex subset in . Then , if and only if there exists a constant such that Definition 2.3. For a given , a subset is said to be normalized uniformly -prox-regular if and only if every nonzero proximal normal cone to can be realized by an -ball, that is,for all and , one has It is clear that the class of normalized uniformly prox-regular sets is sufficiently large to include the class of convex sets, -convex sets, submanifolds (possibly with boundary) of , the images under a diffeomorphism of convex sets, and many other nonconvex sets; see [12, 29]. It is well known [11, 12, 29] that the union of two disjoint intervals [] and [] is a prox-regular set with . For other examples of prox-regular sets, see M. A. Noor and K. I. Noor [24]. Obviously, for , the uniformly prox-regularity of is equivalent to the convexity of . This class of uniformly prox-regular sets has played an important part in many nonconvex applications such as optimization, dynamic systems and differential inclusions. For the sake of simplicity, we take . Then it is clear that, for , we have . For given bifunctions , we consider the problem of finding such that which is called the nonconvex bifunction equilibrium variational inequality. We note that, if , the convex set in , then problem (2.6) is equivalent to finding such that Inequality of type (2.6) is called the bifunction equilibrium variational inequality, considered and studied by Noor et al. [9]. If , where is a nonlinear operator, then problem (2.6) is equivalent to finding such that which is called the nonconvex equilibrium variational inequality and appears to be a new one. For a suitable and appropriate choice of the bifunctions and the spaces, one can obtain several new classes of equilibrium and variational inequalities, see [1–30] and the references therein. This shows that the problem (2.6) is quite general and includes several new and known classes of variational inequalities and equilibrium problems as special cases. 3. Main Results In this section, we use the auxiliary principle technique of Glowinski et al. [13] as developed by Noor et al. [10, 26, 27] to suggest and analyze some iterative methods for solving the nonconvex equilibrium bifunction variational inequality (2.6). We would like to mention that this technique does not involve the concept of the projection and the resolvent, which is the main advantage of this For a given satisfying (2.6), consider the problem of finding such that where , and are constants. Inequality of type (3.1) is called the auxiliary nonconvex bifunction variational inequality. Note that if , then is a solution of (2.6). This simple observation enables us to suggest the following iterative method for solving the nonconvex bifunction variational inequalities (2.6). Algorithm 3.1. For a given , compute the approximate solution by the iterative scheme Algorithm 3.1 is called the inertial proximal point method for solving the nonconvex bifunction equilibrium variatioanal inequalities (2.6). If , then the uniformly prox-regular set reduces to the convex set . Consequently, Algorithm 3.1 collapses to the following. Algorithm 3.2. For a given , compute the approximate solution by the iterative scheme Algorithm 3.2 is called the inertial proximal point method for solving the equilibrium bifunction variational inequalities 92.2) and appears to be a new one. We note that, if , then Algorithm 3.1 reduces to the following. Algorithm 3.3. For a given , compute the approximate solution by the iterative scheme Algorithm 3.3 is called the proximal point algorithm for solving nonconvex bifunction equilibrium variational inequality (2.6). In particular, if , then the uniformly prox-regular set becomes the convex set and consequently Algorithm 3.3 reduces to the following algorithm. Algorithm 3.4. For a given , compute the approximate solution by the iterative scheme which is known as the proximal point algorithm for solving bifunction equilibrium variational inequalities (2.7) and has been studied extensively, see [10, 26, 27]. For suitable rearrangement and appropriate choice of the operators and spaces, one can obtain a numer of proximal point algorithms for solving various classes of bifunction variational inequalities, equilibrium problems, and optimization problems. This shows that Algorithm 3.1 is quite general and unifying one. For the convergence analysis of Algorithm 3.3, we recall the following concepts and results. Definition 3.5. A bifunction is said to be monotone, if and only if Definition 3.6. A bifunction is said to be monotone, if and only if Remark 3.7. We would like to point out that the bifunctions and are different, that is . Due to this reason, one cannot define . This is the reason that problem (2.6) is not equal to nonconvex bifunction equilibrium variational inequality problem. We now consider the convergence criteria of Algorithm 3.3. The analysis is in the spirit of Noor [9, 18, 19]. In a similar way, one can consider the convergence analysis of other algorithms. Theorem 3.8. Let the bifunction be monotone. If is the approximate solution obtained from Algorithm 3.3 and is a solution of (2.6), then Proof. Let be a solution of (2.6). Then since and are monotone operators. Taking in (3.9), we have Setting in (3.4), and using (3.10), we have From this, one can easily obtain the required result (3.8). Theorem 3.9. Let be a finite dimension subspace, and let be the approximate solution obtained from Algorithm 3.3. If is a solution of (2.6) and , then . Proof. Let be a solution of (2.6). Then it follows from (3.5) that the sequence is bounded and which implies that Let be a cluster point of the sequence , and let the subsequence of the sequence converge to . Replacing by in (3.4) and taking the limit and using (3.14), we have which implies that solves the nonconvex bifunction equilibrium variational inequality (2.6) and Thus it follows from the above inequality that the sequence has exactly one cluster point and , the required result. We note that, for , the -prox-regular set becomes a convex set and the nonconvex bifunction equilibrium variational inequality (2.6) collapses to the bifunction equilibrium variational inequality ( 2.7). Thus our results include the previous known results as special cases. It is well known that, to implement the proximal point methods, one has to calculate the approximate solution implicitly, which is itself a difficult problem. To overcome this drawback, we suggest another iterative method, the convergence of which requires only partially relaxed strongly monotonicity, which is a weaker condition that of cocoercivity. For a given satisfying (2.6), consider the problem of finding such that which is also called the auxiliary nonconvex bifunction equilibrium variational inequality. Note that problems (3.1) and (3.17) are quite different. If , then clearly is a solution of the nonconvex bifunction equilibrium variational inequality (2.6). This fact enables us to suggest and analyze the following iterative method for solving the nonconvex bifunction equilibrium variational inequality (2.6). Algorithm 3.10. For a given , compute the approximate solution by the iterative scheme Note that, for , the uniformly prox-regular set becomes a convex set and Algorithm 3.3 reduces to the following. Algorithm 3.11. For a given , calculate the approximate solution by the iterative scheme which is known as the projection iterative method for solving bifunction equilibrium variational inequalities 4. Conclusion For appropriate and suitable choice of the operators and the spaces, one can suggest and analyze several iterative methods for solving the nonconvex bifunction equilibrium variational inequalities. This shows that the algorithms suggested in this paper are more general and unifying ones. Using essentially the technique of Theorems 3.8 and 3.9, one can study the convergence analysis of Algorithm 3.10. It is an interesting problem to compare these iterative methods with other numerical methods for solving the nonconvex bifunction equilibrium variational inequalities. The ideas and technique of this paper may stimulate further research in these interesting fields. The authors would like to express their gratitude to Dr. S. M. Junaid Zaidi, Rector, CIIT, for providing excellent research facilities. 1. G. Stampacchia, “Formes bilinéaires coercitives sur les ensembles convexes,” Comptes-rendus de l'Académie des Sciences de Paris, vol. 258, pp. 4413–4416, 1964. View at Zentralblatt MATH 2. G. P. Crespi, I. Ginchev, and M. Rocca, “Minty variational inequalities, increase-along-rays property and optimization,” Journal of Optimization Theory and Applications, vol. 123, no. 3, pp. 479–496, 2004. View at Publisher · View at Google Scholar · View at Zentralblatt MATH 3. G. P. Crespi, I. Ginchev, and M. Rocca, “Existence of solutions and star-shapedness in Minty variational inequalities,” Journal of Global Optimization, vol. 32, no. 4, pp. 485–494, 2005. View at Publisher · View at Google Scholar · View at Zentralblatt MATH 4. G. P. Crespi, I. Ginchev, and M. Rocca, “Some remarks on the Minty vector variational principle,” Journal of Mathematical Analysis and Applications, vol. 345, no. 1, pp. 165–175, 2008. View at Publisher · View at Google Scholar · View at Zentralblatt MATH 5. Y.-P. Fang and R. Hu, “Parametric well-posedness for variational inequalities defined by bifunctions,” Computers & Mathematics with Applications, vol. 53, no. 8, pp. 1306–1316, 2007. View at Publisher · View at Google Scholar · View at Zentralblatt MATH 6. C. S. Lalitha and M. Mehta, “Vector variational inequalities with cone-pseudomonotone bifunctions,” Optimization, vol. 54, no. 3, pp. 327–338, 2005. View at Publisher · View at Google Scholar · View at Zentralblatt MATH 7. M. A. Noor, “Some new classes of nonconvex functions,” Nonlinear Functional Analysis and Applications, vol. 11, no. 1, pp. 165–171, 2006. View at Zentralblatt MATH 8. E. Blum and W. Oettli, “From optimization and variational inequalities to equilibrium problems,” The Mathematics Student, vol. 63, no. 1–4, pp. 123–145, 1994. View at Zentralblatt MATH 9. M. A. Noor, K. I. Noor, and E. Al-Said, “On nonconvex bifunction variational inequalities,” Optimization Letters. In press. View at Publisher · View at Google Scholar 10. M. A. Noor, K. I. Noor, and E. Al-Said, “Iterative methods for solving nonconvex equilibrium variational inequalities,” Applied Mathematics and Information Science, vol. 6, no. 1, pp. 65–69, 11. M. Bounkhel, L. Tadj, and A. Hamdi, “Iterative schemes to solve nonconvex variational problems,” Journal of Inequalities in Pure and Applied Mathematics, vol. 4, no. 1, pp. 1–14, 2003. View at Zentralblatt MATH 12. F. H. Clarke, Yu. S. Ledyaev, R. J. Stern, and P. R. Wolenski, Nonsmooth Analysis and Control Theory, vol. 178 of Graduate Texts in Mathematics, Springer, New York, NY, USA, 1998. 13. R. Glowinski, J.-L. Lions, and R. Trémolières, Numerical Analysis of Variational Inequalities, vol. 8 of Studies in Mathematics and Its Applications, North-Holland, Amsterdam, The Netherlands, 14. F. Giannessi, A. Maugeri, and P. M. Pardalos, Equilibrium Problems: Nonsmooth Optimization and Variational Inequality Models, vol. 58, Kluwer Academics Publishers, Dordrecht, The Netherlands, 15. R. P. Gilbert, P. D. Panagiotopoulos, and P. M. Pardalos, From Convexity to Nonconvexity, vol. 55 of Nonconvex Optimization and Its Applications, Kluwer Academic Publishers, Dordrecht, The Netherlands, 2001. View at Zentralblatt MATH 16. D. Kinderlehrer and G. Stampacchia, An Introduction to Variational Inequalities and Their Applications, vol. 31 of Classics in Applied Mathematics, Society for Industrial and Applied Mathematics (SIAM), Philadelphia, Pa, USA, 2000. 17. M. A. Noor, “General variational inequalities,” Applied Mathematics Letters, vol. 1, no. 2, pp. 119–122, 1988. View at Publisher · View at Google Scholar · View at Zentralblatt MATH 18. M. A. Noor, “Some developments in general variational inequalities,” Applied Mathematics and Computation, vol. 152, no. 1, pp. 199–277, 2004. View at Publisher · View at Google Scholar · View at Zentralblatt MATH 19. M. A. Noor, “Iterative schemes for nonconvex variational inequalities,” Journal of Optimization Theory and Applications, vol. 121, no. 2, pp. 385–395, 2004. View at Publisher · View at Google Scholar · View at Zentralblatt MATH 20. M. A. Noor, “Extended general variational inequalities,” Applied Mathematics Letters, vol. 22, no. 2, pp. 182–186, 2009. View at Publisher · View at Google Scholar · View at Zentralblatt MATH 21. M. A. Noor, “Implicit iterative methods for nonconvex variational inequalities,” Journal of Optimization Theory and Applications, vol. 143, no. 3, pp. 619–624, 2009. View at Publisher · View at Google Scholar · View at Zentralblatt MATH 22. M. A. Noor, “On an implicit method for nonconvex variational inequalities,” Journal of Optimization Theory and Applications, vol. 147, no. 2, pp. 411–417, 2010. View at Publisher · View at Google Scholar · View at Zentralblatt MATH 23. M. A. Noor, “Some iterative methods for general nonconvex variational inequalities,” Mathematical and Computer Modelling, vol. 54, no. 11-12, pp. 2955–2961, 2011. View at Publisher · View at Google Scholar · View at Zentralblatt MATH 24. M. A. Noor and K. I. Noor, “Iterative schemes for trifunction hemivariational inequalities,” Optimization Letters, vol. 5, no. 2, pp. 273–282, 2011. View at Publisher · View at Google Scholar · View at Zentralblatt MATH 25. M. A. Noor, K. I. Noor, and E. Al-Said, “Auxiliary principle technique for solving bifunction variational inequalities,” Journal of Optimization Theory and Applications, vol. 149, no. 2, pp. 441–445, 2011. View at Publisher · View at Google Scholar · View at Zentralblatt MATH 26. M. A. Noor, K. I. Noor, S. Zainab, and E. Al-Said, “Proximal algorithms for solving mixed bifunction variational inequalities,” International Journal of Physical Sciences, vol. 6, no. 17, pp. 4203–4212, 2011. 27. M. A. Noor, K. I. Noor, and Z. Huang, “Bifunction hemivariational inequalities,” Journal of Applied Mathematics and Computing, vol. 35, no. 1-2, pp. 595–605, 2011. View at Publisher · View at Google Scholar · View at Zentralblatt MATH 28. M. A. Noor, K. I. Noor, and T. M. Rassias, “Some aspects of variational inequalities,” Journal of Computational and Applied Mathematics, vol. 47, no. 3, pp. 285–312, 1993. View at Publisher · View at Google Scholar · View at Zentralblatt MATH 29. R. A. Poliquin, R. T. Rockafellar, and L. Thibault, “Local differentiability of distance functions,” Transactions of the American Mathematical Society, vol. 352, no. 11, pp. 5231–5249, 2000. View at Publisher · View at Google Scholar · View at Zentralblatt MATH 30. M. Sebbah and L. Thibault, “Metric projection and compatibly parameterized families of prox-regular sets in Hilbert space,” Nonlinear Analysis: Theory, Methods & Applications A, vol. 75, no. 3, pp. 1547–1562, 2012. View at Publisher · View at Google Scholar
{"url":"http://www.hindawi.com/journals/jam/2012/280451/","timestamp":"2014-04-16T11:45:09Z","content_type":null,"content_length":"271510","record_id":"<urn:uuid:0802d193-0e8c-4571-8832-39eac0cbad97>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00237-ip-10-147-4-33.ec2.internal.warc.gz"}
R and The Journal of Computational and Graphical Statistics by Joseph Rickert I don’t think that most people find reading the articles in the statistical journals to be easy going. In my experience, the going is particularly rough when trying to learn something completely new and I don’t expect it could be any other way. There is no getting around the hard work. However, at least in the field of computational statistics things seem to be getting a little easier. These days, it is very likely that you will find some code included in the supplementary material for a journal article; at least in the Journal of Computational and Graphical Statistics (JCGS) anyway. JCGS, which was started in 1992 with the mission of presenting “the very latest techniques on improving and extending the use of computational and graphical methods in statistics and data analysis”, still seems to be the place to publish. (Stanford's Rob Tibshirani published an article in Issue 1, Volume 1 back in 1992, Robert Tibshirani & Michael LeBlanc, and also in the most recent issue: Noah Simon, Jerome Friedman, Trevor Hastie & Robert Tibshirani.) Driven by the imperative to produce reproducible research most authors in this journal include some computer code to facilitate independent verification of their results. Of the 80 non-editorial articles published in the last 6 issues of JCGS all but 9 of these included computer code as part of the supplementary materials. The following table lists the counts of the type of software included. (Note that a few articles included code in multiple languages, R and C++ for example.) June13 March13 Dec12 Sept12 June12 March12 total_by_code R 9 9 5 5 7 7 42 Matlab 6 0 1 3 4 4 18 c 0 0 1 2 1 0 4 cpp 0 0 0 1 1 2 4 other 0 1 0 3 0 2 6 none 0 6 2 0 1 0 9 total_by_month 15 16 9 14 14 15 83 R code accounted for 57% of the 74 instances of software included in the supplementary materials. I think an important side effect of the inclusion of code is that studying the article is much easier for everyone. Seeing the R code is like walking into a room of full of people and spotting a familiar face: you know where to start. And, at least it seems feasible to “reverse engineer” the article. Look at the input data, run the code, see what it produces and map it to the math. The following code comes from the supplementary material included in the survey article: “Computational Statistical Methods for Social Networks Models” by Hunter, Krivitsky and Schweinberger in the December 2012 issue of JCGS. # Some of the code from Appendix of the article: # “Computational Statistical Methods for Social Networks Models” # by Hunter, Krivitsky and Schweinberger in the December 2012 issue of JCGS. #Two-dimensional Euclidean latent space model with three clusters and random # receiver effects monks.d2G3r <- ergmm(samplike ~ euclidean(d=2,G=3)+rreceiver) Z <- plot(monks.d2G3r, rand.eff="receiver", pie=TRUE, vertex.cex=2) text(Z, label=1:nrow(Z)) #Three-dimensional Euclidean latent space model with three clusters and # random receiver effects monks.d3G3r <- ergmm(samplike ~ euclidean(d=3,G=3)+rreceiver) plot(monks.d3G3r, rand.eff="receiver",use.rgl=TRUE, labels=TRUE) Created by Pretty R at inside-R.org The first four lines produce the graph below. The sampson data set contains social network data that Samuel F. Sampson collected in the late ‘60s when he was a resident experimenter at a monastery in New England. The call to ergmm() fits a “latent space model” by embedding the data in a 2 dimensional Euclidean space, clustering it into 3 groups and including a random “receiver” effect”. The last 4 lines of code produce a way cool, interactive three dimensional plot that you can rotate. You can follow this conversation by subscribing to the comment feed for this post.
{"url":"http://blog.revolutionanalytics.com/2013/09/r-and-the-journal-of-computational-and-graphical-statistics.html","timestamp":"2014-04-16T13:04:00Z","content_type":null,"content_length":"36528","record_id":"<urn:uuid:50656ee1-f1ff-41c1-bb7e-c9c195e7c597>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00241-ip-10-147-4-33.ec2.internal.warc.gz"}
Mathramz Problem Solving Group MathRamz Problem Solving Group (MRPSG) is a group of members of mathramz forum. The group aims at solving mathematical problems in international journals such as the American Mathematical Monthly ( AMM), College Mathematics Journal (CMJ), Pi Mu Epsilon Journal, and Mathematics Magazine. The group work is based on collabrative problem solving and sharing of ideas. The group discussions are carried out in a dedicated closed forum for the group members only. Joining the Group For joining the group, you need to register in mathramz forum, and then communicate via private messages with Ali or QwareeqMathematics. Althernatively, you can send to the following group email: Current Members The group was formed in August 7th, 2010. Current Members are : (alphabatically)
{"url":"http://www.mathramz.com/mrpsg/","timestamp":"2014-04-19T01:54:07Z","content_type":null,"content_length":"9142","record_id":"<urn:uuid:89c2a631-1895-4ce6-9e3e-dd032526a0c7>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00568-ip-10-147-4-33.ec2.internal.warc.gz"}
SPSSX-L archives -- October 2002 (#318)LISTSERV at the University of Georgia Date: Tue, 29 Oct 2002 09:12:21 -0500 Reply-To: Richard Ristow <wrristow@mindspring.com> Sender: "SPSSX(r) Discussion" <SPSSX-L@LISTSERV.UGA.EDU> From: Richard Ristow <wrristow@mindspring.com> Subject: Re: Some formulae needed Comments: To: "Burleson,Joseph A." <burleson@up.uchc.edu> In-Reply-To: <7D25CCC35D3CD31192BF006008BFB1FE04F572C4@nsofs14.uchc.edu> Content-Type: text/plain; charset="us-ascii"; format=flowed At 03:27 PM 10/24/2002 -0400, Burleson,Joseph A. wrote: >Now can anyone tell us why the harmonic [mean] is used instead of the >geometric? The harmonic clearly penalizes discrepancy more, e.g.: >Overall N = 100, n1 = 30, n2 = 70 >Arithmetic n-bar = 50 >Geometric mean n = 45.8 >Harmonic n' = 42 That's why. Here's how it looks with a constant n1=10 as n2 grows larger: N1: 10 N2 Total N Mean Geo. Harmonic 10 20 10 10.0 10.0 40 50 25 20.0 16.0 90 100 50 30.0 18.0 490 500 250 70.0 19.6 990 1000 500 99.5 19.8 Intuitively, you have two sources of random uncertainty: the mean of the larger group, and the mean of the smaller one. Those two uncertainties are equally important for the comparison. As the discrepancy in sample size gets larger, the uncertainty (the standard error of estimate) of the mean of the smaller group becomes much larger, and dominates the overall uncertainty. Both the arithmetic mean (the "mean") and the geometric mean would have the effective total N grow indefinitely as N2 increases. The harmonic mean implies, correctly, that no matter how large N2 is, effective N is limited by N1, because the uncertainty in that group mean never diminishes.
{"url":"http://listserv.uga.edu/cgi-bin/wa?A2=ind0210&L=spssx-l&F=&S=&P=36113","timestamp":"2014-04-18T15:40:01Z","content_type":null,"content_length":"10489","record_id":"<urn:uuid:458743b4-ba5e-4dd9-a28a-14934da0aefb>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00563-ip-10-147-4-33.ec2.internal.warc.gz"}
New I Bond Rates Will Surpass Short-Term CD Rates The Labor Department released the March CPI numbers today, and with these numbers, the next I Bond inflation component can be computed. Thanks to the recent inflation, it's a very high 4.60%. This number is added on to the I Bond fixed rate to derive the I Bond composite rate. The new rate makes an attractive no-risk opportunity. However, with the annual I-Bond purchase limit of $5,000 online and $5,000 paper, the opportunity is limited. If you buy I Bonds before the end of April, you can know the rate you'll receive for the next 12 months. The interst rate for the first 6 months will be based on the current inflation component (0.74%). The next 6 months will be based on this new rate (4.60%). After that, it'll depend on future inflation numbers. The current fixed component of zero percent will stay the same for the life of the bond. I describe the details of calculating the expected return below. If you decide to wait after April to buy I Bonds, you'll have to guess about the May fixed rate component. At least we know it can't drop below zero percent (at least by the current rules at the Treasury). Based on today's rate environment, I think it's very likely to stay zero percent. Based on this we can easily compute the short-term rate of return, but first, I'll compute the rates of return if the I Bond is purchased before May. I Bond Rates of Return for April 2011 Purchase From Treasury Direct I Savings Bonds FAQs: The semiannual inflation rate announced in May is the change between the CPI-U figures from the preceding September and March All previous CPI-U numbers are available from this government webpage. The CPI-U for September 2010 was 218.439. Last March 2011 CPI-U was 223.467. This is an increase of 2.301%. The annualized version of this is about 4.60%. If you buy before May, you'll receive the current I-Bond fixed rate of 0% for the life of the I Bond. The inflation component will be added to this rate and will change every 6 months. The current inflation component is 0.74%, and the composite rate is 0.74%. Here's an estimate of the return for the next year: • 0.74% from April 2011 through September 2011 • 4.60% from October 2011 through March 2012 I Bonds increase in value on the first day of the month. So on May 1st, you'll earn the interest for the full month of April. So for maximum return, it's best to buy I Bonds near the end of the month and redeem them early in the month. If you redeem an I Bond before 5 years, you lose the last 3 months of interest. So based on this and the above numbers, if you buy an I Bond on April 29, 2011, the value of the I Bond on April 1, 2012 would be about 1.53% higher. For 11 months, this comes out to an annualized yield of about 1.66%. Below is an estimated annualized return for I Bond redemptions from April 1, 2012 to July 1, 2012. It is assumed you will buy the I Bond on April 29, 2011 which gives you almost an extra month of interest. This effectively reduces the 3-month penalty to 2 months. • 1.66% - redeem on 4/1/12, 6mo of 0.74%, 3mo of 4.60%, and 3mo of 0% (penalty) • 1.90% - redeem on 5/1/12, 6mo of 0.74%, 4mo of 4.60%, and 3mo of 0% (penalty) • 2.11% - redeem on 6/1/12, 6mo of 0.74%, 5mo of 4.60%, and 3mo of 0% (penalty) • 2.29% - redeem on 7/1/12, 6mo of 0.74%, 6mo of 4.60%, and 3mo of 0% (penalty) The highest guaranteed rate would be an annualized return of 2.29% for about 14 months (from 4/29/11 to 7/1/12). Note, it's best not to wait until the last day of the month to buy I Bonds at Treasury Direct. You probably want to give yourself a few days to ensure they are officially purchased before the end of the month. I Bond Rates of Return for May 2011 Purchase As I mentioned above, we won't know the I Bond fixed rate until May. However, it's very likely to remain zero percent. So the worst case composite rate will be 4.60%. If inflation stops rising in the next several months, and the November 2011 I Bond inflation component is zero percent, the rate of return is easy to calculate. It will be half of 4.60% (4.60% for 6 months and 0% for 6 months). And in that case the 3 month penalty is also zero. So the total rate of return is 2.30%. If you buy the I Bond at the end of May 2011 and redeem the I Bond at the start of May 2012, the time period is close to 11 months. Thus, the annualized rate of return becomes 2.51%. We can also estimate the annualized return for I Bond redemptions for June, July and August. The following assumes the I Bond will be purchased on May 31, 2011. As I mentioned above, this effectively reduces the 3-month penalty to 2 months. • 2.51% - redeem on 5/1/12, 6mo of 4.60%, 3mo of 0.00%, and 3mo of 0% (penalty) • 2.30% - redeem on 6/1/12, 6mo of 4.60%, 4mo of 0.00%, and 3mo of 0% (penalty) • 2.12% - redeem on 7/1/12, 6mo of 4.60%, 5mo of 0.00%, and 3mo of 0% (penalty) • 1.97% - redeem on 8/1/12, 6mo of 4.60%, 6mo of 0.00%, and 3mo of 0% (penalty) So if you're going to wait until May, a 2.51% APY for 11 months is probably the best you can do. Each month you wait, the period of zero percent increases. Going out any further requires an estimate for the inflation component in May 2012 in addition to November 2011. What if the I Bond inflation component in November 2011 is 2.00%? This seems like a reasonable estimate. Recalculating the above with all other assumptions the same results in the following: • 3.05% - redeem on 5/1/12, 6mo of 4.60%, 3mo of 2.00%, and 3mo of 0% (penalty) • 2.97% - redeem on 6/1/12, 6mo of 4.60%, 4mo of 2.00%, and 3mo of 0% (penalty) • 2.89% - redeem on 7/1/12, 6mo of 4.60%, 5mo of 2.00%, and 3mo of 0% (penalty) • 2.83% - redeem on 8/1/12, 6mo of 4.60%, 6mo of 2.00%, and 3mo of 0% (penalty) So if you buy an I Bond on May 31, 2011 and you redeem that I Bond on May 1, 2012, your annualized yield for 11 months will be at least 2.51%, and it could easily be 3.05%. Remember the $10K Annual Purchase Limit Before you get too excited, remember that the annual purchase limit is $5K for online and $5K for paper. So if you earn 2.51% APY for 11 months on $10K, the total dollar amount of interest is about $230. For 3.05% APY it'll be about $280. As a comparison, a $10K deposit into Ally Bank's No-Penalty CD (at 1.15% APY) would return about $105. So you won't make that much more with the I Bond. Nevertheless, I Bonds have some nice features that CDs don't have such as being exempt from state and local income tax. I Bond Features Below is a summary of the I Bond features. More information is available at this Treasury I Bond page: • Can't be redeemed within 12 months of issue date • Lose 3 months interest if redeemed within 5 years • Interest is composed of fixed and inflation-based rate • Fixed rate remains for life of bond • Inflation-based rate changes every 6 months after issue date • New rates announced every six months on November and May 1st • Federal tax can be deferred on interest until bond is redeemed • Interest is exempt from state and local tax • Some or all interest is tax exempt when used for educational expenses • $10,000 maximum of I Bond purchases per year ($5K online and $5K paper) - total was $60,000 before 2008 (Treasury's press release). For more details about the purchase limit, please refer to the Treasury Direct's FAQ on the new purchase limit. 34 comments. Comment #1 by Mike posted on That rate passes most 5 year CD rates... Thanks for taking the time to explain that so clearly. Comment #2 by Sandra posted on Something I don't understand in your analysis: I-bonds have to be held for at least 12 months. So if you purchase an I-bond on May 31, 2011, how can you redeem it on May 1, 2012? That's less than 12 Comment #3 by KenBDG posted on A savings bond's issue date is based on the month that it is purchased. So from the Treasury point of view, there isn't any difference between an I Bond purchased on May 1st vs one that is purchased on May 31st. That's why it's best to buy savings bonds near the end of the month and redeem them at the start of the month. Comment #4 by Steve (anonymous) posted on Thank you for this timely post and for your blog in general. It has helped me a lot. Comment #5 by Anonymous posted on A well done and thorough analysis! Thanks! Comment #6 by Anonymous posted on can the 3 month penalty be deducted on income taxes(like the cd early withdrawl can be)? Comment #7 by Anonymous posted on Anonymous #6, You receive proceeds net of the 3 month penalty when you cash in your bond. The net of penalty amount is what shows on the 1099. Unlike a bank where you have the gross and net amount on the 1099. It's been a couple years since I cashed in a bond early, but that is what I remember receiving. Comment #8 by Anonymous posted on Is it easy to redeem them? What is the process? Comment #9 by Anonymous posted on For paper, you can redeem at the bank. I haven't used TD, but it should be quick, since it is web based. Comment #10 by lou posted on Too bad they restict it to $10,000. At that amount it's not worth the time and energy. Comment #11 by Anonymous posted on will this result in a 1099-int or -div? Comment #12 by Tom (anonymous) posted on Hi, Thank you for a great article and for the information. I found a website that lists the history of the fixed rates on i Bonds. My question is this:. Do we have any clue as to how the Us treasury determines the fixed rate on i bonds? Inverstors that bought back in 1998-2001 have a solid 3% fixed rate and they must be very pleased. Thank you again, Tom Comment #13 by Anonymous posted on I guess there is a dicrepency between what you've written here and over at "Best Bank Account Interest Rates - Summary for April 16, 2011". I've given my connects over there. Comment #14 by Saver posted on I remember the days of the $60K maximum per person and the 3.0% fixed rate on I bonds. Fortunately, I took advantage of them. The icing on the cake was that I was able to charge my savings bond purchases to a credit card, thus raking in a lot of frequent flier miles. Not only did I get a good rate, I had free flights to Europe for several years. Comment #15 by Anonymous posted on To 8 and 9: Using Treasury Direct is pretty easy. I have bought and redeemed savings bonds using my account there. It is pretty much like any online savings account. You just buy the bonds with money from your linked bank account and when you redeem them, you can have the money moved right into your linked bank account. Comment #16 by Anonymous posted on I was lucky to lock in that rate in 2000-2002 on 60k. We used CC and borrowed on 0 loan for 6 months, or earned 1% cash back, got 28 days free for payment, etc. Made one error, forgot to ask anuual Div. statement did not report each yr. will have a big tax bill when we cash out. Ust follow his advice, he has done a difficult work for all of us. He is right. It is almost worry free. Even wehn the int went down, we did not cash them out. We are not smart enough to take Stock Risk, we lost lot in stocks in 2000---they rip you off every which way. You can't time the market. We have too many bubbles, they all bust one by one. People sold stocks and bought homes, stockbrokers became realty brokers, now they are selling commodities. Everybody is diversifying in Commodities, all the 401ks and mutual funds are buying commodities, this bubble shall burst too. There is no bubble in Bonds, it is in Gold, Silver, Oil, Corn. coffee, sugar, cotton, and the rest of it. You all wait and see, it shall burst...can't last forever. Poorest of the poor can't afford to buy food. Comment #17 by Anonymous posted on What exactly do you look for in the CPI numbers to calculate the bond rate? Are all the bonds based on the CPI? Thanks. Comment #18 by Anonymous posted on To Anonymous # 11 You will receive a 1099-INT from the bank (if you redeemed it there) or the Burequ of Public Debt (if you sent the bond to WV). US Government payments are considered interest and not dividends unlike stocks and mutual funds. Treasury securities are treated the same like corporate and municipal bond interest payments. Comment #19 by Ed (anonymous) posted on Thanks for the informative article, I'm still trying to wrap my head around how this works. So the rate changes every May 1 and Nov 1, but the rate applies for 6mos from *your* purchase date, which is not necessarily May-Nov or Nov-May? Then after 6mos the new, current rate applies for the next 6 months? Thanks in advance! Comment #20 by Ed (anonymous) posted on One other question: Where does the current inflation rate of .74% come in? The I bond website says the current rate is .37%? Comment #21 by Anonymous posted on So the rate changes every May 1 and Nov 1, but the rate applies for 6mos from *your* purchase date, which is not necessarily May-Nov or Nov-May? Then after 6mos the new, current rate applies for the next 6 months? That is correct. The rate announced (4.60%) applies to the first six months of bonds purchased between May and November. Bonds purchased before May would have the current (0.74%) component for six months, then the May-November component for six months. Comment #22 by Anonymous posted on One other question: Where does the current inflation rate of .74% come in? The I bond website says the current rate is .37%? .37% is paid over six months. Annualized, that is .74% Comment #23 by Saver posted on Just to add to the response from #21. There are two elements to the I bond interest rate - the fixed rate which remains constant for the life of the bond and the variable rate, which changes each six months, according to the rate of inflation. The fixed rate also is determined every six months, but once you buy the bond, it will not change. I am not sure how the fixed rate is determined, but there must be some magical formula. It is possible for the total interest rate on an I bond to be zero as happened about two years, but, at least for now, it is not allowed to be negative, even if the rate of deflation should exceed the fixed rate on the bond. Comment #24 by Anonymous posted on Thanks very much! I just want to be sure of one thing: my husband and I can EACH buy $10,000 worth using our respective SSNs, correct? Comment #25 by Saver posted on # 24 - That is correct, but you each have to buy $5K through Treasury Direct and $5K through a bank. Comment #26 by Anonymous posted on Is the credit card purchase option still available? Comment #27 by Saver posted on They got rid of that boondoggle years ago. The credit card companies quickly got wise. Comment #28 by Anonymous posted on It appears the rate released is actually half of what the CPI-U indicated and the author estimated. Why is this? Comment #29 by Raj Against The Machine (anonymous) posted on As far as actually purchasing through the TD website, does it allow periodic purchases over time like a mutual fund, or do you have to "ladder" purchases like a CD? Thanks, - Raj Comment #30 by Anonymous posted on You can set up monthly purchases but you would need to pay attention to when the interest rate changes as it might not always be worth buying more. Comment #31 by Anonymous posted on I bought $24000 worth in June 2002. This is the next to last asset I will sell in retirement (after my Roth IRA). Whot a deal. Comment #32 by Anonymous posted on I want to be an I bond from tressury direct. I ahve set up an account. Now i want to know if i should wait till October 2011 to buy ? Or go ahead and buy now Comment #33 by Anonymous posted on I find the website unusually annoying to use (for buying I-bonds) I appreciate security but this nonsense of trying to get into the system, and then having to use the keyboard for some info, the mouse for other, winds up being all wrong! I am not a computer novice and I know what I am doing, but what about novices trying to negotiate, it could be too much. Comment #34 by anonymous (anonymous) posted on Please reconsider phasing out of paper Ibonds. I give them as gifts and a bond gift on the webiste just is not as pleasing as a bond in hand. Also, raise the limit per SSN...that would be nice.
{"url":"http://www.depositaccounts.com/blog/2011/04/new-i-bond-rates-will-surpass-shortterm-cd-rates.html","timestamp":"2014-04-20T03:16:46Z","content_type":null,"content_length":"56686","record_id":"<urn:uuid:9a3796fa-d244-4dcb-9769-c7721be84e7e>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00042-ip-10-147-4-33.ec2.internal.warc.gz"}
Woodside, NY Calculus Tutor Find a Woodside, NY Calculus Tutor Hi parents and students, My name is Natalie and I am a forthcoming high school mathematics teacher. I graduated from a NYC specialized high school and I am currently studying at New York University, majoring in Mathematics Secondary Education. I have been a volunteer math tutor for the last 5 years, and have grown to work quickly and effectively on any mathematics subject. 19 Subjects: including calculus, geometry, biology, algebra 1 ...I have tutored introductory Statistics and well as Calculus-based courses working from texts by Sheldon Ross and Robert Hogg. I am able to tutor such topics as standard deviation, Central Limit Theorem, marginal probabilities, order statistics, and maximum likelihood estimators. I am not able to tutor topics such as Measure Theory and sigma-algebras. 32 Subjects: including calculus, physics, statistics, geometry ...I continued this study at a collegiate level my freshman year of college, taking a rigorous introductory course in Linear Algebra. Over time, my understanding of linear algebra deepened as I saw its applications in diverse areas of physics and mathematics. From Cramer's Rule to Graham-Schmidt o... 4 Subjects: including calculus, differential equations, logic, linear algebra ...After post-docs at Massachusetts Institute of Technology and the Hebrew University of Jerusalem, I am in New York and ready to help you or your child learn math and science. I originally went to school to get my Ph.D. to become a professor. Not sure I still want to be a professor, but I really miss teaching, which I did for years in graduate school. 10 Subjects: including calculus, physics, writing, algebra 2 ...Ellery received his Bachelor’s degree in Physics and Jazz Performance from New York University. While at NYU, he prioritized both teaching and research, working as a teaching assistant and tutor, and as a research assistant in NYU’s Center for Soft Matter Research. He received numerous awards from the Physics Department, including the Robert F. 8 Subjects: including calculus, physics, algebra 1, algebra 2 Related Woodside, NY Tutors Woodside, NY Accounting Tutors Woodside, NY ACT Tutors Woodside, NY Algebra Tutors Woodside, NY Algebra 2 Tutors Woodside, NY Calculus Tutors Woodside, NY Geometry Tutors Woodside, NY Math Tutors Woodside, NY Prealgebra Tutors Woodside, NY Precalculus Tutors Woodside, NY SAT Tutors Woodside, NY SAT Math Tutors Woodside, NY Science Tutors Woodside, NY Statistics Tutors Woodside, NY Trigonometry Tutors
{"url":"http://www.purplemath.com/woodside_ny_calculus_tutors.php","timestamp":"2014-04-19T05:24:03Z","content_type":null,"content_length":"24369","record_id":"<urn:uuid:5cdbbcdf-b6d8-4e60-913c-9ec5b52b5310>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00603-ip-10-147-4-33.ec2.internal.warc.gz"}
green's function i have just completed green's function for second ode but for finding solution i could not find just one method i sometimes have to find linearly independent solu sometimes sometimes it comes by different conditions is that is true or is there a particular way which includes all the cases
{"url":"http://www.physicsforums.com/showthread.php?t=344014","timestamp":"2014-04-18T03:04:52Z","content_type":null,"content_length":"19037","record_id":"<urn:uuid:014a9dae-3fa5-41f5-b8e1-f75f184e7853>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00324-ip-10-147-4-33.ec2.internal.warc.gz"}
North Decatur, GA ACT Tutor Find a North Decatur, GA ACT Tutor ...Make sure to ask her for the new address! She does not do in-home tutoring.*** Current Availability as of 4/17/14: Thursday: 3, 5p Friday: 2, 3p Tuesday: 4, 5p Wednesday: 7p (this week only) You're probably trying to find a tutor who stands out from the rest. You're looking for someone who knows what she's teaching. 22 Subjects: including ACT Math, reading, writing, calculus ...Currently I am tutoring regularly via Skype which is allowing students to get last minute help with homework when needed - even in 15 minute increments - and allowing for longer scheduled tutoring sessions to prepare for quizzes and tests all from the convenience of one's home. The key to your s... 10 Subjects: including ACT Math, geometry, algebra 1, algebra 2 ...I passed the GACE content exams necessary to teach High School Mathematics in Georgia with a score of 577 out of 600. Altogether I have tutored math at the high school level for 5+ years. I have been tutoring math since I was in high school myself, and did so throughout college as an America Counts math tutor. 25 Subjects: including ACT Math, English, reading, writing Hi,My name is Alex. I graduated from Georgia Tech in May 2011, and am currently tutoring a variety of math topics. I have experience in the following at the high school and college level:- pre algebra- algebra- trigonometry- geometry- pre calculus- calculusIn high school, I took and excelled at all of the listed classes and received a 5 on the AB/BC Advanced Placement Calculus exams. 16 Subjects: including ACT Math, calculus, geometry, algebra 2 ...As well as tutoring, I have volunteered in my local elementary school to help student with their homework for their homework club. Also I mentor students from middle school to high school on behavior, studies, and other topics. I have attended many lectures on best study skills, tutored other c... 14 Subjects: including ACT Math, chemistry, geometry, biology Related North Decatur, GA Tutors North Decatur, GA Accounting Tutors North Decatur, GA ACT Tutors North Decatur, GA Algebra Tutors North Decatur, GA Algebra 2 Tutors North Decatur, GA Calculus Tutors North Decatur, GA Geometry Tutors North Decatur, GA Math Tutors North Decatur, GA Prealgebra Tutors North Decatur, GA Precalculus Tutors North Decatur, GA SAT Tutors North Decatur, GA SAT Math Tutors North Decatur, GA Science Tutors North Decatur, GA Statistics Tutors North Decatur, GA Trigonometry Tutors Nearby Cities With ACT Tutor Avondale Estates ACT Tutors Belvedere, GA ACT Tutors Briarcliff, GA ACT Tutors Decatur, GA ACT Tutors Dunaire, GA ACT Tutors Embry Hls, GA ACT Tutors North Atlanta, GA ACT Tutors North Springs, GA ACT Tutors Overlook Sru, GA ACT Tutors Scottdale, GA ACT Tutors Snapfinger, GA ACT Tutors Tucker, GA ACT Tutors Tuxedo, GA ACT Tutors Vinnings, GA ACT Tutors Vista Grove, GA ACT Tutors
{"url":"http://www.purplemath.com/North_Decatur_GA_ACT_tutors.php","timestamp":"2014-04-18T16:30:49Z","content_type":null,"content_length":"24203","record_id":"<urn:uuid:ec693034-7868-4da1-be2c-98db48994f96>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00658-ip-10-147-4-33.ec2.internal.warc.gz"}
Minimax Theorems for Set-Valued Mappings under Cone-Convexities Abstract and Applied Analysis Volume 2012 (2012), Article ID 310818, 26 pages Research Article Minimax Theorems for Set-Valued Mappings under Cone-Convexities ^1Department of Occupational Safety and Health, College of Public Health, China Medical University, Taichung 404, Taiwan ^2Department of Mathematics, Aligarh Muslim University, Aligarh 202 002, India ^3Department of Mathematics and Statistics, King Fahd University of Petroleum and Minerals, Dhahran 31261, Saudi Arabia ^4Department of Mathematics, National Tsing Hua University, Hsinchu 300, Taiwan Received 7 September 2012; Accepted 27 October 2012 Academic Editor: Ondrej Dosly Copyright © 2012 Yen-Cherng Lin et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. The aim of this paper is to study the minimax theorems for set-valued mappings with or without linear structure. We define several kinds of cone-convexities for set-valued mappings, give some examples of such set-valued mappings, and study the relationships among these cone-convexities. By using our minimax theorems, we derive some existence results for saddle points of set-valued mappings. Some examples to illustrate our results are also given. 1. Introduction The minimax theorems for real-valued functions were introduced by Fan [1, 2] in the early fifties. Since then, these were extended and generalized in many different directions because of their applications in variational analysis, game theory, mathematical economics, fixed-point theory, and so forth (see, for example, [3–11] and the references therein). The minimax theorems for vector-valued functions have been studied in [4, 9, 10] with applications to vector saddle point problems. However, the minimax theorems for set-valued bifunctions have been studied only in few papers, namely, [4–8] and the references therein. In this paper, we establish some new minimax theorems for set-valued mappings. Section 2 deals with preliminaries which will be used in rest of the paper. Section 3 denotes the cone-convexities of set-valued mappings. In Section 4, we establish some minimax theorems by using separation theorems, Fan-Browder fixed-point theorem. In the last section, we discuss some existence results for different kinds of saddle points for set-valued mappings. 2. Preliminaries Throughout the paper, unless otherwise specified, we assume that , are two nonempty subsets, and is a real Hausdorff topological vector space, is a closed convex pointed cone in with . Let be the topological dual space of , and let We present some fundamental concepts which will be used in the sequel. Definition 2.1 (see [3, 4, 8]). Let be a nonempty subset of . A point is called a (a)minimal point of if ; denotes the set of all minimal points of ;(b)maximal point of if ; denotes the set of all maximal points of ;(c)weakly minimal point of if ; denotes the set of all weakly minimal points of ;(d)weakly maximal point of if ; denotes the set of all weakly maximal points of . It can be easily seen that and . Lemma 2.2 (see [3, 4]). Let be a nonempty compact subset of . Then, (a); (b); (c); (d). Following [6], we denote both and by (both and by ) in since both and (both and ) are the same in . Definition 2.3. Let , be Hausdorff topological spaces. A set-valued map with nonempty values is said to be (a)upper semicontinuous at if for every and for every open set containing , there exists a neighborhood of such that ;(b)lower semi-continuous at if for any sequence such that and any , there exists a sequence such that ;(c)continuous at if is upper semi-continuous as well as lower semi-continuous at . We present the following fundamental lemmas which will be used in the sequel. Lemma 2.4 (see [9, Lemma 3.1]). Let , , and be three topological spaces. Let be compact, a set-valued mapping, and the set-valued mapping defined by (a)If is upper semi-continuous on , then is upper semi-continuous on . (b)If is lower semi-continuous on , so is . Lemma 2.5 (see [9, Lemma 3.2]). Let be a Hausdorff topological vector space, a set-valued mapping with nonempty compact values, and the functions , defined by and . (a)If is upper semi-continuous, so is . (b)If is lower semi-continuous, so is . (c)If is continuous, so are and . We shall use the following nonlinear scalarization function to establish our results. Definition 2.6 (see [6, 10]). Let and . The Gerstewitz function is defined by We present some fundamental properties of the scalarization function. Proposition 2.7 (see [6, 10]). Let and . The Gerstewitz function has the following properties: (a); (b); (c), where is the topological boundary of ; (d); (e); (f) is a convex function; (g) is an increasing function, that is, ; (h) is a continuous function. Theorem 2.8 ( Fan-Browder fixed-point theorem (see [12])). Let be a nonempty compact convex subset of a Hausdorff topological vector space and let be a set-valued mapping with nonempty convex values and open fibers, that is, is open for all . Then, has a fixed point. 3. Cone-Convexities In this section, we present different kinds of cone-convexities for set-valued mappings and give some relations among them. Some examples of such set-valued mappings are also given. Definition 3.1. Let be a nonempty convex subset of a topological vector space . A set-valued mapping is said to be (a) above convex [4] (resp., above--concave [5]) on if for all and all , (b)below- convex [13] (resp., below- -concave [9, 13]) on if for all and all , (c)above- quasi-convex (resp., below- -quasiconcave) [7, Definition 2.3] on if the set is convex for all ;(d)above-properly -quasiconvex (resp., above-properly -quasiconcave [6]) on if for all and all , either or (e)below-properly -quasiconvex [7] (resp., below-properly -quasiconcave) on if for all and all , either or (f) above-naturally -quasiconvex [6] on if for all and all , where denotes the convex hull of a set ;(g)above convex-like (resp., above- -concave-like) on ( is not necessarily convex) if for all and all , there is an such that (h)below convex-like [13] (resp., below concave-like) on ( is not necessarily convex) if for all and all , there is an such that It is obvious that every above--convex set-valued mapping or above-properly -quasi-convex set-valued mapping is an above-naturally -quasi-convex set-valued mapping, and every above--convex (above--concave) set-valued mapping is an above--convex-like (above--concave-like) set-valued mapping. Similar relations hold for cases below. Remark 3.2. The definition of above-properly -quasi-convex (above-properly -quasi-concave) set-valued mapping is different from the one mentioned in [7, Definition 2.3] or [5, 6]. The following Examples 3.3 and 3.4 illustrate the reason why they are different from the one mentioned in [5–7]. However, if is a vector-valued mapping or a single-valued mapping, both mappings reduce to the ordinary definition of a properly -quasi-convex mapping for vector-valued functions [7]. The above--convexity in Definition 3.1 is also different from the below--convexity used in [5, 9]. Example 3.3. Consider . Let be a set-valued mapping defined by and for all , Then is an above-properly -quasi-convex set-valued mapping, but it is not below-properly -quasi-convex. On the other hand, let be a set-valued mapping defined by and for all , Then, is a below-properly -quasi-convex set-valued mapping, but it is not above-properly -quasi-convex. Example 3.4. Let . Define by Then is continuous, above--quasi-convex, below--quasi-concave, above-properly -quasi-convex, and above-properly -quasi-concave, but it is not below-properly Proposition 3.5. Let be a nonempty set (not necessarily convex) and for a given set-valued mapping with nonempty compact values, define a set-valued mapping as (a)If is above--convex-like, then is so. (b)If is a topological space and is a continuous mapping, then is upper semicontinuous with nonempty compact values on . Proof. (a) Let be above--convex-like, and let be arbitrary. Since is above--convex-like, for any , there exists such that By Lemma 2.2, Therefore, is above--convex-like. (b) The upper semicontinuity of was deduced in [4, Lemma 2]. Proposition 3.6. Let be a nonempty convex set, and let be a set-valued mapping with nonempty compact values. Then, the set-valued mapping defined by is above--quasiconvex if is so. The following result can be easily derived, and therefore, we omit the proof. Proposition 3.7. Let be a nonempty convex set and be above--concave. Then the set-valued mapping is above--concave and below--quasiconcave. Furthermore, if is above-properly -quasiconcave, then the set-valued mapping is also above-properly -quasiconcave and below--quasiconcave. Let and be a set-valued mapping. Then, the composition mapping is defined by Clearly, the composition mapping is also a set-valued mapping. Proposition 3.8. Let be a nonempty set, a set-valued mapping, and . (a)If is above--convex-like, then is above--convex-like. (b)If is below--concave-like, then is below--concave-like. (c)If is a topological space and is upper semi-continuous, then so is . Proof. (a) By the definition of above--convex-like set-valued mapping , for any and all , there exists such that . For any , there exist , such that For any , we have . Hence, . Thus, is The proof of (b) and (c) is easy, and therefore, we omit it. Proposition 3.9. Let be a nonempty convex set and . (a)If is above--concave (above-properly -quasi-concave), then is above--concave (above-properly -quasi-concave).(b)If is above-properly -quasi-convex, then is above--quasi-convex and above-properly -quasi-convex.(c)If is above--convex, then is above--convex and above--quasi-convex. Lemma 3.10. Let be a real Hausdorff topological vector space and a closed convex pointed cone in with . Let be a nonempty compact subset of a topological space , and let be an upper semi-continuous set-valued mapping with nonempty compact values. Then, for any , there exists such that . Proof. For any given , the mapping is upper semi-continuous by Proposition 3.8 (c). By the compactness of , there exist and such that . By Lemma 2.2, there exists such that , and hence . On the other hand, , we know that , and then . Therefore, the conclusion holds. Proposition 3.11. Let be a nonempty convex set. If is above-properly -quasi-convex, then it is above--quasi-convex. Proof. For any , let . Then, and are subsets of . Since is above-properly -quasi-convex, for any , is contained in either or , and hence, in . Thus, the set is convex, and therefore, is Proposition 3.12. Let be a nonempty convex set. If is above-naturally -quasi-convex, then it is above--quasi-convex. Proof. Let , , and be the same as given as in Proposition 3.11. Then, since is convex. By the above-naturally -quasi-convexity, for all . Thus, the set is convex, and therefore, is Proposition 3.13. Let be a nonempty convex set. If is above-naturally -quasi-convex, then is above-naturally -quasi-convex for any . Proof. Let be given. From the above-naturally -quasi-convexity of , for any and any , For any , there is a such that . Then there exist and , such that . Hence, , and Therefore, is a above-naturally Proposition 3.14. Let be a set-valued mapping with nonempty compact values. For any , (a)if for some , then ;(b)if for some , then . Proof. Let . Suppose that . Then Then, there exists and . Therefore, there exists such that and . Since , and . This implies that , which is a contradiction. This proves (a). Analogously, we can prove (b), so we omit it. Remark 3.15. Propositions 3.8 and 3.9, Lemma 3.10, and Propositions 3.13 and 3.14 are always true except Proposition 3.8 (b) if we replace by any Gerstewitz function. 4. Minimax Theorems for Set-Valued Mappings In this section, we establish some minimax theorems for set-valued mappings with or without linear structure. Theorem 4.1. Let , be two nonempty compact subsets (not necessarily convex) of real Hausdorff topological spaces and , respectively. Let the set-valued mapping be lower semi-continuous on and upper semi-continuous on such that for all , is nonempty compact and satisfies the following conditions: (i)for each , is below--concave-like on ; (ii)for each , is above--convex-like on . Then, Proof. Since it is sufficient to prove that Choose any such that . For any , let Then, by the lower semi-continuity of the set-valued mapping , the set is closed, hence it is compact for all . By the choice of , we have Since is compact and the collection covers , there exist finite number of points in such that or This implies that and therefore, Following the idea of Borwein and Zhuang [14], let where . Then the set is convex, so is . We note that the interior of is nonempty since Since , by separation hyperplane theorem [15, Theorem 14.2], there is a such that where , that is, By (4.11 ), (4.13), and the choice of , we have that . Furthermore, from the fact we have Hence, by (4.13), we have or Thus, we have . Hence, by (4.17), we have Since is below--concave-like in , there is such that Therefore, and hence, This completes the proof. Remark 4.2. Theorem 4.1 is a modification of [14, Theorem A]. If is a real-valued function, then Theorem 4.1 reduces to the well-known minimax theorem due to Fan [2]. We next establish a minimax theorem for set-valued mappings defined on the sets with linear structure. Theorem 4.3. Let , be two nonempty compact convex subsets of real Hausdorff topological vector spaces and , respectively. Let the set-valued mapping be lower semi-continuous on and upper semi-continuous on such that for all , is nonempty compact, and satisfies the following conditions: (i)for each , is above--quasi-convex on ; (ii)for each , is above--concave, or above-properly -quasi-concave on ; (iii)for each , there is a such that Then, Proof. We only need to prove that is impossible, since it is always true that Suppose that there is an such that Define by For each , . Since is compact and the set-valued mapping is upper semi-continuous, there is a such that . On the other hand, from the condition (iii), for each , there is a such that . Hence, for each , . By (i) and Proposition 3.6, the mapping is above--quasi-convex on . By (ii) and Proposition 3.7, the mapping is below--quasi-concave on . Hence, for each , the set is convex. From the lower semi-continuities on and upper semi-continuity on of , the set is open in . By Fan-Browder fixed-point Theorem 2.8, there exists such that that is, which is a contradiction. This completes the proof. Remark 4.4. [5, Propositions 2.7 and 2.1] can be deduced from Theorem 4.3. Indeed, in [5, Proposition 2.1], the above-naturally -quasi-convexity is used. By Proposition 3.12, the condition (i) of Theorem 4.3 holds. Hence the conclusion of Proposition 2.1 in [5] holds. We also note that, in Theorem 4.3, the mapping need not be continuous on . Hence Theorem 4.3 is a slight generalization of [7, Theorem 3.1]. Theorem 4.5. Let and be nonempty compact (not necessarily convex) subsets of real Hausdorff topological vector spaces and , respectively. Let the mapping be upper semi-continuous with nonempty compact values and lower semi-continuous on such that (i)for each , is below--concave-like on ; (ii)for each , is above--convex-like on ;(iii)for every , Then for any there is a such that that is, Proof. Let for all . From Lemma 2.4 and Proposition 3.5, the set-valued mapping is upper semi-continuous with nonempty compact values on . Hence the set is compact, and so is . Then is a closed convex set with nonempty interior. Suppose that . By separation hyperplane theorem [15, Theorem 14.2], there exist , and a nonzero continuous linear functional such that Therefore, This implies that and for all . Let . From Lemma 3.10, for each fixed , there exist and with such that . Choosing and in (4.36), we have Therefore, By the conditions (i), (ii) and Proposition 3.8, the set-valued mapping is below--concave-like on for all , and the set-valued mapping is above--convex-like on for all. From Theorem 4.1, we have Since is compact, there is an such that . For any and all , we have that is, Thus, , and hence, If , by the condition (iii), which contradicts (4.43). Hence, for every , that is, or The following examples illustrate Theorem 4.5. Example 4.6. Let , and It is obviously that is below--concave-like on and above--convex-like on . We now verify the condition (iii) of Theorem 4.5. Indeed, for any , Then, Thus, for every , and the condition (iii) of Theorem 4.5 holds. Furthermore, for any , Then, Thus, Hence, the conclusion of Theorem 4.5 holds. Example 4.7. Let , , , and be defined by Let for all . Then is upper semi-continuous, but not lower semi-continuous on , and is not continuous but is upper semi-continuous on . Moreover, has nonempty compact values and is lower semi-continuous on . It is easy to see that is below--concave-like on and is above--convex-like on . We verify the condition (iii) of Theorem 4.5. Indeed, for all , . Then, Therefore, the condition (iii) of Theorem 4.5 holds. Since for all , and , for each , we can choose such that Furthermore, Therefore, Hence, the conclusion of Theorem 4.5 holds. Remark 4.8. Theorem 3.1 in [5] Theorem 3.1 in [6], or Theorem 4.2 in [7] cannot be applied to Examples 4.6 and 4.7 because of the following reasons: (i)the two sets and are not convex in Example 4.6; (ii) is not continuous on in Examples 4.6 and 4.7. Theorem 4.9. Let , be two nonempty compact convex subsets of real Hausdorff topological vector spaces and , respectively. Suppose that the set-valued mapping has nonempty compact values, and it is continuous on and lower semi-continuous on such that (i)for each , is above-naturally -quasi-convex on ;(ii)for each , is above--concave or above-properly -quasi-concave on ;(iii)for every , (iv) for any continuous increasing function and for each , there exists such that Then, for any , there is a such that , that is, Proof. Let be defined as the same as in the proof of Theorem 4.5. Following the same perspective as in the proof of Theorem 4.5, suppose that . For any and Gerstewitz function . By Proposition 2.7 (d), we have Let . From Lemma 3.10, for the mapping and Remark 3.15, for each , there exist and with such that . Choosing in (4.65), we have Therefore, By conditions (i), (ii) and Remark 3.15, the set-valued mapping is upper semi-continuous, and either above--concave or above-properly -quasi-concave on , and the set-valued mapping is lower semi-continuous and above--quasi-convex on . From Theorem 4.3, we have Since the set-valued mapping is lower semi-continuous on , by Lemma 2.4 (b) and Lemma 2.5 (b), the set-valued mapping is upper semi-continuous on . By the compactness of , there exists such that . For all and all , we have . Thus, , and hence, If , by the condition (iii), which contradicts (4.69). Hence, for every , that is, This completes the proof. The following example illustrates Theorem 4.9. Example 4.10. Let , and be a set-valued mapping defined as Let for all . Then is lower semi-continuous, but not upper semi-continuous on , and is continuous on , and has nonempty compact values and is lower semi-continuous on . It is easy to see that is above--concave or above-properly -quasi-concave on and is above-naturally -quasi-convex on. We verify the condition (iii) of Theorem 4.9. Indeed, for all , and . Hence, Therefore, the condition (iii) of Theorem 4.9 holds. Since for any , we can choose such that For any continuous increasing function , the condition (iv) of Theorem 4.9 holds. Furthermore, since for each , we have Thus,
{"url":"http://www.hindawi.com/journals/aaa/2012/310818/","timestamp":"2014-04-16T16:41:21Z","content_type":null,"content_length":"1044947","record_id":"<urn:uuid:26ef4fdb-de19-4d70-80ea-08fcec8d55df>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00635-ip-10-147-4-33.ec2.internal.warc.gz"}
Profile Curve This is translated from my own language, so feel free to correct me if I'm using a wrong translation. A profile curve in (x, z)-plan is given by the graph of the function z = ln (x), where x $\epsilon$ [1, 2]. The profile curve is rotated the angle $Pi$ around z-axis counterclockwise as seen from the z-axis' positive end. This yields the rotation surface F. Find a parametrization for the profile curve and F.
{"url":"http://mathhelpforum.com/advanced-math-topics/174277-profile-curve.html","timestamp":"2014-04-21T03:17:18Z","content_type":null,"content_length":"35890","record_id":"<urn:uuid:b9edc1fc-95ef-4da5-aebc-0af96ac48d5f>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00507-ip-10-147-4-33.ec2.internal.warc.gz"}
Selecting magnet to be used along with hall effect sensors. I am trying to create a hall effect based position sensor to be attached onto a DC motor to generate feedback for position control. I am thinking of the following arrangement: 4 hall sensors are placed 90 degrees apart around the magnetic element. Each sensor surface is directed at the axis, so if the magnetic field vector is perpendicular to the surface, the measured voltage is at maximum (or minimum depending on the polarity). By computing the mangitude (and polarity) along the two axes by measuring the induced hall effect voltage onto the 2 pairs of sensors, I should be able to determine the angle of the axis pretty I am trying to determine what type of magnet should I be using. Unfortunately, I skiped class and I have no idea on how to calculate the best possible shape. What I need is the perperdicular component of the magnetic field vector on each of the four sensors to be propoprtional to B * sin(theta) where theta is angle (of orientation on a x-y plane) of the magnet, and therefore the axis. From what I can gather, I need the direction of the magnetic field line to be proportional to the angle from the axis of the magnet, and the magnitude to be constant at fixed distance. The intent here is to generate a Voltage output that is as much as possible proportional to sin(θ) and cos(θ) on the two axes. I can only find N50 disc magnets that can fit to the available space. Is there a closed form solution for the disc shaped magnet`s magnetic field, which I could use to estimate the sensor outputs, say in MATLAB? I apologize in advance if this has been answered before. I did search, but I was unable to find something
{"url":"http://www.physicsforums.com/showthread.php?t=703304","timestamp":"2014-04-17T07:37:05Z","content_type":null,"content_length":"23647","record_id":"<urn:uuid:cf9ac715-52d6-4a4d-811b-e486f612608b>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00201-ip-10-147-4-33.ec2.internal.warc.gz"}
Earnings Quality: Measuring The Discretionary Portion Of Accruals | Investopedia By Tim Keefe,CFA (Contact Author | Biography) Using the raw accrual amounts as a proxy for earnings management is a simple method to evaluate earnings quality because firms can have high accruals for legitimate business reasons, such as sales growth. A more complicated proxy can be created by attempting to categorize total accruals into nondiscretionary and discretionary accruals. The nondiscretionary component reflects business conditions (such as growth and the length of the operating cycle) that naturally create and destroy accruals, while the discretionary component identifies management choices. The result of pulling discretionary accrual amounts from the total accrual amount is a metric that reflects accruals that are due to management's choices alone; in other words, there appears to be no business reason for these accruals. So, discretionary accruals are a better proxy for earnings quality. There are many approaches used in an attempt to estimate this nondiscretionary accrual proxy, but estimating the nondiscretionary component of accruals typically involves a regression model. Identifying discretionary accruals by regression can be difficult in practice, and different approaches differ in practically every respect: how to measure the dependent variable (total net accruals or net operating accruals), what to use as independent variables and whether to use a cross-sectional model or a time-series model.Measuring the Dependent Variable Either total net accruals or net operating accruals can be used as the dependent variable. Recall from above that we have defined both of these dependent variables from the cash flow statement as: Total Net Accruals = Net Income - ΔCash - Cash Dividends - Stock Repurchases + Equity Issuance Net Operating Accruals = Net Income - Cash Flow from Operations Many studies have used the balance sheet to calculate total net accruals or net operating accruals, but due to non-articulation issues, the cash-flow approach is better suited to describing accruals in all situations, and we will not detail how to calculate accruals from balance sheet data here.Choosing Independent Variables The independent variables are data items that should have some relationship to nondiscretionary accruals. For example, normal accruals driven by sales, PP&E, expected sales growth and current operating performance. A simple (and one of the most commonly used) model to estimate the nondiscretionary accrual component is the Modified Jones Model (1991). It may or may not be the best model. It surely isn't perfect, but other variables can be added to the equation in an attempt to increase the model's precision. In addition, a fourth variable such as the life-cycle score may capture relationships in total net accruals or operating net accruals that the current three variables in the regression fail to capture. The model can be represented as follows: TNA / ATA = β[0] + β[1](1/ATA) + β[2](ΔSales – ΔRec / ATA) + β[3](GPPE / ATA ) + ε NOA / ATA = β[0] + β[1](1/ATA) + β[2](ΔSales – ΔRec / ATA) + β[3](GPPE / ATA ) + ε Where:TNA= Total net accruals NOA= Net operating accruals ATA = Average total assets ΔSales = Change in sales ΔRec= Change in accounts receivable GPPE = Gross PP&EEach β is the estimated relationship of the independent variable to the dependent variable, and the error term represents the composite effect of all variables not explicitly stated as an independent variable.Using Cross-Sectional or Time-Series Analysis The model can be employed by regressing accrual data from many firms in the same industry for one time period (cross-sectional) or by regressing accrual data from the same firm across several time periods (time-series). There are disadvantages to both methods, but the cross-sectional analysis is probably a better method for the following technical reasons: ● Time-series analysis may not have enough enough observations in the estimation period to obtain reliable parameter estimates for a linear regression. ● The coefficient estimates on ΔSales and GPPE may not be stationary over time. ● The self-reversing property of accruals may result in serially correlated residuals. If any of these issues above is true, it is impossible to make valid statistical inferences from the regression results obtained with time-series analysis.Time-Series Analysis To estimate the nondiscretionary accrual amounts, firm-specific amounts for each independent variable are used for each period/year over a sequence of periods/years. In essence, think of each data item [(TNA / ATA), (1/ATA),(ΔSales – ΔRec / ATA) and (GPPE / ATA )] as coming from the same firm, with each data set being from a different time period. For example, the data set might be one firm with accounting data from each year between 1977 and 2007.The error term, ε, is the estimate of discretionary accruals. This discretionary accrual estimate for the firm can then be used to rank the firm with respect to its peers and all other firms in the universe. A high level of discretionary accruals relative to peers would indicate that earnings quality is relatively low. Meanwhile, a low level of discretionary accruals would indicate that earnings quality is relatively high.Cross-Sectional Analysis In a cross-sectional analysis, the model is a two-stage model. This means that results from the first part of the analysis are plugged into the next stage to get the needed estimate.To estimate the nondiscretionary accrual amounts, firm-specific amounts for each independent variable are used for a particular period across several different firms. In essence, think of each data item [(TNA / ATA), (1/ATA),(ΔSales – ΔRec / ATA) and (GPPE / ATA )] as coming from the same time period with the next data set being from a different firm. For example, the data set might be 45 different firms with accounting data for the year ending 2007.Once β[0], β[1], β[2] and β[3] have been estimated for the cross-section of firms for the period (which is calculated by the computer running a regression equation), we have denoted these estimates as β[0], β[ 1], β[ 2], β[ 3]. Use these cross-sectional coefficients along with a specific firm's data to estimate the individual firm's nondiscretionary accruals for the period. After processing, the calculation results in an estimate for nondiscretionary accruals scaled by average total assets, represented by NDA / ATA below. NDA / ATA = β[ 0] + β[ 1](1/ATA) + β[ 2](ΔSales – ΔRec / ATA) + β[ 3](GPPE / ATA ) + ε Total discretionary accruals are the difference between the individual firm's scaled total net accruals and its estimated total nondiscretionary accrual amount. TDA = TNA / ATA – NDA / ATA If, instead, the regression is run with net operating accruals as the dependent variable, the equations would yield an estimate for just the operating component of nondiscretionary accruals. ODA = NOA / ATA – NDA / ATA The discretionary-accrual estimate for the firm, whether it is based on total net accruals or net operating accruals, can then be ranked against the discretionary accrual estimates of the firm's peers and all other firms in the universe. This ranking is a comparative measure of the size of discretionary accruals, and it is a proxy for the quality of the firm's earnings. A high amount of discretionary accruals indicates lower-quality earnings and is a red flag that management may be using aggressive accounting to overstate earnings. Next: Earnings Quality: Conclusion » Table of Contents 1. Earnings Quality: Measuring The Discretionary Portion Of Accruals comments powered by Disqus
{"url":"http://www.investopedia.com/university/accounting-earnings-quality/earnings9.asp","timestamp":"2014-04-18T18:39:45Z","content_type":null,"content_length":"92274","record_id":"<urn:uuid:a123accb-0d1b-4643-8936-110c816f0088>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00138-ip-10-147-4-33.ec2.internal.warc.gz"}
Area/Perimeter Function Problem September 30th 2009, 06:46 PM #1 Sep 2009 I posted a problem last week with this same picture and got some great help- many thanks ahead of time! Find the perimeter function in terms of x if the area function is A(x)= 28 sq. ft. So, i guess your area is P(x) = pi*x + 2y + 2x now substitute A(x) for 28 and get your answer in terms of x I hope I'm right. Last edited by Arturo_026; September 30th 2009 at 07:37 PM. Would the area be 2xy + 1/2pi * (x)^2?? Alright. So would the peimeter be (2x+2y)+ Pi (x)? Then put the area function every where there is x?? I misunderstood your question in the first post, sorry. Yes, with the perimeter function find the value of y in terms of x and then substitute that in the area function. So, the value of y in terms of x would be x=2y + 1/2pi(x) , right? Sorry for so many questions! September 30th 2009, 07:12 PM #2 May 2009 September 30th 2009, 07:28 PM #3 Sep 2009 September 30th 2009, 07:30 PM #4 May 2009 September 30th 2009, 07:34 PM #5 Sep 2009 September 30th 2009, 07:41 PM #6 May 2009 September 30th 2009, 07:47 PM #7 Sep 2009
{"url":"http://mathhelpforum.com/calculus/105332-area-perimeter-function-problem.html","timestamp":"2014-04-17T22:36:26Z","content_type":null,"content_length":"42690","record_id":"<urn:uuid:67676a88-23b5-4744-9f3d-7a1a1e32b685>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00144-ip-10-147-4-33.ec2.internal.warc.gz"}
MathGroup Archive: March 2009 [00569] [Date Index] [Thread Index] [Author Index] BarChart - Extendng the Y-axis and labeling the endpoints • To: mathgroup at smc.vnet.net • Subject: [mg97536] BarChart - Extendng the Y-axis and labeling the endpoints • From: Donald DuBois <donabc at comcast.net> • Date: Sat, 14 Mar 2009 18:16:30 -0500 (EST) Here is a BarChart (almost directly from the Mathematica Documentation): thisBarChart = BarChart[{1, 2, -1.3, 2.4}] I am trying to extend the labeling of the Y-axis so that the endpoints of the Y-axs extend a little beyond the extremes of the data in the Y direction (the extremes of the data being -1.3 and 2.4) and these endpoints are labeled with multiples of the chosen increment. I believe this makes the BarChart easier to read and is aesthetically more pleasing. For example, with the above evaluation of BarChart, the labeling of the Y-axis goes from 2.0 to -1.0 in increments of 0.5 as chosen by Mathematica but the Y-axis itself extends beyond these labels. I would like to extend the labeling in both directions (positive and negative sides along the Y-axis) to the nearest integral multiple of the increment that Mathematica has chosen so the Y-axis would be extended slightly beyond the extremes of the data and the endpoints of the Y-axis would have labels. In this example, the Y-axis would be extended from 2.5 to -1.5 and the labeling would go from { -1.0, -0.5, 0.0, 0.5, 1.0, 1.5, 2.0} to {-1.5, -1.0, -0.5, 0.0, 0.5, 1.0, 1.5, 2.0, 2.5} with the labeling being integral multiples of the chosen increment. To do this, I need to know that the automatic labeling is in increments of 0.5 (as chosen by Mathematica) and then I need to extend the endpoint labels from {-1.0, 2.0} to {-1.5, 2.5}. I can get the PlotRange that is chosen by Mathematica automatically by using AbsoluteOptions on the graphical object called thisBarChart: AbsoluteOptions[thisBarChart, PlotRange] which produces {PlotRange -> {{0.4, 4.6}, {-1.3, 2.4}}} But, I still don't know the increment that Mathematica used so that I can force the labeling to a high of 2.5 and a minimum of -1.5. There's another complication: Assuming I did know (by magic) that the chosen increment was 0.5 for the labeling I thought I could then use BarChart[{1, 2, -1.3, 2.4}, PlotRange -> {Automatic, { 2.5, -1.5}}] to extend the Y-axis endpoints with labels of -1.5, and 2.5 but this doesn't work because the endpoints of the Y-azis are not labeled. I find, through experimentation only, that what I need is: BarChart[{1, 2, -1.3, 2.4}, PlotRange -> {Automatic, {3.0, -2.0}}] and the increment for the labeling has automatically changed to 1 (the fact that the increment has changed from 0.5 to 1.0 in this case is not a problem.) How can I get Mathematica to do this in a programmatic way so that I don't need to experiment each time to (1) get the Y-axis extended a little beyond the extremes of the data and (2) the endpoints labeled with integral multiples of the chosen increment? Thank you in advance for any help you can give me.
{"url":"http://forums.wolfram.com/mathgroup/archive/2009/Mar/msg00569.html","timestamp":"2014-04-18T13:13:50Z","content_type":null,"content_length":"27894","record_id":"<urn:uuid:6d56ecd4-ab98-415f-b8e0-8401c31db88a>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00505-ip-10-147-4-33.ec2.internal.warc.gz"}
Addition with fractions July 29th 2013, 09:30 AM #1 Oct 2011 Addition with fractions I'm used to finding the CLM of the denominators but what do I do in this case? Why is adding fractions so complicated. Re: Addition with fractions What do you do with two fractions whose denominators are relatively prime, e.g., 2/3 + 3/5? Re: Addition with fractions $\frac{2}{3}+\frac{3}{5} = \frac{10}{15}+\frac{9}{15}$ I simply find the LCM of each denominator. But in my original fractions there is no CLM. I have another question here. $\frac{b}{b-1}-\frac{1}{2-2b}$ again there is no LCM that I can see here. Even if I expand. $\frac{b}{b-1}-\frac{1}{2(1-b)}$ I don't get what I have to do here Re: Addition with fractions I meant that you find the LCM of two relatively prime numbers by multiplying those numbers. With polynomials the situation is similar. The greatest common divisor of x + 2 and x is 1, i.e., x + 2 and x are relatively prime. Therefore, their LCM is their product. In other words, you multiply both the numerator and the denominator of the first fraction by x and similarly multiply both parts of the second fraction by x + 2. Then the denominators will be equal and you can add. In fact, you don't have to find the least common multiple to add fractions; it is enough to have some common multiple, e.g., the product of the two denominators. Here you again can multiply the denominators. Note, however, that the denominators are not relatively prime: the first divides the second because 2 - 2b = (-2)(b - 1). Therefore, the LCM is 2 - 2b and it is enough to multiply both top and bottom of the first fraction by -2. Re: Addition with fractions I meant that you find the LCM of two relatively prime numbers by multiplying those numbers. With polynomials the situation is similar. The greatest common divisor of x + 2 and x is 1, i.e., x + 2 and x are relatively prime. Therefore, their LCM is their product. In other words, you multiply both the numerator and the denominator of the first fraction by x and similarly multiply both parts of the second fraction by x + 2. Then the denominators will be equal and you can add. In fact, you don't have to find the least common multiple to add fractions; it is enough to have some common multiple, e.g., the product of the two denominators. Here you again can multiply the denominators. Note, however, that the denominators are not relatively prime: the first divides the second because 2 - 2b = (-2)(b - 1). Therefore, the LCM is 2 - 2b and it is enough to multiply both top and bottom of the first fraction by -2. I'm a bit lost, can you explain with the actualy fractions? I find it hard to turn text into equations like that. Re: Addition with fractions $\frac{3}{x+2}+\frac{x-2}{x} =\frac{x\cdot3}{x(x+2)}+\frac{(x+2)(x-2)}{(x+2)x}$. Now the denominators are equal and you can add. $\frac{b}{b-1}-\frac{1}{2-2b} = \frac{(-2) b}{(-2)(b-1)}-\frac{1}{2-2b}$. Again, the denominators are equal. Alternatively, $\frac{b}{b-1}-\frac{1}{2-2b}= \frac{b}{b-1}+\frac{1}{2b-2} = \frac{b}{b-1}+\frac{1}{2(b-1)} = \frac{2b}{2(b-1)}+\frac{1}{2(b-1)}$ Re: Addition with fractions $\frac{3}{x+2}+\frac{x-2}{x} =\frac{x\cdot3}{x(x+2)}+\frac{(x+2)(x-2)}{(x+2)x}$. Now the denominators are equal and you can add. $\frac{b}{b-1}-\frac{1}{2-2b} = \frac{(-2) b}{(-2)(b-1)}-\frac{1}{2-2b}$. Again, the denominators are equal. Alternatively, $\frac{b}{b-1}-\frac{1}{2-2b}= \frac{b}{b-1}+\frac{1}{2b-2} = \frac{b}{b-1}+\frac{1}{2(b-1)} = \frac{2b}{2(b-1)}+\frac{1}{2(b-1)}$ oh right ok now I understand thanks, it will take some getting used to in order to notice this Re: Addition with fractions This is so annoying I just can't do this... look at this one. Now what? What do I multiply the left and right hand sides by? The only thing they have in common is $a-2$ so I will multiply the left by $a$ and the right side by $a+2$ $\frac{2}{(a-2)(a+2)}\times\frac{a}{a}} = \frac{2a}{a(a-2)(a+2)}$ Now the right hand side. $\frac{1}{a(a-2)}\times\frac{a+2}{a+2} = \frac{a+2}{a(a-2)(a+2)}$ Finally we get $\frac{2a}{a(a-2)(a+2)} - \frac{a+2}{a(a-2)(a+2)}$ as they now both have common multiples. Then after the calculation we arrive at $\frac{a-2}{a(a-2)(a+2)}$ and after cancellation $\frac{1}{a^2+2a}$ which I think is wrong Re: Addition with fractions Thanks for this type of post. Re: Addition with fractions The last term should be $\frac{1}{a(a+2)}$ instead of $\frac{1}{a(a-2)}$. What you did next was correct, and you should have arrived at $\frac{a+2}{a(a-2)(a+2)}=\frac{1}{a^2-2a}$. Note, however, that this last equality is true only for $ae-2$. Re: Addition with fractions I'm not going to open another topic but this is just pathetic, as soon as a question is slightly different I can't do it, I'm trying to be so patient but this is just a **** take. How can I not solve this? It looks so so so so simple... It's the whole denominator thing that throws me off EVERY single time. I'm going to guess that I should multiply both sides by the LCM which is 15. $15\times\left(\frac{x}{x+5}\right)=\left(\frac{2}{ 3}\right)\times15$ now what?! Why is this crap so difficult? Fractions in algebra is almost impossible. Last edited by uperkurk; August 1st 2013 at 01:15 PM. Re: Addition with fractions The LCM of (x + 5) and 3 is not 15. Indeed, 15 is not divisible by (x + 5). The LCM is 3(x + 5), and that's by what you multiply both sides. Remember that you can always multiply both sides by the product of the denominators and not necessarily by their LCM. The worst thing you get is a fraction that can be reduced and/or larger expressions. Re: Addition with fractions I'm not going to open another topic but this is just pathetic, as soon as a question is slightly different I can't do it, I'm trying to be so patient but this is just a **** take. How can I not solve this? It looks so so so so simple... It's the whole denominator thing that throws me off EVERY single time. I'm going to guess that I should multiply both sides by the LCM which is 15. $15\times\left(\frac{x}{x+5}\right)=\left(\frac{2}{ 3}\right)\times15$ The two denominators are NOT 3 and 5, they are 3 and x+ 5. The LCM is 3(x+ 5). You can't just pick numbers out of the algebraic expressions. tex] $\frac{15x}{x+5}=\frac{30}{3}$ No. The fractions become $3(x+ 5)\frac{x}{x+ 5}= 3x$ and $3(x+ 5)\frac{2}{3}= (x+ 5)(2)= 2x+ 10$. So 3x= 2x+ 10. Can you solve that? now what?! Why is this crap so difficult? Fractions in algebra is almost impossible. Re: Addition with fractions Re: Addition with fractions I'm not going to open another topic but this is just pathetic, as soon as a question is slightly different I can't do it, I'm trying to be so patient but this is just a **** take. How can I not solve this? It looks so so so so simple... It's the whole denominator thing that throws me off EVERY single time. I'm going to guess that I should multiply both sides by the LCM which is 15. $15\times\left(\frac{x}{x+5}\right)=\left(\frac{2}{ 3}\right)\times15$ now what?! Why is this crap so difficult? Fractions in algebra is almost impossible. Rules of thumb here. 1) multiply both sides by the denominators 2) Add up the numerators 3) Factor everything in sight 4) Cancel out common factors. So for your last problem: $\frac{x}{x + 5} = \frac{2}{3}$ Denominators: x + 5 and 3, so the common denominator is 3(x + 5). Multiply both sides by this. $\frac{x}{x + 5} \cdot 3(x + 5) = \frac{2}{3} \cdot 3(x + 5)$ Now cancel out the common denominators: $x \cdot 3 = 2(x + 5)$ I'm sure you can finish from here. Whoops! Plato got there first. July 29th 2013, 09:54 AM #2 MHF Contributor Oct 2009 July 29th 2013, 10:01 AM #3 Oct 2011 July 29th 2013, 10:35 AM #4 MHF Contributor Oct 2009 July 29th 2013, 10:46 AM #5 Oct 2011 July 29th 2013, 11:00 AM #6 MHF Contributor Oct 2009 July 29th 2013, 11:32 AM #7 Oct 2011 July 29th 2013, 12:43 PM #8 Oct 2011 August 1st 2013, 04:33 AM #9 May 2013 August 1st 2013, 09:11 AM #10 MHF Contributor Oct 2009 August 1st 2013, 01:13 PM #11 Oct 2011 August 1st 2013, 01:38 PM #12 MHF Contributor Oct 2009 August 1st 2013, 01:41 PM #13 MHF Contributor Apr 2005 August 1st 2013, 01:43 PM #14 August 1st 2013, 01:48 PM #15
{"url":"http://mathhelpforum.com/algebra/220898-addition-fractions.html","timestamp":"2014-04-17T22:22:50Z","content_type":null,"content_length":"96595","record_id":"<urn:uuid:43c3c2a1-f155-412e-8df2-83c41899896a>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00039-ip-10-147-4-33.ec2.internal.warc.gz"}
User user21541 bio website visits member for 2 years, 1 month seen 19 hours ago stats profile views 97 19 asked Sufficient conditions for sums of Laguerre polynomials to be non-negative 25 awarded Tumbleweed 24 asked binomial transform, Hurwitz zeta function Feb Eigenvectors of contraction times projection 12 comment Yes, a typo, thank you. 12 awarded Editor Feb Eigenvectors of contraction times projection 12 revised edited body; added 3 characters in body 12 asked Eigenvectors of contraction times projection 28 accepted Estimate entropy of a binary process in terms of decay of correlations 26 awarded Supporter Dec Estimate entropy of a binary process in terms of decay of correlations 26 comment Thanks, the process I want to understand has many other useful properties I can use, I was looking for minimal conditions. 25 asked Estimate entropy of a binary process in terms of decay of correlations 3 answered Minimum 1st-neghbors distance between N random points on a ring Aug Approximating Moment of Sum of RVs 22 comment @Mark Meckes: 1968 edition, chapter 4. dependent variables, section 20: mixing processes, paragraph on moment inequalities. Aug Approximating Moment of Sum of RVs 22 comment @Bill Johnson: absolutely, severe overkill. result must follow also from some easier inequalities as well. 22 answered Approximating Moment of Sum of RVs Aug Approximating Moment of Sum of RVs 22 comment Look at Lemma 4, page 172, in Billingsley's book Convergence of probability measures. Lemma is for p=4, but it works for all even p. If I am not mistaken, this lemma gives the bound you are looking for. 14 awarded Teacher Apr functions whose average along orbits is zero or a constant 23 comment the most famous example of a result of this nature is the so-called Livshic lemma: suppose $X$ is a mixing subshift, $T:X\to X$ is a left shift, and $f$ is a Holder-continuous function such that $$ \sum_{i=0}^{p-1} f(T^ix) =0 $$ for every periodic $x$: $x=T^px$. Then $f=g-g\circ T$. There are many generalizations of this result. 22 answered functions whose average along orbits is zero or a constant Apr awarded Scholar
{"url":"http://mathoverflow.net/users/21541/user21541?tab=activity","timestamp":"2014-04-16T20:08:52Z","content_type":null,"content_length":"43963","record_id":"<urn:uuid:9af563c6-314a-40c8-8234-491cae28e182>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00230-ip-10-147-4-33.ec2.internal.warc.gz"}
Optimization of Research CHAPTER 5 OPTIMIZATION OF RESEARCH It is appropriate to optimize your research before assessing its adequacy. Statistics such as a “t test” or “ANOVA” provide a way to characterize or assess the reliability of research. Before assessing the reliability of your research however, it only makes sense to first take steps to maximize its reliability. Additionally, you should maximize the generality, detectability, and meaningfulness of the research I. Reliability Maximization Before a functional relationship or an event can be considered a fact or true it must be possible for each person to verify its existence for themselves. When you can add up a column of numbers several times and always get the same answer then in all likelihood you have added the numbers correctly. When someone else “checks” your addition and also gets the same answer then your confidence is even greater that you have the “true” answer. This is because more than one person will be able to repeat or replicate a true event. A repeatable finding is one whose controlling factors have been correctly identified. If you completely understand something you can do what it takes to make it happen or to make it stop. If you do not really understand a phenomenon you will not be able to control it reliably because it will come and go according to its true determinants and not necessarily occur in conjunction with your manipulations. If you are a creative cook you want to be able to specify a recipe so that other people can obtain the same results. Likewise, if you are a consumer of recipes then you want to be assured that if you follow the instructions you will get something good to eat. An unreliable cook is one whose meals sometimes come out right and sometimes come out no so good. If you took a person to ten psychologists and asked for a diagnosis, how much agreement would you expect? What would that amount of agreement mean? What amount of agreement do patients have a right to expect? Several specific ways have evolved which enormously facilitate your ability to maximize reliability and thereby separate fact from fiction or to tell the truth rather than to give yourself and others. They assure that you are connected to reality by maintaining your focus on the necessity that others be able to repeat your findings. If you wish you can view this chapter allegorically. This chapter presents the rules by which you can chain a “demon.” It’s as if the variables which make things happen are invisible ghosts because they are not always immediately apparent or clearly understood. If you understand them and control for them (know their name), you can make things happen the way you want. The more ghosts you have no control over the more often your wishes will not come true. If you follow the rules in this chapter then you will be more likely to obtain productive work without the risk of being tricked by an illusion. The use of more explicit and clearer definitions increase the reliability of a functional relationship or an event. We must clearly include the elements which are part of the meaning and exclude the elements which are not a part of the meaning. We must have explicit and precise boundaries on our definitions. See diagram on meaning (Chapter 1 section 2. c. 4.). If the set containing the elements to be measured changes, then any measure of the “set” is also likely to vary. If reliability is “getting the same answer when adding up a column of numbers multiple times,” then there must be some clear way to distinguish which “numbers” are to be added up and which are not, otherwise the answer will change. Operational/functional definitions accomplish this end. For example, often there is substantial disagreement over whether or not “punishment” works. The disagreement is empty. If we were to operationalize the definitions which the disagreeing parties were using we might hear “I showed my anger by no longer helping the person develop into a better fully functioning person. I stopped criticizing everything they did” or “20 micro volts of electricity delivered to the floor” (subject wore shoes). An example alternative to these useless definitions is “the consequence necessary to reduce the rate of the behavior by one half.” Communication must ultimately be based on operational/functional definitions. The value of explicit definitions are apparent in other situations such as dealing with a “Philadelphia” lawyer or the “devil.” For example: You want to live forever? No problem you will live forever. What? You did not want to age? Sorry, you didn't ask for that. You must specify exactly what you mean. Imagine being in a psychiatric institution trying to get out. What would you have to do to demonstrate that you are “sane”? What if your mother were the ward supervisor? On the other hand, what if that person that you can never get along with was your ward supervisor? What if they thought it was an act? How would you get out? What would you have to do to show yourself sane? What if you found out that your roommate just got out of a psychiatric hospital after killing twelve people? What could your roommate do to demonstrate recovery? What if it were you who had just been released? The next time you have a disagreement with someone, try defining the words at the focus of the disagreement. The disagreement will probably disappear. Accurately and precisely describing and quantifying what is observed makes it more likely that all observers will agree on exactly what did in fact happen (i.e., that you will arrive at the truth). Quantification is the mapping of the magnitude of a thing into the vicarious system, and could be seen as modifiers for nouns such as “140 meters” or “large car” and relationships such as “2.6 times” and “bigger.” Our term must be unambiguous. Consider trying to study optical illusions with no quantification of measurement. The laws of physics would change with each new background, two things equal to a third would not necessarily be equal to each other. Large and small as modifiers could take on meaning to suit the user in their reality, imagine for example, loaning people money without quantified descriptions of the amount. Imagine a football game without a chain and the referee could just say “well it seems like a first down to me.” Recipes come out the same over and over again if the ingredients are quantified. Alchemy changed to chemistry when quantification began to be used. Quantification or “measurement” can be seen as the conversion of some property of an event in nature into some elements in a vicarious system such as language or mathematics. The amount of information or accuracy can vary. The issue is how perfectly and completely is the listener expected to know what the speaker saw? Quantitative relationships apparent in nature may not be available in the converted information. Information can be lost or error can be added. Clearly the conversion to the vicarious system must index the events in some repeatable way. The conversion must also generate information which reflects the actual attributes in nature which are of interest. See section I E of this chapter. Systematic errors change a measure by a constant amount in a constant direction, such as a broken ruler. Each measure will be one inch more than it should be if a one inch piece had been broken off. These errors change the mean but can be removed if they are constant and known by simply adding one inch (in this example) to the obtained measurement. Unsystematic errors change a measure in random directions, sometimes adding, sometimes subtracting. These errors do not change the mean if they are truly random. If not random or if not specifiable, then no correction is possible. c. Type of Measurement Operation i. Objective Quantification In this type of measurement, there is a mapping of events or relationships in the natural world into the vicarious system. There is great emphasis on reliability and validity across individuals. Counting objects is one example: "one chicken, two chickens, three chickens, ...... etc." A variety of operations are available to attach a number to a comparison which occurred in nature. The evaluation of things is an example: "For me, this movie is the best and this second best." This type of conversion places each individual in the set to be evaluated in an ordered list such as first place, second place, third place. This type of conversion produces a value on some scale such as on a scale of 1 to 10 (e.g., rate this movie). This conversion attaches a numerical value to each item to be scaled. This simply identifies the item that is most xxx of the two being compared. d. Amount of Information Retained in the Vicarious System Groups are homogeneous only in the dimension used to define the group. In other dimensions a group can show much diversity. Even though we have separated wheat from chaff, within our bag of wheat the kernel sizes, colors, weights, or protein may vary. A pile of bricks is not the same as a pack of dogs but both the individual bricks and individual dogs differ from one another. The bricks can vary in weight and color and so can the dogs. Elements of a group are frequently differentiated by assigning different elements different numerals. Assigning a “numeral” to an instance or an occurrence of something does not always mean the same thing. “Seven” is not the same as “seven” and both are completely different than “seven” and “seven” is wholly different than the previous three. There are four different things which could be meant by the numerals assigned to the elements of a group. They are essentially homonyms - same word: “seven” or “number,” but each with a different meaning. As a more familiar example imagine that someone bets you $20. that you could not use the word “lead” in a sentence correctly. The same word can mean different things depending on what was intended when the person wrote it down. The following four types of numbering systems differentiate elements of a group. Each category implies successively more about the quality or quantity of that difference. You must remember that some forms of differentiation say more about the nature of those differences than others. If we see three people we can say they are class majestic, regal, and imperial or heavy, heavier, and heaviest , or size 30, 40, 50, or 150 pounds, 200 pounds and 250 pounds. In fact they weigh 150, 200, or 250 pounds they are, in fact, ordered in what is called a ratio scale. However we may not always have or even want every aspect of information available in nature. In our technical terminology, their ordering could be nominal, ordinal, interval or ratio depending on how much information we want to communicate. Similarly for personality except that the relationship in nature is less well known or understood. In essence then we are talking about how much information is retained in the vicarious system with respect to the variables of interest. This is like assigning different football players different numbers. The only thing that is implied by a nominal number is that it is different from any other nominal number. You have used numbers in the same way you would use colors or shapes: the yellow glass or the red glass; the round candy or the square candy. You are using the number as a name or label. You are classifying the item. You may not put something labeled with the number 23 into the category reserved for items labeled 14 but the numbers tell you nothing other than the categories are different. You may not add, subtract, multiply, or divide the numbers meaningfully because they have no specified relationship to each other. Removing a receiver (Player #83) from one football team is not equivalent to removing the entire backfield; quarterback, halfbacks, and fullback (players numbered 12, 20, 21, 30) from the other. Any change is a difference in kind not a difference in amount. You could not find two “lucky” numbers on a roulette wheel, play their mean and be twice as lucky. The different numbers in this class could be thought of as representing a qualitative change. There are three subclasses of nominal numbers: 1) identification - each individual has a different number, e.g., ID numbers; 2) categorization, e.g., blue = 1, green = 2 and red = 3; and 3) dichotomization - all events are dichotomized into two groups, e.g., male = 1 and female = 0. The numerals can not be used in a variety of ways because they are nothing more than names and have no prespecified relationship to each other. This system of numbers includes all the characteristics of nominal numbers. It also implies that larger numbers represent quantities that are larger than the smaller numbers. However the amount of difference between numbers is not necessarily the same. An example of this type of difference is the finish order in a race. The difference between the first and second runner may not be the same as the difference between the second and third. It is not appropriate to say that the second (2) place runner took twice as long as the first (1) place runner, and that the third (3) place runner took three times as long as the first (1) place runner. The above figure spatially represents that the interval between the numbers in an ordinal scale may not be the same. In this system all the properties of nominal and ordinal numbers are included. In addition, the intervals between numbers are also equal. The difference between numbers is therefore the same, for example, 36 degrees Celsius is the same amount above 35 degrees as 200 degrees is above 199 degrees. One degree is equal to one degree. However, interval numbers lack a true zero. 100 degrees is not twice as hot as 50 degrees. Using a more obvious representation (100 + x) degrees is not 2 times larger than (50 + x) degrees because the x term would also have to be considered. The figure below illustrates this effect with the arbitrary zero axis as a dotted line and the true axis as a solid line. While 4 is twice as high above the arbitrary axis as 2, it is not twice as high above the real origin. It can be seen that the true measure can be like an entire iceberg while the measure based on an arbitrary zero measure (an interval number) is like what is above the surface of the water. Twice as high above the water is not the same as twice as high altogether, if the icebergs move to fresh water their relative size above the water will change. In this system all the above attributes of the other number systems are included. In addition, zero means the absence of the property. For example, in the Kelvin scale of temperature, 0 degrees means absolute zero, the absence of molecular motion and therefore the absence of temperature. Two degrees Kelvin is twice as hot as 1 degree Kelvin. Ratio numbers are what most people mean when they talk without qualifications about numbers. The following figure illustrates measurement scales. The available values which the numbers can, in principle, take; or the "step" type also varies. Some variables increase only in whole steps. They are referred to as discrete variables, e.g., family size increases only in whole steps - 1 child, 2 children, 3 children, etc. Continuous variables on the other hand can have any value in between. In actuality there are an infinite number of values between 1 and 2, e.g., weight or height can vary 100, 100.1, 100.01, 100.001, etc. Each type of variable can therefore be categorized with: 1) type of measurement scale and, 2) type of step. For example, "discrete interval" or "continuous ratio." It would not be meaningful to use a continuous step type with a nominal or ordinal scale. │ │ Continuous │Discrete│ │Ratio │ │ │ │Interval│ │ │ │Ordinal │not meaningful │ │ │Nominal │not meaningful │ │ Are we directly measuring the ultimate thing of interest or are we measuring one thing and hoping that it will give us information about another? In this case, we measure the thing directly and simply report that datum (e.g., the number of pecks in an interval. A behavior which can be measured is used to provide information about a construct which cannot be measured directly. The rate of running is thought to tell us how much fear the person has. Fear is thought to be something "more" than simply a behavior such as running away or screaming. iii. Behavior to Behavior Inference In this case, one behavior is used to provide information about a second behavior. What the person says they will do is thought to tell us what they will actually do. In this case, a behavior is used to predict another explicit behavior. It could be talking about something predicting what will actually be done, or turning away predicts screaming. Even if the “outer” boundaries of a set are precisely defined there may be variability in the elements within that set. A “group” implies necessary similarity on only one dimension. All items with the attribute are considered members of the group, those without it are excluded. When forming a group based on gender one ignores eye color. It can be seen that the rules which are used to define the group can be shifted such to include more or less variety. A group could be formed of blue-eyed males, or males between 18 and 20 with 3 years of college, or live organisms etc. When you obtain one score from some set, you have a datum which is easy to deal with, you just write it down. When you obtain a second measure of the same attribute from that same set the situation can change, the measurement may be the same or it may differ. To use a specific example, suppose you gave an IQ test (the measure) three times to a person (the set). If they got the same score all three times then you know their IQ, the characterization of the set was easy; however, if they got three different scores then the situation is much more complicated. Do you use only the first score, only the last, or do you take the mean? What you do depends on what you believe is causing the difference. In sum, you must consider the scores as the same (one set) or different (three sets) before deciding on the value of the IQ, or in order to decide which measures apply to the person and which do not. The first point of view is that the IQ or whatever was being measured actually remained constant and that the variation was caused by random measurement errors. In this case the average of all the measurements would be the true measurement because the measurement errors are random (sometimes above, sometimes below), and will, therefore cancel out. A random error cancels out because it varies above and below the true value equally. (A subsequent section demonstrates that the mean of a random error is equal to zero.) This is saying that each measurement is made up of the true measurement and some random error, and that the variation in the measure is caused by the error of measurement, not changes in the thing being measured. The alternative point of view to deal with differences in measurement is to assume that the difference may be the result of an important systematic independent variable in which you should be specifically interested. For example, if you gave the tests consecutively, without any interruption, it may very well be that the person was getting tired and that the steady downward trend of the scores represented an important effect and not simply error of measurement. In this case it would not be appropriate to average the individual test scores to determine the “true” measurement. In the behavioral sciences, if you average across groups of individuals you are, in effect, saying that any differences between those groups of individuals are only errors of measurement and that all the groups are actually the same. If you average across individuals you are assuming that individual differences are only errors of measurement and that all the individuals are actually identical. If you average across days you assume that daily variations are only error. If you average across a session you assume that no meaningful systematic effect is occurring within a session. It is very important to realize that any systematic effect which occurs across the dimension across which you are averaging becomes a confound. At best you will be ignoring a source of variance at worst your interpretation of the data will be wrong. Obviously a decision as to what is going to be considered error or “noise” and what is to be considered a potentially important variable must be made before any averaging or grouping is done. This issue is explicitly addressed in the difference between an ideographic technique (a single subject under a wide variety of circumstances, e.g., Skinnerian research) and a nomothetic technique (comparison of the averages of the performance of groups of subjects (e.g., an approach typified by classical Hullian research). Sometimes it is argued that psychology is the study of the differences between individuals, whereas Sociology is the study of the differences between groups with the differences between individuals considered as error of measurement. Once the determination of the appropriate level of grouping has been achieved various methods of graphic illustrations and quantitative measures for that group are then available. However, it should always be kept in mind that a choice of a unit of measure is a deliberate choice of what is to be considered irrelevant noise and what is to be considered the important subject matter of the a. Distance as Representing Quantity Numbers can be represented by distance from an origin along an axis. As will be seen, this simple convention (analytical geometry) has had a very large impact on our representing the world symbolically. Distance from the origin to the right or upward is considered positive while distance to the left or down is considered negative. Greater distance indicates greater quantities. b. Space as Characterizing Group Data Groups contain more than a single instance and therefore can be quite difficult to describe. Several ways have evolved of comprehending and depicting the information in a group of measures. A tally graph indicates the number of occurrences or the frequency of each value by the number of marks above that value on the abscissa. The height of the stack of marks for that variable therefore also indirectly indicates the frequency of that variable. The total number of marks or asterisks, for example, in all columns then indicates the total number of occurrences. The ratio of the number of asterisks in any one column to the total number of asterisks in all columns indicates the relative frequency of that kind of asterisk occurring. For example, if there were a total of 100 occurrences, and 11 were of the specified type then 11% would then be of that type. Continuing this line of reasoning it can be seen that if things continue going like they have been you can predict that 11% of the future occurrences will also be of that type. A tally distribution therefore depicts, that of the total number of occurrences, how many were of a particular type, what proportion were that type, and if things continue as they have been, what proportion will be of that type in the future (i.e., the probability of that type occurring). The same reasoning applies to other ways of depicting grouped data such as bar graphs, histograms, polygons, and frequency distributions. Space is used to represent important attributes. A common convention is to use height or the ordinate to indicate frequency, and the distance to the right or left of the abscissa to indicate some other dimension. In the behavioral sciences the ordinate (y) is used to represent the dependent variable while the abscissa (x) represents the independent variable. (The word "acrossissa" can be used as a memory aid. It is almost the same as abscissa and across means back and forth and contains the word cross or x.) In these cases, the height or the ordinate indicates the frequency of that variable. The total area represents the total number of occurrences. The relative area of one part of the histogram or polygon to the area of the whole histogram or polygon represents the relative frequency of an occurrence of that particular event. This ratio also represents the probability of that particular event occurring in the future if things continue as they have. Instances can be tallied and then depicted in a figure as: In the figure above the values for each season are indicated by the number of asterisks; any mark would serve equally well. The values are also indirectly indicated by the height of the columns. Numerical data can be represented more abstractly, i.e., the ordinate or distance up the y axis can represent an arithmetic value. For example, the smog count for each season can be represented by the vertical positioning of a single “*” in a figure. The figure above depicts the number of smog particles for each season in one year with only the height of the asterisk indicating the number of particles for that season. There are two classes of figures which use this columnar form of representation. They are differentiated in terms of what the abscissa represents. A nominal, discrete or discontinuous variable such as eye color or family size could be represented across the “x” axis. A bar graph does not imply intermediate values. There are no families with 2.5 children. There are no football players with number 66.5. The height of the bar represents the quantity of the variable represented up the “y” axis (figure below). If the axis is nominal the categories along the abscissa x axis can appear in any order. Therefore no statements concerning “trends” can be made. Trends require ordinal, interval, or a ratio scale across the x axis. The axis could be rearranged to prove any trend an unscrupulous person wanted as illustrated below. The histogram is used to depict continuous numbers such as height and weight and implies values between the categories listed along the x axis (while the bar graph does not). The bars in a histogram are continuous or are drawn together centered over their x axis value (figure below), whereas those in a bar graph are separated. Values are available between summer and winter. Because order is fixed, it is meaningful to talk about trends in a histogram. A point may be placed above each category along the x axis at the height of the bar for that category. Clearly the points would represent the same information as the bars. If the points are in the center of each category a line can be drawn to connect them. The line then defines a frequency polygon. The frequency polygon implies quantities which would be appropriate for intermediate categories along the x axis. Therefore you may construct a frequency polygon from a histogram but may not construct one from a bar graph. As a further example of the histogram, the smog count can be measured in finer and finer categories with each category being directly continuous with the next until a continuous curve or frequency distribution results (figures below). Note that less and less interpolation is necessary to determine a y value for an intermediate x axis value. As can be seen in the figures, it is clear that continuing to make measurements over smaller and smaller intervals would result in an infinitely fine histogram or a continuous curve representing the frequency of occurrence (figure below). The height of the curve above any point on the abscissa represents the frequency of that particular event, just as the height of the column had represented the frequency of the event in the bar graph and histogram, and just as the number of asterisks had represented frequency in the tally record. The other relationships hold as well, in the case of the frequency distribution however you cannot meaningfully talk about the probability of an event represented by a single bar because the individual bars are infinitely small. Where a histogram allowed you to choose all the data for April by selecting a single bar a frequency distribution requires that you specify all the data from one second after midnight on March 31 to midnight April 30, i.e., specify the limits of the “bar” you want. One speaks of the frequency of all events contained between point A and Point B on the x axis, where A and B can be any position along that axis. A histogram has preselected points A and B. Measures of central tendency specify a value which can be taken as the x value which is representative of all values in the distribution. The difference between various measures of central tendency result from the difficulty of a single formula providing a meaningful single summary value for all possible distributions. These are measures which indicate how the values within the distribution differ from one another. Distributions which differ with respect to their dispersion are spread out or are bunched together Distributions are not always uniformly symmetrical and gaussian. Distributions can be sloped to one side to a greater or lesser extent, or can be sharply peaked or flat. d. Relationship Between Subgroups There may also be specific relationships between the elements in the subgroups making up the set. For example, in the set "people," height and weight are related, whereas IQ and shoe size are not e. Other Methods of Quantifying Group Data i. "Statistics" as Representing Groups If you look at the difference between means of random groups divided by the average of their variances you get t, etc. A measure determined by several different independent methods based on different assumptions is much more likely to be reliable than a measure determined by only one method. If you have an inferred measure like response strength and it is equally well measured by latency, rate, errors, force of pull, and duration, then it is more likely to be reliable than if it were measured only in one way. If you make a statement of a functional relationship based on only an extremely narrow specific group of subjects, kind of apparatus, or procedure it is possible for it to be altered by even a small change in any number of variables. Statements of wide generality which fit many situations are less likely to be disrupted by unforeseen events. More broadly based statements are more reliable. Somewhat like Weber's fraction: the larger the base the larger a difference must be to cause a change. An anomalous change in the illumination at noon doesn't change the exposure required for a picture by very much. Unfortunately, all possible variables are not reported in every research paper. Some variables are not made explicit because they are thought to be obvious and well known, or are common to normally accepted research practice, or to everyone's research style. People familiar with a particular area are likely to know through experience what to control as a matter of professional research practice even though every detail does not appear in print due to publication costs. This absence is not because of carelessness or ignorance but rather the recondite nature of professional research. Reliability can be increased by being aware of the important variables in a situation which are not explicitly pointed out. Apprenticeships or practicums are extremely important for this reason. For a measure to be reliable, you must measure what you think you are measuring. You must be correctly connected to reality. For example, if you read a humidity gauge and believe you are reading a temperature gauge it is extremely unlikely that people will be able to replicate your effect when they follow your instructions to read a temperature gauge. If you view a thermometer from an extreme angle and get a parallax error in your measure, then your measure will not be replicable by either people who measure the temperature correctly or who measure it with their own inaccuracy. You must be correctly connected to reality. If you measure something indirectly you must assure yourself that you have measured what you thought you measured. Indirect measures are prone to error, invalidity, and unreliability. Use direct measures whenever possible. If you ask people what they will do in a crisis, you cannot automatically infer that they will do exactly that. In actuality what they do is an entirely open question. If you claim you are terrifying people by saying 'boo' or if you claim you are measuring fear by measuring withdrawal, you must verify that your construct is in fact the thing being measured. Researchers have put pigeons on a schedule where a key peck could avoid shock. They found that pigeons would not peck to avoid shock. They did not find that pigeons won't avoid shock. They will fly away to avoid shock, as anyone who had chased pigeons in the park will attest to. If you document what you do carefully and follow the procedure explicitly then it is likely that someone else following the same directions will obtain the same results. If you do not document what you did correctly it is unlikely that their cake will turn out, when they use your recipe. It’s also obvious that any statistical tests of reliability must be calculated correctly. The experimental design must allow you to “subtract out” or “cancel” all potential alternate explanations for the effect. You must have what has come to be called internal validity (see research design section in Chapter 4 IV. B.). The statistical treatments which are applied to the data generated by research require that certain assumptions be met. For that reason research which will be analyzed by statistics must be designed with its subsequent statistical analysis in mind. A particularly troublesome problem is the use of the statistic appropriate for the type of variables in the research. See The Conceptual Precursor on the Classification of Variables section (Chapter 5 B. 1. d.). The level of reliability specified by a statistical test is meaningless if done on data which violates the assumptions of that statistic. A sporting record is meaningless if it were obtained in a situation which violates the assumption of the sport. A quarterback who gains 700 yards in a single game by driving to the end zone in a golf cart has set no record. A "no hitter" is no record if a greased ball was used. A technologically sophisticated experiment exerts as much control over confounding variables as possible to minimize their impact. It also measures variables in the most sophisticated way possible to maximize the accuracy and precision of the obtained measures. Higher technology provides better tools to enable correct assessment of the actual controlling variable and provides the power to reproduce the effect by accurately creating or recreating the controlling conditions. Using a scale to measure weight is more reliable than a subjective estimation. Data complete computerized or "untouched by human hands" is be less likely to contain errors. An extremely strong effect is likely to recur, whereas a very small effect is more likely to be overshadowed by noise, or to have arisen by chance in the first place. You can obtain large signal to noise ratios by choosing to study strong effects, by using strong treatments, or by reducing the noise in the experiment. You do an experiment in order to find out more about the variable than you knew before. You do an experiment by manipulating the independent variable to assess its effect on the dependent variable. This seems very simple. Unfortunately many things can confound the interpretation of the results. In any experiment, many possible confounding influences may exist. If you do not control them, some alternative explanations are possible for your results. If this is the case, your experiment is of little value because you cannot be sure why things happened the way they did. It is crucial to understand that if an alternative explanation for the results is available, what you have is a "likely story" and that is all. To put the shoe on the other foot, imagine being innocent but being on trial for your life. The prosecutor has put together a plausible scenario with you as a murderer. You would, of course, want to point out the equally plausible scenario in which you are innocent. We have all seen at least one movie with basically this very plot. The American constitution demands "beyond a shadow of a doubt" for that very reason. The innocent are protected from prosecutors that ignore alternative scenarios, and from hyped-up lynch mobs. For that same reason, the community should be protected from sloppy therapists or experimenters that ignore alternative explanations and get away from the facts. When you are interested in truth you must make sure that alternative explanations are not possible. Otherwise, there is a reasonable doubt and, you have not proven a thing. You should be no more willing to bet someone else's life on a likely story than you would risk your own. By deliberately manipulating the occurrence of its precursors the true cause of an event can be easily established. If all events except the “cause” are allowed to occur and the “result” does not occur, and if only the cause is added and the “result” then occurs; and if the removal of the cause again terminates the result then a cause-effect relationship has been established. If you were trying to discover to what you were allergic, you would do similar research, adding and subtracting items from your environment until you found what was causing the problem. Obviously, it is very important to assure yourself that you have not confounded your experiment in some way by adding or subtracting more than one thing at a time. You may wonder why you would want to bring your allergic reaction back. But, only by doing that will you be sure of its cause and only then will you be relatively sure you are not avoiding some object inappropriately. What if you had a cold rather than an allergy and you got better right after you spent all your money? For the rest of your life you would think you were allergic to money. Obviously, the best method of dealing with confounding variables is to remove or eliminate them altogether. If you were measuring IQ, and found that some people were taking the test while drinking coffee you could methodologically eliminate the confounding variable by not allowing anyone to drink coffee during the test. In the pan balance example, “A” would be taken from each side. Or, you could methodologically eliminate the difference caused by the coffee by having everyone drink coffee (putting an ”A” on both sides of the pan balance) or by considering the coffee drinkers separately from the nondrinkers in your analysis of the results. Alternatively, you could eliminate the effect statistically by subtracting the effect that the coffee had on the test from scores of those who did drink it. This correction requires of course that the addition to each score be known, and that it be the same for each individual. b. Equalization or Matching A different method is to equalize the effect of the confounding or extraneous variable. You could ensure that each group consumed the same amount of coffee as a group or that for each person in the experimental group that drank one cup of coffee there was a person in the control group that also drank one cup of coffee. For each person that drank two cups of coffee in the experimental group there was a person that drank two cups in the control group, and so on. In the pan balance example that would be to assure that for each “A” on one side; there was an “A” on the other side. You could also explicitly balance the two groups. For example, you could have two basketball teams play several games switching players after each game until they ended each game in a tie. This procedure would explicitly balance the groups with respect to basketball playing ability. You could then give the treatment to one team to assess its effects on basketball playing. For the IQ testing example the coffee confound could be explicitly balanced by moving the coffee drinkers around until both groups had the same mean IQ. Then you could administer the treatment of interest to one group, with the assurance that the two groups were equal with respect to the effects of coffee on IQ. With balancing there may not be any equivalent individuals in the two groups, e.g., Group 1 has IQ 200 and 100 while Group 2 has IQ 150 and 150. In the pan balance example this would be moving things back and forth between the pans until they were the same regardless of what and how many things were on each side. You would then add the unknown that you wanted to weigh to one side. d. Balancing by Theoretical Notions There are occasions when equalization or explicit balancing is not applicable or possible. In that case you can still try to balance the two groups based on what you think should be equal. For example, you could counterbalance the groups by assigning the best and worst individual to one group and the two middle individuals to the other group. The problem with balancing in this way is that a “linear” effect of the variable is usually assumed, i.e., it is generally assumed that the average of the middle two intensities of each variable is equal to the average of the outer two, e.g., in a series of 1,2,3,4, it is assumed that 1 + 4 over 2 is equal to 2 + 3 over 2, (as it does in this example which was chosen to work out). In the explicit balancing example, the distribution of IQs is the result of the empirical process of balancing whereas with balancing based on theoretical notions, we sort subjects into groups that we have some reason to believe are equal, but for which we have no proof. We have no evidence that the average of one and four cups of coffee is the same as the average impact of 2 and 3 cups of coffee or that a team which contains a person with an IQ of 200 and one with an IQ of 0 is the same as a team with two people with an IQ of 100. In the pan balance example, this would be “believing” that the things in one pan weighed the same as the different things in the other pan because they should be equal. Members of each group can be selected at random. In this way the groups will tend to be equalized by the action of chance. In the pan balance example, this would be placing 60 handfuls of randomly selected stuff from a barrel on one side and 60 handfuls of randomly selected stuff from the same barrel on the other because the random variations in the items on each side should cancel out. Amazingly enough (given about 30 or more elements), random assignment will tend to equate any number of variables simultaneously. In fact it will equate variables which you are not even aware of. Any other method requires perfect knowledge of potentially confounding factors and a very extensive task of determining how to assign elements such that the overall groups are equated. Additionally, most statistics are designed to assess the likelihood of two groups selected at random differing by the obtained amount. For that reason alone it is advisable to select groups by random assignment if subsequent statistical treatment assumes it. The chosen dependent variables should be viewed in terms of dimensions which are actually controlling those changes. This is an important element in the post hoc reorganization of data discussed in Chapter 4 IV. A. 2. a. Little long term, real consistency would be obtained if a psychotherapy were evaluated in terms of the shoe size of its recipients. The apparent success of the therapy with children and its lack of success with adults would be spurious. Even though shoe size tends to increase with time and therefore correctly indexes presumed increasing psychological health in many clients, it also incorrectly indexes the health of many clients who are getting worse or showing no change. Psychotherapy should be judged in terms of some consensually validatable and relevant dependent measure such as the percentage of successful social interactions or whatever is agreed to be what psychological health means. Lack of operational/functional definitions of what health is allows ignorant and unscrupulous psychotherapists to believe, or at least claim to be effective, when they are not; because of an unclear or an erroneous dependent measure. If it is a proper measure other people will find the same thing, it will be reliable. The independent variable must also be chosen with care. The correct or valid identification of factors which are influencing the dependent measure makes sure that relevant but unrecognized variables will not be inadvertently changed or left out during attempted replication. If the actual controlling variable was not correctly specified it may not be included in an attempt to replicate. The original finding will, as a result, be unreliable. Replication by other laboratories is essential for this reason. It assures that the relevant factors have been specified as the independent variable. It is unlikely that the two laboratories or researchers have the same “hidden” factors. They would be unable to replicate an effect if the specified independent variable were not the cause. Frequently, there is overall consistency in data while there appears to be little local consistency, or vice versa. This is the “forest-trees” problem. A comparison of whole forests may show consistent effects, even through a comparison of each tree is confusing or vice versa. Viewing the data within a broader or narrower context may increase reliability. Additionally, rescaling the data with a transformation such as a log transformation may allow the orderly nature of the data to be more apparent. Whether or not consecutive measures of the same thing have the same value or not is determined by the measuring instrument. An instrument which is extremely coarse will provide measures which differ very little and which are therefore extremely reliable. For example, if you measured your weight on a scale for tractor trailers you would find that you always weighed the same no matter what you ate. The instrument would provide data which was extremely reliable in that it was always the same. Unfortunately it would be of little use. At the opposite extreme, an instrument which is extremely fine will obtain data which varies widely and which are not at all reliable, and therefore do not emphasize the similarity of the events. For example if you measured your weight on an analytical balance accurate to millionths of a gram you would find that your weight varied considerably as you breathed and as the floor vibrated. A compromise must be struck between ridiculously gross measures which are very reliable yet are not useful because they do not detect meaningful differences; and ridiculously fine measures that are not useful in that they vary so widely that they mask similarity. This can be roughly illustrated with the scale on the following three figures. A researcher can maximize the reliability of the findings by correctly choosing questions or structuring the situation to dramatically expose the controlling variables. This is the art of good research. It is the degree to which you can identify a clear boundary condition which varies from one category to the other with a change in the independent variable. Building on the underwater steam shovel metaphor from the first chapter, this is simply your skill in finding the boundary between the boom and the mud by choosing the nature of your probe how you jab around and how well you interpret what you feel. For example when trying to guess letters in a game like "Wheel of Fortune," some letters are more likely to be right than others. A finding which is consistent with great sections of the interlocking body of knowledge and that is not incompatible with any basic understandings is more likely to be reliable than a finding which is inconsistent with much of our acquired knowledge. A finding which is consistent with a great body of knowledge is more believable than one which is simply unlikely to occur by chance alone. Part of the task of the discussion section of a journal article is to establish continuity with the theoretical net for this very reason. Keep in mind that your task is to make sense out of the environment, not to show how you cannot make sense out of it. "Scientific glory" goes to the person who makes sense out of a finding, not to someone who throws out a fact which may be true and important or may be an error and irrelevant. Do not simply search for an anomalous finding. Rather try to advance understanding. A retardate can "not make sense" out of almost any finding. Data are being replicated continually and used to design more complex experiments. When data can be repeated and/or used to predict new results, its reliability is substantiated by definition. This is especially true when investigators with opposing theoretical views replicate the finding. Send comments/criticisms/speculations to Date Last Reviewed: November 17, 2002
{"url":"http://www.jsu.edu/depart/psychology/sebac/fac-sch/rm/Ch5-1.html","timestamp":"2014-04-18T15:43:23Z","content_type":null,"content_length":"78001","record_id":"<urn:uuid:0f1745ab-61b6-486d-8d3c-ca6ed6e93a62>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00010-ip-10-147-4-33.ec2.internal.warc.gz"}
Gompertz’ Law of Mortality: How Long Must Your Money Last? For a man who never formally studied in or attended university, Benjamin Gompertz achieved a level of scientific immortality that even the most senior and tenured professors could only dream of While most academics toil in obscurity, and their names eventually perish despite their frenetic publishing, Benjamin Gompertz has joined a very small distinctive group of scholars with an actual equation named after him. The equation has been admired and used by researchers in demographic studies, the world over, for almost two centuries now. His equation might not be as famous as Albert Einstein’s ubiquitous E = MC^2, but it sure is a lot more useful for retirement income planning – even if you are a nuclear engineer. Now, just to be clear, Benjamin Gompertz was not a college drop-out, who decided to tinker in his parents’ garage, instead of staying in school. I surmise that Benjamin would have loved nothing more than to enroll in university as an eager teenager. He just wasn’t allowed. You see, back in 1795 England, there were strict quotas on these sorts of things, and he was Jewish; hence, no admission. He had to make do with an informal “street” education. But, despite the handicap, this self-starter taught himself everything there was to know about mathematics – becoming a virtual expert and devotee of Newtonian physics along the way – and made it all the way to the top of the British scientific aristocracy. He was made a fellow and eventually was elected president of the Royal Society. This honor would certainly be inconceivable in today’s hierarchical scientific world. In fact, I’m not sure what happened to his childhood classmates, who did make it into university, but I doubt any of them have an equation named in their honor, used daily almost two centuries later. Simplistic Retirement Planning I have observed that when financial advisors discuss retirement income planning with their clients, they start by asking questions about how long they would like to plan for, or the age to which they expect to live, for example age 85 or 90. Consistent with the pick-your-timeline philosophy, many of the popular financial planning software tools and web-based retirement calculators force users to select a lifetime horizon in advance. Perhaps you too have played with these tools, using various lifetime horizons. I can just hear the discussions “Aunt Gemma lived to 97, but Uncle Bob only made it to 82, so maybe we should use age 90?” or “Oh dear, we can only spend $60,000 per year if we plan to 90” which then leads to the inevitable reductio ad absurdum “Ok, lets plan to 85, because we really need $75,000 per year”. The problem with this approach is that you really shouldn’t be picking your life horizon in advance. Life is random, and you know it. In my opinion, the next step in a scientific approach to retirement income planning is to understand how random your remaining lifespan really can be. To make an informed decision, you need to know the odds of living to various ages. Then, you can decide how long you want to plan for ­— and more importantly how you plan to adjust your spending if you live to a very old age. This is precisely where Benjamin Gompertz’s handy little equation comes in. Gompertz’s Big Discovery Benjamin Gompertz, like other demographers and actuaries in the 19th century, spent much of his life examining records of death – and specifically the exact ages at which people died. Until Gompertz, scientists and researchers would compile or collect these records, but had never given much thought to extracting any forward-looking patterns or formal laws of mortality. They knew how many people had died in Carlisle or Northampton in the past, and could predict how many might die in the next few years – which was very important for insurance pricing – but the entire activity was rather ad hoc in the early 19th century. The mortality tables complied by statisticians was a single frame from a snapshot of the past. Benjamin Gompertz figured out how to convert these individual frames into a movie. The first two columns of Table #1 are an example of the snapshot from a hypothetical life (a.k.a mortality) table. You will see that there were 98,585 people who were 45 years of age and alive in a given (hypothetical) year. Then, of that large group, 146 people died between the age of 45 and 46, then 161 people died between the age of 46 and 47, then 177 people who died between the age of 47 and 48, etc. The fact that these annual mortality rates increased with age was well documented and understood well before the time of Benjamin Gompertz. But, Gompertz went one step further with these numbers in search of a pattern or a natural law. He wanted something like the laws of gravity, which were put forth by his British hero Sir Isaac Newton. So, he tinkered, played with and manipulated the mortality rates from many different mortality tables. Along the way, he decided to compute the natural logarithm of these numbers. Benjamin Gompertz looked at the differences in values between two adjacent ages, and that is exactly when the light bulb went on! When he subtracted subsequent values from each other, displayed in the sixth and final column of the table, he got numbers that were extremely close. As you can see, they are between 9% and 10% regardless of age! He did this process for many different age blocks, different mortality tables with populations from different cities and countries. Sure, the mortality rates were quite different depending on age, gender, country of origin and city, but the difference in the logs was the same. To Benjamin Gompertz this was a very odd coincidence, and an indication that perhaps something deeper was at work. Why should the difference in the natural logarithm of death rates be constant with age? In fact, if you plot the values themselves (column 5) they fall on a straight line, with a slope that is approximately 0.0975. Mathematically speaking, if the difference between the natural logarithm of mortality rate is constant over time, then the mortality rate itself is growing exponentially at the rate of 9.75% per year. Benjamin Gompertz deduced that there was a law of nature at work. Death was no longer just a random event whose likelihood increased with age. There was an underlying force of mortality that led to these values. Benjamin Gompertz discovered that your probability of dying in the next year increases by approximately 9% to 10% per year, from adulthood until old age. Subsequent bio-demographic research has shown that every species on earth has its own rate. From a mathematical point of view, assuming this line and working backwards, if you start with a species whose “chances of dying” increases by (say) 9% per year, you can invert the relationship and obtain the probability you will survive to any age. That is Gompertz’s equation, and why it is named after him. According to his obituary, he was born on March 5th, 1779 in London and died on July 14th, 1865 in London, at the age of 86, which is exactly what one might expect under the Gompertz law of Reprinted by permission of the publisher, John Wiley & Sons Canada, Ltd., from The 7 Most Important Equations for Your Retirement, by Moshe A. Milevsky. Copyright © 2012 by Moshe A. Milevsky. This is where the comments go.
{"url":"http://www.thinkadvisor.com/2012/02/01/gompertz-law-of-mortality-how-long-must-your-money?t=annuities","timestamp":"2014-04-16T13:21:19Z","content_type":null,"content_length":"44247","record_id":"<urn:uuid:1e7f2032-2378-4e96-bdf7-e3165161685d>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00323-ip-10-147-4-33.ec2.internal.warc.gz"}
Solving momentum-space coupled-channels equations for electron-atom scattering using a rotated-contour method Blackett, Anthony John (2002) Solving momentum-space coupled-channels equations for electron-atom scattering using a rotated-contour method. PhD thesis, Murdoch University. PDF - Front Pages PDF - Whole Thesis Download (202kB) | Preview Download (4MB) | Preview In the last twenty years, electron-atom scattering theory has witnessed significant theoretical developments. One of these advances is the use of the momentum-space convergent closecoupling approach to fully incorporate target atom continua. This theoretical framework is based on the momentum-space Lippmann-Schwinger equation, an integral form of the Schrodinger equation. Although the approach has been highly successful in its application to atomic scattering theory, computing numerical solutions is inherently difficult because the momentum-space LS equation is a singular integral equation. Standard numerical integration techniques are normally employed to solve the problem and as computing power has increased, calculations have improved. However, there remains the problem of the integral's singular nature, which demands complicated methods for selecting integration points, particularly near the energy-dependant singularity. The rotated-contour method uses a conlplex-variable approach that solves the momentum-space LS equation by integrating along a deformed contour in the complex momentum plane away from the singularities. This method has the potential for simplifying the numerical integrations associated with the close-coupling equations. A rotated-contour method is first applied to a simple scattering model - electron scattering from the Yukawa potential. This gives some insight into the difficulties that arise when calculating potential matrix elements for complex momenta. The method is then applied to the s-wave model of the electron-hydrogen scattering problem and finally, the full problem. Existing FORTRAN software written to solve the momentum-space LS equations for electron-hydrogen scattering using standard techniques has been converted to C++. Extensive modification of the code has resulted in a flexible Windows-based program with a graphical user interface that runs on any modern computer using PC architecture. The program can calculate results using either a conventional method (no rotation) or a rotatedcontour method. Using a rotated-contour method to solve the momentum-space LS equations necessitates detailed knowledge of the analytic nature and singularity structure of the coupled channels potentials. This is achieved through the extensive use of the computer symbolic algebra system Maple to compute closed-form solutions for the direct potentials and for a range of partial-wave direct and exchange potentials. It is found that logarithmic branch point singularities are present on the real momentum axis for an extensive class of partial-wave direct-potential matrix elements. The analysis reveals that arotated contour method cannot be applied to the full atomic scattering problem due to these analytic problems which are associated with the long-range nature of the Coulomb potential. Downloads per month over past year
{"url":"http://researchrepository.murdoch.edu.au/357/","timestamp":"2014-04-18T18:16:51Z","content_type":null,"content_length":"30280","record_id":"<urn:uuid:7c25a36c-98b7-4a42-965c-b6189a16ba94>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00217-ip-10-147-4-33.ec2.internal.warc.gz"}
Micro-Tuning Step-by-Step Pages: 1, 2, 3, 4 After the Baseline We now have three routes we can follow in proceeding with the micro-tuning: • Consider the inefficiencies highlighted by the baseline measurements. • Analyze the algorithm to determine if there are generic inefficiencies that we can improve. • Profile the method to determine where the bottlenecks are, if there are any. These three routes are generic; micro-tuning generally has these options, though any one option may become unavailable when the code becomes more efficient. Usually, some of the options are easier to proceed with than others, but for this article it is instructive to look at all three. Dataset 3 The baseline measurement shows that using Dataset 3 required a much longer time to execute than using the other two datasets. The major difference in this dataset is the existence of strings that do not hold valid numbers. We can easily follow the logic for this case: the method tries to create an Integer object by parsing the string using the Integer constructor. In cases where the string is not a number, the constructor throws a NumberFormatException. This is caught in our catch block, and false is returned. So it seems that this procedure is extremely expensive. In fact, it is fairly well known that creating an exception is an expensive procedure. The actual cost comes from filling out the stack trace that the exception holds, and even though we never care about the stack trace in this method, we cannot avoid its cost. We can, however, avoid the exception completely by checking that we have a valid number before creating the Integer. This is simply done by checking all of the characters to make sure they are all digits: for (int i = 0; i < testInteger.length(); i++) if (!Character.isDigit(testInteger.charAt(i))) return false; Note that negative numbers will return false using this loop, but that is acceptable, since the method in any case always returns false when passed a negative number. Re-running our test with this code added to the beginning of the method produces the results listed in Table 2. │ Table 2: After adding the looped digit test │ │ │ Dataset 1 │ Dataset 2 │ Dataset 3 │ │ Baseline │ 100% │ 84.1% │ 540.0% │ │ With looped digit test │ 114.8% │ 66.1% │ 51.4% │ So we've improved the test using Dataset 3 to be faster than the tests using both the other datasets, in line with what we expected. The test using Dataset 2 is also improved, which I didn't expect. The Algorithm Let's look at the algorithm for any inefficiencies. The algorithm has one obvious redundancy: we check that the integer is greater than 10 and, if it is, we subsequently check that it is greater than or equal to 2. (theInteger.intValue() > 10) && ((theInteger.intValue() >= 2) Clearly, we can dispense with the second test completely. If the first (>10) test fails, then the following (>=2) will not be executed, since we are using the shortcut Boolean AND operator (&&), which only executes its right-hand side if the left-hand side is true. If the "greater than 10" test succeeds, then we know definitely that the "greater than or equal to 2" test will succeed. Are there further inefficiencies? Let's go line-by-line. For convenience, here's the full function again. 1 public boolean checkInteger(String testInteger) 2 { 3 try 4 { 5 Integer theInteger = new Integer(testInteger);//fails if not a number 6 return 7 (theInteger.toString() != "") && //not empty 8 (theInteger.intValue() > 10) && //greater than ten 9 ((theInteger.intValue() >= 2) && 10 (theInteger.intValue() <= 100000)) && //2>=X<=100000 11 (theInteger.toString().charAt(0) == '3'); //first digit is 3 12 } 13 catch (NumberFormatException err) 14 { 15 return false; 16 } First we create an Integer (5). Then we test for an empty string (7). But on checking the Integer class, an empty string is an invalid number for the Integer constructor, so if we do have an empty string, this test will never be executed. If an empty string is passed to the method, the Integer constructor will throw an exception and the catch block will be immediately activated. So we should either eliminate the empty string test, or move it before the Integer creation. It's highly likely that a simple empty string test is faster than the Integer creation procedure, so we should move the test to before the Integer creation. (Note that the test was, in any case, incorrect in using the identity operator != rather than the equals() method, and I'll correct that here.) Moving on, the "greater than 10" test (8) is necessary, and we've already eliminated the "greater than or equal to 2" test (9). The "less than or equal to 100000" (10) is necessary, and the test for the first digit being 3 (11) is necessary. If the first digit is 3, however, the smallest possible valid number is 30 (since the number must be greater than 10); thus the "greater than 10" test should become a "greater than 29" test. Similarly, the largest number possible up to 100000, that starts with a 3, is 39999, so the "less than or equal to 100000" test should become "less than 40000." Finally, the test for the first digit being 3 is surely simpler to execute than creating the Integer. To create the Integer, the minimum that would need to be executed would be to check that every character is a digit. So it makes sense to move the "first digit being 3" test to before the creation of the Integer. public boolean checkInteger(String testInteger) if (testInteger.equals("")) return false; if (testInteger.charAt(0) != '3') return false; Integer theInteger = new Integer(testInteger);//fails if not a number (theInteger.intValue() > 29) && (theInteger.intValue() <= 40000); catch (NumberFormatException err) return false; Re-running our test using this code produces the results listed in Table 3. │ Table 3: After restructuring the algorithm │ │ │ Dataset 1 │ Dataset 2 │ Dataset 3 │ │ Baseline │ 100% │ 84.1% │ 540.0% │ │ Restructured algorithm │ 46.6% │ 39.9% │ 29.8% │ Pages: 1, 2, 3, 4 Next Page
{"url":"http://www.onjava.com/pub/a/onjava/2002/03/20/optimization.html?page=2","timestamp":"2014-04-19T14:31:12Z","content_type":null,"content_length":"29728","record_id":"<urn:uuid:e2d9de9b-d003-4bca-9ae5-7a1486ed645e>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00408-ip-10-147-4-33.ec2.internal.warc.gz"}
College Algebra and Trigonometry A Unit Circle Approach 5th Edition | 9780321644770 | eCampus.com List Price: [S:$225.32:S] In Stock Usually Ships in 24 Hours. Currently Available, Usually Ships in 24-48 Hours Downloadable Offline Access Questions About This Book? Why should I rent this book? Renting is easy, fast, and cheap! Renting from eCampus.com can save you hundreds of dollars compared to the cost of new or used books each semester. At the end of the semester, simply ship the book back to us with a free UPS shipping label! No need to worry about selling it back. How do rental returns work? Returning books is as easy as possible. As your rental due date approaches, we will email you several courtesy reminders. When you are ready to return, you can print a free UPS shipping label from our website at any time. Then, just return the book to your UPS driver or any staffed UPS location. You can even use the same box we shipped it in! What version or edition is this? This is the 5th edition with a publication date of 1/5/2010. What is included with this book? • The New copy of this book will include any supplemental materials advertised. Please check the title of the book to determine if it should include any CDs, lab manuals, study guides, etc. • The Used copy of this book is not guaranteed to inclue any supplemental materials. Typically, only the book itself is included. • The Rental copy of this book is not guaranteed to include any supplemental materials. You may receive a brand new copy, but typically, only the book itself. Dugopolskirs"sCollege Algebra and Trigonometry: A Unit Circle Approach, Fifth Editiongives students the essential strategies to help them develop the comprehension and confidence they need to be successful in this course. Students will find enoughcarefully placed learning aids and review tools to help them do the math without getting distracted from their objectives. Regardless of their goals beyond the course, all students will benefit from Dugopolskirs"s emphasis on problem solving and critical thinking, which is enhanced by the addition of nearly 1,000 exercises in this edition. Prerequisites; Equations, Inequalities, and Modeling; Functions and Graphs; Polynomial and Rational Functions; Exponential and Logarithmic Functions; The Trigonometric Functions; Trigonometric Identities and Conditional Equations; Applications of Trigonometry; Systems of Equations and Inequalities; Matrices and Determinants; The Conic Sections; Sequences, Series, and Probability For all readers interested in college algebra and trigonometry. Author Biography Mark Dugopolski was born in Menominee, Michigan. After receiving a BS from Michigan State University, he taught high school in Illinois for four years. He received an M.S. in mathematics from Northern Illinois University at DeKalb. He then received a PhD in the area of topology and an MS in statistics from the University of Illinois at Champaign–Urbana. Mark taught mathematics at Southeastern Louisiana University in Hammond for twenty-five years and now holds the rank of Professor Emeritus of Mathematics. He has been writing textbooks since 1988. He is married and has two daughters. In his spare time he enjoys tennis, jogging, bicycling, fishing, kayaking, gardening, bridge, and motorcycling. Table of Contents P. Prerequisites P.1 Real Numbers and Their Properties P.2 Integral Exponents and Scientific Notations P.3 Rational Exponents and Radicals P.4 Polynomials P.5 Factoring Polynomials P.6 Rational Expressions P.7 Complex Numbers Chapter P Highlights Chapter P Review Exercises Chapter P Test 1. Equations, Inequalities, and Modeling 1.1 Equations in One Variable 1.2 Constructing Models to Solve Problems 1.3 Equations and Graphs in Two Variables 1.4 Linear Equations in Two Variables 1.5 Scatter Diagrams and Curve Fitting 1.6 Quadratic Equations 1.7 Linear and Absolute Value Inequalities Chapter 1 Highlights Chapter 1 Review Exercises Chapter 1 Test Tying it all Together 2. Functions and Graphs 2.1 Functions 2.2 Graphs of Relations and Functions 2.3 Families of Functions, Transformations, and Symmetry 2.4 Operations with Functions 2.5 Inverse Functions 2.6 Constructing Functions with Variation Chapter 2 Highlights Chapter 2 Review Exercises Chapter 2 Test Tying it all Together 3. Polynomial and Rational Functions 3.1 Quadratic Functions and Inequalities 3.2 Zeros of Polynomial Functions 3.3 The Theory of Equations 3.4 Miscellaneous Equations 3.5 Graphs of Polynomial Functions 3.6 Rational Functions and Inequalities Chapter 3 Highlights Chapter 3 Review Exercises Chapter 3 Test Tying it all Together 4. Exponential and Logarithmic Functions 4.1 Exponential Functions and Their Applications 4.2 Logarithmic Functions and Their Applications 4.3 Rules of Logarithms 4.4 More Equations and Applications Chapter 4 Highlights Chapter 4 Review Exercises Chapter 4 Test Tying it all Together 5. The Trigonometric Functions 5.1 Angles and Their Measurements 5.2 The Sine and Cosine Functions 5.3 The Graphs of the Sine and Cosine Functions 5.4 The Other Trigonometric Functions and Their Graphs 5.5 The Inverse Trigonometric Functions 5.6 Right Triangle Trigonometry Chapter 5 Highlights Chapter 5 Review Exercises Chapter 5 Test Tying it all Together 6. Trigonometric Identities and Conditional Equations 6.1 Basic Identities 6.2 Verifying Identities 6.3 Sum and Difference Identities 6.4 Double-Angle and Half-Angle Identities 6.5 Product and Sum Identities 6.6 Conditional Trigonometric Equations Chapter 6 Highlights Chapter 6 Review Exercises Chapter 6 Test Tying it all Together 7. Applications of Trigonometry 7.1 The Law of Sines 7.2 The Law of Cosines 7.3 Vectors 7.4 Trigonometric Form of Complex Numbers 7.5 Powers and Roots of Complex Numbers 7.6 Polar Coordinates (Equations???) 7.7 Parametric Equations Chapter 7 Highlights Chapter 7 Review Exercises Chapter 7 Test Tying it all Together 8. Systems of Equations and Inequalities 8.1 Systems of Linear Equations in Two Variables 8.2 Systems of Linear Equations in Three Variables 8.3 Nonlinear Systems of Equations 8.4 Partial Fractions 8.5 Inequalities and Systems of Inequalities in Two Variables 8.6 The Linear Programming Model Chapter 8 Highlights Chapter 8 Review Exercises Chapter 8 Test Tying it all Together 9. Matrices and Determinants 9.1 Solving Linear Systems Using Matrices 9.2 Operations with Matrices 9.3 Multiplication of Matrices 9.4 Inverses of Matrices 9.5 Solutions of Linear Systems in Two Variables Using Determinants 9.6 Solutions of Linear Systems in Three Variables Using Determinants Chapter 9 Highlights Chapter 9 Review Exercises Chapter 9 Test Tying it all Together 10. The Conic Sections 10.1 The Parabola 10.2 The Ellipse and the Circle 10.3 The Hyperbola Chapter 10 Highlights Chapter 10 Review Exercises Chapter 10 Test Tying it all Together 11. Sequences, Series, and Probability 11.1 Sequences 11.2 Series 11.3 Geometric Sequences and Series 11.4 Counting and Permutations 11.5 Combinations, Labeling, and the Binomial Theorem 11.6 Probability 11.7 Mathematical Induction Chapter 11 Highlights Chapter 11 Review Exercises Chapter 11 Test Appendix: Solutions to “Try This” Exercises Index of Applications
{"url":"http://www.ecampus.com/college-algebra-trigonometry-unit-circle/bk/9780321644770","timestamp":"2014-04-18T16:37:45Z","content_type":null,"content_length":"67394","record_id":"<urn:uuid:59104b60-d55d-4a6c-81d9-cc6aeba06e85>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00457-ip-10-147-4-33.ec2.internal.warc.gz"}
RE: Most caches placed? [Archive] - Geocaching Maine 03-20-2006, 02:39 PM I was scrolling through GC.com's site this past weekend finishing up some caches that were on my "to do" list when I happened to notice that Smitty & Co. have a ton of caches out there (and most of them I have yet to do) and this made me wonder who out there has placed the most number of caches here in Maine . . . I was particularly impressed with the volume of Smitty's placed caches to his found caches and from personal experience I've enjoyed the few caches of his that I have done.
{"url":"http://www.geocachingmaine.org/forum/archive/index.php/t-1058.html?s=08e6c82f6c16f68a7ab196cfdc1b8e3c","timestamp":"2014-04-20T21:27:20Z","content_type":null,"content_length":"9204","record_id":"<urn:uuid:768836f1-f14c-46d4-8366-adfb5b3ebe07>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00399-ip-10-147-4-33.ec2.internal.warc.gz"}
Norwalk, CA Algebra 1 Tutor Find a Norwalk, CA Algebra 1 Tutor I have been a tutor since 2007 and have worked with countless students on a variety of subjects. I hold a CA teaching credential in Social Science and Mathematics and was a substitute teacher for the Fullerton High School District for 5 years, which included a long term assignment teaching AP US History, AP Human Geography and an AVID class. I have also taught summer school Health and 23 Subjects: including algebra 1, reading, English, literature I hold a multiple-subject teaching credential from California State University, Fullerton. I have taught 2nd and 3rd grade full time as well as middle school Pre-Algebra. I have been working with students in after-school tutoring, music, and enrichment programs since I was in high school, and I've loved every teaching experience! 8 Subjects: including algebra 1, reading, grammar, writing ...Since joining WyzAnt, I have tutored various students with high school- and college-level general chemistry. With all the shapes involved in geometry, I feel that I can tutor the subject easily due to it being a more visual subject and have been to various high schoolers. Physics has always been my favorite subject. 9 Subjects: including algebra 1, chemistry, physics, geometry ...I have recently earned my associate's degree in Mathematics at Golden West College. I am currently employed through the college and have been tutoring for about 1.5 years. I tutor anything from precalculus down. 5 Subjects: including algebra 1, algebra 2, precalculus, trigonometry ...I have a passion for learning and helping others by tutoring them on subjects they need help with. It brings my great pleasure to know that I have helped other students . I have recently finished my 3rd year of college in which I was able to tutor my classmates and friends. My expertise and knowledge is in math, physics, and of course chemistry. 13 Subjects: including algebra 1, chemistry, physics, geometry Related Norwalk, CA Tutors Norwalk, CA Accounting Tutors Norwalk, CA ACT Tutors Norwalk, CA Algebra Tutors Norwalk, CA Algebra 2 Tutors Norwalk, CA Calculus Tutors Norwalk, CA Geometry Tutors Norwalk, CA Math Tutors Norwalk, CA Prealgebra Tutors Norwalk, CA Precalculus Tutors Norwalk, CA SAT Tutors Norwalk, CA SAT Math Tutors Norwalk, CA Science Tutors Norwalk, CA Statistics Tutors Norwalk, CA Trigonometry Tutors
{"url":"http://www.purplemath.com/Norwalk_CA_algebra_1_tutors.php","timestamp":"2014-04-19T12:22:00Z","content_type":null,"content_length":"24252","record_id":"<urn:uuid:ee65f3d6-c7a4-4368-bcae-53ae50977f3f>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00176-ip-10-147-4-33.ec2.internal.warc.gz"}
Advice on Mathematical Preparation 1. high school mathematics only 2. basic calculus and linear algebra 3. applied mathematics, differential equations, linear programming, and basic probability theory 4. advanced calculus, advanced algebra and stochastic processes 5. real and complex analysis, advanced probability theory, and topology, economics Ph.D. students at the time reported that the level of mathematics used in their various graduate courses was slightly above level 3 ( p.22). There has been an upward trend in the level of mathematics used in graduate courses since that time. For example, the econphd.net website suggests that:
{"url":"http://www.aeaweb.org/gradstudents/Mathematical_Preparation.php","timestamp":"2014-04-20T23:40:25Z","content_type":null,"content_length":"22330","record_id":"<urn:uuid:cf4ea80a-a2c9-4930-ab9f-4220f8afa385>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00457-ip-10-147-4-33.ec2.internal.warc.gz"}
anyone want some DE pratice? April 20th 2011, 04:11 AM #1 Mar 2011 anyone want some DE pratice? University won't give me mark schemes for past papers. Gits. x'' + 2x' + x = t with x(0) = x'(0) = 1. My answer is x = t^2(1-t) + t - 1 + e^-t(t+1) using variation of parameters. x'' + 2mx' + (w^2 + m^2)x = 0 with x(0) = 0, x'(0) = A. My answer x = A/w*e^mt*(sinwt) Much appreciated (although I am providing good practice for other people) Last edited by Ackbeet; April 20th 2011 at 04:24 AM. Reason: Eliminating weird fonts. Is this assignment, then, for marks? It is forum policy not knowingly to help with such problems. x'' + 2x' + x = t with x(0) = x'(0) = 1. My answer is x = t^2(1-t) + t - 1 + e^-t(t+1) using variation of parameters. x'' + 2mx' + (w^2 + m^2)x = 0 with x(0) = 0, x'(0) = A. My answer x = A/w*e^mt*(sinwt) Much appreciated (although I am providing good practice for other people) You can always double-check solutions to DE's yourself by plugging back the solution into the original DE and see if you get equality. It is also straight-forward to check the initial conditions. I'm not sure what there is for helpers to do here. no I am practicing for my exam in may. Ok yes I could check them, thanks for reminding me. Weird font , who are you to judge lol. One person's liked font is another person's hated. Last edited by poirot; April 20th 2011 at 05:16 AM. Contrary to many peoples' opinions (at least in the USA; less so in, say, Germany), there are such things as authorities. On the MHF, my rank of MHF Helper is a rank of authority. In addition to that, I am a moderator for the Differential Equations forum in which you posted. So, while I can't hand out infractions (only mods and admins can do that), I can do a fair bit. And I will do a fair bit to keep the forums I moderate useful to users. The "fair bit" that I will do definitely involves judging. So yes, I will judge lots of things, and I should. Such judging is for the benefit of legitimate users of MHF such as yourself. I try to judge according to the rules (posted as a sticky in almost every forum) and according to common sense. One person's liked font is another person's hated. The reason I changed the font is that if there are a lot of weird fonts in a post, it becomes much harder to do the "Reply With Quote", because there will be a whole raft of markup tags in the text. I'm not the only person who helps people out on MHF who prefers the straight-forward, simple font. Naturally, it's perfectly fine to use italics, bold, whatever. However, putting lots of non-default fonts in a particular post makes quoting much more difficult. As I said, the whole reason I do things on MHF is to make the site more useful to users, including you. Therefore, if you believe the preceding sentence, then the course of action that will turn out the best for you, that will make MHF more useful to you in your study of math, would be to follow the rules and cooperate with staff. And you shouldn't take that as an accusation, because generally I think you do follow the rules. Is this assignment, then, for marks? It is forum policy not knowingly to help with such problems. You can always double-check solutions to DE's yourself by plugging back the solution into the original DE and see if you get equality. It is also straight-forward to check the initial conditions. I'm not sure what there is for helpers to do here. Ok I checked them but i couldn't simplify to 0. Where have I gone wrong. I wasn't being serious (I added the lol to emphasise the tone of my sentence) x''= 2(1-t) -2t -2e^-t(t+1)-(2t+1)^2e^-t(t+1) plugging in: 4t(1-t) -2t^2 +2(1-t) -2t +t^2(1-t)+t-1+e^-t(t+1)*((2t+1)^2-2(2t+1) -1) Yeah, I think you might have made one or two small errors in taking your derivatives. I get the following: And right away, you can see there's a problem, because the cubed term in x(t) can't be canceled by anything in the other derivatives, and hence will remain. But there's no cubed term on the RHS of the original DE. So, I would say there's an error in your implementation of the variation of parameters approach. Can you please show that work? sorry that differentiation was all wrong because I was reading e^t(t-1) on the computer as e^(t(t-1)) complementary function; x=Ate^-t+Be^-t Particular solution; w= -e^-2t, w1= t^2e^-t -e^-t +te^-t , w2= -te^-t u1' = -t^2+e^t-te^t, so u1=(t-t^2)e^t u2'=te^t so u2 =(t-1)e^t solution follows Hmm. Well, the linearly independent solutions of the homogeneous equation are as you wrote. I buy your Wronskian of but I think your other computations are off. I get Try carrying those corrections through. Perhaps it is good to mention here that W_1 and W_2 are not Wronskians, although W is. They are determinants used in the context of Kramer's Rule to solve a system of equations. I had the function in w1, w2 on the other side so how do you decide which is x1 and which is x2 I got x=(2-t)e^-t+3t-t^2-2 but it all hinged on choosing x1(t) and x2(t) the other way. Last edited by poirot; April 20th 2011 at 09:08 AM. It shouldn't matter, as long as you're consistent. That is, if you look here, you'll see that as long as you pop the (0,f(x)) vector into the correct column, you're good to go. I'm having some difficulties myself. The particular "solution" I end up with doesn't actually solve the DE! I simply have to end up with a first-order polynomial as the particular solution, because anything higher order is going to have powers of t that are too high, and won't be canceled by anything in the derivative terms on the LHS of the DE. Why are we using variation of parameters for this? Clearly, the solution to the homogeneous equation is $x = c_1 e^{-t}+c_2te^{-t}$ and if we assume the particular solution is of the form $x_p = A+Bt$ then we observe that $x^{\prime\prime}+2x^{\prime}+x = t \implies 2B+A+Bt = t$ Thus 2B+A = 0 and B = 1, implying that A=-2. Thus, the general solution is $x = c_1e^{-t}+c_2te^{-t}-2+t$. Applying the initial conditions x(0)= x'(0) = 1, we observe that \begin{aligned} 1 &= c_1-2 \\ 1 &= -c_1-c_2+1\end{aligned} Thus, c_1 = 3 and c_2 = -3. Therefore, our particular solution should be $x = 3e^{-t}-3te^{-t}-2+t$ If you really want to use variation of parameters, you need to compute the Wronskians: $W_1 = \begin{vmatrix} 0 & te^{-t}\\ t & (1-t)e^{-t}\end{vmatrix} = -t^2e^{-t}$ $W_2 = \begin{vmatrix} e^{-t} & 0 \\ -e^{-t} & t\end{vmatrix} = te^{-t}$ $W = \begin{vmatrix} e^{-t} & te^{-t}\\ -e^{-t} & (1-t)e^{-t}\end{vmatrix} = (1-t)e^{-2t}+te^{-2t} = e^{-2t}$ So now, it follows that $u_1^{\prime} = \frac{W_1}{W} = \frac{-t^2e^{-t}}{e^{-2t}} = -t^2e^t$ $u_2^{\prime} = \frac{W_2}{W} = \frac{te^{-t}}{e^{-2t}} = te^t$ I leave it for you to verify that $u_1 = -(t^2-2t+2)e^t$ $u_2 = (t-1)e^t$ So it follows that the general solution is $x = c_1e^{-t}+c_2te^{-t}-(t^2-2t+2)+t(t-1)=c_1e^{-t}+c_2te^{-t}-2+t$ Applying the initial conditions should give you the equation $x = 3e^{-t}-3te^{-t}-2+t$ which is the same as above. I hope this clarifies things. Ah, thanks, Chris. I see now I had some sign errors in there. Your gray cells function well, unlike mine yesterday - spotted an error in my determinant treating a '0' as a '1'. Quite disconcerting April 20th 2011, 04:27 AM #2 April 20th 2011, 05:04 AM #3 Mar 2011 April 20th 2011, 05:36 AM #4 April 20th 2011, 05:37 AM #5 Mar 2011 April 20th 2011, 05:39 AM #6 April 20th 2011, 05:59 AM #7 Mar 2011 April 20th 2011, 06:35 AM #8 April 20th 2011, 07:59 AM #9 Mar 2011 April 20th 2011, 08:30 AM #10 April 20th 2011, 08:35 AM #11 Mar 2011 April 20th 2011, 08:44 AM #12 April 21st 2011, 10:35 AM #13 April 21st 2011, 10:58 AM #14 April 21st 2011, 11:56 AM #15 Mar 2011
{"url":"http://mathhelpforum.com/differential-equations/178155-anyone-want-some-de-pratice.html","timestamp":"2014-04-21T00:58:23Z","content_type":null,"content_length":"85086","record_id":"<urn:uuid:145bb8d3-e593-4072-b11a-3432a97128bd>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00017-ip-10-147-4-33.ec2.internal.warc.gz"}
History of Scottish Mathematics Gibson: History of Scottish Mathematics In 1927 the Edinburgh Mathematical Society began to publish Series 2 of the Proceedings of the Edinburgh Mathematical Society. The first paper in the new Series was by George Gibson and it was the first part of his two-part paper Sketch of the History of Mathematics in Scotland to the end of the 18th Century. The first part occupied the first 17 pages of Volume I, Part I issued in May 1927. The second part of his paper occupied pages 71 to 93 of Volume I, and was issued as the first paper in Part II. We present a version of the paper below. We have broken the paper into smaller parts for convenience of presenting it in our archive but this is a somewhat arbitrary division and certainly was not intended by Gibson. JOC/EFR April 2007 The URL of this page is:
{"url":"http://www-groups.dcs.st-and.ac.uk/~history/Extras/Gibson_history.html","timestamp":"2014-04-18T21:07:08Z","content_type":null,"content_length":"2960","record_id":"<urn:uuid:646f8b24-f930-4208-8226-8af8a35eeae8>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00436-ip-10-147-4-33.ec2.internal.warc.gz"}
Warnings building Perl itself on Windows 64 Strawberry's gcc-generated libs won't talk to Sybase's client libs, so I am back to compiling using VC2012, full edition. -perl 5.18.1 -I'm on a Win7 machine, 64 bit. -I'm using a 64 bit compiler: Microsoft (R) C/C++ Optimizing Compiler Version 17.00.50727.1 for x64 Copyright (C) Microsoft Corporation. All rights reserved. usage: cl [ option... ] filename... [ /link linkoption... ] I have read the README.win32 and set the appropriate values in the win32\Makefile file. But when I compile, I get lots of size mismatch warnings. Here is the unique set: conversion from 'IV' to 'DWORD', possible loss of data conversion from 'IV' to 'I32', possible loss of data conversion from 'IV' to 'U32', possible loss of data conversion from 'IV' to 'const I32', possible loss of data conversion from 'IV' to 'const U32', possible loss of data conversion from 'IV' to 'const int', possible loss of data conversion from 'IV' to 'int', possible loss of data conversion from 'IV' to 'volatile U32', possible loss of data conversion from 'PADOFFSET' to 'I32', possible loss of data conversion from 'PADOFFSET' to 'U32', possible loss of data conversion from 'SOCKET' to 'int', possible loss of data conversion from 'STRLEN' to 'NV', possible loss of data conversion from 'UINT_PTR' to 'UINT', possible loss of data conversion from 'UV' to 'I32', possible loss of data conversion from 'UV' to 'NV', possible loss of data conversion from 'UV' to 'U32', possible loss of data conversion from 'UV' to 'const unsigned int', possible loss of data conversion from 'UV' to 'gid_t', possible loss of data conversion from 'UV' to 'int', possible loss of data conversion from 'UV' to 'unsigned int', possible loss of data conversion from '__int64' to 'I32', possible loss of data conversion from '__int64' to 'U16', possible loss of data conversion from '__int64' to 'U32', possible loss of data conversion from '__int64' to 'const I32', possible loss of data conversion from '__int64' to 'const U32', possible loss of data conversion from '__int64' to 'const int', possible loss of data conversion from '__int64' to 'const long', possible loss of data conversion from '__int64' to 'int', possible loss of data conversion from '__int64' to 'long', possible loss of data conversion from '__int64' to 'unsigned int', possible loss of data conversion from '__int64' to 'volatile I32', possible loss of data conversion from 'const IV' to 'I32', possible loss of data conversion from 'const IV' to 'int', possible loss of data conversion from 'const IV' to 'uid_t', possible loss of data conversion from 'const PADOFFSET' to 'I32', possible loss of data conversion from 'const UV' to 'I32', possible loss of data conversion from 'const UV' to 'U32', possible loss of data conversion from 'const UV' to 'gid_t', possible loss of data conversion from 'const UV' to 'uid_t', possible loss of data conversion from 'const __int64' to 'I32', possible loss of data conversion from 'fpos_t' to 'long', possible loss of data conversion from 'intptr_t' to 'int', possible loss of data conversion from 'size_t' to 'DWORD', possible loss of data conversion from 'size_t' to 'I32', possible loss of data conversion from 'size_t' to 'U32', possible loss of data conversion from 'size_t' to 'U8', possible loss of data conversion from 'size_t' to 'const I32', possible loss of data conversion from 'size_t' to 'const int', possible loss of data conversion from 'size_t' to 'int', possible loss of data conversion from 'size_t' to 'long', possible loss of data conversion from 'size_t' to 'unsigned int', possible loss of data conversion from 'size_t' to 'unsigned long', possible loss of data I'm thinking I'm missing a flag setting somewhere. Here is a sample compile command: cl -c -nologo -GF -W3 -I..\lib\CORE -I.\include -I. -I.. -DWIN32 - +D_CONSOLE -DNO_STRICT -DWIN64 -DCONSERVATIVE -D_CRT_SECURE_NO_DEPRECA +TE -D_CRT_NONSTDC_NO_DEPRECATE -DPERLDLL -DPERL_CORE -O1 -MD -Zi -D +NDEBUG -GL -fp:precise -DPERL_IS_MINIPERL -Fo.\mini\win32thread.obj w I'll keep testing, but if someone has done this and would like to point out if I am missing a flag somewhere, I'd appreciate it.
{"url":"http://www.perlmonks.org/index.pl?node_id=1054907","timestamp":"2014-04-18T10:15:57Z","content_type":null,"content_length":"28126","record_id":"<urn:uuid:3b2db78a-3005-4da4-a4bb-a094723f2bb7>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00274-ip-10-147-4-33.ec2.internal.warc.gz"}
Cruxis Home > 1100 mm Telescope > Truss Frame Back to the 1100 mm Cruxis Telescope Truss Frame between mirror box and upper cage The truss tubes connect the mirror box to the upper cage. The design of the telescope calls for a truss frame with 8 tubes. Four tubes are about 3700 mm (12 ft) long and connect to the mirror box at the level of the primary mirror. The other four tubes connect to the top of the side wall and are about 3200 mm long. Tubing material and size The stiffness of a truss frame is primarily determined by the elasticity modulus of the material and the cross-section (i.e. the weight) of the tubes. Steel and aluminum have identical elasticity to weight ratio, whereas carbon tubes will produce about 2.5 times stiffer truss poles for equal weight. Carbon tubes would significantly reduce the weight and provide excellent stiffness, but their cost is prohibitive in the diameter and length that will be required - probably more than €2000. So classical aluminum tubes will be used for a total cost of around €200. A practical limit for the tube weight is about 0.5 kg per running meter. This would give a total truss frame weight of about (4 * 3.7 m + 4 * 3.2 m) * 0.5 kg/m = 14 kg (30 lbs), or about the same weight as the upper cage. Using heavier truss tubes would mean that the tubes would mostly be carrying their own weight. Several aluminum tube diameter / thickness configurations with weight around 0.5 kg/m are possible: □ round tube diameter 38 mm / thickness 1.5 mm => 0.464 kg/m □ round tube diameter 30 mm / thickness 2 mm => 0.475 kg/m □ square tube side 30 mm / thickness 1.5 mm => 0.462 kg/m The three options will give similar performance in a truss frame. We will use the 30 mm x 1.5 mm square tubing because they're more compact and easier to mount on a flat surface. The total weight of the truss tubes will then be about 13 kg (28 lbs). Transport and assembly of the truss tubes Square tubes simplify the assembly on a flat surface. On the mirror box simple stainless steel bolts with ergonomic cross knobs are sufficient to fix the tubes accurately and firmly. On the upper cage the tubes are bolted in pairs on a bracket. The tubes remain connected in pairs for easier transport and faster assembly. The truss poles have been cut in two for easier transport. The two halves are joined by a sleeve and convenient knob. Simplified computation of the sag of the Upper Cage The sag of the upper cage of the telescope pointing horizontally can quite easily be computed. We assume the following design data: □ Design weight: 13 kg (upper cage) + 7 kg (half total tube weight) = 20 kg □ Average free length of the tubes: L = 3350 mm □ Base separation of the tubes: C = 1100 mm □ Cross section of the 30 x 1.5 mm square tube: A = 171 mm^2 □ Section modulus of the 30 x 1.5 mm square tube: I = 23,200 mm^4 □ Elasticity modulus of aluminum: E = 70,000 N/mm^2 The stiffness of the frame is the combination of the following: □ The tension-compression truss stiffness of the 2 vertical triangles: k[truss] = AE C^2 / L^3 = 385 N/mm □ The bending stiffness of the 8 tubes anchored at the mirror box and the upper cage: k[bend] = 8 * 12 EI / L^3 = 4 N/mm These numbers validate the statement made in the previous paragraph that the bending stiffness is quite small compared to the truss frame stiffness resulting from axial loads in the vertical The total stiffness of the frame is the sum of the two values, approx. 389 N/mm. With the design load of 20 kg or approx 195 N, the sag would amount to 0.5 mm (0.02 inch) when the telescope is pointing horizontally. That deflection is quite acceptable and should not harm collimation (the upper cage will not be rotating while sagging). Furthermore this deflection can largely be compensated by some flexibility of the mirror cell support beams, see the mirror cell page. Finite Element computation of the sag of the Upper Cage and truss tubes Using a finite element model of the upper cage and the truss tubes, a more accurate computation can be made that takes into account the actual lay-out and weight distribution of the truss tubes and upper cage. Move the mouse over the image to see the deflections (magnified 100 times). The computed sag of the upper cage amounts to 0.45 mm (0.018") which compares very well to the result of the simplified analysis above. The longest truss tubes sag about 2 mm (0.08") in their Simplified buckling analysis of the tubes It's useful to verify the buckling of the tubes (consult this site for some theoretical background). The buckling load for a single truss tube with free ends is: P[buckling] = 10 EI / L^2 In our case the tube is anchored (clamped) at both sides, so we should use a "corrected" length in this formula of 0.65 times the free length of the longest tubes or about 2200 mm. This yields a buckling load P[buckling] = 3350 N. The tension/compression force in the 4 vertical truss tubes carrying the weight W = 195 N is: P = W L / 2 C = 300 N So there is a comfortable safety margin of about 11 with respect to the buckling load.
{"url":"http://www.cruxis.com/scope/scope1100_trusstubes.htm","timestamp":"2014-04-21T15:01:18Z","content_type":null,"content_length":"11907","record_id":"<urn:uuid:975b9bf7-eed9-4354-ba82-0eb6da3fa6ce>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00017-ip-10-147-4-33.ec2.internal.warc.gz"}
Proving (1+x1.t)(1+x2.t)...(1+xn.t)>=(1+t)^n. Let x1,x2,...,xn be positive numbers satisfying (x1)(x2)(x3)...(xn)=1. Show that for any t>=0, (1+x1.t)(1+x2.t)...(1+xn.t)>=(1+t)^n. We can show that $\dfrac{\left(1+\frac t{x_i}\right)(1+tx_i)}{(1+t)^2}\geq 1$ hence we have either $(1+tx_1)\ldots(1+tx_n)\geq (1+t)^n$ or $\left(1+\dfrac t{x_1}\right)\ldots \left(1+\dfrac t{x_n}\ right) \geq (1+t)^n$. If it is the second case, put for $teq 0$$f(t) =\left(1+\dfrac t{x_1}\right)\ldots \left(1+\dfrac t{x_n}\right)-(1+t)^n$ and compute $f\left(\dfrac 1t\right).$ Nice. Could you explain one thing? If I understand correctly, you claim that $\dfrac{\prod_{i=1}^n(1+tx_i)}{(1+t)^n}\cdot\dfrac{ \prod_{i=1}^n(1+t/x_i)}{(1+t)^n}\ge1$, from where either the first or the second factor is $\ge 1$. Theoretically, the choice of which factor exceeds 1 can depend on $t$. However, it seems that later you require that either $\prod_{i=1}^n(1+tx_i)\geq (1+t)^n$for all $t$ or $\prod_{i=1}^n(1+t/x_i)\geq (1+t)^n$for all $t$. This allows you substituting $1/t$ for $t$ in case the second alternative is true. But is it possible, say, that the second alternative is true for $t = 5$, but the first alternative is true for $t = 1/5$? You're right, what I did cannot be correct because I didn't use the fact that $x_1\ldots x_n=1$. I think we can show the result by induction. It's not difficult for $n=2$ and it's true for $n$ and $x_1\ldots x_{n+1}=1$ were $x_i$ is positive. We assume $x_{n+1}\leq 1$ and $x_n\geq 1$ (by symmetry ). Let $y_i=x_i$ if $i\leq n-1$ and $y_n=x_nx_{n+1}$. We have $\prod_{i=1}^n(1+y_it)\geq (1+t)^ n$for any $t\geq 0$ hence $\prod_{i=1}^{n-1}(1+x_it)(1+x_nx_{n+1}t)\geq (1+t)^n$. Now we have to show that $(1+x_nx_{n+1}t)(1+t)\leq (1+x_{n+1}t)(1+x_nt)$. If we compute the difference we get $(1+x_ {n+1}t)(1+x_nt) -(1+x_nx_{n+1}t)(1+t)=t(x_{n+1}+x_n-x_nx_{n+1}-1)t(1-x_n)(x_{n+1}-1)$ which is positive. Let $f(x) = \ln(1 + e^x)$. It's easy to show that the second derivative is positive, so f is convex. By Jenson's Inequality, $f((a_1 + a_2 + \dots + a_n)/n) \leq f(a_1)/n + f(a_2)/n + \dots + f(a_n)/ n$ I.e., $\ln(1 + e^{(a_1 + a_2 + \dots + a_n)/n}) \leq \ln(1 + e^{a_1})/n + \ln(1 + e^{a_2})/n + \dots + \ln(1 + e^{a_n})/n$ for any $a_1, a_2, \dots , a_n$. Exponentiating, $1 + e^{(a_1 + a_2 + \ dots + a_n)/n} \leq (1+ e^{a_1})^{1/n} (1+ e^{a_2})^{1/n} \cdots (1+ e^{a_n})^{1/n}$. Now let $a_i = \ln(x_i t)$, so $e^{(a_1 + a_2 + \dots + a_n)/n} = (x_1 x_2 \cdots x_n)^{1/n} t = t$ and the inequality becomes $1+t \leq (1 + x_1 t)^{1/n} (1 + x_2 t)^{1/n} \cdots (1 + x_n t)^{1/n}$. Finally, raise both sides to the nth power.
{"url":"http://mathhelpforum.com/calculus/173727-proving-1-x1-t-1-x2-t-1-xn-t-1-t-n-print.html","timestamp":"2014-04-18T09:38:45Z","content_type":null,"content_length":"17265","record_id":"<urn:uuid:d6ac6140-1285-40fb-9c65-76bc22f861c5>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00475-ip-10-147-4-33.ec2.internal.warc.gz"}
Propagation of error 34,117pages on this wiki Assessment | Biopsychology | Comparative | Cognitive | Developmental | Language | Individual differences | Personality | Philosophy | Social | Methods | Statistics | Clinical | Educational | Industrial | Professional items | World psychology | Statistics: Scientific method · Research methods · Experimental design · Undergraduate statistics courses · Statistical tests · Game theory · Decision theory In statistics, propagation of uncertainty (or propagation of error) is the affect of variables' uncertainties (or errors) on the uncertainty of a function based on them. Mainly, the variables are measured in an experiment, and have uncertainties due to measurement limitations (e.g. instrument precision) which propagate to the result. The uncertainty is usually defined by the absolute error — a variable that is probable to get the values x±Δx is said to have an uncertainty (or margin of error) of Δx. In other words, for a measured value x, the true value is probable to be in [x−Δx, x+Δx]. Uncertainties can also be defined by the relative error, Δx/x, and then it is usually written as percentage. It is assumed that the probability of the true value to be in distinct distances from the measured value is normally distributed, with the uncertainty being the standard deviation. This article explains how to calculate the uncertainty of a function, if the variables' uncertainties are known. General formula Let $f(x_1,x_2,...,x_n)$ be a function which depends on $n$variables $x_1,x_2,...,x_n$. The uncertainty of each variable is given by $\Delta x_j$: $x_j \pm \Delta x_j\, .$ If the variables are uncorrelated, we can calculate the uncertainty Δf of f that results from the uncertainties of the variables: $\Delta f = \Delta f \left(x_1, x_2, ..., x_n, \Delta x_1, \Delta x_2, ..., \Delta x_n \right) = \left( \sum_{i=1}^n \left(\frac{\partial f}{\partial x_i}\Delta x_i \right)^2 \right)^{1/2} \, ,$ where $\frac{\partial f}{\partial x_j}$ designates the partial derivative of $f$ for the $j$-th variable. If the variables are correlated, the covariance between variable pairs, C[i,k] := cov(x[i],x[k]), enters the formula with a double sum over all pairs (i,k): $\Delta f = \left( \sum_{i=1}^n \sum_{k=1}^n \left(\frac{\partial f}{\partial x_i}\frac{\partial f}{\partial x_k}C_{i,k} \right) \right)^{1/2}\, ,$ where C[i,i] = var(x[i]) = Δx[i]². After calculating $\Delta f$, we can say that the value of the function with it's uncertainty is: $f \pm \Delta f \, .$ Example formulas This table shows the uncertainty of simple functions, resulting from uncorrelated variables A, B, C with uncertainties ΔA, ΔB, ΔC, and a precisely-known constant c. function function uncertainty X = A ± B (ΔX)² = (ΔA)² + (ΔB)² X = cA (ΔX) = c(ΔA) X = c(A×B) or X = c(A/B) (ΔX/X)² = (ΔA/A)² + (ΔB/B)² X = c(A×B×C) or X = c(A/B)×C (ΔX/X)² = (ΔA/A)² + (ΔB/B)² + (ΔC/C)² X = cA^n (ΔX/X) = |n| (ΔA/A) X = ln cA ΔX = (ΔA/A) X = exp A</td> (ΔX/X) = ΔA Example application: Resistance measurement A practical application is an experiment in which one measures current, I, and voltage, V, on a resistor in order to determine the resistance, R, using Ohm's law, $R = V / I.$ Given the measured variables with uncertainties, I±ΔI and V±ΔV, the uncertainty in the computed quantity, ΔR is $\Delta R = \left( \left(\frac{\Delta V}{I}\right)^2+\left(\frac{V}{I^2}\Delta I\right)^2\right)^{1/2} = R\sqrt{\left(\frac{\Delta V}{V}\right)^2+\left(\frac{\Delta I}{I}\right)^2}.$ Thus, in this simple case, the relative error ΔR/R is simply the square root of the sum of the squares of the two relative errors of the measured variables. External links See also
{"url":"http://psychology.wikia.com/wiki/Propagation_of_error?oldid=10872","timestamp":"2014-04-24T01:53:35Z","content_type":null,"content_length":"65029","record_id":"<urn:uuid:5817ca11-4337-4d05-a038-20479531ba03>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00597-ip-10-147-4-33.ec2.internal.warc.gz"}
Bijections and counting Quick description There is a bijection between two finite sets if and only if they have the same number of elements. Many [seemingly] difficult to count sets can easily be counted by mapping them bijectively to sets which are [seemingly] simpler to count. A bijection can also be referred to as a change of variable. This is a very general idea and the main problem solving technique it suggests is this: if a set appears difficult to count, try to find a representation of it that is easier to count. Basic mathematical notation. Example 1 (William Feller, Probability Theory, Vol I, Third edition). Let be a finite set and let be unordered strings of length from this alphabet.^♦ For a string , let be the number of times appears in . We can represent with a distribution of identical balls into cells by putting balls into cell . Each of these distributions, in their turn, can be uniquely represented by a string of length from the alphabet with 's and 's and which start and end with . The 's represent the identical balls and the 's the walls of cells. Let's denote the set of all strings of this form with . Here is an example: for and the string is represented by the string It is clear that and can be mapped bijectively. is very simple to count using multiplication and partitioning: Therefore, we have that Example 2 The reflection principle. (William Feller, Probability Theory, Third Edition, Chapter III). Let . Let be the coordinate maps. Define for and and . For , can be thought of as a trajectory of a process taking values in , which starts from , and at each discrete time step goes either up or down by . Let us fix and consider the set of trajectories that end up at position at their step which also visit the origin in their excursion. In other words Knowing the size of this set allows one to compute, among other things, the distribution of the first hitting time to a state as well as the last exit from a state of a symmetric random walk. The weak limits of these distributions under proper scaling gives the distributions of the same random variables for the Brownian motion. These in turn can be used to compute distributions of analogous quantities for more complicated processes derived from these simpler ones. Thus knowing the size of the set is useful and interesting. However, defined as is, the set is not simple to count because it involves a complicated constraint (at least to the human mind), i.e., that each trajectory being counted has to hit the origin. We will now map bijectively to a simpler set with no constraints and which is simple to count. We set , and define to be the set of trajectories that hit level at step . is a set of the same type as , except that there are no constraints on it. Now let us define the bijection that will map to . Let be the map that reflects along the time axis upto the first time it hits the origin. That is, for let be the first time and define as The following figure shows the action of on a sample path. It follows directly from its definition that is a bijection from to . One can count as follows. For any path let be the number of steps and be the number of steps in . By 's definition and is the same for all paths in and they satisfy The number of paths in is then if is even, and zero otherwise. Because is bijectively mapped to with , the binomial coefficient in the last display also gives . General discussion Using a change of variable to simplify a counting problem is a special case of the general idea of using a change of variable to translate one problem to another. Login or register to post comments Inline comments The following comments were made inline in the article. You can click on 'view commented text' to see precisely where they were made. Surely you are counting Tue, 21/04/2009 - 16:42 — gowers Surely you are counting unordered strings here? • Login or register to post comments Tue, 21/04/2009 - 17:01 — devin Thanks for the comment and sorry for the confusion. What I meant was "sorted". We can use `unordered', if "sorted" is confusing/ not standard. Thanks again. • Login or register to post comments
{"url":"http://www.tricki.org/article/Bijections_and_counting","timestamp":"2014-04-21T05:13:19Z","content_type":null,"content_length":"43024","record_id":"<urn:uuid:82b532fb-4768-41a5-98f7-a2edaf90cddd>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00080-ip-10-147-4-33.ec2.internal.warc.gz"}
Anand Louis Graduate Student College of Computing Georgia Tech email: anandl [at] gatech dot edu About Me | Research Interests | Papers | Links I am a fifth year graduate student in the ACO program at the College of Computing. I am advised by Santosh Vempala. Before coming here, I got my B. Tech from the Department of Computer Science and Engineering, IIT Delhi where I worked with Naveen Garg . > Research Interests: • Approximation Algorithms • Spectral Algorithms • Hardness of Approximation • Linear Programming Hierarchies Suffice for Directed Steiner Tree with Z. Friggstad, Y. Ko, J. Konemann, M. Shadravan, and M. Tulsiani IPCO, 2014 • Approximation Algorithm for Sparsest k-Partitioning with Konstantin Makarychev SODA, 2014 • The Complexity of Approximating Vertex Expansion with Prasad Raghavendra and Santosh Vempala FOCS, 2013 • Many Sparse Cuts via Higher Eigenvalues with Prasad Raghavendra, Prasad Tetali and Santosh Vempala STOC, 2012 • Algorithmic Extensions of Cheeger’s Inequality to Higher Eigenvalues and Partitions with Prasad Raghavendra, Prasad Tetali and Santosh Vempala RANDOM-APPROX, 2011 • A 3-approximation for facility location with uniform capacities with Ankit Aggarwal, Naveen Garg, Shubham Gupta The 14th Conference on Integer Programming and Combinatorial Optimization (IPCO), 2010 • Improved Algorithm for Degree Bounded Steiner Network Problem with Nisheeth Vishnoi 12th Scandinavian Symposium and Workshops on Algorithm Theory (SWAT), 2010 • Cut-Matching Games for Directed Graphs Manuscript, 2010 • Graph Partitioning Using Single Commodity Flows : An Emperical Study Anand Louis, Vinayaka Pandit Manuscript, 2008 Site last updated: September 15^th, 2013
{"url":"http://www.cc.gatech.edu/~alouis3/","timestamp":"2014-04-18T16:53:47Z","content_type":null,"content_length":"5810","record_id":"<urn:uuid:dbc9691f-8e9d-485a-bfb8-a3dd643145f9>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00580-ip-10-147-4-33.ec2.internal.warc.gz"}
HowStuffWorks "How are Fibonacci numbers expressed in nature?" ­Is there a magic equation to the universe? A series of numbers capable of unraveling the most complicated organic properties or deciphering the plot of "Lost"? Probably not. But thanks to one medieval man's obsession with rabbits, we have a sequence of numbers that reflect various patterns found in nature. ­­­­In 1202, Italian mathematician Leonardo Pisano (also known as Fibonacci, meaning "son of Bonacci") pondered the question: Given optimal conditions, how many pairs of rabbits can be produced from a single pair of rabbits in one year? This thought experiment dictates that the female rabbits always give birth to pairs, and each pair consists of one male and one female. ­Think about it -- two newborn rabbits are placed in a fenced-in yard and left to, well, breed like rabbits. Rabbits can't reproduce until they a­re at least one month old, so for the first month, only one pair remains. At the end of the second month, the female gives birth, leaving two pairs of rabbits. When month three rolls around, the original pair of rabbits produce yet another pair of newborns while their earlier offspring grow to adulthood. This leaves three pairs of rabbit, two of which will give birth to two more pairs the following month. The order goes as follows: 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144 and on to infinity. Each number is the sum of the previous two. This series of numbers is known as the Fibonacci numbers or the Fibonacci sequence. The ratio between the numbers (1.618034) is frequently called the golden ratio or golden number. At first glance, Fibonacci's experiment might seem to offer little beyond the world of speculative rabbit breeding. But the sequence frequently appears in the natural world -- a fact that has intrigued scientists for centuries. ­Want to see how these fascinating numbers are expressed in nature? No need to visit your local pet store; all you have to do is look around you.
{"url":"http://science.howstuffworks.com/math-concepts/fibonacci-nature.htm","timestamp":"2014-04-20T21:21:34Z","content_type":null,"content_length":"125777","record_id":"<urn:uuid:6e401c71-8498-4ab6-9ffe-fa8d1fdba2a2>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00537-ip-10-147-4-33.ec2.internal.warc.gz"}
The Changing Shape of Geometry: Celebrating a Century of Geometry and Geometry Teaching The book is an expanded collection of 57 articles published in Mathematical Gazette and Mathematics in School — two journals of The Mathematical Association, a British organization for teachers of mathematics — over about one hundred years. The Mathematical Association is the name taken by the Association for the Improvement of Geometrical Teaching in 1897. The latter was created in 1871. At the time, school and university geometry curricula were entirely based on Euclid's Elements and geometry was universally "viewed as the ideal vehicle for developing an understanding of formal proof" (p. 3). The first Teaching Committee was formed thirty years later (1902) to help design curriculum, assist mathematics teachers with advice and resources, and "to seek to influence national policies on mathematics education." This book celebrates the first centenary of Teaching Committee. The tone for the book is set in the General Introduction by C. Pritchard. "This book celebrates the best of geometry in all its simplicity, economy and elegance. ... If I were to add a fourth attribute, it would be 'surprise'." Pritchard surprises the reader then and there by offering two simple and elegant problems. Solution to the second one is postponed till the end of the introduction, so as to give the reader a chance to ponder it and, perhaps, to suggest that this is what geometry is about — problem solving. The articles have been grouped into six parts. The selection of articles is excellent, starting with the two presidential addresses What Is Geometry? that constitute Part I. For G. H. Hardy (1925), geometry is a collection of logical systems, while "the elementary geometry of schools and universities is not this or that geometry, but a most disorderly and heterogeneous collection of fragments from a dozen geometries or more." For M. Atiyah (1982), "...geometry is that part of mathematics in which visual thought is dominant. ...geometry is not so much a branch of mathematics as a way of thinking that permeates all branches." C. Pritchard designates Hardy's view that "...the elementary geometry of schools is a fundamentally and inevitably illogical subject, about whose details agreement can never be reached" as pessimistic. I'd call it insightful. Given a century of a deliberate effort to improve geometry teaching, I am inclined to think of Atiyah's implicit belief that the matter can be settled after detailed debate as unjustifiably naïve. "The exact balance [between the two modes of thinking] is naturally a subject for detailed debate and must depend on the level and ability of the students Part II, The History of Geometry, gathers articles on Babylonian, Greek, Chinese, Islamic, Indian mathematics and the role of more recent mathematicians, Girard Desargues and Henri Brocard. A short article by J. H. Webb took me by surprise. I was unaware of the popular English definition of a straight line as "the shortest distance between two points." Webb traces the usage to a mistranslation of Legendre's Éléments de Géométrie. Curiously, in Part III, on Pythagoras' Theorem, most articles are of a recent origin, written in the 1990s. Simple results, like Larry Hoehn's generalization of the Pythagorean theorem (and to which it is in fact equivalent) provide a compelling elementary argument against a widespread view on geometry as a petrified collection of axioms, definitions and theorems. In Part IV, The Golden Ratio, I liked best, in part for the same reason, J. F. Rigby's discovery of the famous number in the diagram depicting an equilateral triangle inscribed in a circle. Part V (which contains the largest number of articles) is on Recreational Geometry. It contains several wonderful dissection and tessellation articles. In particular, an introductory essay by Brian Bolt is followed by James Brunton's paper-folding construction of a pyramid, three copies of which combine into a cube. Unfortunately, it also contains several articles whose inclusion I may only explain as due to an unhealthy trend that seeks entertainment as a goal in itself. The mathematical contents of one of Tony Orton's articles end with the very first sentence to the effect that "There are many ways of dividing a square into two shapes of equal area." From here and the natural tessellation of the plane by congruent squares, one can obtain a multitude of "attractive" designs: cogs, lightning, cats, mountains... Doing this in class would be a horrendous waste of students' time. An article by Helen Morris provides a long list of games from around the world most of which could be played with nothing more than, say, pebbles and draft paper. These could tremendously enhance a geography lesson. But geometry? Part VI, The Teaching of Geometry, is remarkable as much for what it contains as for what it lacks. The focus is on geometry teaching in England starting with the last quarter of the 19^th century. As C. Pritchard mentioned in General Introduction, "The Mathematical Association (MA) came into existence in 1871 at a time when geometry teaching was in something of a tumult. So completely was the curriculum determined" by Euclid's Elements that "Sylvester sarcastically referred to the Elements as 'one of the advanced outposts of the British Constitution'." Originally called Association for the Improvement of Geometry Teaching, the MA played a leading role in attempts to make geometry teaching more suitable for mass education. This part makes for an absorbing reading. However, the picture of a century-long sequence of educational reforms that emerges makes one feel that the subtitle of the book, Celebrating a Century of Geometry and Geometry Teaching, may not be quite adequate. Indeed, by the end of the 20^th century geometry teaching called for anything but celebration. A 2001 Report of a Royal Society working group says in particular, "We believe that geometry has declined in status within the English mathematics curriculum and that this needs to be redressed. It should not be the 'subject which dare not speak its name'." How come? The book documents the milestones of the evolution but, unfortunately, provides no answers. Over more than a hundred years of deliberate activity, there is no hint of any attempt to document the failures, shortcomings or successes of a reform. Writing in 1956, A. W. Siddon could say, "...I do claim that my generation has done something for the improvement of the teaching of Mathematics." Other articles in the selection left me with little doubt that, compared to what followed, the dethroning of Euclid at the beginning of the 20^th century might have been the easy chapter in the history of geometry teaching. If not a feast for geometry teaching, the book is a true celebration of geometry proper. The selection from the two MA journals has been greatly enhanced with Foreword by D. Hofstadter and a collection of 30 "Desert Island Theorems." Hofstadter tells a very personal story of a gifted youth thrown aback by the formality and abstractness of geometric literature only to recover his love for geometry at a ripe age with the help of dynamic geometry software. The Desert Island Theorems are small gems chosen by a constellation of notable mathematicians. From An Isoperimetric Theorem (and I'd bet it's not the one you might be thinking about) by John Hersee to Two Right Tromino theorems by Solomon Golomb to Tait Conjectures by Ruth Lawrence, the collection is charming and the expositions are lucid. I found very few errors in the book. None was overly annoying; one that reminded me of Montaigne's On Custom was rather amusing. Jack Oliver formulates the Pythagorean theorem for a right triangle with sides b and c and hypotenuse a, and arrives at a correct identity (b + c)^2 - a^2 = 2bc, which he simplifies to the common a^2 + b^2 = c^2, instead of the correct b^2 + c^2 = a^2. To sum up, this book is a delightful collection of articles that belongs, I think, on the bookshelf of every math teacher, present and future. Most of it could be enjoyed by high school students and geometry fans. I'd keep the book in a travel bag. Should a misfortune land you on a desert island, the book might provide much needed consolation and hours of entertainment. Formerly an associate professor of mathematics, Alex Bogomolny makes a living by writing business modeling software. Most recently the software has been used to convince Israeli authorities that the diplomatic initiative known as the Roadmap to Peace may only bring more bloodshed. Unfortunately the attempt has failed.
{"url":"http://www.maa.org/publications/maa-reviews/the-changing-shape-of-geometry-celebrating-a-century-of-geometry-and-geometry-teaching?device=desktop","timestamp":"2014-04-23T16:36:02Z","content_type":null,"content_length":"103801","record_id":"<urn:uuid:cc0d705e-a111-478a-aed8-41be92c95bea>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00370-ip-10-147-4-33.ec2.internal.warc.gz"}
What is wrong with this picture? Posted by: Dave Richeson | January 28, 2009 What is wrong with this picture? This image illustrating the Pythagorean Theorem was created by the artist Mel Bochner. It appears on the cover of the January 2009 issue of the College Mathematics Journal. What is wrong with it? For the answer, visit the 360 blog. Very nice. Very common mistake, too – though I hadn’t before seen it in this context. By: Bert on January 29, 2009 at 7:39 pm • Very nice indeed—it was the folks at 360 who noticed it. The journal was sitting on my coffee table for a week and I never gave the cover a second look. By: Dave Richeson on January 30, 2009 at 12:14 am Posted in Math, Puzzle | Tags: College Mathematics Journal, Pythagorean Theorem
{"url":"http://divisbyzero.com/2009/01/28/what-is-wrong-with-this-picture/","timestamp":"2014-04-18T08:24:35Z","content_type":null,"content_length":"61106","record_id":"<urn:uuid:018b9d78-76a2-44ee-bc89-1194cc7959d8>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00187-ip-10-147-4-33.ec2.internal.warc.gz"}
Computation Sciences Lecture Series Second Meeting: Thursday, 19 February 2004 2:30 PM - 3:30 PM "An Introduction to Probabilistic Graphical Models and Their Lyapunov Functions and Algorithms for Inference and Learning" Brendan J. Frey Probabilistic and Statistical Inference Group Electrical and Computer Engineering University of Toronto ABSTRACT: Many problems in science and engineering require that we take into account uncertainties in the observed data and uncertainties in the model that is used to analyze the data. Probability theory (in particular, Bayes rule) provides a way to account for uncertainty, by combining the evidence provided by the data with prior knowledge about the problem. Recently, we have seen an increasing abundance of data and computational power, and this has motivated researchers to develop techniques for solving large-scale problems that require complex chains of reasoning applied to large datasets. For example, a typical problem that my group works on will have 100,000 to 1,000,000 or more unobserved random variables. In such large-scale systems, the structure of the probability model plays a crucial role and this structure can be easily represented using a graph. In this talk, I will review the definitions and properties of the main types of graphical model, and the Lyapunov functions and optimization algorithms that can be used to perform inference and learning in these models. Throughout the talk, I will use a simple example taken from the application area of computer vision, to demonstrate the concepts. return to schedule top of page 3:45 PM - 4:45 PM "Modeling and Inference of Dynamic Visual Processes" Ralf Koetter Assistant Professor Coordinated Science Laboratory and Department of Electrical Engineering University of Illinois, Urbana-Champaign ABSTRACT: The use of graphical models of sytems is a well established technique to characterize a represented behavior. While these models are often given by nature in some cases it is possible to choose the underlying graphical framework. If in addition the represented behavior satisfies certain linearity requirements, surprising structural properties of the underlying graphical models can be derived. We give an overview over a developing structure theory for linear systems in graphical models and point out numerous directions for further research. Examples of applications of this theory are given that cover areas as different as coding, state space models and network information theory. return to schedule top of page 5:00 PM - 6:00 PM "Computational Anatomy and Models for Image Analysis" Michael I. Jordan Department of Computer Science University of California Berkeley ABSTRACT: The formalism of probabilistic graphical models provides a unifying framework for the development of large-scale multivariate statistical models. Graphical models have become a focus of research in many applied statistical and computational fields, including bioinformatics, information theory, signal and image processing, information retrieval and machine learning. Many problems that arise in specific instances---including the key problems of computing marginals and modes of probability distributions---are best studied in the general setting. Exploiting the conjugate duality between the cumulant generating funciton and the entropy for exponential families, we develop general variational representations of the problems of computing marginals and modes. We describe how a wide variety of known computational algorithms---including mean field, sum-product and cluster variational techniques---can be understand in terms of these variational representations. We also present novel convex relaxations based on the variational framework. We present applications to problems in bioinformatics and information retrieval. [Joint work with Martin Wainwright] return to schedule back to CSLS home page top of page
{"url":"http://www.engr.wisc.edu/CSLS/abstracts_feb19.html","timestamp":"2014-04-19T20:35:00Z","content_type":null,"content_length":"14853","record_id":"<urn:uuid:2a67f517-8152-4b32-a121-08e51287f824>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00505-ip-10-147-4-33.ec2.internal.warc.gz"}
West Boxford Algebra 1 Tutor ...I would help you by assessing what modalities you use (Vision, touch, etc.) when trying to learn and use tools and techniques to make that learning easier. I have a BS in Speech Pathology and Audiology from Towson University. I have worked with HOH individuals in a number of settings; including... 45 Subjects: including algebra 1, chemistry, reading, physics ...Its at a higher demand. Not to say that other services are not rendered such as homework assistance, exam preparation and resume writing. Through WyzAnt, I have been tutoring independently 5 26 Subjects: including algebra 1, Spanish, ESL/ESOL, GED My tutoring experience has been vast in the last 10+ years. I have covered several core subjects with a concentration in math. I currently hold a master's degree in math and have used it to tutor a wide array of math courses. 36 Subjects: including algebra 1, chemistry, English, reading ...I'm back in the US looking for opportunities to continue ESL teaching. I speak some Arabic (and some Spanish), enough to help if tutoring complete beginners. I have taught very beginner level, starting with children learning the alphabet through pre-intermediate learners. 4 Subjects: including algebra 1, statistics, ESL/ESOL, prealgebra ...I worked at the Haverhill Boys and Girls Club in the Fall of 2011. I worked in the homework room and tutored several kids. I help my 8 year old and 11 year old cousins with their homework when they need it. 16 Subjects: including algebra 1, English, writing, geometry
{"url":"http://www.purplemath.com/west_boxford_ma_algebra_1_tutors.php","timestamp":"2014-04-20T21:22:16Z","content_type":null,"content_length":"23945","record_id":"<urn:uuid:3dba1848-15ad-4549-a4e1-060345708330>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00030-ip-10-147-4-33.ec2.internal.warc.gz"}
Posts about numbers on The Math Less Traveled Tag Archives: numbers This has been making the rounds of the math blogosphere (blathosphere?), but in case you haven’t seen it yet, check out Cristóbal Vila’s awesome short video, Nature by Numbers. Especially appropriate given that I have been writing about Fibonacci numbers … Continue reading [This is the ninth, and, I think, final in a series of posts on the decadic numbers (previous posts: A curiosity, An invitation to a funny number system, What does "close to" mean?, The decadic metric, Infinite decadic numbers, More … Continue reading To recap: we’ve now defined the decadic metric on integers by where is not divisible by 10, and also . According to this metric, two numbers are close when their difference is decadically small. So, for example, and are at … Continue reading Continuing my series of posts exploring the decadic numbers… in my previous post, I explained that we will define a new “size function”, or metric, different from the usual “absolute value”, and written . Two numbers will be “close to” … Continue reading Consider the equation Solving this equation is no sweat, right? Let’s do it. First, we subtract from both sides: Now we can factor an out of the left side: Now, if the product of two things is zero, one of … Continue reading Here’s a neat problem from Patrick Vennebush of Math Jokes 4 Mathy Folks: Append the digit 1 to the end of every triangular number. For instance, from 3 you’d get 31, and from 666 you’d get 6,661. Now take a … Continue reading Several months ago, Matthew Watkins sent me a review copy of his new book, Secrets of Creation Volume One: The Mystery of the Prime Numbers. It’s taken me a while to get around to reviewing it, but not for lack … Continue reading Math Jokes 4 Mathy Folks A few months ago, Patrick Vennebush was kind enough to send me a review copy of his new book, Math Jokes 4 Mathy Folks. It’s a treasure-trove of math-related jokes with a huge range of … Continue reading
{"url":"http://mathlesstraveled.com/tag/numbers/","timestamp":"2014-04-18T23:19:58Z","content_type":null,"content_length":"66723","record_id":"<urn:uuid:c065279c-f562-4cf9-97f6-16de95b296f2>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00291-ip-10-147-4-33.ec2.internal.warc.gz"}
Calculator pages. Power (Wattage) Calculator. Source code supplied by: www.electronics2000.co.uk © Simon Carter. These calculators perform calculations associated with power (wattage). The bottom one calculates power dissipated in a resistor given the current flowing through it. Just fill in the two boxes on the left and press 'Calculate' to find the unknown value.
{"url":"http://highfields-arc.co.uk/constructors/olcalcs/calcpow.htm","timestamp":"2014-04-18T08:03:39Z","content_type":null,"content_length":"8837","record_id":"<urn:uuid:e8ade0cd-6939-430d-b8ea-cde18371b554>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00422-ip-10-147-4-33.ec2.internal.warc.gz"}
Training parsers by inverse reinforcement learning G Neu and Csaba Szepesvari Journal of Machine Learning Volume 77, , 2009. ISSN http://www.sztaki.hu/~szcsaba/papers/MLJ-SISP-09.pdf One major idea in structured prediction is to assume that the predictor computes its output by nding the maximum of a score function. The training of such a predictor can then be cast as the problem of nding weights of the score function so that the output of the predictor on the inputs matches the corresponding structured labels on the training set. A similar problem is studied in inverse reinforcement learning (IRL) where one is given an environment and a set of trajectories and the problem is to nd a reward function such that an agent acting optimally with respect to the reward function would follow trajectories that match those in the training set. In this paper we show how IRL algorithms can be applied to structured prediction, in particular to parser training. We present a number of recent incremental IRL algorithms in a unied framework and map them to parser training algorithms. This allows us to recover some existing parser training algorithms, as well as to obtain a new one. The resulting algorithms are compared in terms of their sensitivity to the choice of various parameters and generalization ability on the Penn Treebank WSJ corpus.
{"url":"http://eprints.pascal-network.org/archive/00006345/","timestamp":"2014-04-17T15:41:46Z","content_type":null,"content_length":"7988","record_id":"<urn:uuid:e9d95e5f-0225-45e2-bff3-b82f31d1a371>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00284-ip-10-147-4-33.ec2.internal.warc.gz"}
February 27th, 2012 by Steven Pomeroy | No Comments | Filed in Math Art Spirolaterals – A Perfect Match for Doodlers! Have you ever had to wait in an office for a long time for an appointment without a book? Sure, they may have magazines at the dentist’s office or what have you, but chances are that they are months old. What do you do to entertain yourself while you wait? One solution is to draw spirolaterals – a great way to draw interesting math art with a few simple rules – plus a notepad, pen, and Spirolaterals are spiral structures generated by setting up simple rules for drawing lines, and performing reiterated iterations of these rules. Using complex algorithms, the number of spirolaterals that can be generated may indeed be infinite. These structures were first investigated in 1968 by Hal Abelson and his colleagues. Spirolaterals are open or closed, and can range from simple squares and rectangles to complex curves. A very simple spiral is a square. The rules for drawing a square are: Draw a line of length one, rotate your paper by 90 degrees and repeat. Repeat procedure until a closed design is generated. I have outlined these steps using the excellent spirolateral program at http://math.fau.edu/MLogan/Pattern_Exploration/Spirolaterals/SL.html: Iteration 1 Iteration 2 Iteration 3 Iteration 4 So now what do you get if you use the above algorithm, but change the angle to 60 degrees? You get a geometric figure which has an angle of 60 degrees between all of its line segments: That’s right – a hexagon! If you vary the line length as you progress in your iterations, much more interesting shapes can be made. Of course, you can do all of this roughly with pen and ink, but here I will generate the designs with the program mentioned above. Using variable lengths 1 and 2 (here it is set to letter codes MZ, which codes for 13 and 26, which is the same as 1 and 2, but the program draws the design bigger this way – LOL!), and an angle of 120 degrees, we get this design: This one was drawn using 1 5 9 lengths, and 35 degrees. It reminds me of the designs i used to make with the Spirograph toy when I was a kid: Length 1 (actually, it is set to Z, or 26) and 30 degrees: 1 2 3 4 5 6 7 and 35 degrees: Give it a try, either with pen and paper or by using the program. It really fun and interesting, and you get a sense of how patterns should develop before you actually draw them. Tags: Math Art, spirograph, spirolateral art, spirolaterals
{"url":"http://mathtricks.org/tag/spirolateral-art/","timestamp":"2014-04-21T10:28:24Z","content_type":null,"content_length":"27668","record_id":"<urn:uuid:754aac65-05b1-461a-9b35-c8b68cb6d26c>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00238-ip-10-147-4-33.ec2.internal.warc.gz"}
Heath, TX Prealgebra Tutor Find a Heath, TX Prealgebra Tutor ...I also hold Master Reading, Special Education, and ESL certifications from Texas in grades Kindergarten through 12th grade. I believe that all children are capable of learning if given the opportunity and the correct method of instruction. I use a hands on approach to assist children in making learning come alive and relevant to them. 21 Subjects: including prealgebra, reading, English, writing ...I was a tutor in college for students that needed help in math. I have a Master's degree in civil engineering and have practiced engineering for almost 40 years where math was important to performing my job. I hold a Master's Degree in Education with emphasis on instruction in math and science for grades 4th through 8th. 11 Subjects: including prealgebra, geometry, algebra 1, algebra 2 ...Word is a very powerful program that allows users to create just about any kind of document and be linked or supplemented by PowerPoint or Excel.Students will be tested to check their understanding of prealgebra knowledge, such as place value, number theory, arithmetic concepts, geometric concept... 14 Subjects: including prealgebra, Spanish, geometry, ESL/ESOL ...He is, however, equally as helpful in language studies. He is fluent in Spanish, and conversational in French. He, of course, has a great grasp of English, having spent many years as an ESL 37 Subjects: including prealgebra, Spanish, reading, chemistry Teaching math and helping students are my forte! I taught math several years in the Fort Worth ISD. For the last eight years I've been a homemaker and am eager to help students with math again. 14 Subjects: including prealgebra, reading, writing, English Related Heath, TX Tutors Heath, TX Accounting Tutors Heath, TX ACT Tutors Heath, TX Algebra Tutors Heath, TX Algebra 2 Tutors Heath, TX Calculus Tutors Heath, TX Geometry Tutors Heath, TX Math Tutors Heath, TX Prealgebra Tutors Heath, TX Precalculus Tutors Heath, TX SAT Tutors Heath, TX SAT Math Tutors Heath, TX Science Tutors Heath, TX Statistics Tutors Heath, TX Trigonometry Tutors
{"url":"http://www.purplemath.com/Heath_TX_prealgebra_tutors.php","timestamp":"2014-04-20T16:35:07Z","content_type":null,"content_length":"23807","record_id":"<urn:uuid:f24fbb34-2e38-463b-bbd7-01f14145549c>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00389-ip-10-147-4-33.ec2.internal.warc.gz"}
Cosmic Inflation - A. Albrecht 4.2. A toy model During the ``outside R[J]'' regime, there is one growing and one decaying solution. The simplest system which has this qualitative behavior is the upside-down harmonic oscillator which obeys: The phase space trajectories are show on the left panel in Fig. 7. The system is unstable against the runaway (or growing) solution where |q| and |p| get arbitrarily large (and p and q have the same sign). This behavior ``squeezes'' any initial region in phase space toward the diagonal line with unit slope. The squeezing effect is illustrated by the circle which evolves, after a period of time, into the ellipse in Fig. 7. Figure 7. The phase space trajectories for an upside-down harmonic oscillator are depicted in the left panel. Any region of phase space will be squeezed along the diagonal line as the system evolves (i.e. the circle gets squeezed into the ellipse) . For a right-side-up harmonic oscillator paths in the phase space are circles, and angular position on the circle gives the phase of oscillation. Perturbations in the early universe exhibit first squeezing and then oscillatory behavior, and any initial phase space region will emerge into the oscillatory epoch in a form something like the dotted ``cigar'' due to the earlier squeezing. In this way the early period of squeezing fixes the phase of oscillation. The simplest system showing oscillatory behavior is the normal harmonic oscillator obeying This phase space trajectories for this system are circles, as shown in the right panel of Fig. 7. The angular position around the circle corresponds to the phase of the oscillation. The effect of having first squeezing and then oscillation is to have just about any phase space region evolve into something like the dotted ``cigar'' in the right panel. The cigar then undergoes rotation in phase space, but the entire distribution has a fixed phase of oscillation (up to a sign). The degree of phase coherence (or inverse ``cigar thickness'') is extremely high in the real cosmological case because the relevant modes spend a long time in the squeezing regime.
{"url":"http://ned.ipac.caltech.edu/level5/Albrecht/Alb4_2.html","timestamp":"2014-04-16T13:05:54Z","content_type":null,"content_length":"3896","record_id":"<urn:uuid:3a5c6396-a8a1-4d53-be49-19a694481499>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00327-ip-10-147-4-33.ec2.internal.warc.gz"}
Show that if v . v' = 0 (both vectors) then speed v is constant ok, so if T' is zero, that means acceleration =0 so therefore the speed will be constant. right? If 1/2*v^2 =constant, (where v^2 is the dot product of the velocity by itself, equalling the squared speed), then it follows the speed must be constant. Suppose that you move, with uniform speed along a circle. Neither your velocity or accelerations are constants, but the acceleration vector is always orthogonal to the velocity vector. the velocity vector in this case is tangential to the circle (changing all the time in its direction), while the acceleration vector is strictly centripetal (changing all the time in its direction). non-uniform speed along the circle would make the acceleration somewhat tangentially orientated as well, i.e, its dot product with the velocity would be..non-zero
{"url":"http://www.physicsforums.com/showthread.php?t=587455","timestamp":"2014-04-20T21:22:43Z","content_type":null,"content_length":"72248","record_id":"<urn:uuid:822c8c9f-9ed9-44fb-bae2-7f39120f616c>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00548-ip-10-147-4-33.ec2.internal.warc.gz"}
The Planted Tank Forum - View Single Post - One Way to Design a Planted Tank LED Light The angle is the advertised cone angle of the optic, not half of it. I corrected this on the spreadsheet, and added a calculation for the LED current for Cree XM-L and Cree XP-G LEDs. But, I don't know how to attach an Excel spreadsheet to this post. Once I know how to do that we can include both the correction and the added calculation. I used the Cree pdf for each LED model and derived an equation that works for most of the range of the Lumens vs current graphs here. The equations work for the 350 to 1500 mAmp range for both of them. For the XP-G the equation breaks down for lower currents, and for the XM-L it breaks down for higher currents, overestimating the lumens produced, in both cases. But, I wouldn't even consider using a higher current for either LED, just because of the cooling that would then be required for the heatsinks. Thanks to fizzout I learned how to to this: Here is the updated calculator, for which fizzout gets 95% of the credit. Edit: Added Cree XP-E-Q5 LEDs which are available at DealExtreme, and which are more suitable for lower height tanks. Attachment 44862 Edit: Added Cree XR-E-P4 Bin WD (5700-6350K), which are less than $3 each on DealExtreme. Attachment 44874
{"url":"http://www.plantedtank.net/forums/showpost.php?p=1798147&postcount=8","timestamp":"2014-04-25T03:36:01Z","content_type":null,"content_length":"19785","record_id":"<urn:uuid:c5e2690a-8751-41bb-be55-d8d35f461a4c>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00439-ip-10-147-4-33.ec2.internal.warc.gz"}
If you have an equilateral triangle. That triangle is Author Message If you have an equilateral triangle. That triangle is [#permalink] 18 Aug 2008, 07:52 If you have an equilateral triangle. Joined: 05 Nov 2007 That triangle is enclosed perfectly in a circle such that each corner is exactly touching the edge of the circle. What is the radius of the circle with relation to one side of the triangle? Posts: 36 Followers: 0 Kudos [?]: 2 [0], given: 0 Re: Equilateral Triangle enclosed in a circle [#permalink] 18 Aug 2008, 07:56 This post received explanation in a minute... I think I remember reading somewhere that the center of the circle will be 2/3 from any of the 3 vertices of triangle. So if an equilateral triangle creates 2 30:60:90 triangles back-to-back, then the height of the triangle will be , and the radius should be 2/3 of that length. But the question asks for the relation of the radius to any side of the equilateral triangle. The relationship of the "height" of Joined: 30 Apr 2008 the equilateral triangle to a side is Posts: 1893 2:\sqrt{3} Location: Oklahoma . So this would be Schools: Hard Knocks because the radius of the circle to one side of the triangle. This would be the same as dividing Followers: 27 \frac{2}{3}\sqrt{3} Kudos [?]: 398 [1] by 2. so 2/3 * 1/2 = 1/3...or , given: 32 . I'm not sure if this is correct, but it seems logical to me. I think this is correct. see the following link: J Allen Morris **I'm pretty sure I'm right, but then again, I'm just a guy with his head up his a$$. Intern Re: Equilateral Triangle enclosed in a circle [#permalink] 18 Aug 2008, 08:47 Joined: 05 Nov 2007 This makes sense to me too. The link you attached, the images don't appear for me, but I vaguely remember reading that 1/3, 2/3 center point as well. Posts: 36 Thanks for the help and quick reply +1 Followers: 0 Kudos [?]: 2 [0], given: 0 Re: Equilateral Triangle enclosed in a circle [#permalink] 18 Aug 2008, 08:56 durgesh79 i think its very useful to memorize some common sin cos and tan values. Director sin 30 = 1/2, sin 60 = sqrt(3)/2, and 45 = 1/sqrt(2) cos 30 = sqrt(3)/2, cos 60 = 1/2, and cos 45 = 1/sqrt(2) Joined: 27 May 2008 tan 30 = 1/sqrt(3), tan 60 = sqrt(3), and tan 45 = 1 Posts: 552 just remember these 6 values ... its easy.. and it helps a lot in geamatry questions... Followers: 5 for example all i have to do is Kudos [?]: 143 [0], hypotneous = r given: 0 base = a/2 angle = 30 cos 30 = (a/2)/r = sqrt(3)/2 a/r = sqrt(3) Re: Equilateral Triangle enclosed in a circle [#permalink] 18 Aug 2008, 09:07 This post received SVP KUDOS Joined: 07 Nov 2007 gmatatouille wrote: Posts: 1833 If you have an equilateral triangle. That triangle is enclosed perfectly in a circle such that each corner is exactly touching the edge of the circle. Location: New York What is the radius of the circle with relation to one side of the triangle? Followers: 23 Thanks. Kudos [?]: 390 [3] , given: 5 tri-in-circle.gif [ 4.65 KiB | Viewed 1641 times ] Your attitude determines your altitude Smiling wins more friends than frowning Intern Re: Equilateral Triangle enclosed in a circle [#permalink] 18 Aug 2008, 09:43 Joined: 17 Aug 2008 The triangle can be divided into three equal sectors and their point of intersection will be the center of the circle, which is also the centroid of the triangle. The centroid of a triangle is (2/3)*(height); Height = (side*sqrt[3]/2). This will give us the radius to be side/sqrt[3]. Cheers. Posts: 20 Followers: 0 Kudos [?]: 0 [0], given: 0 Re: Equilateral Triangle enclosed in a circle [#permalink] 19 Aug 2008, 21:43 an arc on circle makes double the angle on the center to what it makes on the opposite side on the circle. Joined: 17 Jun 2008 With this logic, any of the sides of equilateral triangle will make 120 degree angle at the center of the circle. Posts: 1580 Now, if I divide the triangle made by two radii and one side of the triangle into two halves, each of the triangles will be 30, 60, 90 and if r is the radius, then the side of Followers: 10 triangle will become r multiplied by root 3. Kudos [?]: 167 [0], given: 0 Re: Equilateral Triangle enclosed in a circle [#permalink] 20 Aug 2008, 03:57 SVP CircleAngle.jpg [ 5.01 KiB | Viewed 1555 times ] Joined: 30 Apr 2008 Looking at the picture above, does Angle ABC work with this rule? If Angle ABC is 52 degrees, is arc AC 104 degrees even though the angle is not uniform (i.e., isosceles or Posts: 1893 scthakur wrote: Location: Oklahoma City an arc on circle makes double the angle on the center to what it makes on the opposite side on the circle. Schools: Hard With this logic, any of the sides of equilateral triangle will make 120 degree angle at the center of the circle. Now, if I divide the triangle made by two radii and one side of the triangle into two halves, each of the triangles will be 30, 60, 90 and if r is the radius, then the side of Followers: 27 triangle will become r multiplied by root 3. Kudos [?]: 398 [0], _________________ given: 32 J Allen Morris **I'm pretty sure I'm right, but then again, I'm just a guy with his head up his a$$. Re: Equilateral Triangle enclosed in a circle [#permalink] 20 Aug 2008, 12:51 gmatatouille wrote: If you have an equilateral triangle. CEO That triangle is enclosed perfectly in a circle such that each corner is exactly touching the edge of the circle. What is the radius of the circle with relation to one side of the triangle? Joined: 29 Aug 2007 Posts: 2504 Equilatral triangle with side a inscribed in a circle has r equal to a/sqrt (3) Followers: 48 Kudos [?]: 451 [0], given: 19 Verbal: new-to-the-verbal-forum-please-read-this-first-77546.html Math: new-to-the-math-forum-please-read-this-first-77764.html Gmat: everything-you-need-to-prepare-for-the-gmat-revised-77983.html gmatclubot Re: Equilateral Triangle enclosed in a circle [#permalink] 20 Aug 2008, 12:51
{"url":"http://gmatclub.com/forum/if-you-have-an-equilateral-triangle-that-triangle-is-69024.html","timestamp":"2014-04-16T20:18:33Z","content_type":null,"content_length":"167912","record_id":"<urn:uuid:0075f99c-3c23-48f3-9e4e-5ea8b3110428>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00022-ip-10-147-4-33.ec2.internal.warc.gz"}
Method And Apparatus for Establishing Network Performance Model Patent application title: Method And Apparatus for Establishing Network Performance Model Sign up to receive free email alerts when patent applications with chosen keywords are published SIGN UP A method and apparatus for establishing a network performance model. The method includes: determining, according to performance data provided by network nodes and the probability of the performance data, a parameter α showing the correlation of the performance data of different network nodes in a whole network and a parameter β showing the distribution pattern of the performance data in the network; and establishing a Latent Dirichlet Allocation, LDA, network performance model by using the determined parameter α and the parameter β. A method for establishing a network performance model, comprising:determining, according to performance data provided by network nodes and the probability of the performance data, a parameter α showing the correlation of the performance data of different network nodes in a whole network and a parameter β showing the distribution pattern of the performance data in the network; andestablishing a Latent Dirichlet Allocation, LDA, network performance model by using the determined parameter α and the parameter β. The method of claim 1, further comprising at least one of:rounding the performance data provided by the network nodes and dividing the performance data provided by the network nodes into blocks; wherein the determining the parameter α and the parameter 8 comprises:determining, according to at least one of the rounded and divided performance data, the parameter α showing the correlation of the performance data of different network nodes in the whole network and the parameter β showing the distribution pattern of the performance data in the network. The method of claim 1, wherein the determining the parameter α and the parameter β comprises:determining the parameters α and β that make the received performance data occurs with the maximum The method of claim 3, wherein the determining the parameters α and β that make the received performance data occurs with the maximum probability comprises:determining the parameters α and β that make the received performance data occurs with the maximum probability by using the maximum likelihood approach. The method of claim 4, wherein the determining the parameters α and β that make the received performance data occurs with the maximum probability by using the maximum likelihood approach comprises:initializing a component number K of the LDA network performance model, in which K is greater than or equals to 2;establishing a likelihood function l(α,β) containing the parameters α and β; andcalculating, according to the received performance data, the parameters α and β that make the likelihood function l(α,β) reaches the maximum value. The method of claim 5, wherein the likelihood function l(α,β) containing the parameters α and β is l ( α , β ) = d = 1 M log p ( w d α , β ) , ##EQU00012## in whichM is the number of network nodes selected for sending performance data, w is the performance data sent from the No. d network node, and p(w |α,β) is the probability of w occurring under conditions of the parameters β and α; andthe likelihood function l(α,β) contains internal latent variables θ and Z , θ showing the distribution pattern of the performance data of network nodes and complying with the Dirichlet distribution Dir(α), and θ being a parameter of K dimension; Z showing the distribution pattern of performance data of the No. d network node and complying with the Multinomial(θ); α is a parameter of K dimension; when the probability of w is p(w ,β) with the parameters β and Z , β shall be a parameter of K×V dimension; and when w complies with the Gaussian distribution with the Z and β parameters, β is a parameter of times.K×K, dimension, V indicates the sample space of performance data, and K is the number of Gaussian components. The method of claim 6, wherein, when the probability of w is p(w ,β) with the parameters Z and β, the calculating of the parameters α and β that make the likelihood function l(α,β) reaches the maximum value further comprises:calculating, by using a variational inference, the parameters α and β that make the likelihood function l(α,β) reaches the maximum value. The method of claim 6, wherein the calculating, by using a variational inference, the parameters α and β that make the likelihood function l(α,β) reaches the maximum value comprises:introducing intermediate variables γ and φ into the likelihood function l(α,β) to obtain a simplified equivalent function L(γ,φ,α,β) of the likelihood function l(α,β), in which γ is a parameter of K dimension and φ is a parameter of K×V dimension;calculating, with the parameters α and β as known variables and the parameters γ and φ as independent variables, the optimized values of the parameters γ and φ by calculating the extremum of the simplified equivalent function L(γ,φ,α,β); andusing the optimized values of the parameters γ and φ in the simplified equivalent function L(γ,φ,α,β), calculating the extremum of the simplified equivalent function L(γ,φ,α,β) with the parameters α and β as the independent variables, and obtaining the values of the parameters α and β as the values of the parameters α and β that make the likelihood function l(α,β) reach the maximum value. The method of claim 6, wherein the establishing the LDA network performance model by using the determined parameters α and β comprises:determining, according to the determined parameters α and β, the Dir(α) with which the internal latent variable θ complies and the Multinomial(θ) with which the internal latent variable Z The method of claim 9, further comprising:generating simulated performance data by using the established LDA performance model. The method of claim 10, wherein the generating the simulated performance data by using the established LDA performance model comprises:selecting N as the amount number of performance data generated by No. d network node, wherein N complies with Poisson distribution;making, by using the determined parameter α, the internal variable θ to comply with the Dir(α) distribution;making Z , which corresponds to the simulated performance data w to comply with the Multinomial(θ) distribution;making w to comply with p(w ,β) according to the determined parameter β when the probability of w with the parameters Z and β is p(w ,β); or making w to comply with Gaussian distribution when w with the parameters Z and β complies with Gaussian distribution; wherein w is the No. n simulated performance data of the No. d network node and Z is the distribution pattern of the No. n performance data of the No. d network node. An apparatus for establishing a network performance model, comprising:a parameter determining unit, adapted to determine, according to performance data provided by network nodes and the probability of the performance data, a parameter α showing the correlation of the performance data of different network nodes in a whole network and a parameter β showing the distribution pattern of the performance data in the network; anda model establishing unit, adapted to establish a Latent Dirichlet Allocation, LDA, network performance model by using the parameter α and parameter β determined by the parameter determining unit. The apparatus of claim 12, wherein the parameter determining unit comprises:an initiating unit, adapted to initiate component number K of the LDA network performance model;a likelihood function setup unit, adapted to establish a likelihood function l(α,β) containing the parameters α and β according to the component number K initiated at the initiating unit, wherein the parameter α is a parameter of K dimension, the parameter β is a parameter of K×V dimension or times.K×K, dimension, V indicates the sample space of the performance data and K indicates Gaussian component number; anda parameter calculating unit, adapted to calculate, according to the received performance data, the values of the parameters α and β that allow the likelihood function l(α,β) established by the likelihood function setup unit to reach the maximum value. The apparatus of claim 13, wherein the parameter calculating unit comprises:an equivalent function establishing unit, adapted to introduce intermediate parameters γ and φ into the likelihood function l(α,β) established by the likelihood function setup unit to obtain a simplified equivalent function L(γ,φ,α,β) of the likelihood function l(α,β), wherein γ is a parameter of K dimension and φ is a parameter of K×V dimension;an intermediate variable calculating unit, adapted to calculate, with the parameters α and β as known variables and the parameters γ and φ as independent variables, the optimized values of the parameters γ and φ by calculating the extremum of the simplified equivalent function L(γ,φ,α,β) established by the equivalent function establishing unit; anda parameter α and β calculating unit, adapted to use the optimized values of the parameters γ and φ from the intermediate variable calculating unit into the simplified equivalent function L(γ,φ,α,β), calculate the extremum of the simplified equivalent function L(γ,φ,α,β) with the parameters α and β as the independent variables, and obtain the values of the parameters α and β. The apparatus of claim 12, wherein the model establishing unit comprises:a parameter θ determining unit, adapted to determine, according to the parameters α and β determined by the parameter determining unit, the Dir(α) distribution with which an internal latent variable θ of the LDA model complies; anda parameter Z determining unit, adapted to determine, according to the parameters α and β determined by the parameter determining unit, the Multinomial(θ) distribution with which an internal latent variable Z of the LDA model complies. The apparatus of claim 12, further comprising:a performance parameter generating unit, adapted to generate simulated performance data by using the LDA network performance model established by the model establishing unit. This application claims the benefit of Chinese Application No. 200710151586.1, filed Sep. 28, 2007. The disclosure of the above application is incorporated herein by reference. FIELD [0002] The present disclosure relates to computer network technologies, and particularly to a method and apparatus for establishing a network performance model. BACKGROUND [0003] The statements in this section merely provide background information related to the present disclosure and may not constitute prior art. As the network technologies develop rapidly, the number of users and new services keeps on expanding, and network operators have to try to provide the best services for users to survive in the market full of intense competition. Therefore, network performance has become the focus in such a circumstance. In practical applications, an operator usually needs to run network environment simulation to evaluate the network performance for network planning, optimization and Quality of Service (QoS) control. According to the operation principles of actual networks, network performance model can be established in a network environment simulation, and an actual network environment can be simulated by using the established network performance model. The network performance model in the existing technology is the Gaussian mixture model. The basic process of establishing a Gaussian mixture model includes firstly a step of providing performance data by a part of network nodes in a network. The performance data is described with plurality of components which affect the performance data. Each of the components is in compliance with Gaussian distribution. Therefore the components are generally called Gaussian components. The performance data in the established Gaussian mixture model equals the total weights of all Gaussian components. Suppose a piece of performance data is described with N Gaussian components, the Gaussian distribution mean value of the No. j Gaussian component is μ , the deviation of the Gaussian component is σ and the mixture weight value of the Gaussian component is ω , the probability density function of the performance data is ( s θ ) = j = 1 N ω j N s ( μ j , σ j 2 ) . ##EQU00001## ) and s is the performance data. The probability density function shows the probability density of the performance data when the Gaussian components of the performance data are determined. A matrix can be obtained with performance data on rows and Gaussian components on columns. A value in the matrix is the probability density corresponding to the performance data of the corresponding row and the Gaussian component of the corresponding column. Therefore the matrix shows the distribution of the Gaussian components of all measured performance data. A simulation environment can be established with the matrix as the parameters of the network performance model. However, when the Gaussian mixture model is used as the network performance model, the Gaussian component weights are derived solely from sample performance data provided by the network nodes, i.e., the network performance model is established for the network nodes that provided the performance data and is reliable only in showing the performance of the network nodes. The performance data of other network nodes in the network are not shown in the network performance model, which means the network performance model established with the conventional method is not suitable to the whole network and is not reliable in showing the performance of the whole network. In one sentence, the network performance model does not fit the whole network consisting of network nodes of same aggregation features, and nodes providing the performance data can be chosen at random from the aggregation space. SUMMARY [0007] The present disclosure provides a method and apparatus for establishing a network performance model which is reliable in showing the performance of a whole network. The method for establishing a network performance model includes: determining, according to performance data provided by network nodes and the probability of the performance data, a parameter α showing the correlation of the performance data of different network nodes in a whole network and a parameter β showing the distribution pattern of the performance data in the network; and establishing a Latent Dirichlet Allocation, LDA, network performance model by using the determined parameter α and the parameter β. The apparatus for establishing a network performance model includes: a parameter determining unit, adapted to determine, according to performance data provided by network nodes and the probability of the performance data, a parameter α showing the correlation of the performance data of different network nodes in a whole network and a parameter β showing the distribution pattern of the performance data in the network; and a model establishing unit, adapted to establish a Latent Dirichlet Allocation, LDA, network performance model by using the parameter α and parameter β determined by the parameter determining unit. It can be seen from the technical scheme that the method and apparatus provided by embodiments of the present disclosure can determine a parameter β showing the distribution pattern of the performance data in the whole network according to the performance data provided by network nodes and can further determine a parameter α showing the correlation of the performance data of different network nodes in the whole network. The combination of the determined parameters α and β shows the distribution pattern of the performance data of different network nodes in the network. The distribution pattern is key factor used for establishing the network performance model. The network performance model established by using the present disclosure is not only reliable in showing the performance of the network nodes that provide the performance data, but also fits other network nodes in the network and is thus reliable in showing the performance of the whole network. Further areas of applicability will become apparent from the description provided herein. It should be understood that the description and specific examples are intended for purposes of illustration only and are not intended to limit the scope of the present disclosure. DRAWINGS [0014] The drawings described herein are for illustration purposes only and are not intended to limit the scope of the present disclosure in any way. FIG. 1 is a simplified flow chart of a method for establishing a network model according to an embodiment; [0016]FIG. 2 is a simplified flow chart of a method for estimating parameters α and β by using the maximum likelihood approach according to an embodiment; [0017]FIG. 3a is an internal structure scheme of an LDA model according to an embodiment before an intermediate variable is introduced; [0018]FIG. 3b is an internal structure scheme of an LDA model according to an embodiment after an intermediate variable is introduced; [0019]FIG. 3c is an internal structure scheme of an LDA model according to an embodiment with Gaussian distribution introduced; [0020]FIG. 4 is a structure scheme of a system for establishing a network performance model according to an embodiment; and FIG. 5 is a structure scheme of an apparatus for establishing a network performance model in accordance with an embodiment. DETAILED DESCRIPTION [0022] The following description is merely exemplary in nature and is not intended to limit the present disclosure, application, or uses. Reference throughout this specification to "one embodiment," "an embodiment," "specific embodiment," or the like in the singular or plural means that one or more particular features, structures, or characteristics described in connection with the embodiment is included in at least one embodiment of the present disclosure. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment," "specific embodiment," or the like in the singular or plural in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In order to make the objective, technical scheme and merits more apparent, a detailed description is hereinafter given with reference to specific embodiments and accompanying drawings. A method in embodiments mainly includes: determining, according to performance data provided by network nodes and the probability of the performance data, a parameter α showing the correlation of the performance data of different network nodes in a whole network and a parameter β showing the distribution pattern of the performance data in the network; and establishing a Latent Dirichlet Allocation (LDA) network performance model by using the determined parameter α and parameter β. The method may further include: generating simulated performance data for network nodes in the network by using the established LDA network performance model and eventually establishing a network performance simulation circumstance. FIG. 1 is a simplified flow chart of a method for establishing a network model according to an embodiment. As shown in FIG. 1, the method mainly includes the following processes. Block 101: A parameter α showing the correlation of performance data of different network nodes in a whole network and a parameter β showing the distribution pattern of the performance data in the network are determined by using the performance data provided by the network nodes. In this process, the performance data collected by the network nodes may be bandwidth, delay or other performance data. Besides, the collected performance data can be further processed to reduce the sample space and the processed performance data shall be used as sample data. The processing may include: reducing the accuracy of the performance data by rounding the performance data when the accuracy of the collected performance data is too high, e.g., rounding 5.21 to 5. When the scope of the collected performance data is too broad, the collected performance data is divided into different blocks, e.g., using 1 to indicate [0,10) and using 2 to indicate [10,20). In this process, the correlation of the performance data of the network nodes in the network is obtained from the performance data provided by a part of the network nodes, and the distribution pattern of the performance data is further obtained accordingly. Therefore the distribution pattern of the performance data of all network nodes in the network may be obtained. In this process, the parameters α and β are determined when the performance data occurs with the maximum probability. The determination can be made with the maximum likelihood approach or other The flow chart shown in FIG. 2 can be used for the determination. In this embodiment, the maximum likelihood approach is used for estimating parameters α and β. As shown in FIG. 2 , the process of estimating the parameters α and β is described as follows. Block 201: Component amount K of a network model is initiated and parameters α and β are configured. In this process, K indicates that the correlation of the network nodes in the network is determined by K factors and K≧2. The value of K is usually determined according to experience. α is the parameter of K dimension and β is the parameter of K×V dimension. Wherein, V indicates the sample space of the received performance data of the network nodes. Normally the initial value of the parameter α is 1 and the initial value of the parameter β is 0. Block 202: Likelihood function containing the parameters α and β is established. The likelihood function l(α,β) established in this process can be: ( α , β ) = d = 1 M log p ( w d α , β ) . ( 1 ) ##EQU00002## M is the number of network nodes chosen to provide performance data and w is the performance data sent from No. d network node. The likelihood function l(α,β) also contains latent internal variables θ and Z . θ indicates the distribution pattern of the performance data of different network nodes and complies with Dirichlet distribution Dir(α). Both θ and α are parameters of K dimension. ( α ) = 1 B ( α ) i = 1 K x i α i - 1 = Γ ( i = 1 K α i ) i = 1 K Γ ( α i ) i = 1 K x i α i - 1 ( 2 ) ##EQU00003## indicates the distribution pattern of the performance data of the No. d network node and complies with multinomial distribution Multinomial(θ), and the probability of w with parameters Z and β is p(w Block 203: The parameters α and β that make the likelihood function to reach the maximum value are calculated according to the received performance data. Because the likelihood function l(α,β) established in Block 202 shows the probability of w when the correlation of the performance data of network nodes equals parameters, and the distribution pattern of the performance data of the network nodes equals parameters. Therefore, the parameters α and β that allow the likelihood function l(α,β) to reach the maximum value equal the parameters α and β when the w reach the maximum value. Because the calculation for p(w |α,β) in Equation (1) is complicated, variational inference can be employed to simplify the calculation of parameters α and β. The simplified calculation is given as below: log p ( w d α , β ) = log ∫ Z d p ( θ , Z d , w d α , β ) θ = log ∫ z d q ( θ , Z d γ , Φ ) p ( θ , Z d , w d α , β ) q ( θ , Z d γ , Φ ) θ ≧ ∫ z d q ( θ , Z d γ , Φ ) log p ( θ , Z d , w d α , β ) q ( θ , Z d γ , Φ ) θ . ( 3 ) ##EQU00004## In the calculation, γ and φ are intermediate variables introduced into the variational inference. γ is a parameter of K dimension and φ is a parameter of K×V dimension. Similarly, q is also an introduced intermediate function and E indicates the expected value of the function q. FIG. 3a shows an internal structure of an LDA model before introducing intermediate variables and FIG. 3b shows an internal structure of an LDA model after introducing intermediate variables. In Equation (3), the difference between the left and the right of the sign of inequality is divergence K-L: ( q ( θ , Z d γ , Φ ) p ( θ , z w , α , β ) ) = log p ( w d α , β ) - ∫ z q ( θ , Z d γ , Φ ) log p ( θ , Z d , w d α , β ) q ( θ , Z d γ , Φ ) θ = log p ( w d α , β ) - ( E q [ log p ( θ , Z d , w d α , β ) ] - E q [ log q ( θ , Z d γ , Φ ) ] ( 4 ) ##EQU00005## It can be learnt from Equation (4) that: log p [log p(θ,Z [log q(θ,Z |γ,φ).para- llel.p.sub.(θ,Z It can be learnt from the property of the divergence K-L that D(q(θ,Z ,.al- pha.,β))≧0. Therefore when E [log p(θ,Z [log q(θ,Z |γ,φ)] reaches the maximum value, log p(w |α,β) also reaches the maximum value. ( γ , Φ , α , β ) = E q [ log p ( θ , Z d , w d α , β ) ] - E q [ log q ( θ , Z d γ , Φ ) ] = E q [ log p ( θ α ) ] + E q [ log p ( Z d θ ) ] + E q [ log p ( w d Z d , β ) ] - E q [ log q ( θ γ ) ] - E q [ log q ( Z d Φ ) ] = log Γ ( j = 1 k α j ) - i = 1 k log Γ ( α i ) + i = 1 k ( α i - 1 ) ( Ψ ( γ i ) - Ψ ( j = 1 k γ j ) ) + n = 1 N i = 1 k Φ ni ( Ψ ( γ i ) - Ψ ( j = 1 k γ j ) ) + n = 1 N i = 1 k Φ ni w n j log β ij - log Γ ( j = 1 k γ j ) + i = 1 k log Γ ( γ i ) - i = 1 k ( γ i - 1 ) ( Ψ ( γ i ) - Ψ ( j = 1 k γ j ) ) - n = 1 N i = 1 k Φ ni log Φ ni ( 5 ) ##EQU00006## The maximum value of Formula (5) is the extremum of Formula (5) with γ and φ as independent variables. When the values of parameters α and β are known variables, the optimized values of parameters γ and φ are: Φ ni ∝ β iv exp ( Ψ ( γ i ) - Ψ ( j = 1 k γ j ) ) γ i = α i + n = 1 N Φ ni . ( 6 ) ##EQU00007## The values of parameters γ and φ, i.e., the values of all γ and φ i Formulation (6), are calculated through iteration with initial values of parameters α and β. The obtained values of parameters γ and φ are used in the likelihood function of Equation (1): ( α , β ) = d = 1 M log p ( w d α , β ) . ##EQU00008## log p |α,β) is replaced with L(γ,φ,α,β) of Equation (5). Taking the parameters α and β as independent variables, the extremum of L(γ,φ,α,β) is calculated and therefore: β ij ∞ d = 1 M n = 1 N d Φ dn i * w dn j ; ( 7 ) L ( α ) = d = 1 M ( log Γ ( j = 1 K α j ) - i = 1 K log Γ ( α i ) + i = 1 K ( ( α i - 1 ) ( Ψ ( γ d i ) - Ψ ( j = 1 K γ d j ) ) ) ) ; ( 8 ) ##EQU00009 The value of the parameter β is calculated by calculating Formulation (7) with the obtained parameters γ and φ as variables. The value of parameter α can be obtained by calculating the extremum of Formulation (8) with parameter α as the variable. The extremum of Formulation (8) can be calculated with the Newton-Raphson method. Preferably, Blocks 201 to 203 are repeated with the obtained parameters α and β as initial variables to calculate the values of parameters γ and φ again and further to calculate the values of parameters α and β again. Then Blocks 201 to 203 are repeated for the second time with the values of parameters α and β obtained in the preceding repeated processes as the initial variables. The Blocks are repeated again and again until the values of parameters α and β show convergence, the convergent values of parameters α and β shall be taken as the final values of parameters α and β. Block 204: The obtained values of parameters α and β are saved. The saved parameters α and β are taken as the initial parameters α and β in the iteration calculation for establishing a network performance model next time. Block 102: An LDA network performance model is established by using the obtained parameters α and β. In this process, the LDA network performance model is established by using the parameters α and β to calculate the internal variables of the LDA model. The internal variable θ complies with Dir(α) distribution, i.e., ( θ α ) = Γ ( i - 1 K α i ) i = 1 K Γ ( α i ) θ 1 α 1 - 1 θ 2 α 2 - 1 θ K α K - 1 . ##EQU00010## For the No . d node, the Z shall comply with Multinomial(θ) distribution and dεV. When the network performance model is established, the following processes can be performed with the established network performance model. Block 103: Performance data is generated with the established LDA network performance model. Because the parameter α in the established LDA model shows the correlation of the performance data of the network nodes in the network and the parameter β shows the distribution pattern of the performance data, the LDA model including the combination of the parameters α and β can show the performance of the whole network. In this process, suppose the simulated performance data {w , w , . . . , w , . . . , w } of a network node need to be generated with the established LDA network performance model, the generation of the simulated performance data includes: selecting N as the amount number of the simulated performance data generated by the No. d network node, wherein N complies with Poisson distribution; making, by using the determined parameter α, θ to comply with Dir(α) distribution, i.e., ( θ α ) = Γ ( i - 1 K α i ) i = 1 K Γ ( α i ) θ 1 α 1 - 1 θ 2 α 2 - 1 θ K α K - 1 ; ##EQU00011## making Z[dn] , which corresponds to w , to comply with Multinomial(θ) distribution; making w to satisfy p(w ,β); and repeating the above processes so that the simulated performance data of multiple network nodes can be generated by the LDA network performance model. In these processes, w , is the No. n simulated performance data of the No. d network node and Z indicates the distribution pattern of the No. n simulated performance data of the No. d network node. The established LDA model shows the performances of all network nodes in the whole network. Therefore the simulated performance data generated in Block 103 can be the simulated performance data of the network nodes that provide performance data in Block 101, or be simulated performance data provided by other network nodes in the network. A network performance simulation environment can be established with the simulated performance data of network nodes generated in Block 103. The simulated performance data generated in Block 103 for the network nodes are assigned to the network nodes in the simulation environment. When the simulated performance data includes the delays and bandwidths of the network nodes, the simulated performance data shall be assigned to the network nodes in the simulation environment as the delays and bandwidths of the network nodes to establish a simulation environment which has the same distribution pattern as the real network. For example, when a simulation environment embodying the delays of the network nodes is needed, the parameter α that shows the correlation of the performance data of the network nodes in the network and the parameter β that shows the distribution pattern of performance data are determined according to the delay data provided by a part of the network nodes in the network and the probability of the delay data. An LDA model is established with the determined parameters α and β. Simulated delay data of all network nodes in the network are generated with the established LDA model and the network performance simulation environment is eventually established with the simulated delay data of all network nodes. Tests can be run in the simulation environment to offer evidences for optimization and QoS control of the real network. Furthermore, in the flow shown in FIG. 2 , given the parameters Z and β, w may also comply with Gaussian distribution. Therefore the p(w ,β) in the preceding processes can be replaced with Gaussian distribution. The calculation of the parameters α and β may still adopt the corresponding method in FIG. 2 as long as certain changes of the parameters are made, i.e., the parameter β is changed into 2×K×K . K is the number of Gaussian components; the Gaussian distribution has parameters μ and σ and the internal structure of corresponding LDA model is shown in FIG. 3c . The detailed calculation will not be described any further herein. [0064]FIG. 4 is a structure scheme of a system for establishing a network performance model according to an embodiment. As shown in FIG. 4 , the system includes: network node 401 and performance model establishing device 402. Network node 401 is adapted to provide performance data of the network node itself. Performance model establishing device 402 is adapted to determine, according to the performance data provided by network node 401 and the probability of the performance data, the parameter α that shows the correlation of performance data of network nodes and the parameter β that shows the distribution pattern of the performance data in the network, and establish an LDA network performance model with the established parameters α and β. The system may further include: performance data generating device 403, adapted to generate performance data by using the network performance model established at performance model establishing device 402. Performance data generating device 403 can be a standalone device or be integrated into performance model establishing device 402. A structure of performance model establishing device 402 is shown in FIG. 5, mainly including: parameter determining unit 510 and model establishing unit 520. Parameter determining unit 510 is adapted to determine, according to performance data provided by network nodes and the probability of the performance data, a parameter α showing the correlation of the performance data of different network nodes and a parameter β showing the distribution pattern of the performance data in the network. Model establishing unit 520 is adapted to establish a LDA network performance model by using the parameter α and the parameter β determined by parameter determining unit 510. The device may further include: performance data generating unit 530, adapted to generate the performance data by using the network performance model established by model establishing unit 520. Parameter determining unit 510 may further include: initiating unit 511, likelihood function setup unit 512 and parameter calculating unit 513. Initiating unit 511 is adapted to initiate component number K of the network performance model. Likelihood function setup unit 512 is adapted to establish a likelihood function l(α,β) of the parameters α and β according to the component number K from the initiating unit 511. The parameter α is a parameter of K dimension, the parameter β is a parameter of K×V dimension and V indicates the sample space of the performance data. Parameter calculating unit 513 is adapted to calculate, according to the performance data from network nodes, the values of the parameters α and β that allow the likelihood function l(α,β) established at likelihood function setup unit 512 to reach the maximum value. Parameter calculating unit 513 may further include: equivalent function establishing unit 5131, intermediate variable calculating unit 5132 and parameters α and β calculating unit 5133. Equivalent function establishing unit 5131 is adapted to introduce intermediate parameters γ and φ into the likelihood function l(α,β) established at likelihood function setup unit 512 to obtain a simplified equivalent function L(γ,φ,α,β) of the likelihood function l(α,β). γ is a parameter of K dimension and φ is a parameter of K×V dimension. Intermediate variable calculating unit 5132 is adapted to calculate, with the parameters α and β as the known variables and the parameters γ and φ as the independent variables, the optimized values of the parameters γ and φ by calculating the extremum of the simplified equivalent function L(γ,φ,α,β) established by equivalent function establishing unit 5131. Parameters α and β calculating unit 5133 is adapted to use the optimized values of the parameters γ and φ from intermediate variable calculating unit 5132 in the simplified equivalent function L (γ,φ,α,β) and calculate the extremum of the simplified equivalent function L(γ,φ,α,β) with the parameters α and β as the independent variables, and eventually obtain the values of the parameters α and β. Model establishing unit 520 may further include: parameter θ determining unit 521, adapted to determine, according to the parameters α and β determined by parameter determining unit 510, the Dir(α) distribution with which the internal variable θ of the LDA model complies; and Parameter Z determining unit 522, adapted to determine, according to the parameters α and β determined by parameter determining unit 510, the Multinomial(θ) distribution with which the internal variable Z of the LDA model complies. It can be seen from the preceding description that the method and apparatus provided by embodiments for establishing the network performance model can determine, according to the performance data provided by network nodes and the probability of the performance data, the parameter α that shows the correlation of the performance data of network nodes and the parameter β that shows the distribution pattern of the performance data in the network, and can further establish an LDA network performance model as the network performance model with the established parameters α and β. According to the received performance data, the method in embodiments not only can determine the parameter β that shows the distribution pattern of the performance data, but also the parameter α that shows the correlation of the performance data of the network nodes in the network. Therefore the distribution pattern of the performance data of all network nodes in the whole network can be obtained by using the parameters α and β. The distribution pattern of the performance data of all network nodes in the whole network is the basis of the network performance model and enables to the network performance model to reliably show the performance of the network nodes that provides the performance data as well as the performance of all other network nodes in the whole network, i.e., to be reliable in showing the performance of the whole network. The variational inference in combination with the maximum likelihood approach is employed in embodiments to determine the values of the parameters α and β that correspond to the maximum probability of the performance data. Therefore the simulation environment established by the network performance model will be closer to the performance of the actual network while the calculation of the parameters α and β is simpler and requires less data to be processed. The above are only exemplary embodiments and are not for use in limiting the protection scope thereof. All the modifications, equivalent replacements or improvements in the scope, spirit, and principles of the present disclosure shall be included in the protection scope of the present disclosure. Patent applications by Guangyu Shi, Shenzhen CN Patent applications by Youshui Long, Shenzhen CN Patent applications in class SIMULATING ELECTRONIC DEVICE OR ELECTRICAL SYSTEM Patent applications in all subclasses SIMULATING ELECTRONIC DEVICE OR ELECTRICAL SYSTEM User Contributions: Comment about this patent or add new information about this topic:
{"url":"http://www.faqs.org/patents/app/20090089036","timestamp":"2014-04-21T10:19:15Z","content_type":null,"content_length":"73712","record_id":"<urn:uuid:d5725643-3d99-4ee2-83e4-fa9f6ef766bb>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00373-ip-10-147-4-33.ec2.internal.warc.gz"}
how to find percentage? September 6th 2012, 05:32 AM #1 Sep 2012 Pak SarzmeeN Hello dear friends i m new here! i hava a question plz solve it find the percentage of property tax paid of Rs.7,488 when the property worth is 156,000 answer is 4.8% tell me plz how Re: how to find percentage? 7488/15600 x 100 = 7488/1560 = 4.8 Re: how to find percentage? thanks dear! and you dont believe i had find its answer a few second before your posting so again thaaaaaaaaaaaaanks and this is a good forum among others and you are fast among other thankssssssss September 6th 2012, 05:42 AM #2 Sep 2012 September 6th 2012, 05:52 AM #3 Sep 2012 Pak SarzmeeN
{"url":"http://mathhelpforum.com/math/203007-how-find-percentage.html","timestamp":"2014-04-23T12:16:32Z","content_type":null,"content_length":"35019","record_id":"<urn:uuid:b501e682-819e-4f44-982f-5520e672e5e0>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00161-ip-10-147-4-33.ec2.internal.warc.gz"}
[R] fixed effects anova in lme lmer Simon Blomberg s.blomberg1 at uq.edu.au Wed Jun 6 04:23:43 CEST 2007 ?gls in package nlme. It's like lme but with no random effects. But you can still model the variance-covariance properties of the data. On Tue, 2007-06-05 at 19:11 -0700, toby909 at gmail.com wrote: > Can lme or lmer fit a plain regular fixed effects anova? Ie a model without a > random effect, or have there be at least one random effect in order for these > functions to work? > Trying to run such, (1) without specifying a random effect produces an error, > (2) specifying that there is no random effect does not produce the same output > as an anova run in lm(); (2b) specifying that there is no random effect in lmer > crashed R (division by zero, I think). > Just trying to see the connection of fixed and random effects anova in R. STATA > gives me same results for both models up to the point where they differ. > Best Toby > dt1 = > as.data.frame(cbind(c(28,35,27,21,21,36,25,18,26,38,27,17,16,25,22,18),c(1,1,1,1,2,2,2,2,3,3,3,3,4,4,4,4))) > summary(a1 <- lm(V1~factor(V2)-1, dt1)) > anova(a1) > summary(a1 <- lm(V1~factor(V2), dt1)) > anova(a1) > dt1$f = factor(dt1$V2) > summary(a2 <- lme(V1~f, dt1)) #1a > summary(a2 <- lme(V1~f, dt1, ~-1|f)) #2a > anova(a2) > lmer(V1~f, dt1) #1b > lmer(V1~f+(-1|f), dt1) #2b > ______________________________________________ > R-help at stat.math.ethz.ch mailing list > https://stat.ethz.ch/mailman/listinfo/r-help > PLEASE do read the posting guide http://www.R-project.org/posting-guide.html > and provide commented, minimal, self-contained, reproducible code. Simon Blomberg, BSc (Hons), PhD, MAppStat. Lecturer and Consultant Statistician Faculty of Biological and Chemical Sciences The University of Queensland St. Lucia Queensland 4072 Room 320, Goddard Building (8) T: +61 7 3365 2506 email: S.Blomberg1_at_uq.edu.au The combination of some data and an aching desire for an answer does not ensure that a reasonable answer can be extracted from a given body of data. - John Tukey. More information about the R-help mailing list
{"url":"https://stat.ethz.ch/pipermail/r-help/2007-June/133470.html","timestamp":"2014-04-16T19:42:25Z","content_type":null,"content_length":"5217","record_id":"<urn:uuid:5b5755c2-ab32-4266-9a4e-637b6c28c0ff>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00216-ip-10-147-4-33.ec2.internal.warc.gz"}
MathFiction: Lost in the Funhouse (John Barth) According to the "foreward to the Anchor Books Edition", this collection of short stories is "strung together on a few echoed and developed themes and [circles] back upon itself; not to close a simple circuit like that of Joyce's Finnegan's Wake, emblematic of Viconian etermal return, but to make a circuit with a twist to it, like a Möbius strip, emblematic of -- well, read the book." I did read the book, and recognized that the theme of creation (sexual and literary) seemed to tie the stories together. However, were it not for this passage in the foreward and for the first "story", I would not have thought to include it in this list of mathematical fiction. The first "story", called Frame-tale, is actually a Mobius strip! It is a single page with the words "ONCE UPON A TIME THERE" written at one edge and "WAS A STORY THAT BEGAN" on the opposite side, with instructions for joining the ends to make a Möbius strip. A visitor to this site, Birgit Gerdes, has written me to let me know that she believes there is a great deal of hidden mathematics in this collection of stories. In general, I shy away from such statements since I believe that you can "find" hidden math in any work of fiction (or non-fiction). However, since the author clearly had at least some mathematics in mind when writing this work, perhaps there is justification in finding mathematics in expressions such as "In sum..." which can ordinarily be used in English without any real mathematical implications. (Also, take a look at the review of this book by mathematician Nik Weaver at his "Math in Fiction" website.) Contributed by Evan Lots to think about and good fun too.
{"url":"http://kasmana.people.cofc.edu/MATHFICT/mfview.php?callnumber=mf132","timestamp":"2014-04-19T17:27:05Z","content_type":null,"content_length":"9743","record_id":"<urn:uuid:888d6ce7-8ea5-4799-a5f7-e428d2be2470>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00119-ip-10-147-4-33.ec2.internal.warc.gz"}
And the seasons they go round and round And the painted ponies go up and down We're captive on the carousel of time We can't return we can only look behind From where we came And go round and round and round In the circle game... Oh, how I love Joni Mitchell's lyrics made famous by the inimitable Buffy Sainte-marie. Oh, how The Circle Game lyrics above describe my feelings about the state of U.S. math education. I feel I've been on this carousel forever. But I do believe that all is not hopeless. I do see promise out there despite all the forces resisting the changes needed to improve our system of education. Our math teachers already get it! They get that more emphasis should be placed on making math meaningful via applications to the real-world, stressing understanding of concepts and the logic behind procedures, reaching diverse learning styles using multiple representations and technology, preparing their students for the next high-stakes assessment, trying to ensure that no child is ... They've been hearing this in one form or another forever. BUT WHAT THEY NEED IS A CRYSTAL CLEAR DELINEATION OF ACTUAL CONTENT THAT MUST BE COVERED IN THAT GRADE OR THAT COURSE. The vague, jargon-filled, overly general standards which have been foisted on our professional staff for the past 20 years is frustrating our teachers to the point of demoralization. THIS IS NOT ABOUT THE MATH WARS. THIS IS NOT AN IDEOLOGICAL DEBATE. JUST TELL OUR MATH TEACHERS WHAT MUST BE COVERED AND LET THEM DO THEIR JOB! THE CHAOS THAT CURRENTLY EXISTS. AND IF YOU DON'T THINK THERE IS CHAOS OUT THERE, TALK TO THE PROFESSIONALS WHO HAVE TO DO THIS JOB EVERY DAY. Results of MathNotation's Third Online Math Contest The Common Core State Standards Initiative NCTM's latest response to the Core Standards Movement - the forthcoming Focus in High School Mathematics Validation Committee selected for draft of Core Standards The results of the latest round of ADP's Algebra 2 and Algebra 1 end of course exams It will take several posts to cover all of this... Consider using the following as Warm-Ups to sharpen minds before the lesson and to provide frequent exposure to standardized test questions (SAT, ACT, State Assessments, etc.). I hope these problems serve as models for you to develop your own. I strongly urge you to include similar questions on tests/quizzes so that students will take these 5-minute classroom openers seriously. I've provided answers and solutions/strategies for some of the questions below. The rest should emerge from the comments. MODEL QUESTION #1: For how many even integers, N, is N^2 less than than 100? Answer: 9 Always circle keywords or phrases. Here the keywords/phrases include "even integers" "less than". This question is certainly tied to the topic of solving the quadratic inequality, N^2 "<" 100 either by taking square roots with absolute values or by factoring. Of course, we know from experience, when confronted with this type of question on a standardized test, even our top students will test values like N = 2, 4, 6, ... However, the test maker is determining if the student remembers that integers can be negative as well and, of course, ZERO is both even and an integer! Thus, the values of N are -8,-6,-4,-2,0,2,4,6, and 8. MODEL QUESTION #2 If 99 is the mean of 100 consecutive even integers, what is the greatest of these 100 numbers? ANSWER: 198 There are several key ideas and reasoning needed here: (1) A sequence of consecutive even integers (or odd for that matter) is a special case of an arithmetic sequence. (2) BIG IDEA: For an arithmetic sequence, the mean equals the median! Thus, the terms of the sequence will include 98 and 100. (Demonstrate this reasoning with a simpler list like 2,4,6,8 whose median is 5). (3) The list of 100 even consecutive integers can be broken into two sequences each containing 50 terms. The larger of these starts with 100. Thus we are looking for the 50th consecutive even integer in a sequence whose first term is 100. (4) The student who has learned the formula (and remembers it!) for the nth term of an arithmetic sequence may choose to use it: a(n) = a(1) + (n-1)d. Here, n = 50 (we're looking for the 50th term!), a(1) = 100, d = 2 and a(100) is the term we are looking for. Thus, a(50) = 100 + (50-1)(2) = 198. However, stronger students intuitively find the greatest term, in effect inventing the formula above for themselves via their number sense. Thus, if 100 is the first term, then there are 49 more terms, so add 49x2 to 100. MODEL QUESTION #3: A SAMPLE OPEN-ENDED QUESTION FOR ALGEBRA II If n is a positive integer, let A denote the difference between the square of the nth positive even integer and the square of the (n-1)st positive even integer. Similarly, let B denote the difference between the square of the nth positive odd integer and the square of the (n-1)st positive odd integer. Show that A-B is independent of n, i.e., show that A-B is a constant. MODEL QUESTION #4: GEOMETRY If two of the sides of a triangle have lengths 2 and 1000, how many integer values are possible for the length of the third side? MODEL QUESTION #5: GEOMETRY There are eight distinct points on a circle. Let M denote the number of distinct chords which can be drawn using these points as endpoints. Let N denote the number of distinct hexagons which can be drawn using these points as vertices. What is the ratio of M to N? Answer: 1 Solution/Strategies: The student with a knowledge of combinations doesn't need to be creative here but a useful conceptual method is the following: Each hexagon is determined by choosing 6 of the 8 points (and connecting them in a clockwise fashion for example). For each such selection of 6 points, there is a uniquely determined chord formed by the 2 remaining points. Similarly, for each chord formed by choosing 2 points, there is a uniquely determined hexagon. Thus the number of hexagons is in 1:1 ratio with the number of chords. If we do not change the angle measures but increase the length of each side of a parallelogram by 60%, by what per cent is the area increased? (A) 36% (B) 60% (C) 120% (D) 156% (E) 256% There is still time to register for the upcoming MathNotations Third Online Math Team Contest, which should be administered on one of the days from Mon October 12th through Fri October 16th in a 45-minute time period. Registration could not be easier this time around. Just email me at dmarain "at" "gamil dot com" and include your full name, title, name and full address of your school (indicate if Middle or Secondary School). Be sure to include THIRD MATHNOTATIONS ONLINE CONTEST in the subject/title of the email. I will accept registrations up to Fri October 9th (exceptions can always be made!). * Your school can field up to two teams with from two to six members on each. (A team of one requires special approval). * Schools can be from anywhere on our planet and we encourage homeschooling teams as well. * The contest includes topics from 2nd year algebra (including sequences, series), geometry, number theory and middle school math. I did not include any advanced math topics this time around, so don't worry about trig or logs. * Questions may be multi-part and at least one is open-ended requiring careful justification (see example below). * Few teams are expected to be able to finish all questions in the time allotted. Teams generally need to divide up the labor in order to have the best chance of completing the test. * Calculators are permitted (no restrictions) but no computer mathematical software like Mathematica can be used. * Computers can be used (no internet access) to type solutions in Microsoft Word. Answers and solutions can also be written by hand and scanned (preferred). A pdf file is also fine. Ok, here's another sample contest problem, this time a "counting" question that is equally appropriate for middle schoolers and high schoolers: How many 4-digit positive integers have distinct digits and the property that the product of their thousands' and hundreds' digits equals the product of their tens' and units' digits? The math background here may be middle school but the reading comprehension level and specific knowledge of math terminology is quite high. This more than counting strategies is often an impediment. If this were an SAT-type question, an example would be given of such a number to give access to students who cannot decipher the problem, thereby testing the math more than the verbal side. On most contests, however, anything is fair game! Beyond understanding what the question is asking, I believe there are some worthwhile counting strategies and combinatorial thinking involved here. Enjoy it! Click More to see the result I came up with (although you may find an error and want to correct it!) My Unofficial Answer: 40 (Please feel free to challenge that in your comments!!_ ...Read more There is still time to register for the upcoming MathNotations Third Online Math Team Contest, which should be administered on one of the days from Mon October 12th through Fri October 16th in a 45-minute time period. Registration could not be easier this time around. Just email me at dmarain "at" "gamil dot com" and include your full name, title, name and full address of your school (indicate if Middle or Secondary School). Be sure to include THIRD MATHNOTATIONS ONLINE CONTEST in the subject/title of the email. I will accept registrations up to Fri October 9th (exceptions can always be made!). • Your school can field up to two teams with from two to six members on each. (A team of one requires special approval). • Schools can be from anywhere on our planet and we encourage homeschooling teams as well. • The contest includes topics from 2nd year algebra (including sequences, series), geometry, number theory and middle school math. I did not include any advanced math topics this time around, so don't worry about trig or logs. • Questions may be multi-part and at least one is open-ended requiring careful justification (see example below). • Few teams are expected to be able to finish all questions in the time allotted. Teams generally need to divide up the labor in order to have the best chance of completing the test. • Calculators are permitted (no restrictions) but no computer mathematical software like Mathematica can be used. • Computers can be used (no internet access) to type solutions in Microsoft Word. Answers and solutions can also be written by hand and scanned (preferred). A pdf file is also fine. The following is a sample of the open-ended "proof-type" questions on the test: Explain why each of the following statements is true. Justify your reasoning carefully using algebra as needed. The square of an odd integer leaves a remainder of 1 when divided by (a) 2 (b) 4 (c) 8 I may post a sample solution to this or you can include this in your comments to this post.
{"url":"http://mathnotations.blogspot.com/2009_10_01_archive.html","timestamp":"2014-04-17T06:41:16Z","content_type":null,"content_length":"214676","record_id":"<urn:uuid:96db4a21-9837-4913-a5be-815b068895ef>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00387-ip-10-147-4-33.ec2.internal.warc.gz"}
Break Even Point October 17th 2009, 06:29 PM Break Even Point A firm is selling two products, chairs and barstools, each at $50 per unit. Chairs have a variable cost of $25 and barstools $20. Fixed cost is $20,000. If the sales mix is 1:4 (1 chair for every 4 bar stools) what is the break even point in sales? In units of chairs and barstools? What is the best way to calculate break even point here? I have two different solutions that check out? using a weighted approach BEP Sales = Units= 38,095.25/50 = 761.90 Chairs = 761.90 * .25 = 190.48 Bar Stools= 761.90*.75 = 571.42 Combined VC = (20*4)+25=105 Bar stools= 137.93*4=551.72 October 19th 2009, 10:56 AM Your second solution hits the correct answers. Having a fixed mix of 1:4 lets you think of this one in terms of 'packages', with each package consisting of 1 chair and 4 stools. Each package has a unit sales price of 250, and a unit variable cost of 105. I haven't vetted your first solution in detail, but at a glance it looks like you're trying to weight the chairs:stools at 25:75 instead of 20:80. Fix your weights accordingly and both of your approaches will probably agree. October 19th 2009, 11:41 AM Aweseome, that did it. Units = 689.6552 Charis=units*.20 = 137.31 Barstools = units*.80=551.72
{"url":"http://mathhelpforum.com/business-math/108656-break-even-point-print.html","timestamp":"2014-04-19T09:15:01Z","content_type":null,"content_length":"5058","record_id":"<urn:uuid:290e40b9-8fd7-4c87-810c-7534f0b9523c>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00038-ip-10-147-4-33.ec2.internal.warc.gz"}
expectation of a stochastic variable (Ito's formula) March 21st 2011, 05:32 AM expectation of a stochastic variable (Ito's formula) This question is from Mathematics for Finance, a chapter on Ito's Formula. I could apply Ito's to calculate the required variable, but I could not follow through the second part, to find its expectation. Perhaps I don't understand the Brownian motion (most likely that I don't). Unfortunately I don't have the solution to this question. PS $e^{rt}$ is a multiplication factor to find the value of a deposit/sum of money after t periods, given that the deposit pays interest rate r, and assuming continous compounding. The stochastic differential equation for the rate of inflation I is given by Find the equation satisfied by the real interest rate R defined to be $R=\frac{B}{I}$, where $B=e^{rt}$. [Hint: Consider $f(x,t)=\frac{e^{rt}}{x}$.] Show that I will apply Ito's formula to find dR. Let $R=\frac{e^{rt}}{I}, -> R=f(I,t)=\frac{e^{rt}}{I}$ $\frac{{\partial}^2{f}}{\partial{t}^2}=2e^{rt}\frac {1}{I^3}$ therefore, we calculate df(I,t) from the equation of dI by applying Ito's formula: $dR_t=[\frac{\partial{f}}{\partial{t}}(I,t)+\mu{I}\frac{\ partial{f}}{\partial{I}}+\frac{1}{2}\frac{{\partia l}^2{f}}{\partial{t}^2}\sigma^2I^2]dt+\frac{\partial{f}}{\partial{I}}\sigma{I}dz_t=(\ by dividing this by $R=\frac{e^{rt}}{I}$, we get Here where it gets tricky, $E(\frac{dR}{R})=E[(r-\mu+\sigma^2)dt]-E[\sigma{dz_t}]= ??? -0$ I know that E(dz)=0 as it is a Wiener process with mean=0 But what do I do with the remaining expectation on dt??? Somehow I need to show that $E[(r-\mu+\sigma^2)dt]=r-\mu+\sigma^2$ and I can only see that if I have $\int_0^{T=1}(r-\mu+\sigma^2)dt=r-\mu+\sigma^2$ BUT why T=1??? Is there an implicit assumption that values of $\sigma, \mu, r$ are given on a per annum basis, therefore T=1?
{"url":"http://mathhelpforum.com/advanced-statistics/175240-expectation-stochastic-variable-itos-formula-print.html","timestamp":"2014-04-16T20:34:42Z","content_type":null,"content_length":"9054","record_id":"<urn:uuid:8c0d4332-b33d-4207-9322-fc5458431567>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00099-ip-10-147-4-33.ec2.internal.warc.gz"}
Correlation network March 22, 2011 By Dzidorius Martinaitis I came up with an idea to draw correlation network to get a grasp about relationship between a list of stocks. An alternative way to show correlation matrix would be head map, which can have limitations with big matrices (>100). Unfortunately, ggplot2 package doesn’t have a easy way to draw the networks, so I was left with igraph or network. I tried both, but somehow chose igraph. If you want to master either package I highly recommend to start from theoretical part – it is very well written and it will save your time trying to understand the package’s conception. Here is correlation matrix of stocks, which correlation coef. is more than 0.5: cor_mat<- matrix( runif(100), nr=10 ) cor_mat[ lower.tri(cor_mat, diag=TRUE) ]<- 0 cor_mat[ abs(cor_mat) < 0.5]<- 0 graph <- graph.adjacency(cor_mat>0.5, weighted=TRUE, mode="upper") E(graph)[ weight>0.7 ]$color <- "black" E(graph)[ weight>=0.65 &amp; weight<0.7 ]$color <- "red" E(graph)[ weight>=0.6 &amp;weight<0.65 ]$color <- "green" E(graph)[ weight>=0.55 &amp;weight<0.6 ]$color <- "blue" E(graph)[ weight<0.55 ]$color <- "yellow" V(graph)$label<- seq(1:10)#V(graph)$name graph$layout <- layout.fruchterman.reingold factor<-as.factor(cut(E(graph)$weight*10,c(4,5,6,7,8),labels=c(1,10,20,30))) png('corr_network.png',width=500) plot(decompose.graph(graph)[[which.max(sapply(decompose.graph(graph), vcount))]],edge.width =as.numeric(factor)*1.5,frame=T) legend("bottomleft", title="Colors", cex=0.75, pch=16, col=c("black", "blue","red", "green","pink"), legend=c(">70%", "65-70","60-65","55-60","50-55"), ncol=2) The code can be found on github. daily e-mail updates news and on topics such as: visualization ( ), programming ( Web Scraping ) statistics ( time series ) and more... If you got this far, why not subscribe for updates from the site? Choose your flavor: , or
{"url":"http://www.r-bloggers.com/correlation-network/","timestamp":"2014-04-16T07:29:53Z","content_type":null,"content_length":"50210","record_id":"<urn:uuid:896d07ee-8423-4955-8e76-cfbf2a015fed>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00036-ip-10-147-4-33.ec2.internal.warc.gz"}
Posts by Posts by zane Total # Posts: 19 Calculate the volume of 4.0ml of nitogen gas at 754mmhg and 25 degrees celcius Spanish help please. I'm not very good at spanish. Will somebody please help? 4. Which expression is used to talk about what you have just done? A.) venir de B.) hacer la tarea C.) hacer un viaje D.) acabar de 5. Vengo de la clase. ¿Qué acabo de hacer? A.) Acabas de bailar. B.) A... use transformation to rescue animals, start out at the x. the animal is at the bull's eye (-4,2) translate 1 unit rotate 90 degree counterclockwise about the origin rotate 180 degree translate 3 units down reflect in the y-axis find the right order When 8 children go on a picnic, 6/8 of them wear jeans, how many wear jeans dark green is the unit rod 6/6 How many days in 2 years engineering science a boat can sail at 27 m.s in still water. the boat must cross a river that flows at 9 m.s towards the east . if the boat crosses the river by going up stream at an angle of 23 south, what will the size and angle of the resulting velocity be? 23 degrees south a boat can sail at 27 m.s in still water. the boat must cross a river that flows at 9 m.s towards the east . if the boat crosses the river by going up stream at an angle of 23 south, what will the size and angle of the resulting velocity be? find the general solution of the following first order differential equation. u(t): du/dt =(u/(t+2t)) pls show all working For each function, find a. f''(x), and b. f''(3). f(x)= 1 --- 6x^2 Factor out 14xy(-)/14xy(3x^4y^7) now cancel out 14xy your left with -1/3x^4y^7 College Algebra f(x)=-(-2)^3+112-8(-2)+7 f(-2)= 143 sam makes 70% of his free throws what is the probability that sam makes 2 free throws in a row If 250,148,571 is the total population what percent is 136,800,000? Blank have vibrating particles that dont have enough energy to move out of position? What vibrating particles dont have enough energy to move out of position? Two objects moving with a speed v travel in opposite directions in a straight line. The objects stick together when they collide, and move with a speed of v/10 after the collision. (a) What is the ratio of the final kinetic energy of the system to the initial kinetic energy? 1... social studies How did the Pilgrims become friends with the Wampanoag?
{"url":"http://www.jiskha.com/members/profile/posts.cgi?name=zane","timestamp":"2014-04-18T13:48:36Z","content_type":null,"content_length":"9239","record_id":"<urn:uuid:4aa27a22-2034-4f9b-8c33-27a6bde5997a>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00339-ip-10-147-4-33.ec2.internal.warc.gz"}
A Love for Teaching Spring Break is a fabulous thing! I hope that everyone either had or is having a wonderful break! My team and I have found that this group of second graders are very competitive! I came up with this game (using water bottle caps of course) that my competitive kids will love! Students will use a timer, they will set the timer for one minute or two minutes and see how many math facts they can find and record during that time. They can race against a friend or just race against themselves trying to get more math facts in a smaller amount of time. They can do addition or subtraction! Another way to do it would be to use a stopwatch and have students see how long it takes them to come up with the 30 facts and then see if they can beat their own score over time. This would work great as a fast finisher for high students or as extra math practice for those students who struggle. All the students in between will love it too! I made up the addition and subtraction recording forms. They can be found : ) 12 comments: 1. Pinned this! Love the simplicity of the idea. Thanks for sharing! Luckeyfrog's Lilypad 1. Thanks for following my blog! If your class does postcards too, would you like to exchange a postcard? My class can send one to your school, and you could send one to ours! Luckeyfrog's Lilypad:) 2. Exchanging postcards would be great! I'm sure my students would love that! If you e-mail your school address to katrinap89@gmail.com we will send a postcard your way! 2. Love this! I am trying out some new things for my math block this coming fall, and I am excited to add this to it. Thanks for sharing! 3. This is a great idea I will be saving caps this summer. Appreciate the free printable too. 4. I am going to make this for my third and fourth graders. I am also going to add some basic multiplication and division facts. Thanks for the tip. 5. LOVE this green, math idea! I'll be sharing at my FB. Thanks for sharing, and I am now following! The Green Classroom 6. Thanks for the great idea! I have all of my family saving bottle caps for me now! :) Do you happen to have the printable for multiplication, too? I can make one but thought I'd ask you first. 7. You're Welcome! I don't have a multiplication recording sheet, sorry. 8. How do you have a race? Are the same 2 kids using the numbers at the same time? DOes one go first and the other 2nd, and if that is the case, won't they see the equations the first person made. I love the idea but would like some input on the best way to do it. 9. Which numbers are you using for the bottle caps? How many sets of each number? Do you have an answer key for the facts so the students can check their work? 10. Similar to "Anonymous" I too am curious about the nuts and bolts of how to play. And what you found to be successful in setting up the game, like how many bottle caps? what number range? did you have repeats of any numbers? etc. Thanks for the recording sheet. I love the accountability it provides.
{"url":"http://alove4teaching.blogspot.ca/2012/04/math-facts-race.html","timestamp":"2014-04-18T15:39:23Z","content_type":null,"content_length":"89657","record_id":"<urn:uuid:7c8e57f6-0107-4f72-b46b-bd24584c2a06>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00526-ip-10-147-4-33.ec2.internal.warc.gz"}
1.1.6 The External Field. Zeeman Energy Next: 1.1.7 Magnetoelastic interactions Up: 1.1 Micromagnetic Free Energy Previous: 1.1.5.1 Magnetostatic energy Contents Until now, we have treated the case of magnetic body not subject to external field. Therefore, all the energy terms introduced in the previous sections can be regarded as parts of the Helmholtz free energy functional. When the external field is considered, it is convenient to introduce the Gibbs free energy functional. In this respect, the additional term (see Eq. (1.17)) related to the external field , is itself a long-range contribution too. In fact, it can be seen as the potential energy of a continuous magnetic moments distribution [15] subject to external field : This energy term is referred in literature to as Zeeman energy. Massimiliano d'Aquino 2005-11-26
{"url":"http://wpage.unina.it/mdaquino/PhD_thesis/main/node16.html","timestamp":"2014-04-19T07:09:32Z","content_type":null,"content_length":"5002","record_id":"<urn:uuid:7352d9d6-0063-4f3f-9e49-b60adbe02c4b>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00073-ip-10-147-4-33.ec2.internal.warc.gz"}
an increase in A or in multifactor productivity could also increase Q / L or output per worker. To concentrate attention on what happens to Q / L or output per worker (and hence, unless the employment ratio changes, output per capita), Solow rewrote the Cobb-Douglas production function in what we shall refer to as per capita form: Q / L = A K ^ a L ^ b - 1 = A K ^ a / L ^ 1 - b since multiplying by L ^ b - 1 is the same as dividing by L ^ 1 - b . Also, since we assumed that a + b = 1, a = 1 - b. Q = A K ^ a / L ^ a = A ( K / L ) ^ a Defining q = Q / L and k = K / L, that is, letting small letters equal per capita variables , we have which is the key formula we will work with. We will examine how the model works when growth comes through capital accumulation, and how it works when growth is due to innovation. Growth by capital accumulation In addition to the production function, we need two other pieces of information: 1. the savings function -- how much of output do people in our model economy save? The simplest assumption (which we will examine in more detail later in the course, and will conclude can be a fairly good representation of people's behavior) is that people save a given fraction of output. For the sake of having a specific example, we assume that people save 1 / 4 of output, or what comes to the same thing, 25 cents for every dollar of income. The savings function is therefore: s = 0.25 q 2. the equilibrium condition . We shall find that if capital accumulation is the only source of growth, the economy will approach an equilibrium or steady state . It will reach the steady state when savings is just sufficient to replace the depreciated capital stock. If we assume that in each time period capital depreciates totally, the equilibrium condition is simply s = k Note that if depreciation were only 10 percent of capital stock, the equilibrium condition would be s = 0.10 k . Although this is a more realistic figure for yearly depreciation, we assume 100 percent depreciation for simplicity -- and if you are troubled by the lack of realism, you may think of our time periods as decades rather than years. Let A = 100 and a = 0.5 in the Solow per capita production function. Note that a = 0.5 means "take the square root of k" and A = 100 means "then multiply it by 100" to get the ouput per worker. That is, let our production function be: q = 100 k ^ 0.5 Consider what happens if we begin with 100 units of capital per worker. We can use the production function to calculate that q = 1000. The next step is to use the savings function to calculate how much of this output is saved. If s = 0.25 q then 250 units per capita of output are saved -- and the savings of one period become the capital of the next period. Note that this means in the next period the capital stock will have increased from 100 to 250 . Since the production function is unchanged, the output next period will be q = 100 (250) ^ 0.5 = 1581 We again note that savings is 0.25 of output; and .25 x 1581 = 395.3, so that savings next period will be 395.3. Therefore capital in the third period will be 395.3, and output in the third period will be: q = 100 (395.3) ^ 0.5 = 1988 This procedure can be continued as long as you can punch a calculator; the results for the first 7 periods are: │ Period │ Capital │ Output │ Savings │ Change in │ │ │ │ │ │ Output │ │ 1 │ 100 │ 1000 │ 250 │ ---- │ │ 2 │ 250 │ 1581 │ 395.3 │ 581 │ │ 3 │ 395.3 │ 1988 │ 497 │ 407 │ │ 4 │ 497 │ 2229 │ 557 │ 241 │ │ 5 │ 557 │ 2360 │ 590 │ 131 │ │ 6 │ 590 │ 2429 │ 607 │ 69 │ │ 7 │ 607 │ 2464 │ 616 │ 35 │ Note that output grows throughout, but that the change in output slows down -- since the production function exhibits diminishing returns, this is not surprising. Will the growth stop? That is, will output converge to a steady state? The answer is yes . We can find steady state equilibrium by making use of the equilibrium condition: s = k Substitute for s the savings function to obtain: 0.25 q = k Substitute for output the production function to obtain: 0.25 ( 100 k ^ 0.5 ) = k Finally, divide through by k ^ 0.5 to obtain: k ^ 0.5 = 25 and square both sides to get the equilibrium capital stock k = 625 If the equilibrium capital stock is 625, equilibrium output (found using the production function q = 100 k ^ 0.5 ) will be: q = 2500 Note that if savings is 1 / 4 of output, this means that equilibrium savings is 625 -- just enough to replace the capital stock next period, and to keep the economy in a steady-state with output at 2500 and capital stock of 625 ever after. Predictions of the model If the Solow model is correct, and if growth is due to capital accumulation , we should expect to find 1. Growth will be very strong when countries first begin to accumulate capital, and will slow down as the process of accumulation continues. Japanese growth was stronger in the 1950s and 1960s than it is now. 2. Countries will tend to converge in output per capita and in standard of living. As Hong Kong, Singapore, Taiwan (etc) accumulate capital, their standard of living will catch up with the initially more developed countries. When all countries have reached a steady state, all countries will have the same standard of living (at least if they have the same production function, which for most industrial goods is a reasonable assumption). Certainly there is some evidence favoring these predictions. However, there are some problems as well: 1. The US growth rate was lower , at least on a per capita basis, in the 19th century than in the twentieth century. 2. The Soviet Union under Stalin saved a higher percentage of national income than the US. Because of the higher savings rate and because it started from a lower level of capital, it should have caught up very rapidly. It did not. 3. Less developed countries, with some exceptions -- such as Taiwan, Korea, Singapore and Hong Kong -- are not in general catching up to the developed countries. Indeed, in many cases, the gap is increasing . Do these facts mean that the Solow model is wrong? Not necessarily, since increase in output per capita can be due to an increase in multifactor productivity as well as an increase in capital per
{"url":"http://www.pitt.edu/~mgahagan/Solow.htm","timestamp":"2014-04-18T11:31:59Z","content_type":null,"content_length":"9921","record_id":"<urn:uuid:7447d785-3417-40ef-9095-e8c4cf9cc838>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00136-ip-10-147-4-33.ec2.internal.warc.gz"}
Posts about numbers on The Math Less Traveled Tag Archives: numbers This has been making the rounds of the math blogosphere (blathosphere?), but in case you haven’t seen it yet, check out Cristóbal Vila’s awesome short video, Nature by Numbers. Especially appropriate given that I have been writing about Fibonacci numbers … Continue reading [This is the ninth, and, I think, final in a series of posts on the decadic numbers (previous posts: A curiosity, An invitation to a funny number system, What does "close to" mean?, The decadic metric, Infinite decadic numbers, More … Continue reading To recap: we’ve now defined the decadic metric on integers by where is not divisible by 10, and also . According to this metric, two numbers are close when their difference is decadically small. So, for example, and are at … Continue reading Continuing my series of posts exploring the decadic numbers… in my previous post, I explained that we will define a new “size function”, or metric, different from the usual “absolute value”, and written . Two numbers will be “close to” … Continue reading Consider the equation Solving this equation is no sweat, right? Let’s do it. First, we subtract from both sides: Now we can factor an out of the left side: Now, if the product of two things is zero, one of … Continue reading Here’s a neat problem from Patrick Vennebush of Math Jokes 4 Mathy Folks: Append the digit 1 to the end of every triangular number. For instance, from 3 you’d get 31, and from 666 you’d get 6,661. Now take a … Continue reading Several months ago, Matthew Watkins sent me a review copy of his new book, Secrets of Creation Volume One: The Mystery of the Prime Numbers. It’s taken me a while to get around to reviewing it, but not for lack … Continue reading Math Jokes 4 Mathy Folks A few months ago, Patrick Vennebush was kind enough to send me a review copy of his new book, Math Jokes 4 Mathy Folks. It’s a treasure-trove of math-related jokes with a huge range of … Continue reading
{"url":"http://mathlesstraveled.com/tag/numbers/","timestamp":"2014-04-18T23:19:58Z","content_type":null,"content_length":"66723","record_id":"<urn:uuid:c065279c-f562-4cf9-97f6-16de95b296f2>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00291-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions Math Forum Ask Dr. Math Internet Newsletter Teacher Exchange Search All of the Math Forum: Views expressed in these public forums are not endorsed by Drexel University or The Math Forum. Topic: Permutations & Combinations Replies: 1 Last Post: Jan 9, 2005 5:49 PM Re: Permutations & Combinations Posted: Jan 9, 2005 5:49 PM In article <v3r2u0dl4v9j4afof0fdfki0jvdi5cup85@4ax.com>, Jerry Beeler wrote: > I've been teaching 9th grade math for so long that ... well ... I think that > Christ was a teen when I started. > In any case, I'm now faced with some high level math and need a little help. > I need a good, easy-to-understand explanation of "combinations" versus > "permutations" with some example(s). > Do 'appreciate it. You can have a group of distinct kids and a (same number) group of distinct tasks (roles in a play, positions on a baseball team, ...). The number of different ways the kids can be assigned tasks is a permutation. You can have a group of n distinct kids and have to choose a committee of k members (k <= n). The number of ways the committee can be chosen is a combinations "n choose k". There should be hundreds of examples in any book on applied discrete math, as combinatorics is usually a chapter or two in such books. Kevin Karplus karplus@soe.ucsc.edu http://www.soe.ucsc.edu/~karplus Professor of Biomolecular Engineering, University of California, Santa Cruz Undergraduate and Graduate Director, Bioinformatics (Senior member, IEEE) (Board of Directors, ISCB) life member (LAB, Adventure Cycling, American Youth Hostels) Effective Cycling Instructor #218-ck (lapsed) Affiliations for identification only. submissions: post to k12.ed.math or e-mail to k12math@k12groups.org private e-mail to the k12.ed.math moderator: kem-moderator@k12groups.org newsgroup website: http://www.thinkspot.net/k12math/ newsgroup charter: http://www.thinkspot.net/k12math/charter.html
{"url":"http://mathforum.org/kb/message.jspa?messageID=3419231","timestamp":"2014-04-19T23:41:00Z","content_type":null,"content_length":"15910","record_id":"<urn:uuid:6f42068a-518b-482c-a35e-d4a23f829ecf>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00245-ip-10-147-4-33.ec2.internal.warc.gz"}
Linear Programming - canonical form and simplex methods July 31st 2007, 12:42 AM #1 Jun 2007 Linear Programming - canonical form and simplex methods Hi All Consider the following linear programming model: Minimize Z= -3x1+5x2-6x3+10 subject to: a). Express model in canonical form and show that the all slack point is feasible b). Perform by hand one iteration of the simplex method for this model, starting from the all slack point. state the new basis list and the new feasible point. c). Perform by hand the next iteration of the simplex method to find the next feasible point and show that this is indeed the optimum, Follow Math Help Forum on Facebook and Google+
{"url":"http://mathhelpforum.com/advanced-algebra/17353-linear-programming-canonical-form-simplex-methods.html","timestamp":"2014-04-16T08:09:14Z","content_type":null,"content_length":"30088","record_id":"<urn:uuid:82c7f6da-88b4-4dc9-9b26-11b5bdefbbf6>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00225-ip-10-147-4-33.ec2.internal.warc.gz"}
Calque of complex, by Hermann Weyl. Complex comes from the Latin complexus (“braided together”) (from com- (“together”) + plectere (“to weave, braid”)), while symplectic comes from the corresponding Ancient Greek sym-plektos (συμ (sym), variant of σύν (syn) + πλεκτικός (plektikós), from πλέκω (plekō)). In both cases the suffix comes from Proto-Indo-European *plek-. Previously, the “symplectic group” had been called the “line complex group”. symplectic (not comparable) 1. (mathematics) Describing the geometry of differentiable manifolds equipped with a closed, nondegenerate 2-form External linksEdit Last modified on 3 August 2013, at 15:45
{"url":"http://en.m.wiktionary.org/wiki/symplectic","timestamp":"2014-04-21T10:07:52Z","content_type":null,"content_length":"18708","record_id":"<urn:uuid:c5b345f9-e0fb-4935-90f2-9fac913347da>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00267-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: To graph the equation y = 1/2-3 using the slope and y-intercept, you would have to do which of the following? • one year ago • one year ago Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/5089e4d1e4b077c2ef2e0b4e","timestamp":"2014-04-20T14:02:28Z","content_type":null,"content_length":"51807","record_id":"<urn:uuid:0dd420ea-d684-4eb4-882c-7e79225d277f>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00054-ip-10-147-4-33.ec2.internal.warc.gz"}
Percentage Questions These questions are somewhat harder, but give them your best shot! 1. Ian earns £420 a week after a 5% rise. What was his pay before? 2. An Iyonix PC is sold for £1200 at a reduction of 20% on its recommended retail price. What was the computer's original price? 3. Kenny has £3200 in a savings account. After a year, the bank pays him interest increasing his balance to £3360. What percentage rate was applied to the account? 4. On my sister's 15th birthday, she was 159 cm in height, having grown 6% since the year before. How tall was she the previous year?
{"url":"http://www.gcse.com/maths/percent_questions.htm","timestamp":"2014-04-21T02:00:11Z","content_type":null,"content_length":"4601","record_id":"<urn:uuid:c33340c8-4eaa-49c8-b094-ff39d052d97a>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00292-ip-10-147-4-33.ec2.internal.warc.gz"}
Pacific Palisades Calculus Tutors ...If you think you might need a chemistry tutor, stop thinking and find one! The sooner the better for this subject. Geometry is a very different subject from the majority of mathematics (pre-algebra through differential equations/numerical methods) and can often make even the brightest math students stumble. 18 Subjects: including calculus, chemistry, algebra 2, SAT math ...I have a passion for math and teaching, and I am confident that I can help my students accomplish their goals in a much more efficient way than if they had to study alone. Classes that I teach at the community college level range from Prealgebra to Calculus. Whether the student I tutor is an elementary school kid, middle school, or high school student or adult college student, I can 16 Subjects: including calculus, French, geometry, piano ...I need to show the students how to solve the problems on their own. That way, when a test comes around they will know the concepts and how to solve their problems for themselves. I try to make it fun like a puzzle that needs to be solved. 18 Subjects: including calculus, chemistry, geometry, GRE ...I also recognize which advisors were able to help me and which ones were not. In turn, I've learned how to be patient while remaining resolute in jointly pursuing a scholastic goal. Lastly, I am in a perfect position to sympathize with the challenges facing students, as I myself am about to become one again! 58 Subjects: including calculus, English, reading, writing ...My other great passion lies in literature, words, and language. I originally intended to major in Classics and to that end I took several Latin courses as well as Ancient Greek at Columbia. I love the craft of writing and logic and therefore enjoy assisting with essays greatly. 38 Subjects: including calculus, chemistry, English, physics
{"url":"http://www.algebrahelp.com/Pacific_Palisades_calculus_tutors.jsp","timestamp":"2014-04-20T05:49:57Z","content_type":null,"content_length":"25513","record_id":"<urn:uuid:039c76dc-fcec-4a87-a9d2-6ad18d455ebb>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00057-ip-10-147-4-33.ec2.internal.warc.gz"}
2006 Proposed Stevens Institute/Darmstadt Technical University IRES/REU Proposed projects for the 2006 Proposed Stevens Institute/Darmstadt Technical University IRES/REU Project #: STEVENSS2006-01 Mentor: Johannes Buchmann, buchmann at cdc.informatik.tu-darmstadt.de, Darmstadt Technical University Project Title: Real-life applications of digital signature schemes Digital signatures are a mechanism by which someone, called a signer, can electronically "sign" a message so that receivers can verify that the message came from the signer and was not changed in transit. Most digital signatures rely on a cryptographic hash function for part of their operations. This project will explore one or more ways to makes digital signatures more efficient and to use these improvements in real world applications. Two examples are described below. Problem 1: The Merkle signature scheme, introduced by Ralph Merkle in 1990, is efficient and provably secure. However, its architecture is different from most signature schemes, in that it only allows a limited number of signatures. The goal here is to explore the use of the Merkle scheme in real applications such as SSL, https, etc. Problem 2: The Courtois-Finiasz-Sendries signature scheme, which is based on coding theory, computes the signature of a document m as the decoding of the hash of h(m xor i) where h is a hash function that hashes to vectors and i is a counter that makes h(m xor i) decodable. Unfortunately, one needs to inspect 100 million counters before a decodable vector is found. This signature algorithm is therefore not very efficient. The goal here is to modify the scheme to make it more efficient. Project #: STEVENS2006-2 Mentors: Alexander May, may at informatik.tu.darmstadt.de, Darmstadt Technical Institute and Susanne Wetzel, swetzel at cs.stevens.edu, Stevens Institute of Technology Project Title: Applications of computational algebraic number theory in cryptography Cryptography has strong ties to computational algebraic number theory. Examples in public key cryptography include the RSA and ElGamal cryptosystems which are based on the difficulty of factoring and solving the discrete logarithm problem, respectively. Lattice theory has not only proven to be an effective tool for cryptanalysis but is also expected to give rise to cryptographic primitives that sustain their strength even in the context of quantum computing. Algebraic structures have also taken a prominent place in secret key cryptography with the Advanced Encryption Standard. The exact problem will be chosen from the general topic of computational algebraic number theory in cryptography. Depending on the interests and expertise of the student, this project can be more focused on foundational mathematical results, or on implementation and experimentation. A background in mathematics, computer science, and programming is desired. Contacting the Center Document last modified on November 3, 2005.
{"url":"http://reu.dimacs.rutgers.edu/2006/proposedstevens.html","timestamp":"2014-04-17T09:34:44Z","content_type":null,"content_length":"4476","record_id":"<urn:uuid:c5e2f389-60ab-4463-872b-0dec04220eb8>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00459-ip-10-147-4-33.ec2.internal.warc.gz"}
User:Ben G. Fitzpatrick From OpenWetWare Contact Info Ben G. Fitzpatrick LMU Department of Mathematics 2726 University Hall 1 LMU Drive Los Angeles, CA 90045 USA I work in the Mathematics Department at Loyola Marymount University, where I serve as the Clarence J. Wallen, S.J., Chair of Mathematics. I learned about OpenWetWare from Dr. Kam Dahlquist of LMU, and I've joined to participate in Dahlquist's Lab wiki and to co-manage our course, Biomathematical Modeling (Math 388/Biol 398). Current Schedule • TR, 0925 - 1040. Math 388, Biomathematical Modeling, Seaver 120. • TR, 1050 - 1205. Math 498, Research, Seaver 202. • MW, 1400 - 1500. Office Hours, UH 2726 • TR, 1330 - 1430. Office Hours, UH 2726 • Office hours also by appointment. • 1988, PhD, Brown University, Division of Applied Mathematics • 1986, ScM, Brown University, Division of Applied Mathematics • 1983, MPS (Master of Probability and Statistics), Auburn University • 1981, BS, Auburn University, Department of Mathematics Research interests 1. Mathematical Modeling in Biology and Ecology. I am especially interested in deterministic and stochastic approaches to modeling the dynamics of living systems. Current projects include transcriptional networks controlling environmental stress in Saccharomyces cerevisiae and energy budgets and reproductive success for Argiope trifasciata. More theoretical work is on-going in rate distribution models for understanding diversity and survival-of-the-fittest in populations. 2. Statistical Analysis of Biological Systems under Stress. I am working with LMU biologists on various problems of detecting the response of bacteria, plants, and animals to environmental stresses, particularly heavy metal contaminants, in the Ballona Wetlands of Los Angeles. 1. , R., A. Ackleh, B. Fitzpatrick, G. Jacquez, J. Thibodeaux, R. Rommel, and N. Simonsen (2009) “Ecosystems Modeling of College Drinking: Development of a Deterministic Compartmental Model,”, J. Studies Alc Drugs, 70, 805-821. 2. , A., B. Fitzpatrick, R. Scribner, N. Simonsen, and J. Thibodeaux (2009) “Parameter Estimation in Ecosystems Modeling of College Drinking,” Math Comp Model, 50 (3/4), 2009, pp. 481-497. 2. , A., Fitzpatrick, B.G., and Theime, H. (2005) Rate Distributions and Survival of the Fittest. Discrete and Continuous Dynamical Systems Series B, 5: 917–928. 2. , A.S., Marshall, D., Fitzpatrick, B.G., and Heatherly, H. (1999) Survival of the Fittest in a Generalized Logistic Model. Math Models and Methods in Appl. Sci., 9: 1379–1391. 3. , B. G. (2008) “Statistical Considerations and Techniques for Understanding Physiological Data, Modeling, and Treatments,” Cardio Eng, Vol. 8, No. 2. pp. 135-143. 3. , B.G. (1991) Bayesian Analysis in Inverse Problems. Inverse Problems, 7: 675–702. 3. , B.G. (1995) Statistical Tests of Fit in Estimation Problems for Structured Population Modeling. Quart. Appl. Math., 53: 105–128. Useful links
{"url":"http://openwetware.org/index.php?title=User:Ben_G._Fitzpatrick&oldid=485739","timestamp":"2014-04-19T15:14:07Z","content_type":null,"content_length":"22415","record_id":"<urn:uuid:405ed7a6-d849-4a76-bdb7-ac674a8a504f>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00042-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Doctor To: All Dr. Math Doctors From: Ian Underwood Subject: Ask Dr. Math in April Hi Math Doctors, During April, we received 8383 (about 279 per day), of which we answered 3952 (about 131 per day). That's just shy of 4000 answers, and just shy of 50%. We had an unusually high number of math doctors (37) contributing this month. Thanks to all of you, with special thanks to the following: 20/day | | Peterson 15/day | 10/day | | Rick 5/day | Jerry, Mitteldorf, Paul, Schwa | Anthony, Tom | Douglas, Floor, Jubal, Roy | Jeremiah, Tim 1/day | Allan, Ann, Fenton, Jaffee, Pete, Shawn, White, Wilkinson 1/wk | Achilles, Greenie, Jody, Jordi, Toughy, Twe, Wolfson Welcome to new intern Doctor Meryem! New archive format Last month I told you about some preview pages for the new Dr. Math archives. As of yesterday, those pages are live! For a quick overview of the new functionality, take a look at We've already received some favorable comments from users, and we hope that you'll find them easier to use than the old pages. Everything is reachable from the old URLs, so you shouldn't have to change any of your bookmarks. 20-questions interface One of the things we're working on is an additional archive interface that can be used by people who don't really have the vocabulary to describe their questions in a way that would let them use the standard browsing and searching interfaces efficiently. (A typical target user would be someone who is trying to solve a pair of simultaneous linear equations, but doesn't know how to say that.) The idea is to use a network of 'leading questions' to help patients either zero in on appropriate archived answers or FAQs, or else provide information that will help math doctors identify interesting questions more easily in triage. This work is still in the early stages, but at some point we're going to want to do some preliminary internal testing, using math doctors as subjects. If any of you would be interested in participating in these tests, please let me know so I can contact you when we're ready. That's it for April! Go forth, be fruitful, and teach kids to Dr. Ian Attending Physician Ask Dr. Math
{"url":"http://mathforum.org/dr.math/office_help/mathdoc.news/mathdocnews.apr02.html","timestamp":"2014-04-20T20:29:35Z","content_type":null,"content_length":"3337","record_id":"<urn:uuid:b33ec6c6-cd6e-4c15-9644-c5faf404c40c>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00210-ip-10-147-4-33.ec2.internal.warc.gz"}
Laws for Conjunctions and Disjunctions in Interval Type 2 Fuzzy Sets Bustince, H. and Montero de Juan, Francisco Javier and Barrenechea, E. and Pagola, M. (2008) Laws for Conjunctions and Disjunctions in Interval Type 2 Fuzzy Sets. In 2008 IEEE INTERNATIONAL CONFERENCE ON FUZZY SYSTEMS, VOLS. IEEE Xplore Digital Library, 1-5 . IEEE, Hong Kong, Peoples R China, pp. 1615-1620. ISBN 978-1-4244-1818-3 Restricted to Repository staff only until 2020. Official URL: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=4630587 In this paper we study in depth certain properties of interval type 2 fuzzy sets. In particular we recall a method to construct different interval type 2 fuzzy connectives starting from an operator. We further study the law of contradiction and the law of excluded middle for these sets. Furthermore we analyze the properties: idempotency, absorption, and distributiveness. Item Type: Book Section Additional IEEE International Conference on Fuzzy Systems. JUN 01-06, 2008 Uncontrolled Connectives; Inference; Conorms; Systems; Norms Subjects: Sciences > Mathematics > Operations research ID Code: 16906 References: C. Alsina, E. Trillas and L. Valverde, “On some Logical Connectives for Fuzzy Sets Theory”, Journal of Mathematical Analysis and Applications,vol. 93, pp. 15–26, 1983. T. Arnauld, S.Tano, “Interval-valued fuzzy backward reasoning”, IEEE Transactions on Fuzzy Systems, 3(4)(1995), 425-437. K. Atanassov , “Intuitionistic fuzzy sets”, VII ITKR’s Session, Deposed in Central Sci. Techn. Library of Bulg. Acd. of Sciences, Sofia, pp.1684–1697, 1983. K. Atanassov, “ Intuitionistic fuzzy sets. Theory and Applications”,Physica-Verlag, Heidelberg, 1999). E. Barrenechea, Image Processing with Interval-valued Fuzzy Sets.Edge Detection. Contrast, Ph. D. Dissertation,Universidad Publica de Navarra, 2005. P. Burillo and H. Bustince, “Entropy on intuitionistic fuzzy sets and on interval-valued fuzzy sets”, Fuzzy Sets and Systems, vol. 78, pp.305–016, 1996. P. Burillo and H. Bustince, “Orderings in the referential set induced by an intuitionistic fuzzy relation”, Notes on IFS, vol. 1, pp. 93–103, 1995. P. Burillo and H. Bustince, “Construction theorems for intuitionistic fuzzy sets”, Fuzzy Sets and Systems, vol. 84, pp. 271–281, 1996. [9] H. Bustince and P. Burillo, “Interval-valued fuzzy relations in a set structure”, The Journal of Fuzzy Mathematics, vol. 4, pp. 765–785,1996. H. Bustince, “Indicator of inclusion grade for interval-valued fuzzy sets. Application to approximate reasoning based on interval-valued fuzzy sets”, International Journal of Approximate Reasoning, vol. 23,pp. 137–209, 2000. H. Bustince and P. Burillo, “Mathematical analysis of interval-valued fuzzy relations: Application to approximate reasoning”, Fuzzy Sets and Systems, vol. 113, pp. 205–219, 2000. H. Bustince and P. Burillo, “Structures on Intuitionistic Fuzzy Relations”,Fuzzy Sets and Systems, vol. 78, pp. 293–303, 1996. H. Bustince, J. Kacprzyk and V. Mohedano, “Intuitionistic Fuzzy Generators.Application to Intuitionistic Fuzzy Complementation”, Fuzzy Sets and ystems, vol. 114, pp. 485–504, 2000. H. Bustince, J. Montero, M. Pagola, E. Barrenechea and D. Gomez “A survey of interval-valued fuzzy sets,” in (Chapter 21) Handbook of Granular Computing, Edited by J. Wiley & Sons, H. Bustince, F. Herrera and J. Montero, Editors, Fuzzy Sets and Their Extensions: Representation, Aggregation and Models, Springer Verlag,Berlin, 2007. S.M. Chen, W.H. Hsiao, W.T. Jong, “Bidirectional approximate reasoning based on interval-valued fuzzy sets”, Fuzzy Sets and Systems,vol. 91, pp. 339–353, 1997. C. Cornelis, G. Deschrijver and E. Kerre, “Classification of Intuitionistic Fuzzy Implicators: an Algebraic Approach”, Proc. of the 6th Joint Conference on Information Sciences, Research Triangle Park, North Carolina, USA, 2002, pp. 105–108. C. Cornelis, G. Deschrijver and E. Kerre, “Intuitionistic Fuzzy Connectives Revisited”, Proc. of the Ninth International Conference IPMU 2002, Annecy-France July 2002, pp. G. Deschrijver,C. Cornelis and E.E. Kerre, “On the representation of Intuitionistic Fuzzy T-Norms and T-Conorms”, IEEE Transactions on Fuzzy Systems, vol. 12, pp. 45–61, 2004. D. Dubois and H. Prade, Fuzzy Sets and Systems: Theory and Applications, New York: Academic, 1980. F. Esteva, E. Trillas and X. Domingo, “Weak and strong negation function for fuzzy set theory”, Proc. Eleventh IEEE International Symposium on Multivalued Logic, Norman, Oklahoma, 1981, pp. 23–27. J. Fodor and M. Roubens,“Fuzzy Preference Modelling and Multicriteria Decision Support”, in: Theory and Decision Library, Kluwer Academic Publishers, 1994. J. Fodor, “On fuzzy implication operators”, Fuzzy Sets and Systems,vol. 42, pp. 293–300, 1991. M.B. Gorzalczany, “A method of inference in approximate reasoning based on interval-valued fuzzy sets”, Fuzzy Sets and Systems, vol. 21,pp. 1–17, 1987. M.B. Gorzalczany, “Interval-valued fuzzy controller based on verbal model of object”, Fuzzy Sets and Systems, vol. 28, pp. 45–53, 1988. M.B. Gorzalczany, “Interval-valued fuzzy inference involving uncertain (inconsistent) conditional propositions”, Fuzzy Sets and Systems,vol. 29, pp. 235–240, 1989. M.B. Gorzalczany, “An interval-valued fuzzy inference method. Some basic properties”, Fuzzy Sets and Systems, vol. 31, pp. 243–251, 1989. S. Jenei, “A more efficient method for defining fuzzy connectives”,Fuzzy Sets and Systems, vol. 90, pp. 25–35, 1997. J.M. Mendel, Robert I. Bob John, “Type-2 fuzzy sets made simple”,IEEE Transactions on Fuzzy Systems, vol. 10, pp. 117–127, 2002. J.M. Mendel, H. Wu, “Type-2 Fuzzistics for Symmetric Interval Type-2 Fuzzy Sets: Part 1, Forward Problems”, IEEE Transactions on Fuzzy Systems, vol. 14, pp. 781–792, 2006. J.M. Mendel, “Uncertain Rule-Based Fuzzy Logic Systems”, Editor Prentice-Hall, Upper Saddle River, NJ, 2001. J.M. Mendel, “Advances in type-2 fuzzy sets and systems”, Information Sciences, vol. 177, pp. 84–110, 2007. J. Montero, D. Gomez and H. Bustince, “On the relevance of some families of fuzzy sets”, Fuzzy Sets and Systems, vol. 158, pp. 2429–2442, 2007. S. V.Ovchinnikov, M. Roubens, “On strict preference relations”, Fuzzy Sets and Systems, vol. 43, pp. 319–326, 2001. R. Sambuc, “Function Φ-Flous, Application a l’aide au Diagnostic en Pathologie Thyroidienne”, These de Doctorat en Medicine, Marseille,1975. E. Trillas, “Sobre funciones de negaci´on en la teor´ıa de conjuntos difusos”, Stochastica, vol. III-1, pp. 47–59, 1979 (in Spanish). Reprinted (English version) in: Advances f Fuzzy Logic, Edited by S. Barro et altri-Universidad de Santiago de Compostela, pp. 31–43, 1998. E. Trillas, C. Alsina, J.M. Terricabras, “Introducci ´on a la l´ogica borrosa”, Ariel Matem´atica, 1995. E. Trillas, L. Valverde, “On mode and implication in approximate reasoning”, in: Approximate reasoning in Expert Systems, Edited by M.M. Gupta, A. Kandel, W. Bandler, J.B. Kiszka, North-Holland,Amsterdam, 157–166 , 1985. I.B. Turksen, “Interval valued fuzzy sets based on normal forms”,Fuzzy Sets and Systems, vol. 20, pp. 191–210, 1986. I.B. Turksen, , Z. Zhong “An approximate analogical reasoning schema based on similarity measures and interval-valued fuzzy sets,”, Fuzzy Sets and Systems, vol. 34, pp. 323–346, I.B. Turksen “Interval-valued fuzzy sets and compensatory AND”,Fuzzy Sets and Systems, vol. 51, pp. 295–307, 1992. S. Weber, “A general concept of fuzzy connectives,negations and implications based on t-norms and t-conorms”, Fuzzy sets and Systems,vol. 11, 115–134, 1983. L. A. Zadeh. “Fuzzy Sets”, Information Control, vol. 8, pp. 338–353,1965. L. A. Zadeh, “Outline of a new approach to analysis of complex systems and decision processes”, IEEE Transactions on Systems, Man,and Cybernetics, vol. 3, pp. 28–44, 1973. L. A. Zadeh, “The concept of a linguistic variable and its application to approximate reasoning I”, Information Sciences, vol. 8, pp. 199–249,1975. Deposited On: 29 Oct 2012 10:53 Last Modified: 07 Feb 2014 09:38 Repository Staff Only: item control page
{"url":"http://eprints.ucm.es/16906/","timestamp":"2014-04-18T18:15:50Z","content_type":null,"content_length":"47254","record_id":"<urn:uuid:aa22b544-1d02-426d-babd-9ff016cd507f>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00035-ip-10-147-4-33.ec2.internal.warc.gz"}
Hercules Geometry Tutor Find a Hercules Geometry Tutor ...As a doctoral student in clinical psychology, I was hired by the university to teach study skills, test taking skills, and time management to incoming freshmen students. As part of this job, I was trained in and provided materials for each of these topics. I often find, when working with my stu... 20 Subjects: including geometry, calculus, statistics, biology ...Since every person is unique, I personalize my approach to each student. This makes my work very effective and allows each student to truly improve! I have helped hundreds of students, both one on one and in a classroom setting, and many of them provide excellent references and referrals for me. 14 Subjects: including geometry, calculus, statistics, algebra 1 ...While I have had some German, and have worked in Germany for Siemens, I do not consider myself fluent in German. My response is usually "nur ein bisschen" (only a little) when asked if I speak German. When tutoring, I seek to begin with what the student knows and use the required textbook and a... 21 Subjects: including geometry, physics, algebra 1, calculus ...I know that children, like adults, have learning preferences whether they know it or not. I always try to get through to a student by utilizing the methods that will be the most effective. My favorite subjects to tutor are the Math and Science subjects. 29 Subjects: including geometry, Spanish, reading, chemistry ...I am a native speaker of Mandarine Chinese. I finished High School in Taiwan and then after 2 years in obligatory military service I came to US to study for college. I am an immigrant. 17 Subjects: including geometry, reading, physics, calculus
{"url":"http://www.purplemath.com/Hercules_geometry_tutors.php","timestamp":"2014-04-18T18:36:48Z","content_type":null,"content_length":"23740","record_id":"<urn:uuid:826e49e4-d8ae-4263-b87d-1ae0de426ca6>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00264-ip-10-147-4-33.ec2.internal.warc.gz"}
Theory of Automata Theory of Automata Lecture N0. 16 Reading Material Chapter 7 Introduction to Computer Theory Applying an NFA on an example of maze, NFA with null string, examples, RE corresponding to NFA with null string (task), converting NFA to FA (method 1,2,3) examples Application of an NFA There is an important application of an NFA in artificial intelligence, which is discussed in the following example of a maze - and + indicate the initial and final states respectively. One can move only from a box labeled by other then L, M, N, O, P to such another box. To determine the number of ways in which one can start from the initial state and end in the final state, the following NFA using only single letter a, can help in this regard It can be observed that the shortest path which leads from the initial state and ends in the final state, consists of six steps i.e. the shortest string accepted by this machine is aaaaaa. The next larger accepted string is aaaaaaaa. Thus if this NFA is considered to be a TG then the corresponding regular expression may be written as Which shows that there are infinite many required ways It is to be noted that every FA can be considered to be an NFA as well , but the converse may not true. It may also be noted that every NFA can be considered to be a TG as well, but the converse may not true. It may be observed that if the transition of null string is also allowed at any state of an NFA then what will be the behavior in the new structure. This structure is defined in the following NFA with Null String If in an NFA, Y is allowed to be a label of an edge then the NFA is called NFA with Y (NFA-Y). An NFA-Y is a collection of three things Finite many states with one initial and some final states. Finite set of input letters, say, S = {a, b, c}. Finite set of transitions, showing where to move if a letter is input at certain state. Theory of Automata There may be more than one transitions for certain letter and there may not be any transition for a certain letter. The transition of Y is also allowed at any state. Consider the following NFA with Null string The above NFA with Null string accepts the language of strings, defined over Σ = {a, b}, ending in b. Consider the following NFA with Null string L, a The above NFA with Null string accepts the language of strings, defined over Σ = {a, b}, ending in a. It is to be noted that every FA may be considered to be an NFA-Y as well, but the converse may not true. Similarly every NFA-Y may be considered to be a TG as well, but the converse may not true. NFA to FA Two methods are discussed in this regard. Method 1: Since an NFA can be considered to be a TG as well, so a RE corresponding to the given NFA can be determined (using Kleene's theorem). Again using the methods discussed in the proof of Kleene's theorem, an FA can be built corresponding to that RE. Hence for a given NFA, an FA can be built equivalent to the NFA. Examples have, indirectly, been discussed earlier. Method 2: Since in an NFA, there may be more than one transition for a certain letter and there may not be any transition for certain letter, so starting from the initial state corresponding to the initial state of given NFA, the transition diagram of the corresponding FA, can be built introducing an empty state for a letter having no transition at certain state and a state corresponding to the combination of states, for a letter having more than one transitions. Following are the examples Consider the following NFA Using the method discussed earlier, the above NFA may be equivalent to the following FA a, b a, b Theory of Automata A simple NFA that accepts the language of strings defined over S = {a,b}, consists of bb and bbb The above NFA can be converted to the following FA a, b Method 3: As discussed earlier that in an NFA, there may be more than one transition for a certain letter and there may not be any transition for certain letter, so starting from the initial state corresponding to the initial state of given NFA, the transition table along with new labels of states, of the corresponding FA, can be built introducing an empty state for a letter having no transition at certain state and a state corresponding to the combination of states, for a letter having more than one transitions. Further examples are discussed in the next
{"url":"http://www.zeepedia.com/read.php?nfa_with_null_string_theory_of_automata&b=19&c=16","timestamp":"2014-04-18T23:41:53Z","content_type":null,"content_length":"71601","record_id":"<urn:uuid:90b3f081-8a73-4ff5-b9b8-58170bb0e55e>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00091-ip-10-147-4-33.ec2.internal.warc.gz"}