$A^{2}\Pi \sim X^{2}\Sigma^{+}$ INTERACTION PARAMETERS FOR $^{12}C^{16}O^{+}$, $^{13}C^{16}O^{+}$ AND $^{14}C^{16}O^{+}$ FROM DEPERTURBATION ANALYSES OF $A^{2}\Pi(v') - X^{2}\Sigma^{+}(v'')$ BANDS Creators: Coxon, John A.; Kępa, R.; Kocan, A.; Piotrowska-Domagala, I. Issue Date: 2004 Abstract: Several $A^{2}\Pi - X^{2}\Sigma^{+}$ emission bands of $CO^{+}$, for which the A-state vibrational level is perturbed, have been recorded photographically. The well-known $A(v' = 0) \sim X(v'' = 10)$ and $A(v' = 5) \sim X(v'' = 14)$ perturbations between the $A^{2}\Pi$ and $X^{2}\Sigma^{+}$ states of $^{12}C^{16}O^{+}$, previously analysed by Coxon and Foster$^{a}$, have been reinvestigated by analysis of the $0-2$ and $5-0$ bands. The standard deviations of the least-squares fits are typically about $0.02\ cm^{-1}$. Similarly, the $A(v' = 1) \sim X(v'' = 11)$ interaction in $^{13}C^{16}O^{+}$, first identified by Jakubek$^{b}$ in an analysis of the $0-1$ band of the $B^{2}\Sigma^{+} - A^{2}\Pi$ system, has been studied by a deperturbation analysis of the rotational structure in the $A^{2}\Pi(v' = 1) \to X^{2}\Sigma^{+}(v'' = 0)$ band.
For $^{14}C^{16}O^{+}$, the corresponding $A(v' = 2) \sim X(v'' = 11)$ interaction has been observed for the first time by analysis of the $2-0$ and $2-1$ bands of the $A - X$ system. The expected isotopic self-consistency of the interaction parameters $\alpha$ and $\beta$ from individual bands of the three isotopomers is discussed. URI: http://hdl.handle.net/1811/21285
Double Discretization Difference Schemes for Partial Integrodifferential Option Pricing Jump Diffusion Models Abstract and Applied Analysis, Volume 2012 (2012), Article ID 120358, 20 pages. Research Article. Instituto de Matemática Multidisciplinar, Universitat Politècnica de València, Camino de Vera s/n, 46022 Valencia, Spain. Received 10 September 2012; Revised 7 November 2012; Accepted 7 November 2012. Academic Editor: Carlos Vazquez. Copyright © 2012 M.-C. Casabán et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. A new discretization strategy is introduced for the numerical solution of partial integrodifferential equations appearing in option pricing jump diffusion models. In order to account for the unknown behaviour of the solution in the unbounded part of the spatial domain, a double discretization is proposed. Stability, consistency, and positivity of the resulting explicit scheme are analyzed. Advantages of the method are illustrated with several examples. 1. Introduction Since empirical studies revealed that the normality of the log returns, as assumed by Black and Scholes, could not capture features like heavy tails and asymmetries observed in market-data log-returns densities [1], a number of models have been proposed to explain these empirical observations: stochastic volatility [2, 3], deterministic local volatility [4, 5], jump diffusion [6, 7], and infinite activity Lévy models [8–11]. The last two types of models, discussed in [12] and [13, chapters 14, 15], make it possible to calibrate the model to market prices of options and to reproduce a wide variety of implied volatility skews/smiles.
These models are characterized by partial integrodifferential equations (PIDEs) that involve a second-order differential operator and a nonlocal integral term; the latter requires specific treatment and presents additional difficulties. In order to solve the PIDE problem numerically, Andersen and Andreasen [14] use an unconditionally stable ADI finite difference method and accelerate it using the fast Fourier transform (FFT). In [15–17] wavelet methods are applied to infinite activity jump-diffusion models. Interesting analytic-numerical treatments for Lévy models have been introduced in [18–20]. The so-called COS method for pricing European options is presented in [18]. It is based on knowledge of the characteristic function of the jump operator and on the close relation between the characteristic function and the series coefficients of the Fourier-cosine expansion of the density function. In [19], an expansion of the characteristic function of local volatility models with Lévy jumps is developed. The authors in [20] derive an analytical formula for the price of European options for any model including local volatility and a Poisson jump process by using Malliavin calculus techniques. Apart from [14], various authors have used finite difference schemes for PIDEs in [21–27]. Discretization of the integral term leads to full matrices due to its nonlocal character. Several challenges arise in dealing with the integral term: how to approximate it, how to localize a bounded computational domain, how to select the boundary conditions of the numerical domain, and how to carry out the double discretization of the differential and integral parts of the PIDE. Tavella and Randall in [26] used an implicit time discretization and proposed a fairly rapidly convergent stationary iterative method to solve the full matrix problem quoted above, but without a careful numerical analysis. A generalization of this iterative method to price American options is proposed in [25].
In the outstanding paper [22] the authors propose an explicit-implicit finite difference scheme for solving parabolic PIDEs with possibly singular kernels when the random evolution of the underlying asset is driven by a time-inhomogeneous jump-diffusion process. The authors study stability and convergence of the proposed scheme as well as rates of convergence. However, they use backward or forward difference quotients of only first order, depending on the sign of the coefficient of the convection term, in order to avoid oscillations. A drawback of [22] is that, in order to approximate the truncated integral term, they assume a particular behaviour of the solution outside the bounded numerical domain. An efficient solution of PIDEs for the jump-diffusion Merton model is proposed in [24], with a very efficient treatment of the resulting dense linear system by means of a circulant preconditioned conjugate gradient method. However, in [24] only the particular case where the jump sizes have zero mean is considered, and a particular behaviour of the solution outside the bounded numerical domain is again assumed. Almendral and Oosterlee [28] present an implicit discretization of the PIDE jump-diffusion model on a uniform grid using finite differences, where a splitting technique combined with the FFT is used to accelerate the dense matrix-vector product. The authors also assume a particular behaviour of the solution outside the bounded numerical domain, in a similar way to [24]. In [21] a finite difference method for the PIDE associated with the CGMY infinite activity Lévy model is treated. The equations are discretized in space by the collocation method and in time by a backward differentiation formula. The integral part is transformed into a Volterra equation.
After integration by parts and taking advantage of the vanishing derivative behaviour of the payoff function for large asset values, the authors are able to truncate the integral properly for the case of put and butterfly options. In [27] the price of European and American options under Kou's PIDE jump-diffusion model is computed using finite differences on nonuniform grids, and time stepping is performed using the implicit Rannacher scheme. The evaluation of the integral term is efficient from the computational cost point of view, under the assumption that the behaviour of the solution for large values of the underlying asset follows the asymptotic behaviour. For the sake of clarity in the presentation we recall that in a jump-diffusion model, the modified stochastic differential equation (SDE) for the underlying asset takes the form
$$\frac{dS}{S} = (\mu - \lambda\kappa)\, dt + \sigma\, dW + (Y-1)\, dq, \tag{1.1}$$
where $S$ is the underlying stock price, $\mu$ is the drift rate, $\sigma$ is the volatility, $dW$ is the increment of the Gauss-Wiener process, and $dq$ is the Poisson process. The random variable representing the jump amplitude is denoted by $Y$, and the expected relative jump size is denoted by $\kappa = E[Y-1]$. The jump intensity of the Poisson process is denoted by $\lambda$. Based on the SDE (1.1), the resulting PIDE for a contingent claim $V(S,\tau)$ is given by [7, 14, 29]:
$$\frac{\partial V}{\partial \tau} = \frac{\sigma^{2}S^{2}}{2}\frac{\partial^{2} V}{\partial S^{2}} + (r - \lambda\kappa)S\frac{\partial V}{\partial S} - (r+\lambda)V + \lambda\int_{0}^{\infty} V(S\eta,\tau)\, g(\eta)\, d\eta, \qquad V(S,0) = f(S), \tag{1.2}$$
where $r$ is the risk-free interest rate, the probability density of the jump amplitude is given by $g(\eta)$, and $f(S)$ is the payoff function. Merton's jump-diffusion model assumes that jump sizes are log-normally distributed with mean $\mu_{J}$ and standard deviation $\delta$, that is,
$$g(\eta) = \frac{1}{\eta\,\delta\sqrt{2\pi}}\exp\!\left(-\frac{(\ln\eta - \mu_{J})^{2}}{2\delta^{2}}\right). \tag{1.3}$$
In this paper we consider Merton's jump-diffusion model for a vanilla call option with payoff function
$$f(S) = \max(S - E,\, 0), \tag{1.4}$$
where $E$ is the strike price. The aim of the paper is the construction and numerical analysis of an explicit finite difference numerical scheme for the PIDE (1.2)-(1.3), with a treatment of the integral part different from that of the previously quoted authors.
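Merton's closed-form series price [7] serves as the exact benchmark in the numerical experiments of Section 6. A minimal sketch of it follows, in Python rather than the Matlab used by the authors; the symbols `mu_J` and `delta` (mean and standard deviation of the log jump size) and the function names are illustrative choices, not the paper's notation.

```python
import math

def norm_cdf(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def black_scholes_call(S, E, r, sigma, T):
    # Classical Black-Scholes price of a European call with strike E.
    if T <= 0:
        return max(S - E, 0.0)
    d1 = (math.log(S / E) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * norm_cdf(d1) - E * math.exp(-r * T) * norm_cdf(d2)

def merton_call(S, E, r, sigma, T, lam, mu_J, delta, n_terms=50):
    # Merton's series: a Poisson-weighted sum of Black-Scholes prices,
    # where the n-th term uses jump-adjusted rate and volatility.
    kappa = math.exp(mu_J + 0.5 * delta**2) - 1.0   # expected relative jump size
    lam_p = lam * (1.0 + kappa)
    price = 0.0
    for n in range(n_terms):
        sigma_n = math.sqrt(sigma**2 + n * delta**2 / T)
        r_n = r - lam * kappa + n * math.log(1.0 + kappa) / T
        weight = math.exp(-lam_p * T) * (lam_p * T)**n / math.factorial(n)
        price += weight * black_scholes_call(S, E, r_n, sigma_n, T)
    return price
```

With jump intensity zero the series collapses to its first term and reproduces the plain Black-Scholes price, which is the degenerate case mentioned in Example 6.2.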
Instead of assuming long-term information about the solution, we perform a full discretization of the integral part, involving the unknown function values in the numerical scheme and distinguishing the finite truncation domain from the infinite remaining one. As a consequence, this strategy involves a double discretization with respect to the spatial variable. With respect to the time variable an explicit forward approximation is used. An account of the advantages of this explicit approach has been given and applied in [30]. This paper is organized as follows. Section 2 deals with a transformation of variables in order to eliminate both the advection and the reaction terms of the PIDE (1.2). Then the integral part is split into two parts, a finite integral and an infinite one, and the latter is again transformed into a finite integral. The separation point of the two split integrals becomes a parameter that can be chosen according to the criteria used by [16, 22, 31]. A suitable choice of this parameter is the one used by other authors when they truncate the numerical domain. For instance in [27] one takes ; in Section 6 we take . Section 3 deals with the construction of the numerical scheme and the selection of the numerical domain, which always involves the difficulty of choosing the boundary conditions. For the case of a PIDE this issue is even more relevant because the values throughout the unbounded integral domain are unknown. The spatial numerical domain is divided into two parts by the parameter : and . In the domain, the stepsize discretization is , consisting of equidistributed mesh points . The domain is transformed into the by the transformation . In the transformed domain we consider a stepsize discretization and mesh points with . When the interval is reverted to the domain , the reverted mesh points become nonuniformly distributed. Hence, the numerical scheme for problem (2.9)-(2.10) is forward in time with time-step discretization .
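The splitting of the improper integral and the mapping of its tail onto a finite interval can be illustrated on a model integrand. The following Python sketch uses an assumed splitting point `A` and a generic rapidly decaying integrand `f` (both names chosen here for illustration): the near field $[0,A]$ gets a uniform trapezoidal rule, and the tail $[A,\infty)$ is mapped onto $(0,1]$ by the substitution $x = A/t$, in the spirit of [33].

```python
import math

def split_integral(f, A, n1=200, n2=200, tail_limit=0.0):
    """Approximate the improper integral of f over [0, infinity) by
    splitting it at x = A.

    Near field [0, A]: uniform composite trapezoidal rule.
    Tail [A, infinity): the substitution x = A/t maps it onto (0, 1],
    giving the integral of f(A/t) * A / t**2 over (0, 1], again handled
    by the trapezoidal rule. `tail_limit` is the limit of the transformed
    integrand as t -> 0 (zero for rapidly decaying f)."""
    h1 = A / n1
    near = sum(0.5 * h1 * (f(i * h1) + f((i + 1) * h1)) for i in range(n1))
    g = lambda t: f(A / t) * A / t**2
    h2 = 1.0 / n2
    vals = [tail_limit] + [g(j * h2) for j in range(1, n2 + 1)]
    tail = sum(0.5 * h2 * (vals[j] + vals[j + 1]) for j in range(n2))
    return near + tail
```

For $f(x) = e^{-x}$ the exact value is 1, and the two trapezoidal sums recover it to a few decimal places without ever truncating the tail.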
The approximation of is centered, in terms of the unique parameter , while the approximation of involves a nonuniform stepsize discretization depending on and the value of . The numerical approximation of the integrals is evaluated using trapezoidal quadrature rules with stepsizes and , respectively. The boundary conditions at the boundary of our numerical domain are as follows. At we assume that the solution is zero, in accordance with the vanilla call option problem. At , the largest finite value considered, we assume a linear behaviour of the solution. This hypothesis has been used previously in [32]. In Section 4 sufficient conditions for stability and positivity of the numerical solution are given in terms of the three parameters and as well as of the parameter . Consistency of the scheme is treated in Section 5. Section 6 includes illustrative examples showing the possible advantages of our new discretization approach. Finally, conclusions are given in Section 7. If is a vector in , we denote its infinity norm by . A vector is said to be nonnegative if for all . Then we denote . For a matrix in , we denote by . A matrix is said to be nonnegative if for all , and we denote . 2. Transformation of the Integrodifferential Problem For the sake of convenience we introduce a transformation of variables to remove both the advection and the reaction terms of the PIDE problem (1.2)-(1.3). Let us consider the transformation and note that problem (1.2)-(1.3) is transformed into the problem In order to approximate the integral appearing in (2.2), and with a view to further discretization, it is convenient to change the variable Let us denote Taking , let us decompose Following [33, page 201] let us consider the substitution into , obtaining the expression Taking into account (2.4)–(2.8), the problem (2.2)-(2.3) can be written in the form 3. Numerical Scheme Construction In this section a difference scheme for problem (2.9)-(2.10) is constructed.
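The elimination of the advection and reaction terms announced in Section 2 follows a standard pattern. A sketch with generic constant coefficients (written $b$ and $c$ here purely for illustration, as would arise after the usual logarithmic substitution $x = \ln S$) goes as follows.

```latex
% Sketch: removing first- and zeroth-order terms from
%   v_\tau = \tfrac{\sigma^2}{2} v_{xx} + b\,v_x - c\,v
% by the substitution v(x,\tau) = e^{\alpha x + \beta\tau}\,u(x,\tau).
% Substituting and dividing by the exponential factor gives
\[
  \beta u + u_\tau
    = \frac{\sigma^2}{2}\bigl(\alpha^2 u + 2\alpha u_x + u_{xx}\bigr)
      + b\bigl(\alpha u + u_x\bigr) - c\,u .
\]
% Choosing \alpha = -b/\sigma^2 cancels the u_x term; then
% \beta = \tfrac{\sigma^2}{2}\alpha^2 + b\alpha - c
%       = -\tfrac{b^2}{2\sigma^2} - c
% cancels the u term, leaving the pure diffusion equation
\[
  u_\tau = \frac{\sigma^2}{2}\,u_{xx},
\]
% plus, in the PIDE case, the correspondingly transformed integral term.
```

The same two algebraic choices underlie the transformation (2.1) used by the paper, whatever its exact constants.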
With respect to the time variable, given with , let be the time-step discretization , and , , with an integer. With respect to the spatial variable , given an arbitrary positive fixed , we construct a uniform grid in , with spatial step discretization , with , an integer. Note that the integral given by (2.6) requires the evaluation of the unknown at points . As this integral has been transformed into (2.8) over the interval for the variable , see (2.7), we consider a uniform mesh of into points, of the form , where is an integer, , with . Taking into account (2.7), for the original variable in , one has and since , Thus the spatial domain is split into points, of which only the first are equidistributed. Let us denote For the approximation of we consider two types of finite differences: for the internal points of , and, denoting , for the points lying in . Note that the discrete operator has different expressions depending on the location of , see (3.5) and (3.6). From the previous approximations, for the internal points we have where and are approximations of composite trapezoidal type for the integrals appearing in (2.6), (2.8): Let us denote . The approximation takes the form where the first term for does not appear due to the null value of the limit of the function given by (1.4) as tends to zero. On the other hand, considering the assumption that has asymptotic linear behaviour as () and using (1.4), it follows that the integrand of (3.9) verifies Consequently, the last term of related to vanishes, and one gets The numerical scheme (3.7)–(3.12) needs to incorporate the transformed initial condition and the boundary conditions for : and, assuming linear behaviour of the solution for large values of the spatial variable at any time, we have and null integral term approximation . Hence, considering (3.7) for , one gets For the sake of convenience in the study of stability we introduce a vector formulation of the scheme (3.7)–(3.15).
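On the nonuniform part of the grid, the three-point difference quotient for the second derivative generalizes the usual centered formula. A short sketch in Python, with `h_left` and `h_right` denoting the local spacings (names chosen here, not the paper's):

```python
def second_diff_nonuniform(u_left, u_mid, u_right, h_left, h_right):
    # Three-point approximation of u'' at the middle node of a
    # nonuniform grid: exact for quadratics, and reducing to the
    # standard centered quotient when h_left == h_right.
    return 2.0 * (h_left * u_right
                  - (h_left + h_right) * u_mid
                  + h_right * u_left) / (h_left * h_right * (h_left + h_right))
```

For a quadratic such as $u(x) = x^2$ the formula returns the exact value 2 for any pair of spacings, which is the consistency property exploited in Section 5.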
Let us consider the vector in as and let be the tridiagonal matrix related to the differential part, defined by where Let be the matrix in related to the integral part, whose entries for each fixed in are defined by With the previous notation the scheme (3.7)–(3.15) can be written in the form 4. Positivity and Stability of the Numerical Solution Dealing with prices of contracts modelled by a PIDE, the solution must be nonnegative. In this section we show that the numerical solution provided by scheme (3.7)–(3.15) is conditionally positive and stable. We begin with the following result. Lemma 4.1. With the previous notation, assume that the stepsizes , in and , and in satisfy (C1), (C2). Then the matrix given by (3.17) is nonnegative. Proof. From (3.18), for one has and for . On the other hand, for , we have that Thus, under condition (C1), condition (4.1) holds true. With respect to the nonuniform grid, note that for , from (3.18) one gets that , , and . From (3.18) we also have that In order to guarantee the nonnegativeness of the remaining entries of the matrix , let us introduce the function for . With this notation, the quantity appearing in (3.18) satisfies Note that Taking into account that , with , then for , both the numerator and the denominator of (4.5) are positive. Thus is strictly increasing for , and its minimum is achieved at with the value , and condition (4.4) holds true under the condition From condition (C2), properties (4.2) and (4.8) are satisfied and the matrix is nonnegative. Note that since the matrix defined by (3.19) is always nonnegative, from Lemma 4.1 and (3.20), starting from a nonnegative initial vector , the following result is established. Theorem 4.2. With the hypotheses and notation of Lemma 4.1, the solution of the scheme (3.7)–(3.15) is nonnegative if the initial values . The next result will be used below to guarantee stability. Lemma 4.3. The matrices and defined by (3.17), (3.18), and (3.19) satisfy the following. (1) Under conditions (C1) and (C2) of Lemma 4.1, . (2) , where .
Proof. By Lemma 4.1, under hypotheses (C1) and (C2), all the entries of the matrix are nonnegative. Thus, We also have Hence and from the definition of , it follows that . This proves part 1. From (3.19), for a fixed , with one gets where are the trapezoidal approximation rules with and points, approximating the integrals respectively. Let be the maximum of in , given by Note that the integrand is increasing in the interval and decreasing for . In order to bound (4.12) from above we consider two cases. Firstly, let us assume that , and let us denote by the first integer with such that From the properties of the lower Riemann sums, it follows that Taking into account the values and located just before and after , together with (4.17), from (4.12) it follows that In an analogous way, for the second situation where , one gets again (4.18). Finally, if , it is also true that Now we bound (4.13) from above. Let us denote with . Let be the maximum of : In this case we can distinguish the three possible situations ; , and bounding from above the lower Riemann sums relative to (4.13) one gets From (4.11), (4.18), and (4.22), together with the fact that , one gets for each value of , Taking into account that , for , it follows that and . Hence, from (4.23) one gets , independently of the size of the matrix . For the sake of clarity, and since there are many definitions of stability in the literature, we recall our concept of stability in the next definition. Definition 4.4. Let be a numerical solution of the PIDE (2.9), (2.10) computed from the scheme (3.7)–(3.15) with stepsizes in , in , and in . Let be the corresponding vector form, that is, of (3.20). We say that is strongly uniformly stable if where is independent of , , , and . If property (4.25) is satisfied only for appropriate relationships between the stepsizes , , and , then one says that the strong uniform stability is conditional. Theorem 4.5.
With the previous notation, the numerical solution of the scheme (3.7)–(3.15) is strongly uniformly stable if condition is satisfied together with Proof. Note that scheme (3.7)–(3.15) is equivalent to the vector form scheme (3.20). Under condition (4.26), by Lemma 4.3 one gets, after taking norms in (3.20), Hence, from (4.27), Bernoulli's inequality, and , one gets Thus the conditional strong uniform stability is established. 5. Consistency We say that a numerical difference scheme is consistent with a PIDE if the exact theoretical solution of the PIDE approximately satisfies the difference scheme as the stepsize discretization tends to zero, [34, 35]. Let us write the scheme (3.7)–(3.12) in the form , where and let us write the PIDE (2.9) in the form where where and are given by (2.6)–(2.8). Let us denote by the value of the theoretical solution of the PIDE (5.2). Let be such that . We denote by the following expression the local truncation error : In order to prove consistency, we must show that Assuming that is twice continuously partially differentiable with respect to and using Taylor expansions about , one gets where Let us assume that is four times continuously partially differentiable with respect to , and let us denote In accordance with [34, page 101], the local consistency error of , see (2.6), can be expressed as where in (5.11) denotes the second derivative with respect to the variable , see [33]. In an analogous way, the local consistency error of the unbounded integral , see (2.8), can be expressed as Summarizing, one gets Thus which proves the consistency of the scheme with the PIDE. 6. Numerical Results In the following examples the code was run in Matlab. The first example illustrates that the stability conditions of Theorem 4.5 cannot be removed. Example 6.1. Consider the vanilla call option problem (1.2)–(1.5) under the Merton jump diffusion model with parameters , , , , , , and .
Taking , , and , Figure 1 shows that when the stability conditions (4.26) are satisfied the results are good, while if the stability conditions are broken the results are unreliable. The next example shows the robustness of our numerical scheme under changes of the jump intensity of the model. Example 6.2. Taking the same parameters as in Example 6.1 apart from , and the stepsize discretizations , , and , Figure 2 shows the variation of the solution with the parameter , where corresponds to the Black-Scholes case. In the next example, the error is the difference between the numerical solution computed by (3.7)–(3.15) and (2.1) and the exact solution given by Merton's formula [7]. Example 6.3 shows that the error of the numerical solution with fixed decreases with the uniform stepsize about the strike , while the error close to the truncation separation point remains stationary as decreases. This fact agrees with the observations reported in [28, pages 15-16]. Example 6.3. Consider the vanilla call option problem (1.2)–(1.5) under the Merton jump diffusion model with parameters , , , , , , and . For , , and , Figure 3 shows the variation of the absolute error of the solution under changes of the stepsize . The next Example 6.4 shows that the errors at the right boundary of the numerical domain that arise when one uses finite difference schemes, as noted in [28], can be reduced with our double spatial discretization by decreasing the stepsize . Example 6.4. Taking the problem of Example 6.3 with fixed , Figure 4 shows the error reduction of the numerical solution near the right boundary of the numerical domain when the parameter decreases, while the error about the strike remains stationary. 7. Conclusions This work introduces a new discretization strategy for solving partial integrodifferential equations which involves the discretization of the unknown in the unbounded part of the integral.
This fact increases the accuracy of the numerical solution at the boundary of the numerical domain, as shown in Example 6.4. Acknowledgment This paper was supported by the Spanish M.E.Y.C. Grant DPI2010-20891-C02-01. References 1. J. Y. Campbell, A. W. Lo, and A. C. MacKinlay, The Econometrics of Financial Markets, Princeton University Press, Princeton, NJ, USA, 1997. 2. S. Heston, “A closed-form solution for options with stochastic volatility with applications to bond and currency options,” Review of Financial Studies, vol. 6, no. 2, pp. 327–343, 1993. 3. J. Hull and A. White, “The pricing of options on assets with stochastic volatilities,” The Journal of Finance, vol. 42, no. 2, pp. 281–300, 1987. 4. J. C. Cox and S. A. Ross, “The valuation of options for alternative stochastic processes,” Journal of Financial Economics, vol. 3, no. 1-2, pp. 145–166, 1976. 5. B. Dupire, “Pricing with a smile,” Risk Magazine, vol. 1, pp. 18–20, 1994. 6. S. G. Kou, “A jump diffusion model for option pricing,” Management Science, vol. 48, no. 8, pp. 1086–1101, 2002. 7. R. C. Merton, “Option pricing when underlying stock returns are discontinuous,” Journal of Financial Economics, vol. 3, pp. 125–144, 1976. 8. O. E. Barndorff-Nielsen, “Processes of normal inverse Gaussian type,” Finance and Stochastics, vol. 2, no. 1, pp. 41–68, 1998. 9. E. Eberlein, “Application of generalized hyperbolic Lévy motions to finance,” in Lévy Processes: Theory and Applications, O. Barndorff-Nielsen, T. Mikosch, and S. Resnick, Eds., pp. 319–336, Birkhäuser, Boston, Mass, USA, 2001. 10. I. Koponen, “Analytic approach to the problem of convergence of truncated Lévy flights towards the Gaussian stochastic process,” Physical Review E, vol. 52, no. 1, pp. 1197–1199, 1995. 11. D. Madan and F. Milne, “Option pricing with variance gamma martingale components,” Mathematical Finance, vol. 1, no. 4, pp. 39–55, 1991. 12. R. Cont and P.
Tankov, Financial Modelling with Jump Processes, Chapman & Hall/CRC Financial Mathematics Series, Chapman & Hall/CRC, Boca Raton, Fla, USA, 2004. 13. A. Pascucci, PDE and Martingale Methods in Option Pricing, vol. 2 of Bocconi & Springer Series, Springer, Milan, Italy, 2011. 14. L. Andersen and J. Andreasen, “Jump-diffusion processes: volatility smile fitting and numerical methods for option pricing,” Review of Derivatives Research, vol. 4, no. 3, pp. 231–262, 2000. 15. A.-M. Matache, P.-A. Nitsche, and C. Schwab, “Wavelet Galerkin pricing of American options on Lévy driven assets,” Quantitative Finance, vol. 5, no. 4, pp. 403–424, 2005. 16. A.-M. Matache, T. von Petersdorff, and C. Schwab, “Fast deterministic pricing of options on Lévy driven assets,” Mathematical Modelling and Numerical Analysis, vol. 38, no. 1, pp. 37–71, 2004. 17. A.-M. Matache, C. Schwab, and T. P. Wihler, “Fast numerical solution of parabolic integrodifferential equations with applications in finance,” SIAM Journal on Scientific Computing, vol. 27, no. 2, pp. 369–393, 2005. 18. F. Fang and C. W. Oosterlee, “A novel pricing method for European options based on Fourier-cosine series expansions,” SIAM Journal on Scientific Computing, vol. 31, no. 2, pp. 826–848, 2008/09. 19. S. Pagliarani, A. Pascucci, and C. Riga, “Adjoint expansions in local Lévy models,” SSRN eLibrary, 2011. 20. E. Benhamou, E. Gobet, and M. Miri, “Smart expansion and fast calibration for jump diffusions,” Finance and Stochastics, vol. 13, no. 4, pp. 563–589, 2009. 21. A. Almendral and C. W.
Oosterlee, “Accurate evaluation of European and American options under the CGMY process,” SIAM Journal on Scientific Computing, vol. 29, no. 1, pp. 93–117, 2007. 22. R. Cont and E. Voltchkova, “A finite difference scheme for option pricing in jump diffusion and exponential Lévy models,” SIAM Journal on Numerical Analysis, vol. 43, no. 4, pp. 1596–1626, 2005. 23. Y. d'Halluin, P. A. Forsyth, and G. Labahn, “A penalty method for American options with jump diffusion processes,” Numerische Mathematik, vol. 97, no. 2, pp. 321–352, 2004. 24. E. W. Sachs and A. K. Strauss, “Efficient solution of a partial integro-differential equation in finance,” Applied Numerical Mathematics, vol. 58, no. 11, pp. 1687–1703, 2008. 25. S. Salmi and J. Toivanen, “An iterative method for pricing American options under jump-diffusion models,” Applied Numerical Mathematics, vol. 61, no. 7, pp. 821–831, 2011. 26. D. Tavella and C. Randall, Pricing Financial Instruments, Wiley, New York, NY, USA, 2000. 27. J. Toivanen, “Numerical valuation of European and American options under Kou's jump-diffusion model,” SIAM Journal on Scientific Computing, vol. 30, no. 4, pp. 1949–1970, 2008. 28. A. Almendral and C. W. Oosterlee, “Numerical valuation of options with jumps in the underlying,” Applied Numerical Mathematics, vol. 53, no. 1, pp. 1–18, 2005. 29. M. Briani, C. La Chioma, and R.
Natalini, “Convergence of numerical schemes for viscosity solutions to integro-differential degenerate parabolic problems arising in financial theory,” Numerische Mathematik, vol. 98, no. 4, pp. 607–646, 2004. 30. R. Company, L. Jódar, E. Ponsoda, and C. Ballester, “Numerical analysis and simulation of option pricing problems modeling illiquid markets,” Computers & Mathematics with Applications, vol. 59, no. 8, pp. 2964–2975, 2010. 31. R. Kangro and R. Nicolaides, “Far field boundary conditions for Black-Scholes equations,” SIAM Journal on Numerical Analysis, vol. 38, no. 4, pp. 1357–1368, 2000. 32. H. Windcliff, P. A. Forsyth, and K. R. Vetzal, “Analysis of the stability of the linear boundary condition for the Black-Scholes equation,” Journal of Computational Finance, vol. 8, no. 1, pp. 65–92, 2004. 33. P. J. Davis and P. Rabinowitz, Methods of Numerical Integration, Computer Science and Applied Mathematics, Academic Press, New York, NY, USA, 2nd edition, 1984. 34. P. Linz, Analytical and Numerical Methods for Volterra Equations, vol. 7, Society for Industrial and Applied Mathematics (SIAM), Philadelphia, Pa, USA, 1985. 35. G. D. Smith, Numerical Solution of Partial Differential Equations, Clarendon Press, Oxford, UK, 3rd edition, 1985.
LBase: Semantics for Languages of the Semantic Web W3C Working Group Note 10 October 2003 This version: Latest version: Previous version: This document presents a framework for specifying the semantics of the languages of the Semantic Web. Some of these languages (notably RDF [RDF-PRIMER] [RDF-VOCABULARY] [RDF-SYNTAX] [RDF-CONCEPTS] [RDF-SEMANTICS], and OWL [OWL]) are currently in various stages of development, and we expect others to be developed in the future. This framework is intended to allow the semantics of all of these languages to be specified in a uniform and coherent way. The strategy is to translate the various languages into a common 'base' language, thereby providing them with a single coherent model. We describe a mechanism for providing a precise semantics for the Semantic Web languages (referred to as SWELs from now on). The purpose of this is to define clearly the consequences of, and allowed inferences from, constructs in these languages. Status of This Document This section describes the status of this document at the time of its publication. Other documents may supersede this document. A list of current W3C publications and the latest revision of this technical report can be found in the W3C technical reports index at http://www.w3.org/TR/. Publication as a Working Group Note does not imply endorsement by the W3C Membership. This is a draft document and may be updated, replaced or obsoleted by other documents at any time. It is inappropriate to cite this document as other than work in progress. This document results from discussions within the RDF Core Working Group concerning the formalization of RDF and RDF-based languages. The RDF Core Working Group is part of the W3C Semantic Web Activity. The group's goals and requirements are discussed in the RDF Core Working Group charter. These include requirements that...
• The RDF Core group must take into account the various formalizations of RDF that have been proposed since the publication of the RDF Model and Syntax Recommendation. The group is encouraged to make use both of formal techniques and implementation-led test cases throughout their work.
• The RDF schema system must provide an extensibility mechanism to allow future work (for example on Web Ontology and logic-based Rule languages) to provide richer facilities.

This document is motivated by these two requirements. It does not present an RDF Core WG design for Semantic Web layering. Rather, it documents a technique that the RDF Core WG are using in our discussions and in the RDF Semantics specification. The RDF Core WG solicit feedback from other Working Groups and from the RDF implementor community on the wider applicability of this technique. Note that the use of the abbreviation "SWEL" in Lbase differs from the prior use of "SWeLL" in the MIT/LCS DAML project. In conformance with W3C policy requirements, known patent and IPR constraints associated with this Note are detailed on the RDF Core Working Group Patent Disclosure page. Review comments on this document are invited and should be sent to the public mailing list www-rdf-comments@w3.org. An archive of comments is available at http://lists.w3.org/Archives/Public/. Discussion of this document is invited on the www-rdf-logic@w3.org list of the RDF Interest Group (public archives).

A model-theoretic semantics for a language assumes that the language refers to a 'world', and describes the minimal conditions that a world must satisfy in order to assign an appropriate meaning for every expression in the language. A particular world is called an interpretation, so that model theory might be better called 'interpretation theory'. The idea is to provide a mathematical account of the properties that any such interpretation must have, making as few assumptions as possible about its actual nature or intrinsic structure.
Model theory tries to be metaphysically and ontologically neutral. It is typically couched in the language of set theory simply because that is the normal language of mathematics - for example, this semantics assumes that names denote things in a set IR called the 'universe' - but the use of set-theoretic language here is not supposed to imply that the things in the universe are set-theoretic in nature. The chief utility of such a semantic theory is not to suggest any particular processing model, or to provide any deep analysis of the nature of the things being described by the language, but rather to provide a technical tool to analyze the semantic properties of proposed operations on the language; in particular, to provide a way to determine when they preserve meaning. Any proposed inference rule, for example, can be checked to see if it is valid with respect to a model theory, i.e. if its conclusions are always true in any interpretation which makes its antecedents true. We note that the word 'model' is often used in a rather different sense, e.g. as in 'data model', to refer to a computational system or data structures of some kind. To avoid misunderstanding, we emphasise that the interpretations referred to in a model theory are not, in general, intended to be thought of as things that can be computed or manipulated by computers.

There will be many Semantic Web languages, most of which will be built on top of more basic Semantic Web language(s). It is important that this layering be clean and simple, not just for human understandability, but also to enable the construction of robust semantic web agents that use these languages. The emerging practice is for each of the SWELs to be defined in terms of its own model theory, layering it on top of the model theories of the languages it is layered upon. While having a model theory is clearly desirable, and even essential, for a SWEL, this direct-construction approach has several problems.
It produces a range of model theories, each with its own notion of consequence and entailment. It requires expertise in logic to make sure that model theories align properly, and model-theoretic alignment does not always sit naturally with interoperability requirements. Experience to date (particularly with the OWL standard under development at the time of writing by the W3C Webont working group) shows that quite difficult problems can arise when layering model theories for extensions to the 'basic' RDF layer [RDF] of the semantic web. Moreover, this strategy places a very high burden on the 'basic' layer, since it is difficult to anticipate the semantic demands which will be made by all future higher layers, and the expectations of different development and user communities may conflict. Further, we believe that a melange of model theories will adversely impact developers building agents that implement proof systems for these layers, since the proof systems will likely be different for each layer, resulting in the need to micro-manage small semantic variations for various dialects and sub-languages (cf. the distinctions between various dialects of OWL).

In this document, we use an alternative approach for defining the semantics of the different SWELs in a fashion which ensures interoperability. We first define a basic language L[base] which is expressive enough to state the content of all currently proposed web languages, and has a fixed, clear model-theoretic semantics. Then, the semantics of each SWEL L[i] is defined by specifying how expressions in L[i] map into equivalent expressions in L[base], and by providing axioms written in L[base] which constrain the intended meanings of the SWEL special vocabulary. The L[base] meaning of any expression in any SWEL language can then be determined by mapping it into L[base] and adding the appropriate language axioms, if there are any.
The intended result is that the model theory of L[base] is the model theory of all the Semantic Web Languages, even though the languages themselves are different. This makes it possible to use a single inference mechanism to work on these different languages. Although it will be possible to exploit restrictions on the languages to provide better performance, the existence of a reference proof system is likely to be of utility to developers. This also allows the meanings of expressions in different SWELs to be compared and combined, which is very difficult when they all have distinct model theories.

The idea of providing a semantics for SWELs by translating them into logic is not new [see for example Marchiori & Saarela, Fikes & McGuinness], but we plan to adopt a somewhat different style than previous 'axiomatic semantics', which have usually operated by mapping all RDF triples to instances of a single three-place predicate. We propose rather to use the logical form of the target language as an explication of the intended meaning of the SWEL, rather than simply as an axiomatic description of that meaning, so that RDF classes translate to unary predicates, RDF properties to binary relations, the relation rdf:type translates to application of a predicate to an argument, and list-valued properties in OWL or DAML can be translated into n-ary or variadic relations. The syntax and semantics of L[base] have been designed with this kind of translation in mind. It is our intent that L[base] be used in the spirit of its model theory and not as a programming language, i.e., relations in L[i] should correspond to relations in L[base], variables should correspond to variables and so on. It is important to note that L[base] is not being proposed as a SWEL. It is a tool for specifying the semantics of different SWELs.
The syntax of L[base] described here is not intended to be accessible for machine processing; any such proposal should be considered to be a proposal for a more expressive SWEL. By using a well understood logic (i.e., first order logic [Enderton]) as the core of L[base], and providing for mutually consistent mappings of different SWELs into L[base], we ensure that the content expressed in several SWELs can be combined consistently, avoiding paradoxes and other problems. Mapping type/class language into predicate/application language also ensures that set-theoretical paradoxes do not arise. Although the use of this technique does not in itself guarantee that mappings between the syntax of different SWELs will always be consistent, it does provide a general framework for detecting and identifying potential inconsistencies. It is also important that the axioms defining the vocabulary items introduced by a SWEL are internally consistent. Although first-order logic (and hence L[base]) is only semi-decidable, we are confident that it will be routine to construct L[base] interpretations which establish the relevant consistencies for all the SWELs currently contemplated. In the general case, future efforts may have to rely on certifications from particular automated theorem provers stating that they were not able to find an inconsistency with certain stated levels of effort. The availability of powerful inference engines for first-order logic is of course relevant here.

In this document, we use a version of first order logic with equality as L[base]. This imposes a fairly strict monotonic discipline on the language, so that it cannot express local default preferences and several other commonly-used non-monotonic constructs. We expect that as the Semantic Web grows and our understanding of it improves, we will need to replace this L[base] with more expressive logics.
However, we expect that first order logic will be a proper subset of such systems and hence we will be able to smoothly transition to more expressive L[base] languages in the future. We note that the computational advantages claimed for various sublanguages of first-order logic, such as description logics, logic programming languages and frame languages, are irrelevant for the purposes of using L[base] as a semantic specification language. We will use first order logic with suitable minor changes to account for the use of referring expressions (such as URIs) on the Web, and a few simple extensions to improve its utility for the intended purpose.

Any first-order logic is based on a set of atomic terms, which are used as the basic referring expressions in the syntax. These include names, which refer to entities in the domain, special names, and variables. L[base] distinguishes the special class of urirefs, defined to be URI references in the sense of [URI]. Urirefs are used to refer both to individuals and to relations between the individuals. A name may be any string of unicode characters not starting with the characters ')', '(', '\', '?', '<' or ''', and containing no whitespace characters, or any string of unicode characters enclosed by the symbols '<' and '>'. The <...>-enclosed style is provided to allow names which would otherwise violate the L[base] syntactic conventions; in this case it is understood that the actual name is the enclosed string. For example, the name '<br />' (eight characters, including a space) can be written in L[base] as <'<br />'>.

L[base] allows for various collections of special names with fixed meanings defined by other specifications (external to the L[base] specification). There is no assumption that these could be defined by collections of L[base] axioms, so that imposing the intended meanings on these special names may go beyond strict first-order expressiveness.
(In mathematical terms, we allow that some sets of names refer to elements of certain fixed algebras, even when the algebra has no characteristic first-order description.) Each such set of names has an associated predicate which is true of the things denoted by the names in the set. At present, we assume two categories of such fixed names: numerals and quoted strings, with associated predicate names 'NatNumber' and 'String' respectively. We expect that other categories of special names will be introduced to handle, e.g., XML structures. Numerals are defined to be strings of the characters '0123456789', and are interpreted as decimal numerals in the usual way. Since arithmetic is not first-order definable, this is the first and most obvious place that L[base] goes beyond first-order expressiveness. Quoted strings are arbitrary character sequences enclosed in (single) quotation marks, and are interpreted as denoting the string inside the quotation marks. To avoid ambiguity, single quote marks in strings are prefixed by a backslash character '\' which acts as an escape character, so that '\'A\\'' denotes the string 'A\'. Double quote marks have no special interpretation. The associated predicate names NatNumber, String and Relation (see below) are considered to be special names.

A variable is any non-white-space character string starting with the character '?'. The characters '(', ',' and ')' are considered to be punctuation symbols. The categories of punctuation, whitespace, names, special names and variables are exclusive, and each such string can be classified by examining its first character. This is not strictly necessary but is a useful convention.

Any L[base] language is defined with respect to a vocabulary, which is a set of non-special names. We require that every L[base] vocabulary contain all urirefs, but other expressions are allowed.
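The first-character classification just described can be sketched as a small helper. This is an illustrative Python sketch of the convention, not part of the specification; the category labels are our own.

```python
def classify(token):
    """Classify an L[base] lexical item by its first character,
    following the convention described above. A sketch only:
    category names are illustrative, not normative."""
    first = token[0]
    if first == '?':
        return 'variable'
    if first in '(),':
        return 'punctuation'
    if first.isspace():
        return 'whitespace'
    if first in '0123456789':
        return 'special-name'   # a numeral
    if first == "'":
        return 'special-name'   # a quoted string
    return 'name'               # includes <...>-enclosed names

assert classify('?x') == 'variable'
assert classify('42') == 'special-name'
assert classify('rdfs:Class') == 'name'
```

Note that this is exactly why the convention is useful: a lexer never needs lookahead to decide which category it is reading.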
(We will require that every L[base] interpretation provide a meaning for every special name, but these interpretations are fixed, so special names are not counted as part of the vocabulary.)

There are several aspects of meaning of expressions on the semantic web which are not yet treated by this semantics; in particular, it treats URIs as simple names, ignoring aspects of meaning encoded in particular URI forms [RFC 2396], and does not provide any analysis of time-varying data or of changes to URI denotations. The model theory also has nothing to say about whether an HTTP URI such as "http://www.w3.org/" denotes the World Wide Web Consortium or the HTML page accessible at that URI or the web site accessible via that URI. These complexities may be addressed in future extensions of L[base]; in general, we expect that L[base] will be extended both notationally and by adding axioms in order to track future standardization efforts. We do not take any position here on the way that urirefs may be composed from other expressions, e.g. from relative URIs or Qnames; the model theory simply assumes that such lexical issues have been resolved in some way that is globally coherent, so that a single uriref can be taken to have the same meaning wherever it occurs. Similarly, the model theory given here has no special provision for tracking temporal changes. It assumes, implicitly, that urirefs have the same meaning whenever they occur. To provide an adequate semantics which would be sensitive to temporal changes is a research problem which is beyond the scope of this document.

Even though the exact syntax chosen for L[base] is not important, we do need a syntax for the specification. We follow the same general conventions used in most standard presentations of first-order logic, with one generalization which has proven useful.
We will assume that there are three sets of names (not special names) which together constitute the vocabulary: individual names, relation names, and function names, and that each function name has an associated arity, which is a non-negative integer. In a particular vocabulary these sets may or may not be disjoint. Expressions in L[base] (speaking strictly, L[base] expressions in this particular vocabulary) are then constructed recursively as follows:

A term is either a name or a special name or a variable, or else it has the form f(t1,...,tn) where f is an n-ary function name and t1,...,tn are terms.

A formula is either atomic or boolean or quantified, where: an atomic formula has the form (t1=t2) where t1 and t2 are terms, or else the form R(t1,...,tn) where R is a relation name or a variable and t1,...,tn are terms; a boolean formula has one of the forms (W1 and W2 and ... and Wn), (W1 or W2 or ... or Wn), (W1 implies W2), (W1 iff W2), (not W1), where W1, ..., Wn are formulae; and a quantified formula has one of the forms (forall (?v1 ... ?vn) W) or (exists (?v1 ... ?vn) W), where ?v1,...,?vn are variables and W is a formula. (The subexpression just after the quantifier is the variable list of the quantifier. Any occurrence of a variable in W is said to be bound in the quantified formula by the nearest quantifier to the occurrence which includes that variable in its variable list, if there is one; otherwise it is said to be free in the formula.)

Finally, an L[base] knowledge base is a set of formulae. Formulae are also called 'well-formed formulae' or 'wffs' or simply 'expressions'. In general, surplus brackets may be omitted from expressions when no syntactic ambiguity would arise. Some comments may be in order.
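Before those comments, note that the recursive grammar transcribes directly into a small datatype. The following Python encoding is one illustrative choice of representation, not part of the specification; the class and field names are our own.

```python
from dataclasses import dataclass
from typing import Tuple, Union

# A term is a name, special name, or variable (all plain strings here),
# or a function application f(t1,...,tn).
Term = Union[str, "Apply"]

@dataclass(frozen=True)
class Apply:                 # f(t1,...,tn)
    fn: str
    args: Tuple[Term, ...]

@dataclass(frozen=True)
class Atom:                  # R(t1,...,tn); R may be a variable
    rel: str
    args: Tuple[Term, ...]

@dataclass(frozen=True)
class Bool:                  # 'and' | 'or' | 'implies' | 'iff' | 'not'
    op: str
    parts: Tuple["Formula", ...]

@dataclass(frozen=True)
class Quant:                 # (forall (?v1 ... ?vn) W) / (exists ...)
    kind: str
    variables: Tuple[str, ...]
    body: "Formula"

Formula = Union[Atom, Bool, Quant]

# (forall (?x) (R(?x) implies Q(a, ?x)))
wff = Quant('forall', ('?x',),
            Bool('implies', (Atom('R', ('?x',)), Atom('Q', ('a', '?x')))))
assert wff.variables == ('?x',)
```

Allowing `Atom.rel` to be an ordinary string that may name a variable mirrors the grammar's permission for variables in relation position.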
The only parts of this definition which are in any way nonstandard are (1) allowing 'special names', which was discussed earlier; (2) allowing variables to occur in relation position, which might seem to be at odds with the claim that L[base] is first-order - we discuss this further below - and (3) not assigning a fixed arity to relation names. This last is a useful generalization which makes no substantial changes to the usual semantic properties of first-order logic, but which eases the translation process for some SWEL syntactic constructs. (The computational properties of such 'variadic relations' are quite complex, but L[base] is not being proposed as a language for computational use.)

The following definition of an interpretation is couched in mathematical language, but what it amounts to intuitively is that an interpretation provides just enough information about a possible way the world might be - a 'possible world' - in order to fix the truth-value (true or false) of any L[base] well formed formula in that world. It does this by specifying, for each uriref, what it is supposed to be a name of; and also, if it is a function symbol, what values the function has for each choice of arguments; and further, if it is a relation symbol, which sequences of things the relation holds between. This is just enough information to determine the truth-values of all atomic formulas; and then this, together with a set of recursive rules, is enough to assign a truth value to any L[base] formula.

In specifying the following it is convenient to use some standard definitions. A relation over a set S is a set of finite sequences (tuples) of members of S. If R is a relation and all the elements of R have the same length n, then R is said to have arity n, or to be an n-ary relation. Not every relation need have an arity.
If R is an (n+1)-ary relation over S which has the property that for any sequence <s1,...,sn> of members of S there is exactly one element of R of the form <s0, s1, ..., sn>, then R is an n-ary function; and s0 is the value of the function for the arguments s1, ..., sn. (Note that an n-ary function is an (n+1)-ary relation, and that, by convention, the function value is the first argument of the relation, so that for any n-ary function f, f(y,x1,...,xn) means the same as y = f(x1,...,xn).)

The conventional textbook treatment of first-order interpretations assumes that relation symbols denote relations. We will modify this slightly to require that relation symbols denote entities with an associated relation, called the relational extension, and will sometimes abuse terminology by referring to the entities with relational extensions as relations. This device gives L[base] some of the freedom to quantify over relations which would be familiar in a higher-order logic, while remaining strictly a first-order language in its semantic and metatheoretic properties. We will use the special name Relation to denote the property of having a relational extension.

Let VV be the set of all variables, and NN be the set of all special names. We will assume that there is a globally fixed mapping SN from elements of NN to a domain ISN (i.e., consisting of character strings and integers). The exact specification of SN is given for numerals by the usual reading of a decimal numeral to denote a natural number, and for quoted strings by the dequotation rules described earlier.

An interpretation I of a vocabulary V is then a structure defined by:

• a set ID, called the domain or universe of I;
• a mapping IS from (V union VV) into ID;
• a mapping IEXT from IR, a subset of ID, into a relation over ID+ISN (i.e. a set of tuples of elements of ID+ISN),

which satisfies the following conditions:

• for any n-ary function symbol f in V, IEXT(I(f)) is an n-ary function over ID+ISN.
• IEXT(I(NatNumber)) = {<n>, n a natural number}
• IEXT(I(String)) = {<s>, s a character string}
• IEXT(I(Relation)) = IR

An interpretation then specifies the value of any other L[base] expression E according to the following rules:

│ if E is: │ then I(E) is: │
│ a name or a variable │ IS(E) │
│ a special name │ SN(E) │
│ a term f(t1,...,tn) │ the value of IEXT(I(f)) for the arguments I(t1),...,I(tn) │
│ an equation (A=B) │ true if I(A)=I(B), otherwise false │
│ a formula of the form R(t1,...,tn) │ true if IEXT(I(R)) contains the sequence <I(t1),...,I(tn)>, otherwise false │
│ (W1 and ... and Wn) │ true if I(Wi)=true for i=1 through n, otherwise false │
│ (W1 or ... or Wn) │ false if I(Wi)=false for i=1 through n, otherwise true │
│ (W1 iff W2) │ true if I(W1)=I(W2), otherwise false │
│ (W1 implies W2) │ false if I(W1)=true and I(W2)=false, otherwise true │
│ (not W) │ true if I(W)=false, otherwise false │

If B is a mapping from a set W of variables into ID, then define [I+B] to be the interpretation which is like I except that [I+B](?v)=B(?v) for any variable ?v in W.

│ if E is: │ then I(E) is: │
│ (forall (?v1 ... ?vn) W) │ false if [I+B](W)=false for some mapping B from {?v1,...,?vn} into ID, otherwise true │
│ (exists (?v1 ... ?vn) W) │ true if [I+B](W)=true for some mapping B from {?v1,...,?vn} into ID, otherwise false │

Finally, a knowledge base is considered to be true if and only if all its elements are true, i.e. it is treated as the conjunction of its elements. Intuitively, the meaning of an expression containing free variables is not well specified (it is formally specified, but the interpretation of the free variables is arbitrary). To resolve any confusion, we impose a familiar convention by which any free variables in a sentence of a knowledge base are considered to be universally quantified at the top level of the expression in which they occur.
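These recursive truth rules can be run directly as a miniature model checker over a finite interpretation. The sketch below is illustrative only: formulas are encoded as nested tuples (an encoding we assume for convenience), and function terms and special names are omitted for brevity.

```python
from itertools import product

def holds(wff, domain, IS, IEXT, B=None):
    """Evaluate an L[base] formula in a finite interpretation.
    IS maps names to domain elements; IEXT maps relation-denoting
    elements to sets of tuples; B binds variables (the [I+B] device)."""
    B = B or {}
    def val(t):                          # I(t) for a name or variable
        return B.get(t, IS.get(t, t))
    op = wff[0]
    if op == 'not':
        return not holds(wff[1], domain, IS, IEXT, B)
    if op in ('and', 'or'):
        parts = [holds(w, domain, IS, IEXT, B) for w in wff[1:]]
        return all(parts) if op == 'and' else any(parts)
    if op == 'implies':
        return (not holds(wff[1], domain, IS, IEXT, B)
                or holds(wff[2], domain, IS, IEXT, B))
    if op == 'iff':
        return (holds(wff[1], domain, IS, IEXT, B)
                == holds(wff[2], domain, IS, IEXT, B))
    if op in ('forall', 'exists'):
        variables, body = wff[1], wff[2]
        test = all if op == 'forall' else any
        return test(holds(body, domain, IS, IEXT,
                          {**B, **dict(zip(variables, a))})
                    for a in product(sorted(domain), repeat=len(variables)))
    if op == '=':
        return val(wff[1]) == val(wff[2])
    # atomic formula R(t1,...,tn); R itself may be a variable
    return tuple(val(t) for t in wff[1:]) in IEXT.get(val(op), set())

domain = {'a', 'b'}
IS = {'a': 'a', 'b': 'b', 'R': 'R'}
IEXT = {'R': {('a',), ('b',)}}
assert holds(('forall', ('?x',), ('R', '?x')), domain, IS, IEXT)
assert not holds(('exists', ('?x',), ('not', ('R', '?x'))), domain, IS, IEXT)
```

Note how the quantifier clause implements [I+B] literally: the binding dictionary is extended for each candidate assignment, and `all`/`any` reproduce the table's false-for-some / true-for-some conditions.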
(Equivalently, one could insist that all variables in any knowledge-base expression be bound by a quantifier in that expression; this would force the implicit quantification to be made explicit.)

These definitions are quite conventional. The only unusual features are the incorporation of special-name values into the domain, the use of an explicit extension mapping, the fact that relations are not required to have a fixed arity, and the description of functions as a class of relations. The explicit extension mapping is a technical device to allow relations to be applied to other relations without going outside first-order expressivity. We note that while this allows the same name to be used in both an individual and a relation position, and in a sense gives relations (and hence functions) a 'first-class' status, it does not incorporate any comprehension principles or make any logical assumptions about what relations are in the domain. Notice that no special semantic conditions were invoked to treat variables in relation position differently from other variables. In particular, the language makes no comprehension assumptions whatever. The resulting language is first-order in all the usual senses: it is compact and satisfies the downward Löwenheim-Skolem property, for example, and the usual machine-oriented inference processes still apply, in particular the unification algorithm. (One can obtain a translation into a more conventional syntax by re-writing every atomic sentence using a rule of the form R(t1,...,tn) => Holds(R, t1,...,tn), where 'Holds' is a 'dummy' relation indicating that the relation R is true of the remaining arguments. The presentation given here eliminates the need for this artificial translation, but its existence establishes the first-order properties of the language. To translate a conventional first-order syntax into the L[base] form, simply qualify all quantifiers to range only over non-Relations. The issue is further discussed in (Hayes & Menzel ref).
) Allowing relations with no fixed arity is a technical convenience which allows L[base] to accept more natural translations from some SWELs. It makes no significant difference to the metatheory of the formalism compared to a fixed-arity syntax where each relation has a given arity. Treating functions as a particular kind of relation allows us to use a function symbol in a relation position (albeit with a fixed arity, which is one more than its arity as a function); this enables some of the translations to be specified more efficiently. As noted earlier, incorporating special name interpretations (in particular, integers) into the domain takes L[base] outside strict first-order compliance, but these domains have natural recursive definitions and are in common use throughout computer science. Mechanical inference systems typically have special-purpose reasoners which can effectively test for satisfiability in these domains. Notice that the incorporation of these special domains into an interpretation does not automatically incorporate all truths of a full theory of such structures into L[base]; for example, the presence of the integers in the semantic domain does not in itself require all truths of arithmetic to be valid or provable in L[base].

2.4 Axiom schemes

An axiom scheme stands for an infinite set of L[base] sentences all having a similar 'form'. We will allow schemes which are like L[base] formulae except that expressions of the form "<exp1>...<expn>", i.e. two expressions of the same syntactic category separated by three dots, can be used. Such a schema is intended to stand for the infinite knowledge base containing all the L[base] formulae obtained by substituting some actual sequence of appropriate expressions (terms or variables or formulae) for the expression shown; these are called the L[base] instances of the scheme. (We have in fact been using this convention already, but informally; now we are making it formal.)
For example, the following is an L[base] scheme:

(forall (?v1...?vn)(R(?v1...?vn) implies Q(a, ?v1...?vn)))

- where the expression after the first quantifier is an actual scheme expression, not a conventional abbreviation - which has the following L[base] instances, among others:

(forall (?x)(R(?x) implies Q(a, ?x)))
(forall (?y ?yy ?z)(R(?y, ?yy, ?z) implies Q(a, ?y, ?yy, ?z)))

Axiom schemes do not take the language beyond first-order, since all the instances are first-order sentences and the language is compact, so if any L[base] sentence follows from (the infinite set of instances of) an axiom scheme, then it must in fact be entailed by some finite set of instances of that scheme. We note that L[base] schemes should be understood only as syntactic abbreviations for (infinite) sets of L[base] sentences when stating translation rules and specifying axiom sets. Since all L[base] expressions are required to be finite, one should not think of L[base] schemes as themselves being sentences; for example as making assertions, as being instances or subexpressions of L[base] sentences, or as being posed as theorems to be proved. Such usages would go beyond the first-order L[base] framework. (They amount to a convention for using infinitary logic: see [Hayes & Menzel] for details.) This kind of restricted use of 'axiom schemes' is familiar in many textbook presentations of logic.

Following conventional terminology, we say that I satisfies E if I(E)=true, and that a set S of expressions entails E if every interpretation which satisfies every member of S also satisfies E. If the set S contains schemes, they are understood to stand for the infinite sets of all their instances. Entailment is the key idea which connects model-theoretic semantics to real-world applications. As noted earlier, making an assertion amounts to claiming that the world is an interpretation which assigns the value true to the assertion.
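The expansion of such a scheme into its instances is entirely mechanical; the sketch below enumerates one instance per arity for the example scheme above. The particular variable names are an arbitrary choice, as any sequence of distinct variables yields an instance.

```python
def scheme_instances(max_n):
    """Yield instances of the scheme
    (forall (?v1...?vn)(R(?v1...?vn) implies Q(a, ?v1...?vn)))
    for n = 1 .. max_n, as L[base] sentence strings."""
    for n in range(1, max_n + 1):
        qvars = ' '.join('?v%d' % i for i in range(1, n + 1))
        args = ', '.join('?v%d' % i for i in range(1, n + 1))
        yield '(forall (%s)(R(%s) implies Q(a, %s)))' % (qvars, args, args)

assert next(scheme_instances(1)) == '(forall (?v1)(R(?v1) implies Q(a, ?v1)))'
```

Each yielded string is an ordinary finite L[base] sentence; the generator itself plays the role of the scheme, which is never a sentence of the language.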
If A entails B, then any interpretation that makes A true also makes B true, so that an assertion of A already contains the same "meaning" as an assertion of B; we could say that the meaning of B is somehow contained in, or subsumed by, that of A. If A and B entail each other, then they both "mean" the same thing, in the sense that asserting either of them makes the same claim about the world. The interest of this observation arises most vividly when A and B are different expressions, since then the relation of entailment is exactly the appropriate semantic licence to justify an application inferring or generating one of them from the other. Through the notions of satisfaction, entailment and validity, formal semantics gives a rigorous definition to a notion of "meaning" that can be related directly to computable methods of determining whether or not meaning is preserved by some transformation on a representation of knowledge. Any process or technique which constructs a well formed formula F[output] from some other F[input] is said to be valid if F[input] entails F[output], otherwise invalid. Note that being an invalid process does not mean that the conclusion is false, and being valid does not guarantee truth. However, validity represents the best guarantee that any assertional language can offer: if given true inputs, it will never draw a false conclusion from them.

Imagine we have a Semantic Web Language L[i]. To provide a semantics for L[i] using L[base], we must provide:

1. a procedure for translating expressions in L[i] to expressions in L[base]. This process will also consequently define the subset of L[base] that is used by L[i].
2. a set of vocabulary items introduced by L[i].
3. a set of axioms and/or axiom schemas (expressed in L[base] or L[base] schema) that capture the intended meanings of the terms in (2).

Given a set of expressions G in L[i], we apply the procedure above to obtain a set of equivalent well formed formulae in L[base].
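In the propositional case, the definitions of entailment and validity given above can be checked by exhaustive enumeration of truth assignments. The following sketch assumes a toy encoding of formulas as Python callables over an assignment dictionary; it is an illustration of the definition, not part of the L[base] machinery.

```python
from itertools import product

def entails(premises, conclusion, atoms):
    """S entails E iff every interpretation satisfying every member of
    S also satisfies E. Here interpretations are just truth assignments
    to atomic propositions, so the check is a finite brute-force sweep."""
    for values in product([True, False], repeat=len(atoms)):
        v = dict(zip(atoms, values))
        if all(p(v) for p in premises) and not conclusion(v):
            return False    # a counter-model: premises true, conclusion false
    return True

A = lambda v: v['A']
B = lambda v: v['B']
A_implies_B = lambda v: (not v['A']) or v['B']

assert entails([A, A_implies_B], B, ['A', 'B'])   # modus ponens is valid
assert not entails([A_implies_B], A, ['A', 'B'])  # this inference is invalid
```

A process computing `B` from `A` and `A_implies_B` is valid in exactly the sense defined above: given true inputs it can never produce a false conclusion.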
We then conjoin these with the axioms associated with the vocabulary introduced by L[i] (and any other language upon which L[i] is layered). If there are associated axiom schemata, we appropriately instantiate these and conjoin them to these axioms. The resulting set, referred to as A(G), is an axiomatic equivalent of G.

There are several different 'styles' one could adopt for writing axiomatic equivalents. The most conservative amounts to simply transliterating the basic vocabulary of the SWEL into L[base] syntactic form, then relying on the axioms to determine their meaning. In cases where the axioms amount to an 'iff' definition of the vocabulary item, however, this could be shortened by translating the SWEL vocabulary into the defined form directly, resulting in a simpler translation. For example, in giving an axiomatic equivalent for OWL-DL, the meaning of rdfs:subClassOf can be captured adequately by translating it directly into the form of a logical implication:

aaa rdfs:subClassOf bbb =(translates into)=> (forall (?x) (aaa(?x) implies bbb(?x)))

This direct translation removes 'rdfs:subClassOf' from the axiomatic equivalent altogether, however, so it makes it impossible to express other RDFS truths about the rdfs:subClassOf property. This would be acceptable if we were concerned only with OWL-DL, which imposes a syntactic restriction which forbids such propositions; but it is not acceptable when we wish to relate different SWELs to one another, which is the primary goal here. We therefore focus on the 'conservative' style of translation, where the burden of expressing the meaning of the SWEL vocabulary falls largely on the axioms.

As an illustrative example, we give in the following table a sketch of the axiomatic equivalent for RDF graphs using the RDF(S) and OWL vocabularies, in the form of a translation from N-triples. Note that this should not be taken as an accurate or normative semantic description.
│ RDF expression E │ L[base] expression TR[E] │
│ a plain literal "sss" │ 'sss', with any internal occurrences of ''' prefixed with '\' │
│ a plain literal "sss"@ttt │ the term pair('sss','ttt') │
│ a typed literal "sss"^^ddd │ the term LiteralValueOf('sss',TR[ddd]) │
│ an RDF container membership property name of the form rdf:_nnn │ rdf-member(nnn) │
│ any other URI reference aaa │ aaa or <aaa> │
│ a blank node │ a variable (one distinct variable per blank node) │
│ a triple aaa rdf:type bbb . │ TR[bbb](TR[aaa]) and rdfs:Class(TR[bbb]) │
│ any other triple aaa bbb ccc . │ TR[bbb](TR[aaa] TR[ccc]) and rdf:Property(TR[bbb]) │
│ an RDF graph │ the existential closure of the conjunction of the translations of all the triples in the graph │
│ a set of RDF graphs │ the conjunction of the translations of all the graphs │

RDF axioms

rdf:type(?x ?y) implies ?y(?x)

rdf:Property(rdf:type)
rdf:Property(rdf:subject)
rdf:Property(rdf:predicate)
rdf:Property(rdf:object)
rdf:Property(rdf:first)
rdf:Property(rdf:rest)
rdf:Property(rdf:value)
rdf:List(rdf:nil)

NatNumber(?x) implies rdf:Property(rdf-member(?x))

pair(?x ?y) = pair(?u ?v) iff (?x=?u and ?y=?v) ;; uniqueness for pairs, required by graph syntax rules.

RDFS axioms

rdfs:Resource(?x)

rdfs:Class(?y) implies (?y(?x) iff rdf:type(?x ?y))

rdfs:range(?x ?y) implies (?x(?u ?v) implies ?y(?v))

rdfs:domain(?x ?y) implies (?x(?u ?v) implies ?y(?u))

rdfs:subClassOf(?x ?y) implies
(rdfs:Class(?x) and rdfs:Class(?y) and (forall (?u)(?x(?u) implies ?y(?u))))

rdfs:Class(?x) implies (rdfs:subClassOf(?x ?x) and rdfs:subClassOf(?x rdfs:Resource))

(rdfs:subClassOf(?x ?y) and rdfs:subClassOf(?y ?z)) implies rdfs:subClassOf(?x ?z)

rdfs:subPropertyOf(?x ?y) implies
(rdf:Property(?x) and rdf:Property(?y) and (forall (?u ?v)(?x(?u ?v) implies ?y(?u ?v))))

rdf:Property(?x) implies rdfs:subPropertyOf(?x ?x)

(rdfs:subPropertyOf(?x ?y) and rdfs:subPropertyOf(?y ?z)) implies rdfs:subPropertyOf(?x ?z)

rdfs:ContainerMembershipProperty(?x) implies rdfs:subPropertyOf(?x rdfs:member)

rdf:XMLLiteral(?x) implies rdfs:Literal(?x)

String(?y) implies rdfs:Literal(?y)

(String(?x) and LanguageTag(?y)) implies rdfs:Literal(pair(?x ?y))

rdfs:Datatype(?x) implies (?x(?y) implies rdfs:Literal(?y))

NatNumber(?x) implies
(rdfs:ContainerMembershipProperty(rdf-member(?x)) and
rdfs:domain(rdf-member(?x) rdfs:Resource) and
rdfs:range(rdf-member(?x) rdfs:Resource))

rdfs:Class(rdfs:Resource)
rdfs:Class(rdf:Property)
rdfs:Class(rdfs:Class)
rdfs:Class(rdfs:Datatype)
rdfs:Class(rdf:Seq)
rdfs:Class(rdf:Bag)
rdfs:Class(rdf:Alt)
rdfs:Class(rdfs:Container)
rdfs:Class(rdf:List)
rdfs:Class(rdfs:ContainerMembershipProperty)
rdfs:Class(rdf:Statement)
rdf:Property(rdfs:domain)
rdf:Property(rdfs:range)
rdf:Property(rdfs:subClassOf)
rdf:Property(rdfs:subPropertyOf)
rdf:Property(rdfs:comment)
rdf:Property(rdfs:seeAlso)
rdf:Property(rdfs:isDefinedBy)
rdf:Property(rdfs:label)

;; the rest of the axioms are direct transcriptions of the RDFS axiomatic triples, using the RDF to L[base] transcription rules:

rdfs:domain(rdf:type rdfs:Resource)
rdfs:domain(rdfs:domain rdf:Property)
rdfs:domain(rdfs:range rdf:Property)
rdfs:domain(rdfs:subPropertyOf rdf:Property)
rdfs:domain(rdfs:subClassOf rdfs:Class)
rdfs:domain(rdf:subject rdf:Statement)
rdfs:domain(rdf:predicate rdf:Statement)
rdfs:domain(rdf:object rdf:Statement)
rdfs:domain(rdfs:member rdfs:Resource)
rdfs:domain(rdf:first rdf:List)
rdfs:domain(rdf:rest rdf:List)
rdfs:domain(rdfs:seeAlso rdfs:Resource)
rdfs:domain(rdfs:isDefinedBy rdfs:Resource)
rdfs:domain(rdfs:comment rdfs:Resource)
rdfs:domain(rdfs:label rdfs:Resource)
rdfs:domain(rdf:value rdfs:Resource)

rdfs:range(rdf:type rdfs:Class)
rdfs:range(rdfs:domain rdfs:Class)
rdfs:range(rdfs:range rdfs:Class)
rdfs:range(rdfs:subPropertyOf rdf:Property)
rdfs:range(rdfs:subClassOf rdfs:Class)
rdfs:range(rdf:subject rdfs:Resource)
rdfs:range(rdf:predicate rdfs:Resource)
rdfs:range(rdf:object rdfs:Resource)
rdfs:range(rdfs:member rdfs:Resource)
rdfs:range(rdf:first rdfs:Resource)
rdfs:range(rdf:rest rdf:List)
rdfs:range(rdfs:seeAlso rdfs:Resource)
rdfs:range(rdfs:isDefinedBy rdfs:Resource)
rdfs:range(rdfs:comment rdfs:Literal)
rdfs:range(rdfs:label rdfs:Literal)
rdfs:range(rdf:value rdfs:Resource)

rdfs:subClassOf(rdf:Alt rdfs:Container)
rdfs:subClassOf(rdf:Bag rdfs:Container)
rdfs:subClassOf(rdf:Seq rdfs:Container)
rdfs:subClassOf(rdfs:ContainerMembershipProperty rdf:Property)

rdfs:subPropertyOf(rdfs:isDefinedBy rdfs:seeAlso)

rdfs:Datatype(rdf:XMLLiteral)
rdfs:subClassOf(rdfs:Datatype rdfs:Class)

RDF Datatyped Literal axioms

rdfs:Literal(LiteralValueOf(?x ?y)) iff ?y(LiteralValueOf(?x ?y))
rdfs:Datatype(?y) implies rdfs:Class(?y)
rdfs:Datatype(?y) implies (exists (?x) ?y(?x))

In addition, for each datatype named ddd, one needs a datatype theory consisting of all axioms of the following form, or the equivalent:

rdfs:Datatype(ddd)
ddd(LiteralValueOf('aaa' ddd)) where aaa is a legal lexical form for the datatype
not ddd(LiteralValueOf('aaa' ddd)) where aaa is any string which is not a legal lexical form for the datatype.

If there is some notational framework in (or added to) L[base] which enables one to write terms denoting the members of the value space of the datatype, then the datatype theory can also contain all true axioms of the form LiteralValueOf('aaa' ddd) = [L2V(ddd,aaa)], where the square brackets indicate the presence of the appropriate term for that value.
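Two of the RDFS axioms above — subClassOf transitivity, and class membership via rdf:type — can be exercised with a naive forward chainer. This is a toy sketch over pairs of names, not an RDF implementation; the entity names are made up:

```python
def closure(sub_class_of, types):
    """Saturate subClassOf transitivity and propagate types up the hierarchy."""
    sub = set(sub_class_of)
    typ = set(types)
    changed = True
    while changed:
        changed = False
        # (subClassOf(?x ?y) and subClassOf(?y ?z)) implies subClassOf(?x ?z)
        for (a, b) in list(sub):
            for (c, d) in list(sub):
                if b == c and (a, d) not in sub:
                    sub.add((a, d)); changed = True
        # subClassOf(?x ?y): every instance of ?x is an instance of ?y
        for (x, cls) in list(typ):
            for (a, b) in list(sub):
                if cls == a and (x, b) not in typ:
                    typ.add((x, b)); changed = True
    return sub, typ

sub, typ = closure({("Dog", "Mammal"), ("Mammal", "Animal")},
                   {("rex", "Dog")})
print(("Dog", "Animal") in sub)   # True
print(("rex", "Animal") in typ)   # True
```

A real reasoner would index these joins rather than loop over all pairs, but the fixpoint idea is the same.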
For example, using decimal numerals to denote the integers, this could be all equations of the form

LiteralValueOf('345' xsd:integer) = 345

Such axioms, or equivalents, would be needed in order to connect the translation to other theories which use the more conventional notations. In some cases, a datatype theory can be summarized in a finite number of axioms. For example, the datatype theory for xsd:string can be stated by a single axiom:

(String(?x) iff xsd:string(?x)) and (String(?x) implies LiteralValueOf(?x xsd:string) = ?x)

3.1 Relation between the two kinds of semantics

Given a SWEL L[i], we can provide a semantics for it either by providing it with a model theory of its own or by mapping it into L[base] and utilizing the model theory associated with L[base]. Given a set of expressions G in L[i] and its axiomatic equivalent A(G) in L[base], any L[base] interpretation of A(G) defines an L[i] interpretation for G. The natural L[i] interpretation from its own model theory will in general be simpler than the L[base] interpretation: for example, interpretations of RDF will not make use of the universal quantification, negation or disjunction rules, and the underlying structures need have no functional relations. In general, therefore, the most 'natural' semantics for L[i] will be obtained by simply ignoring some aspects of the L[base] interpretation of A(G). (In category-theoretic terms, it will be the result of applying an appropriate forgetful functor to the L[base] structure.) Nevertheless, this extra structure is harmless, since it does not affect entailment in L[i] considered in isolation; and it may be useful, since it provides a natural way to define consistency across several SWELs at the same time, and to define entailment from KBs which express content in different, or even in mixed, SWELs simultaneously.
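A datatype theory is, operationally, an L2V (lexical-to-value) map. The sketch below covers only xsd:integer and xsd:string under the simplifying assumption that illegal lexical forms raise an error — which mirrors the "not ddd(...)" axioms above; the function names are illustrative:

```python
# L2V: lexical-to-value maps for two datatypes (illustrative names).
L2V = {
    "xsd:integer": lambda s: int(s),  # raises ValueError on illegal lexical forms
    "xsd:string": lambda s: s,        # identity: value space = lexical space
}

def literal_value_of(lexical, datatype):
    """Evaluate LiteralValueOf('lexical' datatype) to a value."""
    return L2V[datatype](lexical)

print(literal_value_of("345", "xsd:integer"))   # 345
print(literal_value_of("abc", "xsd:string"))    # abc
```

The single xsd:string axiom in the text corresponds to the identity map here: every string is its own value.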
For these reasons we propose to adopt it as a convention that the appropriate notion of satisfaction for any SWEL expression G is in fact defined relative to an L[base] interpretation of A(G). The following diagram illustrates the relation between L[i], L[base], G and interpretations of G according to the different model theories. The important point to note about the above diagram is that if the L[i]-to-L[base] mapping and the model theory for L[i] are done consistently, then the two 'routes' from G to a satisfying interpretation will be equivalent. This is because the L[i] axioms included in the L[base] equivalent of G should be sufficient to guarantee that any satisfying interpretation in the L[base] model theory of the L[base] equivalent of G will contain a substructure which is a satisfying interpretation of G according to the L[i] model theory, and vice versa. The utility of this framework for combining assertions in several different SWELs is illustrated by the following diagram, which is an 'overlay' of two copies of the previous diagram. Note that the G1+G2 equivalent in this case contains axioms for both languages, ensuring (if all is done properly) that any L[base] interpretation will contain appropriate substructures for both languages. If the translations into L[base] are appropriately defined at a sufficient level of detail, then even tighter semantic integration could be achieved, where expressions which 'mix' vocabulary from several SWELs could be given a coherent interpretation which satisfies the semantic conditions of both languages. This will be possible only when the SWELs have a particularly close relationship, however. In the particular case where one SWEL (the one used by G2) is layered on top of another (the one used by G1), the interpretations of G2 will be a subset of those of G1.

The L[base] described above has several deficiencies as a base system for the Semantic Web. In particular,

• It does not capture the social meaning of URIs.
It merely treats them as opaque symbols. A future web logic should go further towards capturing this intention.

• At the moment, L[base] does not provide any facilities related to the representation of time and change. However, many existing techniques for temporal representation use languages similar to L[base] in expressive power, and we are optimistic that L[base] can provide a useful framework in which to experiment with temporal ontologies for Web use.

• It might turn out that some aspects of what we want to represent on the Semantic Web require more than can be expressed using the L[base] described in this document. In particular, L[base] does not provide a mechanism for expressing propositional attitudes or true second-order constructs. A future version of L[base], which includes the above L[base] as a proper subset, might have to include such facilities.

We would like to thank members of the RDF Core working group, Tim Berners-Lee, Richard Fikes, Sandro Hawke, Jim Hendler and Peter Patel-Schneider for comments on various versions of this document.

A Mathematical Introduction to Logic, H. B. Enderton, 2nd edition, 2001, Harcourt/Academic Press.

[Fikes & McGuinness] R. Fikes, D. L. McGuinness, An Axiomatic Semantics for RDF, RDF Schema, and DAML+OIL, KSL Technical Report KSL-01-01, 2001.

[Hayes & Menzel] P. Hayes, C. Menzel, A Semantics for the Knowledge Interchange Format, 6 August 2001 (Proceedings of the 2001 Workshop on the IEEE Standard Upper Ontology).

Web Ontology Language (OWL) Reference Version 1.0, Mike Dean, Dan Connolly, Frank van Harmelen, James Hendler, Ian Horrocks, Deborah L. McGuinness, Peter F. Patel-Schneider, and Lynn Andrea Stein. W3C Working Draft 12 November 2002. This version is http://www.w3.org/TR/2002/WD-owl-ref-20021112/. The latest version is available at http://www.w3.org/TR/owl-ref/.

[Marchiori & Saarela] M. Marchiori, J.
Saarela, Query + Metadata + Logic = Metalog, 1998.

Resource Description Framework (RDF): Concepts and Abstract Syntax, Klyne G., Carroll J. (Editors), World Wide Web Consortium Working Draft, 10 October 2003 (work in progress). This version is http://www.w3.org/TR/2003/WD-rdf-concepts-20031010/. The latest version is http://www.w3.org/TR/rdf-concepts/.

RDF/XML Syntax Specification (Revised), Beckett D. (Editor), World Wide Web Consortium Working Draft, 10 October 2003 (work in progress). This version is http://www.w3.org/TR/2003/WD-rdf-syntax-grammar-20031010/. The latest version is http://www.w3.org/TR/rdf-syntax-grammar/.

RDF Semantics, Hayes P. (Editor), World Wide Web Consortium Working Draft, 10 October 2003 (work in progress). This version is http://www.w3.org/TR/2003/WD-rdf-mt-20031010/. The latest version is

RDF Test Cases, Grant J., Beckett D. (Editors), World Wide Web Consortium Working Draft, 5 September 2003 (work in progress). This version is http://www.w3.org/TR/2003/WD-rdf-testcases-20031010/. The latest version is http://www.w3.org/TR/rdf-testcases/.

Resource Description Framework (RDF) Model and Syntax, W3C Recommendation, 22 February 1999.

RDF Primer, Manola F., Miller E. (Editors), World Wide Web Consortium Working Draft, 5 September 2003 (work in progress). This version is http://www.w3.org/TR/2003/WD-rdf-primer-20031010/. The latest version is http://www.w3.org/TR/rdf-primer/.

RDF Vocabulary Description Language 1.0: RDF Schema, Brickley D., Guha R.V. (Editors), World Wide Web Consortium Working Draft, 10 October 2003 (work in progress). This version is http://www.w3.org/TR/2003/WD-rdf-schema-20031010/. The latest version is http://www.w3.org/TR/rdf-schema/.

T. Berners-Lee, Fielding and Masinter, RFC 2396 - Uniform Resource Identifiers (URI): Generic Syntax, August 1998.

T. Bray, J. Paoli, C. M. Sperberg-McQueen, E. Maler.
Extensible Markup Language (XML) 1.0 (Second Edition), W3C Recommendation, 6 October 2000.

Since the version of 23 January, the definition of quoted strings has been modified to simplify character escaping; the syntax allowing names to be enclosed in < > has been introduced; and the 'XMLThing' category of special names has been deleted, as it was underspecified and not necessary. Several minor editorial changes have been made throughout the document (heading numbers corrected, etc.). The example translation of RDF/RDFS has been updated so as to conform to the description given in the RDF Semantics document, and the discussion of axiomatic equivalents has been expanded. Thanks to Peter Patel-Schneider for critical comments on the earlier version.
Developing Theories of Types and Computability via Realizability
PhD Thesis, L. Birkedal

We investigate the development of theories of types and computability via realizability. In the first part of the thesis, we suggest a general notion of realizability, based on weakly closed partial cartesian categories, which generalizes the usual notion of realizability over a partial combinatory algebra. We show how to construct categories of so-called assemblies and modest sets over any weakly closed partial cartesian category and that these categories of assemblies and modest sets model dependent predicate logic, that is, first-order logic over dependent type theory. We further characterize when a weakly closed partial cartesian category gives rise to a topos. Scott's category of equilogical spaces arises as a special case of our notion of realizability, namely as modest sets over the category of algebraic lattices. Thus, as a consequence, we conclude that the category of equilogical spaces models dependent predicate logic; we include a concrete description of this model. In the second part of the thesis, we study a notion of relative computability, which allows one to consider computable operations operating on not necessarily computable data. Given a partial combinatory algebra A, which we think of as continuous realizers, with a subalgebra A#, which we think of as computable realizers, there results a realizability topos RT(A,A#), which one intuitively can think of as having ``continuous objects and computable morphisms''. We study the relationship between this topos and the standard realizability toposes RT(A) and RT(A#) over A and A#. In particular, we show that there is a localic local map of toposes from RT(A,A#) to RT(A#). To obtain a better understanding of the relationship between the internal logics of RT(A,A#) and RT(A#), we then provide a complete axiomatization of arbitrary local maps of toposes, a new result in topos theory.
Based on this axiomatization we investigate the relationship between the internal logics of two toposes connected via a local map. Moreover, we suggest a modal logic for local maps. Returning to the realizability models we show in particular that the modal logic for local maps in the case of RT(A,A#) and RT(A#) can be seen as a _modal logic for computability_. Moreover, we characterize some interesting subcategories of RT(A,A#) (in much the same way as assemblies and modest sets are characterized in standard realizability toposes) and show the validity of some logical principles in RT(A,A#).
Converse of Parallel Lines Theorem - Problem 2

If we apply the converse of the corresponding angles theorem for parallel lines, we can determine what y needs to be for these lines to be parallel. If we set these two angle expressions equal to each other, which would mean that they're congruent, then we can conclude that these two lines must be parallel. So let's do that. Let's say 110 minus y must equal its corresponding angle, which is 120 minus 3y. We've got some negative variables here, so to make them positive I'm going to add 3y to both sides. So you've got 110 plus 2y is equal to 120, so if I subtract 110 from both sides, we find that 2y is equal to 10, which means y must be 5. So for what value of y? y must be 5. If you're interested, 110 minus 5 would mean that this angle would be 105 degrees, and since these two are congruent, that means that this angle as well needs to be 105 degrees. Since they are corresponding and congruent, these two lines must be parallel.
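The algebra above can be checked in a couple of lines:

```python
# 110 - y = 120 - 3y  =>  2y = 10  =>  y = 5
y = (120 - 110) / 2
angle1 = 110 - y        # first corresponding angle
angle2 = 120 - 3 * y    # second corresponding angle
print(y, angle1, angle2)   # 5.0 105.0 105.0
```

Both corresponding angles come out to 105 degrees, so by the converse theorem the lines are parallel.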
Backpropagating modes of surface polaritons on a cross-negative interface

Optics Express, Vol. 13, Issue 2, pp. 417-427 (2005)

We show that backpropagating modes of surface polaritons can exist at the interface between two semi-infinite cross-negative media, one with negative permittivity (ε<0) and the other with negative permeability (µ<0). These single-interface modes that propagate along the surface of a cross-negative interface are physically of interest, since the single-negative requirements imposed on the material parameters can easily be achieved at terahertz and potentially optical frequencies by scaling the dimension of artificially structured planar materials. Conditions for material parameters that support a backpropagating mode of the surface polaritons are obtained by considering the dispersion relation and the energy flow density transported by surface polaritons, and are confirmed numerically by simulation of surface polariton propagation resonantly excited at a cross-negative interface by attenuated total reflection.

© 2005 Optical Society of America

1. Introduction

Artificial materials with a magnetic response at terahertz (THz) frequencies, recently demonstrated in planar structures [1. T. J. Yen, W. J. Padilla, N. Fang, D. C. Vier, D. R. Smith, J. B. Pendry, D. N. Basov, and X. Zhang, "Terahertz magnetic response from artificial materials," Science 303, 1494–1496 (2004). 2. S. Linden, C. Enkrich, M. Wegener, J. Zhou, T. Koschny, and C. M. Soukoulis, "Magnetic response of metamaterials at 100 terahertz," Science 306, 1351–1353 (2004).], are now of great interest because of their potential to produce a double-negative metamaterial (ε<0 and µ<0 simultaneously) in the THz or potentially optical regimes.
Negative-index metamaterials, whose permittivity and permeability are simultaneously negative and which are not known to exist in natural materials, were made by composing an array of metallic wires and split-ring resonators (SRRs) [3. D. R. Smith, W. J. Padilla, D. C. Vier, S. C. Nemat-Nasser, and S. Schultz, "Composite medium with simultaneously negative permeability and permittivity," Phys. Rev. Lett. 84, 4184–4187 (2000). 4. R. A. Shelby, D. R. Smith, and S. Schultz, "Experimental verification of a negative index of refraction," Science 292, 77–79 (2001).]. The negative refraction of electromagnetic waves was observed in a microwave transmission experiment, as had been predicted theoretically by Veselago in his 1968 pioneering paper [5. V. G. Veselago, "The electrodynamics of substances with simultaneously negative values of ε and µ," Sov. Phys. Usp. 10, 509–514 (1968).]. In addition to the negative refraction, some interesting phenomena such as reversed Doppler shift, reversed Cerenkov radiation, reversed radiation pressure, and imaging by a slab of a negative-index metamaterial have also been presented, all of which are direct results of the group-velocity inversion of electromagnetic waves that propagate in such media. On the other hand, artificially structured materials formed by an array of nonmagnetic conducting SRRs exhibit negative permeability. However, when these artificially structured materials are combined with plasmonic wires that exhibit negative permittivity, the SRRs and wires cannot be patterned on the same planar substrate. The main reason is that the negative magnetic response of an artificial planar structure, in which both the SRRs and the wires are assumed to be formed on the same layer, can be achieved only for magnetic fields with a varying flux that is normal to the SRR plane.
The square array of SRRs has a relative permeability of µ(ω) = 1 − Fω^2/(ω^2 − ω_0^2 + iΓω), where the strength (F), resonance frequency (ω_0), and lifetime (Γ) of the magnetic dipole resonance are defined mainly by the structure parameters of the SRRs, and the wire grids placed between the SRRs have a relative permittivity of ε(ω) = 1 − ω_p^2/ω^2, with the plasma frequency (ω_p) given by the width and radius of the wires [6. J. B. Pendry, A. J. Holden, D. J. Robbins, and W. J. Stewart, "Low frequency plasmons in thin-wire structures," J. Phys. Condens. Matter 10, 4785–4809 (1998). 7. J. B. Pendry, A. J. Holden, D. J. Robbins, and W. J. Stewart, "Magnetism from conductors and enhanced nonlinear phenomena," IEEE Trans. Microwave Theory Tech. 47, 2075–2084 (1999).]. To open a frequency range in which ε and µ are negative simultaneously, the planar structure must satisfy the necessary condition ω_p/ω_0 > 1. After further consideration, under the assumption that the period of the SRRs is 50 for a typical THz response, we can conclude that the gap size between the inner and the outer rings in the SRR must be of the order of nanometers [ ]. This requirement of a few-nanometer pattern in the artificial planar structures makes it necessary for us to choose another structure, such as a layered planar structure with alternating layers having a SRR array for µ<0 and wire grids for ε<0. It has also been reported that a periodic assembly of two alternating layers, one with negative permittivity and the other with negative permeability, can support backpropagating modes with an effectively negative index of refraction [9. D. R. Fredkin and A. Ron, "Effectively left-handed (negative index) composite material," Appl. Phys. Lett. 81, 1753–1755 (2002).]. The backpropagating modes in the layered structure are Bloch waves whose propagation directions are limited to the surface normal.
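The SRR and wire-grid responses take the usual Lorentz and Drude forms, so the overlap band where both are negative can be scanned numerically. The parameter values below are illustrative only (not from the paper); the band exists because ω_p/ω_0 > 1 here:

```python
# Scan frequencies for the band where Re(mu) < 0 and eps < 0 simultaneously.
# Illustrative parameters, in units of the magnetic resonance frequency w0.
F, w0, gamma, wp = 0.5, 1.0, 0.01, 2.0

def mu(w):   # SRR-type (Lorentz) magnetic response
    return 1 - F * w**2 / (w**2 - w0**2 + 1j * gamma * w)

def eps(w):  # wire-grid (Drude/plasma) electric response
    return 1 - wp**2 / w**2

band = [w / 1000 for w in range(500, 3000)
        if mu(w / 1000).real < 0 and eps(w / 1000) < 0]
print(band[0], band[-1])   # roughly w0 .. w0 / sqrt(1 - F) ~ 1.41
```

Narrowing the magnetic resonance (smaller gamma) or raising wp widens or shifts this double-negative band, which is the tuning knob the text attributes to the structure parameters.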
Therefore, such an artificial planar structure with SRRs might not be applicable to the system of alternating slabs, because the magnetic fields of the Bloch waves do not vary their flux normal to the SRR faces. We show the existence of backpropagating modes of surface polaritons that propagate along the single boundary surface of two layers having µ<0 and ε<0, respectively. The boundary surface, or cross-negative interface, can be formed in a layered structure composed of artificial magnetic (SRR) and electric (plasmonic wire) layers, since the surface polaritons that propagate on the cross-negative interface can have magnetic fields that oscillate normal to the surface of the SRR layer. We first find relative conditions for the material parameters of the two semi-infinite layers that support the backpropagating surface polaritons by considering the dispersion relation and the energy flow density transported by the surface polaritons. We then confirm our findings numerically by simulating backpropagation of the surface polaritons that are resonantly excited by attenuated total reflection (ATR) of a Gaussian beam incident on the cross-negative interface. The surface polaritons, defined by energy quanta of surface-localized oscillations of electric or magnetic dipoles, are substantially supported by negative values of the material parameters [11. J. Yoon, G. Lee, S. H. Song, C.-H. Oh, and P.-S. Kim, "Surface-plasmon photonic band gaps in dielectric gratings on a flat metal surface," J. Appl. Phys. 94, 123–129 (2002).]. The negative permittivity enables resonant excitation of the p-polarized surface electric polaritons (SEPs), which are surface-localized longitudinal oscillations of the electric dipoles, whereas the negative permeability makes possible the s-polarized surface magnetic polaritons (SMPs), which are surface-localized longitudinal oscillations of the magnetic dipoles.
The surface polaritons propagate along the boundary surface with their electric and magnetic fields localized and evanescent into both adjoining materials. Excitation of a single normal mode of the surface polaritons is influenced simultaneously by the material parameters of both media adjoined at the interface. Therefore, one can expect backpropagating modes of surface polaritons that have negative group velocity not only in double-negative media, but also at the cross-negative interface of two media in which only ε<0 holds in one medium and only µ<0 in the other. It is also known that, in thin-slab geometry such as a plasma or an ionic crystal [12. A. A. Oliner and T. Tamir, "Backward waves on isotropic plasma slabs," J. Appl. Phys. 33, 231–233 (1962). 13. K. L. Kliewer and R. Fuchs, "Optical modes of vibration in an ionic crystal slab including retardation. I. Nonradiative region," Phys. Rev. 144, 495–503 (1966).] and a metal-film optical waveguide [14. P. Tournois and V. Laude, "Negative group velocities in metal-film optical waveguides," Opt. Commun. 137, 41–45 (1997).], surface polaritons with negative group velocity can exist even when only the permittivity is negative (and the permeability is equal to one everywhere). In these cases only the SEP modes can be excited on both sides of the slab, and they must be coupled to each other to concentrate more energy into the slab. If the thickness of the slab increases, the coupled SEP modes with negative group velocities no longer exist.
In contrast, single-interface modes that are excited at single boundaries of two semi-infinite media with double negativity or cross negativity are distinguishable from the coupled modes in thin slabs for the following two reasons: SMP modes as well as SEP modes are allowed as single-interface modes with negative group velocities; and the frequency range of the backpropagating SMP modes can be tuned flexibly by, for example, varying the dimensions of the artificially structured materials [1, 5]. Therefore, the single-interface modes play an important role in the left-handed response of composite media at THz and even higher frequencies.

2. Generalized conditions of material parameters for surface polariton excitation

First we analyze all the possible combinations of the four material parameters that support either SEPs or SMPs at the planar boundary between two media. Consider a surface electromagnetic wave, φ(x,z) = A exp(iβx − γ_s|z|), localized near the interface (the z=0 plane) of two media whose relative material parameters are given by (ε_s, µ_s), where the medium index s has a value of 1 for z<0 or 2 for z>0. β is the propagation constant in the direction of the x axis and γ_s is the decay constant in each medium. The wave amplitude A represents the y component of the magnetic field or the electric field when the surface-localized electromagnetic wave is coupled to a SEP or a SMP, respectively.
By requiring that all the field components satisfy the Maxwell equations at all positions including the interface, we can obtain a relationship between the propagation and the decay constants of the SMP-coupled waves (SMP modes) and the SEP-coupled waves (SEP modes) in terms of the material parameters (ε_1, µ_1) and (ε_2, µ_2). For SMP modes, the boundary conditions give γ_1/µ_1 + γ_2/µ_2 = 0 together with γ_s^2 = β^2 − k_0^2 ε_sµ_s, which combine to

β_SMP = k_0 [µ_1µ_2(ε_1µ_2 − ε_2µ_1)/(µ_2^2 − µ_1^2)]^(1/2), γ_SMP,s = (β_SMP^2 − k_0^2 ε_sµ_s)^(1/2),

where k_0 is the vacuum wavenumber. The relative conditions of the four material parameters that support excitation of the SMP modes can be derived from Eqs. (1). As an additional step, assume that all the material parameters are complex valued but dominated by their real parts, such that ε_s = ε′_s(1 + iδ_ε,s) with |δ_ε,s| ≪ 1 and µ_s = µ′_s(1 + iδ_µ,s) with |δ_µ,s| ≪ 1, where all the symbols on the right-hand sides are real. When the propagation and decay constants are expanded in powers of the δ's and only terms up to first order are kept, each constant separates into a dominant real part and a small imaginary correction. The imaginary parts relative to the real parts are of the same order of magnitude as the δ's, because the first-order terms have numerators containing the real parts of the material parameters in the same combinations as their denominators, so that the overall factors are of the order of 1; therefore these corrections are negligible for the propagation and decay constants under the previous assumption on the imaginary parts of the material parameters. If β′_SMP is a real value, the propagation constant β_SMP has a dominant real part and the corresponding wave function expresses a propagating state with a relatively slowly decaying profile along the interface. Otherwise, if β′_SMP is an imaginary value, the wave hardly propagates and decays rapidly. Thus it is reasonable to regard a real-valued β′_SMP as a necessary condition, namely the in-plane propagation condition, for SMPs. Applying the same reasoning to the decay constants, a surface localization condition for SMPs is obtained: γ′_SMP,s must be a real value.
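The single-interface SMP dispersion can be checked numerically. The formulas below are the standard textbook relations for an s-polarized surface mode (an assumption, since the paper's numbered equations are not reproduced here), and the cross-negative parameter values are illustrative:

```python
import math

# Assumed standard SMP dispersion at a single interface:
#   beta^2  = k0^2 * mu1*mu2*(eps1*mu2 - eps2*mu1) / (mu2^2 - mu1^2)
#   gamma_s = sqrt(beta^2 - k0^2 * eps_s * mu_s)
# Cross-negative example: medium 1 has eps < 0, medium 2 has mu < 0.
k0 = 1.0
eps1, mu1 = -2.0, 1.0
eps2, mu2 = 0.5, -0.5

beta2 = k0**2 * mu1 * mu2 * (eps1 * mu2 - eps2 * mu1) / (mu2**2 - mu1**2)
beta = math.sqrt(beta2)                      # real: in-plane propagation holds
g1 = math.sqrt(beta2 - k0**2 * eps1 * mu1)   # positive: localized in medium 1
g2 = math.sqrt(beta2 - k0**2 * eps2 * mu2)   # positive: localized in medium 2

# The s-polarized boundary condition gamma1/mu1 + gamma2/mu2 = 0 must hold:
print(beta, g1 / mu1 + g2 / mu2)
```

Both decay constants come out real and positive while the boundary condition is satisfied, so this parameter set supports a bound SMP mode.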
To complete the surface localization condition, it is necessary for γ′_SMP,s to be positive for both media. This condition requires µ′_1µ′_2 < 0, when we approximate Eq. (3) as µ′_1γ′_SMP,2 = −µ′_2γ′_SMP,1. In summary, the in-plane propagation condition reduces to ε′_1µ′_2 > ε′_2µ′_1 when −1 < µ′_2/µ′_1 < 0, or ε′_1µ′_2 < ε′_2µ′_1 when µ′_2/µ′_1 < −1. And the surface localization condition for a SMP reduces to ε′_1µ′_1 < ε′_2µ′_2 when −1 < µ′_2/µ′_1 < 0, or ε′_1µ′_1 > ε′_2µ′_2 when µ′_2/µ′_1 < −1. For SEP modes the relative conditions for in-plane propagation and surface localization can be obtained simply by interchanging the positions of the relative permittivity and permeability with each other.

Figure 1 shows diagrams that visualize these relative conditions, with SMP modes in the blue region and SEP modes in the red region, for the four possible sign combinations of (ε′_1, µ′_1): (a) ε′_1 > 0, µ′_1 > 0; (b) ε′_1 < 0, µ′_1 > 0; (c) ε′_1 > 0, µ′_1 < 0; and (d) ε′_1 < 0, µ′_1 < 0. The normalized values (ε′_1/|ε′_1|, µ′_1/|µ′_1|) are marked by black dots and their complementary values by white dots. The dashed curves in Figs. 1(a) and (d) represent the critical boundaries ε′_2µ′_2 = ε′_1µ′_1 of the surface localization conditions; the dashed lines in Figs. 1(b) and (c) represent the boundaries ε′_2µ′_1 = ε′_1µ′_2 of the in-plane propagation conditions. The color densities in the figure indicate the normalized total energy flow densities of the SMP and SEP modes, defined below.

Some interesting properties of the surface polaritons can be observed in the diagrams of Fig. 1. Because the SEP and the SMP regions do not overlap, there are no possible combinations of the four material parameters that allow simultaneous excitation of the SEP and SMP modes.
Even if absorption losses are introduced by adding imaginary parts to the material parameters, overlapping areas would appear only near the critical boundaries, and the SEP and SMP modes excited in such overlap areas must be weakly localized, strongly damped modes. Each of the SEP and SMP regions consists of two parts separated by the white dot at (−ε′_1/|ε′_1|, −µ′_1/|µ′_1|). The physical implication of the two parts becomes obvious after evaluating the normalized total energy flow densities transported by the SEP and SMP modes,

P̄_a = (p_a,1 + p_a,2) / (|p_a,1| + |p_a,2|),

where the subscript a stands for SEP or SMP, ξ′_SEP,s ≡ ε′_s, and ξ′_SMP,s ≡ µ′_s. Here p_a,s is the energy flow density obtained by integrating the time-averaged Poynting vector S_a(z) over the surface-normal distance through medium s:

p_a,s = ∫_(medium s) S_a(z) dz.

From the localization condition ξ′_a,1ξ′_a,2 < 0, the flows p_a,1 and p_a,2 are always directed opposite to each other [16]. When the component flowing against the phase velocity dominates, the normalized total energy flow density is negative. Such a mode can be regarded as having a group velocity that is negative with respect to the phase velocity, because the Poynting vector direction always coincides with the group velocity for linear waves propagating in a homogeneous medium with arbitrary spatial and temporal dispersion [17, 18]. The calculated normalized total energy flow densities are depicted by the color density in Fig. 1. Note that the color scale bars indicate the normalized ranges 1 ≥ P̄_SEP ≥ −1 in red and 1 ≥ P̄_SMP ≥ −1 in blue.
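The antiparallel energy flows can be made explicit with a short Poynting-vector calculation. The sketch below treats the TM (SEP) case with real, lossless parameters under the field conventions assumed above (our derivation, not the paper's; the SMP case follows by interchanging ε and µ):

```latex
% TM fields: H = \hat{y}\, H_0\, e^{i\beta x - \gamma_s |z|}. Maxwell's equations give
% E_z = -\beta H_y / (\omega \varepsilon_0 \varepsilon_s), so the x-component of the
% time-averaged Poynting vector in medium s is
S_x(z) = \tfrac{1}{2}\,\mathrm{Re}\!\left[ -E_z H_y^* \right]
       = \frac{\beta\, |H_0|^2}{2 \omega \varepsilon_0 \varepsilon_s}\, e^{-2\gamma_s |z|} .
% Integrating over the half-space occupied by medium s:
p_s = \int_{\text{medium } s} S_x \, dz
    = \frac{\beta\, |H_0|^2}{4 \omega \varepsilon_0 \varepsilon_s \gamma_s} .
% Since \gamma_s > 0, the sign of p_s follows the sign of \varepsilon_s; the SEP
% localization condition \varepsilon'_1 \varepsilon'_2 < 0 therefore forces
% p_1 and p_2 to point in opposite directions along the interface.
```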
Surface polaritons excited in the regions where 0 > P̄ ≥ −1 (1 ≥ P̄ > 0) have a negative (positive) group velocity. These density plots clearly show that the regions allowing negative group velocity comprise not only the double-negative areas, in which either of the two media has ε′ < 0 and µ′ < 0 simultaneously, but also the cross-negative areas, in which ε′_2 > 0 and µ′_2 < 0 while ε′_1 < 0 and µ′_1 > 0, or vice versa, as indicated by the arrows in Figs. 1(b) and 1(c). Another interesting property of the surface polaritons is that the propagation and decay constants defined by Eqs. (4) are invariant when the signs of the real parts of all the material parameters are simultaneously inverted, i.e., from (ε′_1, µ′_1, ε′_2, µ′_2) to (−ε′_1, −µ′_1, −ε′_2, −µ′_2). Since the excitation conditions of surface polaritons were derived from kinematic consideration of the propagation and decay constants, it follows that if a SEP (SMP) can propagate on a boundary with material parameters (ε′_1, µ′_1, ε′_2, µ′_2), the inverted material parameters (−ε′_1, −µ′_1, −ε′_2, −µ′_2) also support a SEP (SMP) with the same propagation and decay constants as the original ones. For this reason Figs. 1(a) and 1(d), and likewise Figs. 1(b) and 1(c), show centrosymmetry with each other. However, there are differences in the field distributions between the two SEP (SMP) modes with (ε′_1, µ′_1, ε′_2, µ′_2) and (−ε′_1, −µ′_1, −ε′_2, −µ′_2). For example, the magnetic field of a SEP mode can be expressed as H = ê_y H_0 exp(iβx − γ_s|z|). According to the Maxwell equations, the corresponding electric field is (H_0/ωε_0ε_s)(±iγ_s ê_x − β ê_z) exp(iβx − γ_s|z|), where the +(−) sign is taken for z > 0 (z < 0). Inverting the signs of the material parameters does not alter the magnetic field, whereas the electric field acquires a phase shift of π because of the 1/ε_s factor. As a consequence, only the direction of the Poynting vector is reversed. This is also confirmed by comparison of the upper-left region with the lower-right region in Figs.
1(a) and 1(d), or of the upper-right region with the lower-left region in Figs. 1(b) and 1(c), where the distribution of the normalized total energy flow density is centrosymmetric but with opposite propagation directions. This dependence of the surface polariton modes on simultaneous sign inversion of the real parts of the material parameters can be explained more generally by the conjugation symmetry of the frequency-domain Maxwell equations [19]. In a source-free, isotropic system the frequency-domain Maxwell equations are invariant under the transformation

E(r,ω) → E*(r,ω),  H(r,ω) → H*(r,ω),  ε(r,ω) → −ε*(r,ω),  µ(r,ω) → −µ*(r,ω).

If E(r,ω) and H(r,ω) are solutions of the Maxwell equations in a system Σ with spatial distributions of the material parameters ε(r,ω) and µ(r,ω), then Ē(r,ω) = E*(r,ω) and H̄(r,ω) = H*(r,ω) are also exact solutions for the conjugate system Σ̄, whose relative permittivity and permeability are −ε*(r,ω) and −µ*(r,ω), respectively. Note that Σ̄ is obtained from Σ by sign inversion of the real parts of the material parameters. Note also that the time-averaged Poynting vector, given by Re[E(r,ω) × H*(r,ω)], is invariant under this transformation, so that the group velocity directions of the conjugate modes in Σ and Σ̄ coincide. The complex conjugation of the spatial field amplitudes, however, changes the signs of the wave vectors of the plane-wave components of which E(r,ω) and H(r,ω) are composed, resulting in inversion of the phase velocities.

3. Backpropagating modes of surface magnetic and surface electric polaritons on cross-negative interfaces

We now consider the characteristics of the surface polaritons excited in the cross-negative areas of Fig. 1(b). [Those of Fig. 1(c) are simply the symmetric cases.]
Suppose that the two adjoined media are a metal with (ε′_1 < 0, µ_1 = 1) and a metamaterial with (ε_2 = 1, µ′_2 < 0) over a frequency range below the plasma frequency. The material parameters ε_1 and µ_2 can be expressed in the forms

ε_1(Ω) = 1 − 1/[Ω(Ω + iΓ_1)],   µ_2(Ω) = 1 − FΩ²/(Ω² − Ω_0² + iΓ_2Ω),

where Ω is the frequency normalized by the plasma frequency ω_p of the metal. Here ε_1(Ω) is a plasmonic form of the Drude model, and µ_2(Ω) is that of an array of planar SRRs with a resonance frequency Ω_0 [7]; F is the area fraction of the internal opening of the SRR and is hereafter set to 0.5. To show schematically the frequency dependence of the excitation of the SMP and SEP modes, we take Γ_1 = 0 and Γ_2 = 0 in the approximation of small damping losses in the media. Figure 2(a) shows a parametric plot of (ε′_2/|ε′_1|, µ′_2/|µ′_1|) as a function of Ω for three different resonance frequencies Ω_0: 0.8 (> Ω_c), Ω_c, and 0.4 (< Ω_c). Only the frequency range 0 < Ω < 1 is considered, since there ε′_1 < 0. The positions marked by A's and B's represent the boundary frequencies Ω(A1) = 0.4, Ω(A2) = 0.4472, Ω(A3) = 0.4714, Ω(A4) = 1/√2, Ω(B1) = 1/√2, Ω(B2) = 0.8, Ω(B3) = 0.8528, and Ω(B4) = 0.8944. At the critical resonance frequency Ω_c (dash-dot curves) there is no surface polariton mode in the cross-negative areas (ε′_2/|ε′_1| > 0 and µ′_2/|µ′_1| < 0). When Ω_0 = 0.8 (curves with open circles), as an example of Ω_0 > Ω_c, a SEP(−) band that supports SEP modes with negative group velocity is located in the frequency range Ω(B2) < Ω < Ω(B3), whereas a SEP(+) band with positive group velocity lies in 0 < Ω < Ω(B1) and a SMP(+) band in Ω(B3) < Ω < Ω(B4). In contrast, when Ω_0 = 0.4 (curves with solid circles), a single SMP(−) band appears in Ω(A2) < Ω < Ω(A3), and two SEP(+) bands appear in 0 < Ω < Ω(A1) and Ω(A3) < Ω < Ω(A4), respectively.
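To make these model forms concrete, the sketch below evaluates the assumed lossless Drude and SRR expressions together with the TE surface-mode (SMP) dispersion relation at one sample frequency. The class and function names, the lossless model forms, and the dispersion formula are our assumptions for illustration only; the resulting numbers are not claimed to reproduce the B values quoted in the figures.

```java
// Sketch: lossless Drude metal and planar-SRR metamaterial (assumed forms),
// evaluated together with the TE surface-mode (SMP) dispersion relation.
public class SmpDispersionSketch {

    // Drude permittivity at normalized frequency w = omega / omega_p, losses dropped
    static double eps1(double w) {
        return 1.0 - 1.0 / (w * w);
    }

    // Planar-SRR permeability with resonance w0 and area fraction F, losses dropped
    static double mu2(double w, double w0, double F) {
        return 1.0 - F * w * w / (w * w - w0 * w0);
    }

    // (beta / k0)^2 for the TE surface mode between media (e1, m1) and (e2, m2)
    static double betaSq(double e1, double m1, double e2, double m2) {
        return m1 * m2 * (e1 * m2 - e2 * m1) / (m2 * m2 - m1 * m1);
    }

    public static void main(String[] args) {
        double w = 0.87, w0 = 0.8, F = 0.5;
        double e1 = eps1(w), m1 = 1.0;        // metal:        eps' < 0, mu = 1
        double e2 = 1.0, m2 = mu2(w, w0, F);  // metamaterial: eps = 1, mu' < 0
        System.out.printf("eps1 = %.4f, mu2 = %.4f, (beta/k0)^2 = %.4f%n",
                e1, m2, betaSq(e1, m1, e2, m2));
    }
}
```

For a frequency inside an SMP band the squared propagation constant comes out positive, i.e., an in-plane propagating surface wave.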
After further analysis we can finally conclude that, for the cross-negative areas of Fig. 1(b), the SEP(−) modes appear only when 1/√2 < Ω_0 < 1, within the corresponding band identified above, and the SMP(−) modes only when Ω_0 < 1/√2, within their band. The dispersion relations of the SMP modes in Eq. (1) are depicted by the open circles in Figs. 2(b) and 2(c), corresponding to the resonance frequencies Ω_0 = 0.4 and Ω_0 = 0.8, respectively. For comparison, those of the SEP modes are presented by solid circles, and those of the bulk modes propagating in the metamaterial by dashed curves. The negative slopes between Ω(A2) and Ω(A3) in Fig. 2(b) and between Ω(B2) and Ω(B3) in Fig. 2(c) clearly show the negative group velocities of the SMP(−) and SEP(−) modes, respectively.

4. Numerical demonstration of backpropagating modes on cross-negative interfaces

To demonstrate the backpropagation behavior of modes with opposite group velocities, we chose four points on the dispersion curves of Figs. 2(b) and 2(c): two typical SMP modes, at M1 (Ω=0.87, B=0.53) for SMP(−) and at M2 (Ω=0.87, B=0.53) for SMP(+), and two SEP modes, at E1 (Ω=0.50, B=0.35) for SEP(+) and at E2 (Ω=0.84, B=0.49) for SEP(−). Using an ATR configuration of four stacked layers (semi-infinite dielectric, metal, metamaterial, and semi-infinite air, as indicated in Fig. 3(a)), we evaluated the propagation characteristics of the SMP and SEP modes on the cross-negative interface between the metal (ε′ < 0, µ′ > 0) and the metamaterial (ε′ > 0, µ′ < 0) by using the plane-wave expansion method. A Gaussian beam with a finite waist, incident from the dielectric, is assumed to be TE polarized for the SMP modes and TM polarized for the SEP modes. The calculation results are shown in Fig. 3. In the calculations, small damping constants Γ_1 and Γ_2 are assumed, and the dielectric (ε = 2.25) and air (ε = 1.0) are semi-infinite. The thickness of the metamaterial is 10ƛ_p for all cases, where ƛ_p (= c/ω_p) is the reduced plasma wavelength.
The thicknesses of the metal layers are 2.7ƛ_p, 4ƛ_p, 3ƛ_p, and 3.25ƛ_p in Figs. 3(a), 3(b), 3(c), and 3(d), respectively, chosen to guarantee high coupling efficiency from the incident beam to the surface modes. It is apparent that the excited modes are not coupled modes but single-interface modes at the cross-negative (metal–metamaterial) interface, because no field enhancement is seen at the dielectric–metal or metamaterial–air interfaces in any of the cases. Figure 3(a) clearly shows the leftward propagation of the electric field (E_y) of the SMP(−) mode, as depicted by the dotted arrow near the cross-negative interface. The Gaussian beam incident from the dielectric is not only reflected at the dielectric–metal boundary but is also coupled resonantly to the SMP(−) mode near the cross-negative interface. The electric field of the SMP(−) mode is concentrated more in the metamaterial layer, with a group velocity antiparallel to its phase velocity. Evidence of the negative group velocity can also be found intuitively by observing the reemitted fields that radiate back into the dielectric medium: the reemitted fields just below the SMP(−) propagation region show wave fronts parallel to those of the reflected beam. We can therefore confirm that the phase velocity of the SMP(−) mode is positive in the x direction while its group velocity is negative. The SMP(+) mode at point M2, on the other hand, is stretched toward the metal layer, as shown in Fig. 3(b). It has a group velocity parallel to its phase velocity, which can again be checked from the wave fronts of the reemitted fields parallel to the reflected wave fronts. For the SEP modes depicted in Figs. 3(c) and 3(d), the SEP(+) mode has its magnetic field concentrated more in the metamaterial layer than in the metal layer, similar to the SMP(−) mode in Fig. 3(a), but it propagates rightward, i.e., forward. The reverse is shown for the SEP(−) mode in Fig.
3(d), compared with the SMP(+) mode in Fig. 3(b). Recalling that the energy flow density in each medium is inversely proportional to µ′_s (SMP modes) or ε′_s (SEP modes), as described in Eq. (10), these differences in field concentration are easily understood: backpropagating SMP or SEP modes must carry more of their energy in the diamagnetic layer or in the metal layer, respectively. One effect of material absorption on propagation loss is that backpropagating modes undergo the same losses as forward-propagating modes. The propagation length of a mode is inversely proportional to the imaginary parts of the material parameters of both media, as determined by the relative imaginary part of the propagation constant, Eq. (6). We introduce the effective number of spatial oscillations, defined in the same manner as the resonance quality factor for temporal oscillations. For the SMP(−) mode of Fig. 3(a), the assumed relative damping constants Γ_1 and Γ_2 give an effective oscillation number of 45.71. If we instead take damping constants ten times larger, the effective number drops to 4.56, a ten times smaller value, as expected from its linear dependence on the imaginary parts of the material parameters. For a more practical case, we consider a SMP(−) mode excited at the interface between gold and a two-dimensional magnetic metamaterial as reported in Ref. [20]. With the damping rate γ_1 of gold taken from Ref. [21] and the metamaterial parameters ω_0 and γ_2 from Ref. [20], excitation of the SMP(−) modes can occur in the frequency range from 1.56×10 Hz to 1.63×10 Hz. At 1.5836×10 Hz, for example, µ′_2/|µ′_1| = −0.5, and the corresponding SMP(−) mode has an effective oscillation number of 3.06, which means that the mode undergoes a significant amount of propagation loss.

5.
Conclusion

We have derived the excitation conditions and propagation characteristics of backpropagating modes of surface polaritons in terms of the material parameters, and found four general properties of surface polaritons. First, simultaneous excitation of SEP and SMP modes is in general inhibited. Second, backpropagating surface modes with negative group velocity can be observed not only at the boundary of double-negative media but also at the cross-negative interface of two media, where only ε < 0 in one medium and only µ < 0 in the other. Backpropagating modes of surface magnetic polaritons at cross-negative interfaces have been confirmed in detail by evaluating their dispersion relations and ATR coupling behavior with the plane-wave expansion method. Third, the two propagation directions, parallel and antiparallel to the phase velocity, are inherently determined by the values of the material parameters, regardless of their frequency-dispersive characteristics. In particular, antiparallel propagation is possible even when no double-negative medium is involved, as for cross-negative media composed of two nontransparent media: one with negative permittivity only and the other with negative permeability only. Fourth, if a set of material parameters supports a parallel (antiparallel) propagating SEP (SMP), the sign-inverted set supports an antiparallel (parallel) propagating SEP (SMP) without changing the propagation and decay constants. It is worth noting that the magnetic field (B_x) of the SMP(−) mode always has a varying flux normal to the metamaterial surface, which shows that efficient production of a negative magnetic response (µ < 0) can be achieved in an artificial planar structure.
Therefore, such a cross-negative interface, with only a single-negative requirement imposed on the material parameters of the two adjoined media, could be implemented by stacking two different types of planar structure: one with a negative permeability, such as an SRR array, and the other with a negative permittivity, such as a metallic grid. The scalability of these separate planar structures could enable surface polaritonic devices with left-handed behavior at THz and potentially optical frequencies.

This research was supported by the Korea Science and Engineering Foundation through the Engineering Research Center program of the Integrated Photonics Technology Research Center.

References and links

1. T. J. Yen, W. J. Padilla, N. Fang, D. C. Vier, D. R. Smith, J. B. Pendry, D. N. Basov, and X. Zhang, "Terahertz magnetic response from artificial materials," Science 303, 1494–1496 (2004). [CrossRef] [PubMed]
2. S. Linden, C. Enkrich, M. Wegener, J. Zhou, T. Koschny, and C. M. Soukoulis, "Magnetic response of metamaterials at 100 terahertz," Science 306, 1351–1353 (2004). [CrossRef] [PubMed]
3. D. R. Smith, W. J. Padilla, D. C. Vier, S. C. Nemat-Nasser, and S. Schultz, "Composite medium with simultaneously negative permeability and permittivity," Phys. Rev. Lett. 84, 4184–4187 (2000). [CrossRef] [PubMed]
4. R. A. Shelby, D. R. Smith, and S. Schultz, "Experimental verification of a negative index of refraction," Science 292, 77–79 (2001). [CrossRef] [PubMed]
5. V. G. Veselago, "The electromagnetics of substances with simultaneously negative values of ε and µ," Sov. Phys. Usp. 10, 509–514 (1968). [CrossRef]
6. J. B. Pendry, A. J. Holden, D. J. Robbins, and W. J. Stewart, "Low frequency plasmons in thin-wire structures," J. Phys. Condens. Matter 10, 4785–4809 (1998). [CrossRef]
7. J. B. Pendry, A. J. Holden, D. J. Robbins, and W. J. Stewart, "Magnetism from conductors and enhanced nonlinear phenomena," IEEE Trans. Microwave Theory Tech. 47, 2075–2084 (1999).
[CrossRef]
8. Details are available at http://microoptics.hanyang.ac.kr/home/DNMMbySurfacePatterning.pdf.
9. D. R. Fredkin and A. Ron, "Effectively left-handed (negative index) composite material," Appl. Phys. Lett. 81, 1753–1755 (2002). [CrossRef]
10. A. D. Boardman, Electromagnetic Surface Modes (Wiley, New York, 1982).
11. J. Yoon, G. Lee, S. H. Song, C.-H. Oh, and P.-S. Kim, "Surface-plasmon photonic band gaps in dielectric gratings on a flat metal surface," J. Appl. Phys. 94, 123–129 (2002). [CrossRef]
12. A. A. Oliner and T. Tamir, "Backward waves on isotropic plasma slabs," J. Appl. Phys. 33, 231–233 (1962). [CrossRef]
13. K. L. Kliewer and R. Fuchs, "Optical modes of vibration in an ionic crystal slab including retardation. I. Nonradiative region," Phys. Rev. 144, 495–503 (1966). [CrossRef]
14. P. Tournois and V. Laude, "Negative group velocities in metal-film optical waveguides," Opt. Commun. 137, 41–45 (1997). [CrossRef]
15. H. Raether, Surface Plasmons on Smooth and Rough Surfaces and on Gratings (Springer-Verlag, Berlin, 1988).
16. D. L. Mills and E. Burstein, "Polaritons: the electromagnetic modes of media," Rep. Prog. Phys. 37, 817–926 (1974). [CrossRef]
17. A. Bers, "Note on group velocity and energy propagation," Amer. J. Phys. 68, 482–484 (2000). [CrossRef]
18. B. E. A. Saleh and M. C. Teich, "Polarization and crystal optics," in Fundamentals of Photonics (Wiley, New York, 1991), Chap. 6. [CrossRef]
19. A. Lakhtakia, "Conjugation symmetry in linear electromagnetism in extension of materials with negative real permittivity and permeability scalars," Microwave Opt. Technol. Lett. 40, 160–161 (2004). [CrossRef]
20. N.-C. Panoiu and R. M. Osgood Jr., "Influence of the dispersive properties of metals on the transmission characteristics of left-handed materials," Phys. Rev. E 68, 016611 (2003). [CrossRef]
21. M. A. Ordal, L. L. Long, R. J. Bell, S. E. Bell, R. R. Bell, R. W. Alexander Jr., and C. A.
Ward, "Optical properties of the metals Al, Co, Cu, Au, Fe, Pb, Ni, Pd, Pt, Ag, Ti, and W in the infrared and far infrared," Appl. Opt. 22, 1099–1120 (1983). [CrossRef] [PubMed]

OCIS Codes
(240.5420) Optics at surfaces : Polaritons
(240.6690) Optics at surfaces : Surface waves

ToC Category: Research Papers
Original Manuscript: September 1, 2004
Revised Manuscript: December 14, 2004
Published: January 24, 2005

Jaewoong Yoon, Seok Song, Cha-Hwan Oh, and Pill-Soo Kim, "Backpropagating modes of surface polaritons on a cross-negative interface," Opt. Express 13, 417-427 (2005)
Algorithm Analysis

The algorithms in this document can be run and tested individually in NetBeans. Instead of creating a project for each, create a single project, Miscell, with separate main classes for each. We do not particularly need the auto-created main class, so delete it if you'd like. For each of the sample programs do the following:

1. From the Projects window, right-click either on
   • the Miscell project
   • the Source Packages entry
   • the miscell package (this is most efficient)
2. Select New → Other and from the list select Java Main Class (you only need to go through Other once).
3. In the pop-up window (Name and Location), set the Class Name to something related to the algorithm of interest. For example, you might be creating entries like this:
   Class Name: LinearSearch1, LinearSearch2, ...
   package: miscell
4. In some situations you'll want to create a simple Java Class, not a Java Main Class.
5. Replace the class content by the suggested content by copy/paste.
6. In every case you will need the appropriate import statement.
7. Run the program by locating it in Source Packages → miscell, right-clicking and selecting Run File.
8. Repeated runs are easily done by clicking the "rerun" button.
9. To make repeated runs even easier, select Customize in the drop-down below <default config>. In this window, use the Browse button to specify the Main Class. With this in place, you can use the "run project" button.

Linear Search

For example, consider a simple linear search of an integer array for a given key. In order to create a general version which could be reused, we should consult what Java provides in the java.util.Arrays class. There is no linear search algorithm, but there is a binary search. Here are the two relevant static functions:

int binarySearch(int[] a, int key)
int binarySearch(int[] a, int fromIndex, int toIndex, int key)

The "toIndex" argument name is misleading, since the search range really runs from fromIndex through toIndex - 1, inclusive.
Thus these two calls are equivalent:

java.util.Arrays.binarySearch(A, key);
java.util.Arrays.binarySearch(A, 0, A.length, key);

The return value is the position at which the key is found. If it is not found, a negative value is returned. These algorithm specifications suggest a more general way to write linear search, namely using our own package, util, and our own class, MyArrays, with two versions:

util.MyArrays.linearSearch(int[] a, int key)
util.MyArrays.linearSearch(int[] a, int fromIndex, int toIndex, int key)

We should also consider what to do if the latter of these calls is provided with invalid arguments such as these:

linearSearch(A, -3, 2, 15);
linearSearch(A, 3, 2, 15);

In these cases, we want to throw an Exception. A few tests of java.util.Arrays.binarySearch lead us to add this starter code:

if (fromIndex < 0 || toIndex < 0 || fromIndex > toIndex) {
    throw new IllegalArgumentException();
}

In NetBeans, create the Java Class

Class Name: MyArrays
package: util

with this content:

package util;

public class MyArrays {

    public static int linearSearch(int[] A, int fromIndex, int toIndex, int key) {
        if (fromIndex < 0 || toIndex < 0 || fromIndex > toIndex) {
            throw new IllegalArgumentException();
        }
        for (int i = fromIndex; i < toIndex; ++i) {
            if (key == A[i]) {
                return i;
            }
        }
        return -1;
    }

    public static int linearSearch(int[] A, int key) {
        return linearSearch(A, 0, A.length, key);
    }
}

Apply this search algorithm by creating a main class in the miscell package that builds an int array and calls util.MyArrays.linearSearch on it.

Other Java types

Consider expanding the linear search algorithm to other types such as double and String. These two are very different because double, like int, is a primitive type and String is an Object type. With respect to double, or any of the other primitive types, we have to write separate functions for each even though they effectively do the same thing. Regarding Object types, such as String, we would want a version based on Object[] arrays that compares elements with equals. The problem with generating an example is how to "meaningfully" generate an array of Strings and a search key.
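A sketch of such an Object-based version follows; the linearSearch(Object[], Object) overload shown here is our assumed extension (it is not part of the MyArrays code above), and the String data is invented for illustration:

```java
// Linear search over Object arrays, using equals rather than == for comparison.
public class StringSearchDemo {

    static int linearSearch(Object[] a, Object key) {
        for (int i = 0; i < a.length; ++i) {
            if (key.equals(a[i])) {   // equals, not ==, for Object types
                return i;
            }
        }
        return -1;
    }

    public static void main(String[] args) {
        String[] words = { "pear", "apple", "fig", "plum" };
        System.out.println(linearSearch(words, "fig"));  // prints 2
        System.out.println(linearSearch(words, "kiwi")); // prints -1
    }
}
```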
Abstract time for linear search

Regarding the notion of "abstract time," we usually say to "count the number of comparisons" to get a measure of the actual time. There are several deviations from reality:

• We're overlooking any setup, like initializing the variables found and i, as well as other repetitive features, like incrementing i and comparing it to the size.
• We're ignoring the actual cost of a comparison. An Object comparison, say for the String type, is not constant and is likely to be more time-consuming than comparison of primitive types.

Nevertheless, simplifications such as this are useful for algorithm analysis because we can more easily go on and describe the behavior for arbitrarily large arrays.

Analysis of Linear Search

The worst case is that the element is not found, or is found at the end of the search, in which case there are n comparisons. The best case is that the thing you're looking for is in the first slot, so there's only one comparison. We say:

• the best case is 1 comparison
• the worst case is n comparisons

Average case for linear search

A simple guess might be to take the average of the best and worst, getting (n+1)/2. This answer turns out to be correct, but we must derive it more methodically. First of all, we need to make the assumption that the search is successful. Why? Consider an array of size 10 holding arbitrary integers, with an arbitrary integer key. Given that there are about 4 billion 32-bit integers, the probability that our key is one of these 10 is effectively zero! In addition, we will assume that the key is equally likely to be found at each of the n positions. Using this notion, the average cost is:

( total cost of finds at all positions ) / ( number of positions )

The cost of finding the key at position i is i+1 comparisons, for values i = 0 ... n-1.
Therefore we derive:

average cost = (1 + 2 + ⋅⋅⋅ + n) / n = (n*(n+1)/2) / n = (n+1)/2

Language of Asymptotics

Asymptotics has to do with describing the behavior of functions, specifically algorithmic timing functions, for arbitrarily large problem sizes. Often a problem size can be characterized by a single number n, e.g., the size of an array.

Big-O Notation

We would really like to present information about this algorithm in a way that ignores the constants. The "big-O notation" and other complexity terminology allow us to do precisely that. We say:

T(n) = O(f(n))   or   T(n) is O(f(n))

if there are positive constants C, k such that

T(n) ≤ C * f(n) for all n ≥ k

The big-O notation is simply a convenience for expressing the relation between an unknown "timing function" T(n) and a known reference function f(n). In reality O(f(n)) is a set of functions, of which T(n) is a member. Nevertheless, it is convenient to be able to make statements like "the worst case time for such-and-such algorithm is O(n*log(n))" and have a rigorous basis for what you're saying. The idea of "n ≥ k" in the definition means eventually, i.e., if we ignore some initial finite portion. For example, suppose

T(n) = n + 100

We can say:

T(n) ≤ 101 * n, for all n ≥ 1

but we can also say

T(n) ≤ 2 * n, for all n ≥ 100

getting a "better" asymptotic constant, C = 2, in the sense that it is smaller (we can always make it larger). In either case we have proved that n + 100 = O(n) by finding constants which make the definition statement work.

Linear Search in big-O terms

Going back to linear search, we observed that when counting comparisons:

the best case is 1
the worst case is n
the average case is (n+1)/2 = ½ n + ½

In big-O terminology, we would say this about linear search:

the best case time is O(1)
the worst case time is O(n)
the average case time is O(n)

Binary Search

Binary search searches a sorted array in the most efficient way possible. This algorithm employs a simple example of a divide-and-conquer strategy, in which we subdivide the problem into equal-sized "sub-problems".
The idea is simple: compare the key to the "middle" element; if not equal, search the left or the right portion according to whether the key is less than or greater than the middle element.

First Binary Search Implementation

This algorithm expresses itself most naturally in a recursive manner, based on the way a search of the whole array invokes the search of one half or the other. In order to express the recursive nature, the parameters of the algorithm must allow arbitrary beginnings and ends. Our initial coding might look something like this:

int binarySearch(int[] A, int fromIndex, int toIndex, int key) {
    if (fromIndex == toIndex) {
        return -1;
    }
    int mid = (fromIndex + toIndex) / 2;
    if (key == A[mid]) {
        return mid;
    }
    else if (key < A[mid]) {
        return binarySearch(A, fromIndex, mid, key);
    }
    else { // (key > A[mid])
        return binarySearch(A, mid + 1, toIndex, key);
    }
}

The else keywords are optional because of the return statements. One has to keep in mind that the division used to compute mid is integer division (i.e., fractional truncation). As is the case with java.util.Arrays.binarySearch, the second range parameter, toIndex, is not included in the search range. Our textbook makes, in my opinion, the unfortunate decision to make the second parameter position in his array-based algorithms the last index, so beware.

Binary Search Visitation Tree

Lay out the array positions that the algorithm would visit in a binary tree, where the root is the middle of the array and the left and right subtrees are generated by searches of the left and right subarrays, respectively. Here are the binary search trees for arrays of size 7 and 10, respectively. Note the "left-leaning" aspect of the latter tree. This reflects the fact that our algorithm, when it cannot split the array exactly in half, will have one more element on the left side compared to the right side.

Algorithmic correctness proof by induction

The proof uses a form of induction called strong induction, whereby we assume true the thing we want to prove for all values (within a suitable range) up to that point.
Consider the execution of:

    int pos = binarySearch(A, fromIndex, toIndex, key)

We want to say that:
• if pos ≥ 0, then fromIndex ≤ pos < toIndex and A[pos] == key
• if pos < 0, then A[i] != key for all fromIndex ≤ i < toIndex.

The proof is by induction on the array size,

    len = toIndex - fromIndex

It is a good idea to run a few examples by hand. Again, keep in mind that computer integer division is truncated. In mathematical terms, division of two integers with an integer result is expressed by the "floor" function flr, which simply truncates any decimal part.

Base case: len = 0

This means that toIndex == fromIndex. The key cannot be present in the empty range and the algorithm indicates failure.

Inductive case: len ≥ 1

Because the array is sorted, it is obvious that the algorithm will work so long as:
• fromIndex ≤ mid < toIndex
• the left (fromIndex, mid) and right (mid+1, toIndex) ranges both have fewer than len elements.

The even and odd values of len need to be considered separately. Write:

    toIndex = fromIndex + len

and compute:

    mid = (fromIndex + toIndex)/2 = (2*fromIndex + len)/2
        = fromIndex + len/2,     if len is even
        = fromIndex + (len-1)/2, if len is odd

i. len is even

Because len is even and positive, we have

    mid = fromIndex + len/2,  len/2 > 0,  len/2 < len

Computing the length of both sides gives:

    mid - fromIndex   = len/2 < len
    toIndex - (mid+1) = len - len/2 - 1 = len/2 - 1 < len

ii. len is odd

Because len is odd we have

    mid = fromIndex + (len-1)/2,  (len-1)/2 ≥ 0,  (len-1)/2 = len/2 < len

To get the exact numbers, it's better to express len = 2*k + 1, where k = len/2. Computing the length of both sides gives:

    mid - fromIndex   = (len-1)/2 = k = len/2 < len
    toIndex - (mid+1) = len - (len-1)/2 - 1 = 2*k+1 - k - 1 = k = len/2 < len

Summarizing, we see that
• when len is even, the split is unequal, the left side having len/2 elements and the right side one less.
• when len is odd, the split is equal, both sides having len/2 elements.

Logarithms in analysis

Logarithms, particularly base-2 logarithms, are important because they represent the number of times things can be halved. In particular, there are roughly log[2]n terms in this sequence:

    n, n/2, n/4, ..., 1

The definition of the logarithm to the base b is:

    x = log[b]n means b^x = n, i.e., b^log[b]n = n

The most common bases are these:
• base 2, used in computer science
• base 10, used in other sciences
• base e = 2.718..., used in mathematics, where e is the base of the natural logarithm, the so-called Euler's number, which is one of the fundamental constants of mathematics, like π

All bases differ by a constant factor. Start from

    b^log[b]2 = 2

Raising both sides to the power x gives

    (b^log[b]2)^x = b^((log[b]2)*x) = 2^x

Now take x = log[2]n:

    b^((log[b]2)*(log[2]n)) = 2^log[2]n = n

This means, according to the definition of the logarithm, that

    log[b]n = log[b]2 * log[2]n

(think of the 2's as "canceling out"), or, rearranging,

    log[2]n = log[b]n / log[b]2

Therefore, as regards big-O, logarithms of all bases are equal. In scientific computations, log is understood to be the base-10 logarithm and ln the natural logarithm. In computer language library functions, log often means the natural logarithm, and the base-2 logarithm is usually written with an explicit base.
For example, in Java, the function Math.log is the natural logarithm, but we can easily write the base-2 logarithm:

    static double log_2(double x) {
      return Math.log(x) / Math.log(2);
    }

For our purposes, since base 2 is preeminent, we'll drop the 2 and assume:

    log(n) = log[2](n)

Integer logarithms

In computational settings in which logarithms appear, they always appear in these integer forms:

    flr(log n)  = the largest exponent power such that 2^power ≤ n
    ceil(log n) = the smallest exponent power such that n ≤ 2^power

For example,

    flr(log 8)  = 3     ceil(log 8)  = 3
    flr(log 10) = 3     ceil(log 10) = 4

Binary Search Worst Case

From an analysis perspective, with an array of size n > 1:

    n even => left side has n/2 elements, right side n/2 - 1
    n odd  => both sides have size (n-1)/2 = n/2

Let

    T(n) = worst case number of comparisons in binary search of an array of size n

In the worst case, we would end up consistently exploring the left side, giving the recurrence:

    T(1) = 1
    T(n) = 1 + T(n/2), n > 1

Our claim is that T(n) = O(log n). In fact we want to prove:

    T(n) ≤ 2 * log(n), n ≥ 2

Look at a comparison of some computed values of T(n) and log(n):

    T(2) = 2, log(2) = 1
    T(3) = 2, log(3) = 1.x
    T(4) = 3, log(4) = 2
    T(5) = 3, log(5) = 2.x

Observe that the multiplier 2 satisfies the claimed inequality for each value.

Proof by induction

Again, we use induction. We have verified the base cases n = 2, 3, 4, 5 of

    T(n) ≤ 2 * log(n), n ≥ 2

Assume this is valid for all values up to (but not including) n. In particular we can make the inductive assumption:

    T(n/2) ≤ 2 * log(n/2), since n/2 < n

Then the proof goes like this:

    T(n) = 1 + T(n/2)             (the recurrence)
         ≤ 1 + 2 * log(n/2)       (substitute from inductive assumption)
         = 1 + 2 * (log(n) - 1)   (properties of log)
         = 2 * log(n) - 1         (simple algebra)
         ≤ 2 * log(n)             (becoming larger)

The key algebraic step relies on the property:

    log(a/b) = log(a) - log(b)

which we are using like this:

    log(n/2) = log(n) - log(2) = log(n) - 1

However, because n/2 is integer division, this last statement is not technically correct when n is odd.
For simplicity, we'll ignore this technicality.

Demo program

It's useful to see a programmatic comparison of T(n) and log(n), as can be done with the following program:

    public class MainBS {
      static int T(int n) {
        if (n == 1) {
          return 1;
        }
        return 1 + T(n/2);
      }

      // base-2 logarithm
      static double log(double x) {
        return Math.log(x) / Math.log(2);
      }

      public static void main(String[] args) {
        for (int n = 2; n <= 32; ++n) {
          int t_val = T(n);
          double log_val = log(n);
          System.out.println("T(" + n + ")/log(" + n + ")\t" + (t_val/log_val));
        }
      }
    }

A run yields values like the following, which indicate that T(n) and log(n) get closer for larger values of n, although not in a uniform sense, in that powers of 2 "stand out" in the table.

    T(2)/log(2)   2.0                   T(18)/log(18) 1.1990623328406573
    T(3)/log(3)   1.2618595071429148    T(19)/log(19) 1.1770445668331913
    T(4)/log(4)   1.5                   T(20)/log(20) 1.1568910657987959
    T(5)/log(5)   1.2920296742201793    T(21)/log(21) 1.138351243484765
    T(6)/log(6)   1.1605584217036249    T(22)/log(22) 1.1212191210878772
    T(7)/log(7)   1.0686215613240666    T(23)/log(23) 1.1053236472875188
    T(8)/log(8)   1.3333333333333333    T(24)/log(24) 1.0905214599276576
    T(9)/log(9)   1.2618595071429148    T(25)/log(25) 1.0766913951834827
    T(10)/log(10) 1.2041199826559246    T(26)/log(26) 1.0637302677668157
    T(11)/log(11) 1.1562593052715513    T(27)/log(27) 1.0515495892857623
    T(12)/log(12) 1.1157717826045193    T(28)/log(28) 1.0400729883825475
    T(13)/log(13) 1.080952617709279     T(29)/log(29) 1.0292341623021721
    T(14)/log(14) 1.0505981401487743    T(30)/log(30) 1.018975235452531
    T(15)/log(15) 1.023832099239262     T(31)/log(31) 1.0092454329104992
    T(16)/log(16) 1.25                  T(32)/log(32) 1.2
    T(17)/log(17) 1.22325271059113

Improvement on the worst-case order constant

In the above sections we proved that

    T(n) ≤ 2 * log(n), n ≥ 2

Can we do better?
It is not true that T(n) ≤ log(n) per se, but the proof above can easily be adapted to prove this inequality:

    T(n) ≤ log(n) + 1, n ≥ 2

Thus, allowing the extra "1" term, the constant multiplier is effectively 1, which is an improvement over the multiplier 2 indicated above.

Order class hierarchy

The functions which characterize algorithm timing tend to fall into a few common ones:

    O(1)            constant time
    O(log n)        logarithmic time
    O(n)            linear time
    O(n * log n)    log-linear time
    O(n^2)          quadratic time
    O(n^2 * log n)
    O(b^n)          exponential time; each base b generates a distinct order class

These order classes are upwardly inclusive, i.e., if T(n) = O(log n), then of course, T(n) = O(n). We're usually interested in the "best fit" in the sense of finding the smallest order class to which T(n) belongs. In order to characterize the "best fit" of an order class, we need two other notions:

Lower bound: Ω

We say:

    T(n) = Ω(f(n))   (or T(n) is Ω(f(n)))

if there are positive constants C, k such that

    T(n) ≥ C * f(n), for all n ≥ k

Exact bound: Θ

We say:

    T(n) = Θ(f(n))   (or T(n) is Θ(f(n)))

if

    T(n) = O(f(n)) and T(n) = Ω(f(n))

This means that there are positive constants C[1], C[2], k such that

    C[1] * f(n) ≤ T(n) ≤ C[2] * f(n), for all n ≥ k

The Θ concept gives the precise sense to the notion of "order class" because it completely characterizes the behavior of a timing function relative to a reference function up to a constant multiple.

Order Summary

Officially there are three considerations:
• O: upper bound, meaning "we can do at least this well", up to a constant factor
• Ω: lower bound, meaning "we cannot expect to do better than this", up to a constant factor
• Θ: exact characterization of the run-time behavior, up to a constant factor

Unofficially, the big-O terminology dominates the discussion in algorithmic analysis. Authors commonly use O even when they really mean Θ. If the exact order class is not known, it means that a complete mathematical understanding of the run-time behavior is lacking.
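To get a feel for how far apart these order classes are, the following sketch (the class and method names are mine) tabulates the reference functions for a few doubling values of n:

```java
// Tabulates the common order-class reference functions side by side.
public class GrowthTable {
    static double log2(double x) { return Math.log(x) / Math.log(2); }

    public static void main(String[] args) {
        System.out.println("n\tlog n\tn log n\tn^2\t2^n");
        for (int n = 2; n <= 64; n *= 2) {
            System.out.printf("%d\t%.0f\t%.0f\t%d\t%.0f%n",
                n, log2(n), n * log2(n), (long) n * n, Math.pow(2, n));
        }
    }
}
```

Even at n = 64, the 2^n column dwarfs the others, which is why exponential-time algorithms are impractical for all but tiny inputs.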
Other algorithmic terminology

Asymptotic dominance of one function by another is expressed by the little-o notation:

    T(n) = o(f(n))

This means that for every c > 0 (no matter how small), there is a k such that

    T(n) ≤ c * f(n), n ≥ k

For the most part this means the following:

    lim[n → ∞] T(n)/f(n) = 0

Asymptotic dominance expresses the relationship of the reference functions in the order class hierarchy above:

    1 = o(log(n))
    log(n) = o(n)
    n = o(n^2)
    n^2 = o(2^n)
    2^n = o(3^n)

The second of these relations, log(n) = o(n), is proved using L'Hôpital's rule from calculus, substituting a continuous variable x for the integer n:

    lim[x → ∞] log[2](x)/x = lim[x → ∞] log[2](e) * ln(x)/x
                           = lim[x → ∞] log[2](e) * ln′(x) / x′
                           = lim[x → ∞] log[2](e) * (1/x) / 1
                           = lim[x → ∞] log[2](e) / x
                           = 0

Asymptotic equality is written this way:

    T(n) ≈ f(n)

and it means:

    lim[n → ∞] T(n)/f(n) = 1

The "wavy" equal lines suggest that these two functions are essentially the same for large values. For example, in a polynomial function, we can effectively ignore all but the highest order term: if

    T(n) = 100 * n + 200 * n^2 + 3 * n^3

then

    T(n) ≈ 3 * n^3

Unfortunately the Weiss textbook does not define this relation. Asymptotic equality is, in some sense, similar to the exact bound Θ, except that it gives a precise order constant, which is often of interest when you want to compare two timing functions within the same order class. For example, let

    W(n) = worst case time for linear search
    A(n) = average case time for linear search

Both functions are exactly linear time and we would write:

    W(n) = Θ(n)
    A(n) = Θ(n)

However, the order constants are different, and this is expressed using ≈:

    W(n) ≈ n
    A(n) ≈ ½ n

Binary Search average case

The average case timing is more complicated. As in the case of linear search, we assume a successful search, and that each of the n array positions is equally likely to hold the search key.
We are mostly interested in getting some sense of how much better the average case might be, and in proving that it is still logarithmic. Technically we want to prove the lower bound

    Average binary search time(n) = Ω(log(n))

Combined with the fact that the average time can only be better than the worst-case time, which is O(log(n)), we can then conclude that

    Average binary search time(n) = Θ(log(n))

Additionally we want to get some idea about what the order constant might be.

Counting the total number of comparisons

In order to compute the average number of comparisons, we need to find a way to compute the total number of comparisons over all possible nodes in the positional visitation tree. The level of a node is its distance from the root. The root, at level 0, counts for 1 comparison. Both of its children count for 2 comparisons each, etc. Thus,

    Total comparisons = ∑ over all nodes (level + 1)

In general, at level i, if it is full, there will be 2^i nodes, each contributing i+1 comparisons. A binary tree is perfect if every level is full. In general, the binary search position visitation tree will not be perfect, but it can be argued that:
• The levels 0 to flr(log n)-1 are all full: thus a total of L = flr(log n) levels are full.
• The maximum level of a node is flr(log n) (a level which may not be full).

Recall the binary search visitation trees for arrays of size 7 and 10:

    flr(log(7)) = 2
    flr(log(10)) = 3

Computing a lower bound

The following useful expression gives the total number of comparisons in a perfect tree with L full levels (levels 0 through L-1):

    Comparisons(L) = 1 + 2*2^1 + 3*2^2 + ... + L*2^(L-1) = (L-1)*2^L + 1

Using this equation we can derive a lower bound on the number of comparisons for binary search.
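The closed form just stated can be checked numerically against the explicit sum. A small sketch (the class and method names are mine):

```java
// Checks the closed form for the comparison count in a perfect tree:
//   1*2^0 + 2*2^1 + ... + L*2^(L-1) = (L-1)*2^L + 1
public class TreeComparisons {
    static long bySum(int L) {
        long total = 0;
        for (int i = 1; i <= L; ++i) {
            total += (long) i << (i - 1);   // i * 2^(i-1)
        }
        return total;
    }

    static long byFormula(int L) {
        return (long) (L - 1) * (1L << L) + 1;
    }

    public static void main(String[] args) {
        for (int L = 1; L <= 20; ++L) {
            System.out.println(L + ": " + bySum(L) + " == " + byFormula(L));
        }
    }
}
```

For example, both sides give 17 for L = 3: 1 + 2*2 + 3*4 = 17 = 2*8 + 1.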
Using L = flr(log n), we get

    total comparisons ≥ Comparisons(flr(log n)) = (flr(log n) - 1) * 2^flr(log n) + 1

The flr(log n) expression truncates the decimal part of log(n), and so it subtracts away less than 1, i.e.,

    flr(log(n)) > log(n) - 1

Substituting log(n) - 1 for flr(log n) and doing the algebra, we get

    total comparisons > (log(n) - 2) * 2^(log(n) - 1) + 1
                      = log(n) * 2^(log(n) - 1) - 2^log(n) + 1
                      = ½ * n * log(n) - n + 1

Dividing the total comparisons by n gives the average, i.e.,

    average comparisons > ½ * log(n) - 1 + 1/n > ½ * log(n) - 1

Therefore, the average number of comparisons is Ω(log(n)).

Exact bounds

Going one step further, we indicated above that:

    worst case comparisons ≤ log(n) + 1

and this inequality is, of course, true as well for the average comparisons. We conclude that

    ½ * log(n) - 1 ≤ average comparisons ≤ log(n) + 1

and so the average and worst-case comparisons are both Θ(log(n)), where the relevant multiplicative constant varies between ½ and 1.

Binary Search Implementations

First of all, create a simple test program to see what Java does. Create the main class with this content:

    package miscell;

    import java.util.*;

    public class JavaBinarySearch {
      public static void main(String[] args) {
        Random rand = new Random();
        int A[] = new int[20], key;
        for (int i = 0; i < A.length; ++i) {
          A[i] = rand.nextInt(A.length * 2);
        }
        key = rand.nextInt(A.length * 2);
        System.out.println("A (initial) = " + Arrays.toString(A));
        Arrays.sort(A);
        System.out.println("A (sorted)  = " + Arrays.toString(A));
        System.out.println("key = " + key);
        int pos;
        pos = Arrays.binarySearch(A, key);
        System.out.println(
          "\n" +
          "binary search from Arrays class" + "\n" +
          "return = " + pos + "\n" +
          "found  = " + (pos >= 0) + "\n"
        );
      }
    }

Our version is created by adding code to the util.MyArrays class.
Here is the code added to util.MyArrays:

    private static int count;

    public static int getCount() { return count; }

    public static void setCount(int count) { MyArrays.count = count; }

    public static int binarySearch(int[] A, int fromIndex, int toIndex, int key) {
      if (fromIndex == toIndex) {
        return - fromIndex - 1;
      }
      int mid = (fromIndex + toIndex) / 2;
      if (key == A[mid]) {
        return mid;
      }
      if (key < A[mid]) {
        return binarySearch(A, fromIndex, mid, key);
      }
      // else key > A[mid]
      return binarySearch(A, mid+1, toIndex, key);
    }

    public static int binarySearch(int[] A, int key) {
      return binarySearch(A, 0, A.length, key);
    }

Failed search value

Of what use is the negative return value? We have modified the return value in the case of failure. Before, we wrote simply

    return -1;

However, we can provide better information which indicates the position at which the key should be added. This is done by:

    return - fromIndex - 1;

We are assured that this value is negative under all circumstances. In this case, when the key is not found, the negative return value ret can be used to compute the correct insert position:

    int insert_pos = - ret - 1;

We can then correctly add key to the array (maintaining sortedness) if we
• shift the array contents in positions insert_pos to toIndex one to the right,
• set A[insert_pos] = key

It's good to verify this with an example. Suppose the array has 5 elements, and consider a failed search for a key which is greater than A[2] but less than A[3]. The successive search ranges would be:

    (0,5), (3,5), (3,4), (3,3)

The return value would be -4, which would be correctly interpreted as an insert position of 3. It's also important to ensure that the "extreme" cases work out correctly:
• A failed search for key = -10 (smaller than every element) will maintain fromIndex = 0 on all steps. At the end, the return value will be -1, signifying that the correct insert position is 0.
• A failed search for key = 50 (larger than every element) will search this sequence of ranges: (0,5), (3,5), (5,5).
At the end, the return value will be -6, signifying that the correct insert position is 5 (off the right end).

Error Checking

We've left out error checking code from our implementation, such as:

    public static int binarySearch(int[] A, int fromIndex, int toIndex, int key) {
      if (fromIndex < 0 || toIndex < 0 || fromIndex > toIndex) {
        throw new IllegalArgumentException();
      }
      ...
    }

The problem with introducing the error checking code inside the recursive function is that it is inefficient; only the initial invocation needs to check for an invalid range, not all the recursive calls. A better approach is to have the public binarySearch function call a private recursive function like this:

    public static int binarySearch(int[] A, int fromIndex, int toIndex, int key) {
      if (fromIndex < 0 || toIndex < 0 || fromIndex > toIndex) {
        throw new IllegalArgumentException();
      }
      return _binarySearch(A, fromIndex, toIndex, key);
    }

    private static int _binarySearch(int[] A, int fromIndex, int toIndex, int key) {
      if (fromIndex == toIndex) {
        return - fromIndex - 1;
      }
      int mid = (fromIndex + toIndex) / 2;
      if (key == A[mid]) {
        return mid;
      }
      if (key < A[mid]) {
        return _binarySearch(A, fromIndex, mid, key);
      }
      // else key > A[mid]
      return _binarySearch(A, mid+1, toIndex, key);
    }

Supporting object types

The situation is more complicated than that of linear search for object types, because binary search is based on the relative order of objects, meaning that we have to be able to determine not just equality, but a less than/greater than inequality of elements. In particular, the Java API for binarySearch supports the ability to pass in a user-defined comparator instead of relying upon the innate comparability of elements. A comparator is always passed in with the parameter type Comparator<? super T>, where T is a generic type variable.
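As a usage sketch of this comparator-based interface, here is the standard library's Arrays.binarySearch with a user-supplied comparator; note that the array must have been sorted with the same comparator for the result to be meaningful:

```java
import java.util.Arrays;
import java.util.Comparator;

// Searching an array ordered case-insensitively, using the same
// Comparator for both the sort and the search.
public class ComparatorSearch {
    public static void main(String[] args) {
        String[] words = {"apple", "Banana", "cherry", "Date"};
        Comparator<String> ci = String.CASE_INSENSITIVE_ORDER;
        Arrays.sort(words, ci);   // must sort with the same comparator
        int pos = Arrays.binarySearch(words, "CHERRY", ci);
        System.out.println("pos = " + pos + ", found = " + (pos >= 0));
        // pos = 2, found = true
    }
}
```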
    public static int binarySearch(Object[] A, int fromIndex, int toIndex, Object key) {
      if (fromIndex == toIndex) {
        return - fromIndex - 1;
      }
      int mid = (fromIndex + toIndex) / 2;
      ++count;  // optional counting code
      int comp = ((Comparable) key).compareTo(A[mid]);
      if (comp == 0) {
        return mid;
      }
      if (comp < 0) {
        return binarySearch(A, fromIndex, mid, key);
      }
      // else comp > 0
      return binarySearch(A, mid+1, toIndex, key);
    }

    public static int binarySearch(Object[] A, Object key) {
      return binarySearch(A, 0, A.length, key);
    }

    // when a user-defined comparator is provided ...

    public static <T> int binarySearch(T[] A, int fromIndex, int toIndex, T key,
                                       Comparator<? super T> c) {
      if (fromIndex == toIndex) {
        return - fromIndex - 1;
      }
      int mid = (fromIndex + toIndex) / 2;
      ++count;  // optional counting code
      int comp = c.compare(key, A[mid]);
      if (comp == 0) {
        return mid;
      }
      if (comp < 0) {
        return binarySearch(A, fromIndex, mid, key, c);
      }
      // else comp > 0
      return binarySearch(A, mid + 1, toIndex, key, c);
    }

    public static <T> int binarySearch(T[] A, T key, Comparator<? super T> c) {
      return binarySearch(A, 0, A.length, key, c);
    }

In the first case, assuming the innate comparability of elements, the compareTo member function is used. Thus we assume the key value is Comparable by forcing a cast, getting this code:

    int comp = ((Comparable) key).compareTo(A[mid]);

The value of comp determines one of the three relations: less than if comp < 0, equals if comp == 0, greater than if comp > 0. In the latter case, when a comparator is passed, the equivalent idea is to make a comparison using this object as follows:

    int comp = c.compare(key, A[mid]);

Binary vs. Linear search

The problem with binary search is that, although the search time is much faster, the array must be sorted for it to work. The best algorithms for sorting a random array have a run time of O(n * log n). So there is no advantage of binary search over linear search if every search is on a fresh array.
Here is how we compare the two algorithms:

                    Linear Search    Binary Search
    create array:   O(n)             O(n)
    prepare array:  -                O(n*log(n))
    search array:   O(n)             O(log(n))

If, using a single array, we do O(log n) searches, then the Linear Search total time is O(n * log n), which would break even with Binary Search. To put this in perspective, if we have an array with 1 million, or 2^20, entries, then, after sorting, we would need to do roughly 20 searches for Binary Search to break even with Linear Search.

Integer exponentiation

We want to write an algorithm to compute x^n for an integer n ≥ 0. The obvious linear-time algorithm involves repeated multiplication by x. A more subtle version uses the binary representation of the exponent. The basis of the algorithm is this:

    n = 2 * (n/2),     if n is even
    n = 2 * (n/2) + 1, if n is odd

Throughout this section, for convenience, we'll represent integer truncated division by simple division, i.e., n/2 is really flr(n/2). Using properties of exponents, we have:

    x^0 = 1
    x^1 = x

and then we can write either:

    a. n even: x^n = (x^2)^(n/2)
       n odd:  x^n = x * (x^2)^(n/2)

    b. n even: x^n = (x^(n/2))^2
       n odd:  x^n = x * (x^(n/2))^2

Both the "a" equations and the "b" equations can be the source of recursive algorithms. Here is the full code:

    public static double powA(double x, int n) {
      if (n == 0) return 1;
      if (n == 1) return x;
      if (n % 2 == 0)
        return powA(x * x, n/2);
      return x * powA(x * x, n/2);
    }

    public static double powB(double x, int n) {
      if (n == 0) return 1;
      if (n == 1) return x;
      double val = powB(x, n/2);
      if (n % 2 == 0)
        return val * val;
      return x * val * val;
    }

    public static void main(String[] args) throws Exception {
      double x = 2.0;
      int n = 10;
      System.out.println(
        "powA(" + x + "," + n + ") = " + powA(x,n) + "\n" +
        "powB(" + x + "," + n + ") = " + powB(x,n)
      );
    }

The proof of correctness is more-or-less obvious for the recursive functions powA and powB.
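The two recursive versions can also be sanity-checked against naive repeated multiplication. A sketch (the test harness and the naive method are mine; small powers of small integers are exact in doubles, so exact equality is safe here):

```java
// Sanity check: powA and powB against a naive repeated-multiplication loop.
public class PowCheck {
    public static double powA(double x, int n) {
        if (n == 0) return 1;
        if (n == 1) return x;
        if (n % 2 == 0) return powA(x * x, n / 2);
        return x * powA(x * x, n / 2);
    }

    public static double powB(double x, int n) {
        if (n == 0) return 1;
        if (n == 1) return x;
        double val = powB(x, n / 2);
        if (n % 2 == 0) return val * val;
        return x * val * val;
    }

    static double naive(double x, int n) {
        double r = 1;
        for (int i = 0; i < n; ++i) r *= x;
        return r;
    }

    public static void main(String[] args) {
        for (int n = 0; n <= 20; ++n) {
            double a = powA(2.0, n), b = powB(2.0, n), c = naive(2.0, n);
            if (a != c || b != c) System.out.println("mismatch at n = " + n);
        }
        System.out.println("powA(2,10) = " + powA(2.0, 10)); // 1024.0
    }
}
```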
If we count each multiplication as "1", then, for the worst-case time function T(n), we compute:
• for powA: 1 multiplication to compute x * x, T(n/2) for the recursive call on n/2, and 1 multiplication afterwards.
• for powB: T(n/2) for the recursive call on n/2, and 2 multiplications afterwards.

In either case, we get

    T(0) = 0
    T(1) = 0
    T(n) = T(n/2) + 2, n ≥ 2

It can be proved by induction that:

    T(n) = O(log(n))

Note that we were careful not to write the powB function like this:

    if (n % 2 == 0)
      return powB(x,n/2) * powB(x,n/2);
    return x * powB(x,n/2) * powB(x,n/2);

The reason is that doing so would make the recurrence relation this:

    T[1](0) = 0
    T[1](1) = 0
    T[1](n) = 2 * T[1](n/2) + 2

The factor of 2 in front of T[1](n/2) causes T[1] to enter a different order class. It can be proved that

    T[1](n) = Θ(n)

by inductively proving two inequalities:

    i.  T[1](n) ≥ n - 1,   n ≥ 1
    ii. T[1](n) ≤ 2*n - 2, n ≥ 1

The iterative version

However difficult you consider recursion to be, the recursive version is much more transparent when compared to an iterative version. Here is the code:

    public static double powC(double x, int n) {
      double val = 1;
      while (n != 0) {
        if (n % 2 == 1)
          val *= x;
        n = n/2;
        x = x*x;
      }
      return val;
    }

Proving the correctness of an iterative algorithm is technically harder than proving the correctness of a recursive algorithm, which is more-or-less a straightforward induction proof. In particular, an iterative algorithm uses variables which change state in order to control the iteration. A proof requires that you establish a loop invariant, which is a logical statement expressed using the program variables so that:
• it is initially true with the variable initializations
• assuming that it is true at some step, it remains true at the next step using the modified variable values
• when the loop terminates, the invariant "proves" what you want the loop to compute.
Let's look at the algorithm with some annotations:

    double b = x;   // remember the initial base
    int p = n;      // remember the initial power
    double val = 1;
    while (n != 0) {
      if (n % 2 == 1)
        val *= x;
      n = n/2;
      x = x*x;
    }

The variables b (base) and p (power) represent the initial values of x and n, respectively. We want to argue that after the loop terminates:

    val = b^p

The additional annotated variables b and p are needed to express the invariant because x and n change state. The invariant we want is this:

    val * x^n = b^p

It is true initially. Using the values val = 1, x = b, n = p, we have that

    val * x^n = 1 * b^p = b^p

Now assume it is true up to a certain point, and then consider the next iteration step. Let x′, n′, and val′ be the new values of x, n, and val, respectively; then:

    x′ = x * x
    n′ = n/2
    val′ = val * x, if n is odd
         = val,     if n is even

We must show:

    val′ * (x′)^n′ = val * x^n

There are two cases:
• n even: val′ * (x′)^n′ = val * (x^2)^(n/2) = val * x^(2*(n/2)) = val * x^n
• n odd:  val′ * (x′)^n′ = val * x * (x^2)^(n/2) = val * x^(2*(n/2) + 1) = val * x^n

The loop terminates when n equals 0, and so we get:

    val * x^0 = b^p, i.e., val = b^p
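The loop invariant can also be checked mechanically by asserting it after each iteration. The following sketch recasts powC with exact long arithmetic for small inputs (the class and helper names are mine; run with java -ea to enable assertions):

```java
// powC instrumented with the loop invariant  val * x^n == b^p,
// verified with exact long arithmetic for small inputs.
public class InvariantDemo {
    static long lpow(long x, int n) {        // naive reference power
        long r = 1;
        for (int i = 0; i < n; ++i) r *= x;
        return r;
    }

    static long powC(long x, int n) {
        long b = x;                          // initial base
        int p = n;                           // initial power
        long val = 1;
        while (n != 0) {
            if (n % 2 == 1) val *= x;
            n = n / 2;
            x = x * x;
            // the invariant holds after every iteration
            assert val * lpow(x, n) == lpow(b, p);
        }
        return val;
    }

    public static void main(String[] args) {
        System.out.println("powC(3, 7)  = " + powC(3, 7));   // 2187
        System.out.println("powC(2, 10) = " + powC(2, 10));  // 1024
    }
}
```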
Relativity Science Calculator - Galileo Galilei Galileo Galilei "Galileo ... is the father of modern physics -- indeed of modern science" - Albert Einstein ( 1879 - 1955 ) Galileo Galilei ( Tuscan - Italian, 1564 - 1642 ) by Giusto Sustermans From "Discorsi e dimostrazioni matematiche, intorno à due nuove scienze" ( Discourses and Mathematical Demonstrations Relating to Two New Sciences, Third Day: Naturally Accelerated Motion )^∗, 1638, by Galileo Galilei, his final work in physics covering his preceding 30 years, stated " ... that in equal times bodies moving at different speeds cover distances in proportion to their speeds" "A motion is said to be uniformly accelerated, when starting from rest, it acquires, during equal time-intervals, equal increments of speed" wherein he wrote his famous "Law of Falling Bodies" equation: As a result of the above equation, Galileo Galilei was the first to experimentally make the attempt to determine the value of of g[n] or simply g, the standard acceleration of earth's gravity [ latin: gravitas, gravis ( heavy ) ] effect at sea level, where the modern accepted value is 9.80665 m s^-2. Please note that g is a vector quantity owning to the fact that it points between the centers of any two masses which makes manifest the appearance of gravity. Nevertheless, Galileo's two major contributions to modern physics were the "Law of Falling Bodies" and the "Law of Inertia"^∗∗. In the experiment, Galileo presumably used a water clock comprised of an "extremely accurate balance" to measure the amount of water collected and hence to measure durations of elapsed time during which balls of different weights [ and therefore different masses ] were rolled along an inclined ramp in order to study the effects of earth's gravity. Out of these experiments he derived his famous "Law of Falling Bodies". 
Thomas Harriot - Preceded Galileo in Celestial Observations Using a Telescope Before Galileo Galilei's telescope peered at the Moon by several months earlier, there was English Thomas Harriot ( 1560 - 1621 ), astronomer, mathematician and celestial cartographer who first drew an extra-terrestial map of something outside of the bounds of earth. Harriot did this in July, 1606 of the Moon and continued to draw ever more detailed maps of lunar surface and craters whose accuracy remained unchallenged for several decades thereafter. One of Only Two Extant Galileo Telescopes Arrives in Philadelphia Unpacking 400 Year - Old Galileo Telescope The James Webb Space Telescope and Sunshield: Extraction and Deployment - launch date: 2014 note: English source translation: Henry Crew and Alfonso de Salvio - Macmillan, 1914 note: latin: in + ars = iners, meaning unskilled or artless whereas Kepler used the word for bodies at rest and Newton gave the word 'inertia' its modern mathematical meaning of bodies in undisturbed, straight - line motion unless subject to forces of acceleration. [ Mail this page to a friend ] The best browser for this site is Firefox for our Windows friends and Safari or Firefox for Mac folks Your ip address is: 54.243.12.156 This document was last modified on: Tuesday, 02-Jul-2013 16:19:22 PDT Your browser is: CCBot/2.0 (http://commoncrawl.org/faq/)
{"url":"http://www.relativitycalculator.com/Galileo.shtml","timestamp":"2014-04-19T09:25:05Z","content_type":null,"content_length":"25319","record_id":"<urn:uuid:8c36919a-41b3-4032-8f8f-11d8fd2d0560>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00434-ip-10-147-4-33.ec2.internal.warc.gz"}
Show a set is compact January 22nd 2011, 08:46 AM #1 Show a set is compact If A and B are compact subsets of some metric space M with metric d, then I think that the cross product $A\times B$ is a compact subset of the metric space $M^2$ where the metric in this space is defined by What is the best way to show that AxB is indeed compact? Is there a particularly good way to do this? I think I have a proof showing that every sequence in AxB has a convergent subsequence. Showing that every sequence has a convergent subsequence is probably the easiest way to do it. You would need to use the fact that a sequence $\{(x_n,y_n)\}$ in $A\times B$ converges if and only if $\{x_n\}$ converges in $A$ and $\{y_n\}$ converges in $B$. On the other hand, it might be possible to use the definition directly: take an open cover of $A\times B$ and try to show that it has a finite subcover. You can relate this to the compactness of $A$ and $B$ by using the projection maps: $\pi_1: A\times B\to A$ given by $\pi_1(x,y)=x$ and $\pi_2: A\times B\to B$ given by $\pi_2(x,y)=y$. By the way, I think you mean "cartesian product" and not "cross product" I think that both methods suggested by roninpro should work. Show that the given metric induces the product topology, and invoke the theorem that finite (easy proof) or arbitrary (Tychonoff) products of compact spaces are compact. January 22nd 2011, 08:51 AM #2 January 22nd 2011, 09:02 AM #3 Senior Member Nov 2010 Staten Island, NY January 22nd 2011, 10:32 AM #4
{"url":"http://mathhelpforum.com/differential-geometry/169033-show-set-compact.html","timestamp":"2014-04-21T10:22:56Z","content_type":null,"content_length":"40238","record_id":"<urn:uuid:541e479c-ea7c-40a2-843b-6b7e36247ea3>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00000-ip-10-147-4-33.ec2.internal.warc.gz"}
Electrostatic energy Next: Ohm's law Up: Electrostatics Previous: Introduction Consider a collection of We know that a static electric field is conservative, and can consequently be written in terms of a scalar potential: We also know that the electric force on a charge The work we would have to do against electrical forces in order to move the charge from point The negative sign in the above expression comes about because we would have to exert a force Let us build up our collection of charges one by one. It takes no work to bring the first charge from infinity, since there is no electric field to fight against. Let us clamp this charge in position at 578) and Eqs. (579), this work is given by Let us now bring the third charge into position. Since electric fields and scalar potentials are superposable, the work done whilst moving the third charge from infinity to Thus, the total work done in assembling the three charges is given by This result can easily be generalized to The restriction that This is the potential energy (i.e., the difference between the total energy and the kinetic energy) of a collection of charges. We can think of this as the work needed to bring static charges from infinity and assemble them in the required formation. Alternatively, this is the kinetic energy which would be released if the collection were dissolved, and the charges returned to infinity. But where is this potential energy stored? Let us investigate further. Equation (584) can be written is the scalar potential experienced by the Let us now consider the potential energy of a continuous charge distribution. It is tempting to write by analogy with Eqs. (585) and (586), where is the familiar scalar potential generated by a continuous charge distribution. Let us try this out. We know from Maxwell's equations that so Eq. 
(587) can be written Vector field theory yields the standard result Application of Gauss' theorem gives where 593) falls off like 593) reduces to where the integral is over all space. This is a very nice result. It tells us that the potential energy of a continuous charge distribution is stored in the electric field. Of course, we now have to assume that an electric field possesses an energy density We can easily check that Eq. (594) is correct. Suppose that we have a charge This follows from Eq. (580), since the electric field generated by a spherical charge distribution (outside itself) is the same as that of a point charge Thus, Eq. (596) becomes The total work needed to build up the sphere from nothing to radius This can also be written in terms of the total charge Now that we have evaluated the potential energy of a spherical charge distribution by the direct method, let us work it out using Eq. (594). We assume that the electric field is radial and spherically symmetric, so for 594), (603), and (604) yield which reduces to Thus, Eq. (594) gives the correct answer. The reason we have checked Eq. (594) so carefully is that on close inspection it is found to be inconsistent with Eq. (585), from which it was supposedly derived! For instance, the energy given by Eq. (594) is manifestly positive definite, whereas the energy given by Eq. (585) can be negative (it is certainly negative for a collection of two point charges of opposite sign). The inconsistency was introduced into our analysis when we replaced Eq. (586) by Eq. (588). In Eq. (586), the self-interaction of the 588). Thus, the potential energies (585) and (594) are different, because in the former we start from ready-made point charges, whereas in the latter we build up the whole charge distribution from scratch. Thus, if we were to work out the potential energy of a point charge distribution using Eq. (594) we would obtain the energy (585) plus the energy required to assemble the point charges. 
What is the energy required to assemble a point charge? In fact, it is infinite. To see this, let us suppose, for the sake of argument, that our point charges are actually made of charge uniformly distributed over a small sphere of radius $a$. According to Eq. (601), the energy required to assemble the $i$-th point charge is
$$W_i = \frac{3}{5}\,\frac{q_i^{\,2}}{4\pi\epsilon_0\,a}.$$
We can think of this as the self-energy of the $i$-th charge. Thus, we can write
$$W = \frac{\epsilon_0}{2}\int E^2\;d^3\mathbf{r} = \sum_{i=1}^{N} W_i + \frac{1}{2}\sum_{i=1}^{N} q_i\,\phi_i,$$
which enables us to reconcile Eqs. (585) and (594). Unfortunately, if our point charges really are point charges, then $a\to 0$, the self-energies blow up, and Eqs. (585) and (594) differ by an infinite amount. What does this all mean? We have to conclude that the idea of locating electrostatic potential energy in the electric field is inconsistent with the existence of point charges. One way out of this difficulty would be to say that all elementary charges, such as electrons, are not points, but instead small distributions of charge. Alternatively, we could say that our classical theory of electromagnetism breaks down on very small length-scales due to quantum effects. Unfortunately, the quantum mechanical version of electromagnetism (quantum electrodynamics, or QED, for short) suffers from the same infinities in the self-energies of particles as the classical version. There is a prescription, called renormalization, for steering round these infinities, and getting finite answers which agree with experiments to extraordinary accuracy. However, nobody really understands why this prescription works. The problem of the infinite self-energies of elementary charged particles is still unresolved.

Richard Fitzpatrick 2006-02-02
Perspective Tools Manual

I used to make illustrations by creating planar projections of solids in the earlier version of Sketchpad, but I started each of them from scratch. Now the process is much simpler with the custom tools that were introduced with version 4 of Sketchpad. Look at some of these examples to get a feel for the tools' capabilities. For a detailed explanation of the control functions follow this link to Perspective Controls. To use these tools in your own drawings, download the Sketchpad file Perspective Tools and save it into the folder labeled Tool Folder. This will make it available to you through the Custom Tool button on the tool bar.

GSP-5 Changes

Some significant changes have been made to these tools to make them compatible with GSP-5. For an explanation of these changes, see GSP-5 Issues.

The Tools

This was last updated on June 6, 2010. This is an add-on collection of tools for solid images. They do not stand alone. Some of the setup tools from Perspective_Tools are required. See the documentation and tutorial on the Perspective Solids link. This was last updated on June 6, 2010. The document below has some sketches that were created with these tools. You may want to check them out first to get an idea about the capabilities of the tools.

Making Your Own Images

Begin by selecting the tool Perspective Setup. Select or create four points in this order: x-y center, z center, image center, and dial. The points can be anywhere on the screen. They can be free points, or they can be constrained by some construction. The point labels do not matter. The setup tool will construct hidden points beneath them. Below is an explanation of the function of each of these points.

x-y center - This is the origin of the x-y plane, viewed from above. The x and y coordinates of a pre-image point are determined by a point's position in relation to this point.
z center - This is a reference point for the z coordinate of the pre-image.
image center - This is the point of the image that corresponds to the origin of the pre-image.
dial - This is the axis of the dial that controls the rotations.

As soon as the setup is complete, click the "Initialize" button to set the positions of some of the objects. This button will not be needed again. It can now be hidden. The setup tool takes care of all of the assumed inputs. To transform a point, only two objects need to be selected. The first is the horizontal position of the point with respect to the point x-y center. The second is the vertical position with respect to the point z center. Try this simple construction. It is an irregular pyramid. The labels are shown for the reference illustrations. They do not have to have these names. In fact, on certain constructions there can be conflicts if they are given these names. Construct the base ABCDE near the x-y center. Point F is the horizontal location of the apex. Points G and H will determine the heights of the base and the apex. Select the Point tool and select these points in order:
A, G
B, G
C, G
D, G
E, G
The second point is the same for each entry because all of the base points lie in a horizontal plane at a height determined by point G. If you followed that sequence correctly, five white points will have been drawn near image center. Connect them with line segments. Now use the Point tool again to plot the apex:
F, H
Use line segments to connect the image of the apex to the images of the five base points. Experiment with the controls and see how they affect the image. It is still possible to change the shape of the base and the two points that control the heights. The image will change with them. Now try a locus. It is easy enough to draw a circle in the x-y plane, but in the perspective view it will appear as an ellipse. First draw a circle using x-y center as the circle center. Construct point J on the circle. Use the Point tool to construct its image at the same height as the base.
J, G
Select point J and the new image point and construct the locus. It will be an ellipse.

Construct a Cube

The point z center may be anywhere on the screen. It may even be hidden. Heights of points are measured with respect to z center, but the reference points do not have to be directly above or below it. Often it is convenient to place z center on a horizontal line through x-y center. That way we can benefit from the symmetry of certain solids, such as the cube. It is assumed that if you have come this far, you know enough about Sketchpad to do certain intermediate level constructions. Construct a horizontal line with two points on it. Use the Perspective Setup tool. Designate the right point as x-y center and the left point as z center. Construct square ABCD with horizontal and vertical sides and centered on x-y center. If the cube is to be centered on the origin, then these four points can be used not only for the horizontal location, but also for the vertical. The height of the top vertices of the cube will be the same as the height of point A above z center. The bottom vertices will have the height of point B. Here is the selection sequence for drawing the eight vertices of the cube with the Point tool:
A, A
B, A
C, A
D, A
A, B
B, B
C, B
D, B
Now draw the edges by connecting the vertices with line segments. Having eight disconnected points on the screen can be confusing because it is difficult to tell which are in front and which are in back. It may be helpful to use the spin control to move them. The points will move about a vertical axis in the same direction as the spin control is turning. When plotting more complex images, it is a good idea periodically to stop plotting points and do parts of the line work. This might be a good time to play around with the controls. Look at it in Perspective view. It is the most life-like. The images of opposite faces will not be congruent, and parallel edges will point toward a common vanishing point.
This is because different parts of the cube are projected at different scales, depending on their distance from the observer. Now click on the Orthogonal Projection button. After it shifts, all of the parallel edges will be projected with the same scale. In the case of the cube, that means that their images will be congruent, making it impossible to distinguish the near edges from the far edges. The Isometric Projection view is an orthogonal projection with the image rotated into such a position that any distances measured parallel to the x, y, or z-axis will have the same scale. If you ever had a traditional drafting class, you probably learned to do this with a T-square and a 30-60-90 triangle. It is quite effective for showing dimensions, but this example shows some of the shortcomings. It can create ambiguous vertices. The cube does not even look like a cube unless you know what you want to see. It looks like a regular hexagon with three diagonals.

Plotting by Coordinates

Most of the perspective drawings from the Whistler Alley site are constructed dynamically, as shown above, so that the position of each point is controlled by two other points. This makes it possible to manipulate the image by dragging objects on the screen. There is also a group of tools for plotting points by coordinates. In general, this method is more cumbersome, but it is also much easier to place points at exact coordinates. Before plotting anything, it will be necessary to execute the Perspective Setup tool as described above. It is not necessary to have coordinate axes showing, but it might help. To construct them, choose the custom tool xyz-axes. Click one time only on some open space near the top or bottom of the screen. Choose the selection arrow tool immediately in order to deactivate the tool. The axis length is set randomly, so the axes will probably be too short or too long for your needs. On the point that you clicked is a line segment controlling the length of the axes. Drag the red point to control the length.
There will also be a new hide/show button for the axes. In Preferences set the distance unit to centimeters. Next, from the Graph menu, choose New Parameter. Define these three parameters:
x = 2 cm
y = 3 cm
z = –1 cm
Here the point (2, 3, –1) is plotted using the image center point as the origin. Experiment with changing the coordinates and the viewing perspective. In the following example, an octahedron will be plotted, with all six vertices at a unit distance from the image center. This way, only three numbers will be required for the coordinates. Hide or delete the previously plotted point. From the Graph menu, choose New Parameter. Define these three parameters:
t[1] = 1 cm
t[2] = 0 cm
t[3] = –1 cm
Choose the custom tool Plot (x,y,z). Click in order on the parameters t[1], t[2], t[2]. Notice that t[2] is used twice in a row. After clicking it the first time, it will no longer be highlighted. Drag the cursor off of the number and then back before making the last click. The point (1, 0, 0) is plotted. Using the same three parameters, plot the remaining vertices and connect them with line segments. Here are the coordinates of all six vertices, beginning with the one that was just plotted:
(1, 0, 0)
(-1, 0, 0)
(0, 1, 0)
(0, -1, 0)
(0, 0, 1)
(0, 0, -1)
It is easy to get the coordinates in the wrong order, especially if a large group of points is being plotted. To avoid this, keep an eye on the prompt that appears on the status bar. It will indicate the name of the next coordinate required. Use the Plot (x,y,z) tool to plot by rectangular coordinates. Two other coordinate plotting tools are available: Plot (r,theta,z) for cylindrical coordinates, and Plot (rho,theta,phi) for spherical coordinates. The distance units will be consistent near the origin, but they are subject to the scale control. The coordinates may be defined by parameters, measurements or calculations.
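The difference between the perspective and orthogonal views described above can be imitated in a few lines of code. The sketch below is only an illustration of the underlying projection math, not the Sketchpad tools' internal implementation; the angle values, viewing distance, and function names are all invented for the example.

```python
import math

# Rotate a 3D point about the vertical axis (the "spin" dial), tilt the view,
# then either divide by depth (perspective) or simply drop the depth
# coordinate (orthogonal projection).
def project(p, spin=0.4, tilt=0.3, distance=6.0, perspective=True):
    x, y, z = p
    # spin about the vertical (z) axis
    x, y = (x * math.cos(spin) - y * math.sin(spin),
            x * math.sin(spin) + y * math.cos(spin))
    # tilt the camera: rotate about the horizontal x axis; y becomes depth
    y, z = (y * math.cos(tilt) - z * math.sin(tilt),
            y * math.sin(tilt) + z * math.cos(tilt))
    if perspective:
        s = distance / (distance - y)   # nearer parts are drawn at a larger scale
        return (x * s, z * s)
    return (x, z)                        # orthogonal: one uniform scale

# The cube from the manual: all eight sign combinations of (±1, ±1, ±1)
cube = [(sx, sy, sz) for sx in (-1, 1) for sy in (-1, 1) for sz in (-1, 1)]
images = [project(v) for v in cube]
```

With `perspective=False` the map is linear, so opposite edges of the cube project to congruent segments, exactly the ambiguity the orthogonal view produces; with `perspective=True` the depth-dependent scale `s` makes parallel edges converge toward a vanishing point.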
If the coordinate has no dimension or an incorrect dimension (e.g., an angle where a distance is required), then the point will still be plotted, but the units of the coordinate will be set by the corresponding unit in the Preferences. Of course, all of these points will be projected onto the plane of the screen, so accurate distance and angle measurements will not be possible.

Last update: June 13, 2010 ... Paul Kunkel whistling@whistleralley.com For email to reach me, the word geometry must appear in the body of the message.
Representation Theorem for Generators of BSDEs Driven by G-Brownian Motion and Its Applications

Abstract and Applied Analysis, Volume 2013 (2013), Article ID 342038, 10 pages

Research Article

^1Department of Mathematics, Donghua University, 2999 North Renmin Road, Songjiang, Shanghai 201620, China
^2School of Mathematics, Shandong University, 27 Shanda Nanlu, Jinan 250100, China

Received 14 September 2013; Accepted 10 November 2013

Academic Editor: Litan Yan

Copyright © 2013 Kun He and Mingshang Hu. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

We obtain a representation theorem for the generators of BSDEs driven by G-Brownian motions and then we use the representation theorem to get a converse comparison theorem for G-BSDEs and some equivalent results for nonlinear expectations generated by G-BSDEs.

1. Introduction

Let $(\Omega,\mathcal{F},P)$ be a probability space and, for fixed $T>0$, let $(B_t)_{t\in[0,T]}$ be a standard Brownian motion and let $(\mathcal{F}_t)_{t\in[0,T]}$ be the augmentation of its natural filtration. Then Pardoux and Peng [1] introduced the backward stochastic differential equations (BSDEs) and proved the existence and uniqueness result of the BSDEs. In 1997, Peng [2] promoted g-expectations based on BSDEs. One of the important properties of g-expectations is the comparison theorem, or monotonicity. Chen [3] first considered a converse result of BSDEs in the equal case. After that, Briand et al. [4] obtained a converse comparison theorem for BSDEs in the general case. They also derived a representation theorem for the generator g. Following this paper, Jiang [5] discussed a more general representation theorem and then, in another paper [6], showed a more general converse comparison theorem. Here the representation theorem is an important method in solving the converse comparison problem and other problems (see Jiang [7]).
Peng [8–13] defined the G-expectations and G-Brownian motions (G-BMs) and proved the representation theorem of G-expectation by a set of mutually singular probabilities, which differs from nonlinear g-expectations because g-expectations are equivalent with a family of probabilities absolutely continuous with respect to the probability measure $P$. Soner et al. [14] obtained an existence and uniqueness result for 2BSDEs. Recently, Hu et al. [15] proved another existence and uniqueness result on BSDEs driven by G-Brownian motions (G-BSDEs). An important advantage of G-BSDEs is the ease of defining the associated nonlinear expectations. Hu et al. in [16] gave a comparison theorem for G-BSDEs and discussed the properties of the corresponding nonlinear expectations. In this paper, we consider the representation theorem for generators of G-BSDEs and then consider the converse comparison theorem of G-BSDEs and some equivalent results for nonlinear expectations generated by G-BSDEs. In the following, in Section 2, we review some basic concepts and results about G-expectations. We give the representation theorem of G-BSDEs in Section 3. In Section 4, we consider the applications of the representation theorem of G-BSDEs, which contain the converse comparison theorem and some equivalent results for nonlinear expectations generated by G-BSDEs.

2. Preliminaries

We review some basic notions and results of G-expectation, the related spaces of random variables, and the backward stochastic differential equations driven by a G-Brownian motion. The readers may refer to [10, 13, 15, 17–19] for more details.

Definition 1. Let $\Omega$ be a given set and let $\mathcal{H}$ be a vector lattice of real valued functions defined on $\Omega$, namely, $c\in\mathcal{H}$ for each constant $c$ and $|X|\in\mathcal{H}$ if $X\in\mathcal{H}$. $\mathcal{H}$ is considered as the space of random variables. A sublinear expectation $\hat{\mathbb{E}}$ on $\mathcal{H}$ is a functional $\hat{\mathbb{E}}:\mathcal{H}\to\mathbb{R}$ satisfying the following properties: for all $X, Y\in\mathcal{H}$, one has
(a) monotonicity: if $X\geq Y$, then $\hat{\mathbb{E}}[X]\geq\hat{\mathbb{E}}[Y]$;
(b) constant preservation: $\hat{\mathbb{E}}[c]=c$;
(c) subadditivity: $\hat{\mathbb{E}}[X+Y]\leq\hat{\mathbb{E}}[X]+\hat{\mathbb{E}}[Y]$;
(d) positive homogeneity: $\hat{\mathbb{E}}[\lambda X]=\lambda\hat{\mathbb{E}}[X]$ for each $\lambda\geq 0$.
$(\Omega,\mathcal{H},\hat{\mathbb{E}})$ is called a sublinear expectation space.
Definition 2. Let $X_1$ and $X_2$ be two $n$-dimensional random vectors defined, respectively, in sublinear expectation spaces $(\Omega_1,\mathcal{H}_1,\hat{\mathbb{E}}_1)$ and $(\Omega_2,\mathcal{H}_2,\hat{\mathbb{E}}_2)$. They are called identically distributed, denoted by $X_1 \overset{d}{=} X_2$, if $\hat{\mathbb{E}}_1[\varphi(X_1)] = \hat{\mathbb{E}}_2[\varphi(X_2)]$ for all $\varphi\in C_{b.Lip}(\mathbb{R}^n)$, where $C_{b.Lip}(\mathbb{R}^n)$ denotes the space of bounded and Lipschitz functions on $\mathbb{R}^n$.

Definition 3. In a sublinear expectation space $(\Omega,\mathcal{H},\hat{\mathbb{E}})$, a random vector $Y=(Y_1,\ldots,Y_n)$, $Y_i\in\mathcal{H}$, is said to be independent of another random vector $X=(X_1,\ldots,X_m)$, $X_i\in\mathcal{H}$, under $\hat{\mathbb{E}}[\cdot]$, denoted by $Y\perp X$, if for every test function $\varphi\in C_{b.Lip}(\mathbb{R}^m\times\mathbb{R}^n)$ one has $\hat{\mathbb{E}}[\varphi(X,Y)] = \hat{\mathbb{E}}\bigl[\hat{\mathbb{E}}[\varphi(x,Y)]_{x=X}\bigr]$.

Definition 4 (G-normal distribution). A $d$-dimensional random vector $X=(X_1,\ldots,X_d)$ in a sublinear expectation space $(\Omega,\mathcal{H},\hat{\mathbb{E}})$ is called G-normally distributed if for each $a,b\geq 0$ one has
$$aX + b\bar{X} \overset{d}{=} \sqrt{a^2+b^2}\,X,$$
where $\bar{X}$ is an independent copy of $X$; that is, $\bar{X}\overset{d}{=}X$ and $\bar{X}\perp X$. Here, the letter $G$ denotes the function
$$G(A) := \frac{1}{2}\,\hat{\mathbb{E}}[\langle AX, X\rangle] : \mathbb{S}_d \to \mathbb{R},$$
where $\mathbb{S}_d$ denotes the collection of $d\times d$ symmetric matrices.

Peng [13] showed that $X=(X_1,\ldots,X_d)$ is G-normally distributed if and only if for each $\varphi\in C_{b.Lip}(\mathbb{R}^d)$, $u(t,x):=\hat{\mathbb{E}}[\varphi(x+\sqrt{t}\,X)]$, $(t,x)\in[0,\infty)\times\mathbb{R}^d$, is the solution of the following G-heat equation:
$$\partial_t u - G(D^2_x u) = 0, \qquad u(0,x) = \varphi(x).$$
The function $G(\cdot):\mathbb{S}_d\to\mathbb{R}$ is a monotonic, sublinear mapping on $\mathbb{S}_d$, which implies that there exists a bounded, convex, and closed subset $\Gamma\subset\mathbb{S}_d^+$ such that
$$G(A) = \frac{1}{2}\sup_{\gamma\in\Gamma}\mathrm{tr}[\gamma A],$$
where $\mathbb{S}_d^+$ denotes the collection of nonnegative elements in $\mathbb{S}_d$. In this paper, we only consider the nondegenerate G-normal distribution; that is, there exists some $\underline{\sigma}^2>0$ such that $G(A)-G(B)\geq \frac{\underline{\sigma}^2}{2}\,\mathrm{tr}[A-B]$ for any $A\geq B$.

Definition 5. (i) Let $\Omega_T = C_0([0,T];\mathbb{R}^d)$ denote the space of $\mathbb{R}^d$-valued continuous functions on $[0,T]$ with $\omega_0=0$ and let $B_t(\omega)=\omega_t$ be the canonical process. Set
$$L_{ip}(\Omega_T) := \{\varphi(B_{t_1},\ldots,B_{t_n}) : n\geq 1,\ t_1,\ldots,t_n\in[0,T],\ \varphi\in C_{b.Lip}(\mathbb{R}^{d\times n})\}.$$
Let $G:\mathbb{S}_d\to\mathbb{R}$ be a given monotonic and sublinear function. G-expectation is a sublinear expectation defined by
$$\hat{\mathbb{E}}\bigl[\varphi(B_{t_1}-B_{t_0},\ldots,B_{t_n}-B_{t_{n-1}})\bigr] := \tilde{\mathbb{E}}\bigl[\varphi(\sqrt{t_1-t_0}\,\xi_1,\ldots,\sqrt{t_n-t_{n-1}}\,\xi_n)\bigr]$$
for all $0=t_0<t_1<\cdots<t_n\leq T$, where $\xi_1,\ldots,\xi_n$ are identically distributed $d$-dimensional G-normally distributed random vectors in a sublinear expectation space $(\tilde\Omega,\tilde{\mathcal H},\tilde{\mathbb E})$ such that $\xi_{i+1}$ is independent of $(\xi_1,\ldots,\xi_i)$ for every $i$. The corresponding canonical process $(B_t)$ is called a G-Brownian motion.

(ii) For each fixed $t\in[0,T]$, the conditional G-expectation $\hat{\mathbb E}_t$ for $\varphi(B_{t_1}-B_{t_0},\ldots,B_{t_n}-B_{t_{n-1}})\in L_{ip}(\Omega_T)$, where without loss of generality we suppose $t=t_j$, is defined by
$$\hat{\mathbb E}_{t_j}\bigl[\varphi(B_{t_1}-B_{t_0},\ldots,B_{t_n}-B_{t_{n-1}})\bigr] := \psi(B_{t_1}-B_{t_0},\ldots,B_{t_j}-B_{t_{j-1}}),$$
where
$$\psi(x_1,\ldots,x_j) = \tilde{\mathbb E}\bigl[\varphi(x_1,\ldots,x_j,\sqrt{t_{j+1}-t_j}\,\xi_{j+1},\ldots,\sqrt{t_n-t_{n-1}}\,\xi_n)\bigr].$$
For each fixed $t\in[0,T]$, we set $L_{ip}(\Omega_t) := \{\varphi(B_{t_1},\ldots,B_{t_n}) : n\geq 1,\ t_1,\ldots,t_n\in[0,t],\ \varphi\in C_{b.Lip}(\mathbb{R}^{d\times n})\}$. For each $p\geq 1$, we denote by $L^p_G(\Omega_T)$ (resp., $L^p_G(\Omega_t)$) the completion of $L_{ip}(\Omega_T)$ (resp., $L_{ip}(\Omega_t)$) under the norm $\|X\|_{L^p_G} := (\hat{\mathbb E}[|X|^p])^{1/p}$. It is easy to check that $\hat{\mathbb E}_t[\cdot]$ is continuous on $L_{ip}(\Omega_T)$ for each $t$ and can be extended continuously to $L^1_G(\Omega_T)$. For each fixed $a\in\mathbb{R}^d$, $B^a_t := \langle a, B_t\rangle$ is a one-dimensional $G_a$-Brownian motion, where $G_a(\alpha) = \frac{1}{2}(\sigma^2_{aa^T}\alpha^+ - \sigma^2_{-aa^T}\alpha^-)$, $\sigma^2_{aa^T} = 2G(aa^T)$, and $\sigma^2_{-aa^T} = -2G(-aa^T)$.
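Most display formulas in this section did not survive extraction. For orientation, here is the standard one-dimensional special case of the G-function and the G-heat equation from Peng's framework; the variance-bound notation $\underline{\sigma}^2 \le \overline{\sigma}^2$ is the usual one and may differ in detail from this paper's.

```latex
% For d = 1, a G-normally distributed X satisfies
% X \sim N(0, [\underline{\sigma}^2, \overline{\sigma}^2]), with
G(a) = \tfrac{1}{2}\,\hat{\mathbb{E}}[a X^2]
     = \tfrac{1}{2}\bigl(\overline{\sigma}^2 a^{+} - \underline{\sigma}^2 a^{-}\bigr),
\qquad a \in \mathbb{R},
% and u(t,x) := \hat{\mathbb{E}}[\varphi(x + \sqrt{t}\,X)] solves the G-heat equation
\partial_t u - G(\partial^2_{xx} u) = 0, \qquad u(0,x) = \varphi(x).
```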
Let $\pi^N_t = \{t^N_0, t^N_1, \ldots, t^N_N\}$, $N=1,2,\ldots$, be a sequence of partitions of $[0,t]$ such that $\max_j |t^N_{j+1}-t^N_j| \to 0$; the quadratic variation process of $B^a$ is defined by
$$\langle B^a\rangle_t := \lim_{N\to\infty}\sum_{j=0}^{N-1}\bigl(B^a_{t^N_{j+1}} - B^a_{t^N_j}\bigr)^2.$$
For each fixed $a,\bar a\in\mathbb{R}^d$, the mutual variation process of $B^a$ and $B^{\bar a}$ is defined by
$$\langle B^a, B^{\bar a}\rangle_t := \tfrac14\bigl(\langle B^{a+\bar a}\rangle_t - \langle B^{a-\bar a}\rangle_t\bigr).$$

Definition 6. For fixed $p\geq 1$, let $M^{p,0}_G(0,T)$ be the collection of processes in the following form: for a given partition $\{t_0,\ldots,t_N\}$ of $[0,T]$,
$$\eta_t(\omega) = \sum_{j=0}^{N-1}\xi_j(\omega)\,\mathbf{1}_{[t_j,t_{j+1})}(t),$$
where $\xi_j\in L_{ip}(\Omega_{t_j})$, $j=0,1,\ldots,N-1$. For $p\geq 1$, one denotes by $M^p_G(0,T)$, $H^p_G(0,T)$ the completion of $M^{p,0}_G(0,T)$ under the norms $\|\eta\|_{M^p_G} := (\hat{\mathbb E}[\int_0^T|\eta_s|^p ds])^{1/p}$, $\|\eta\|_{H^p_G} := \{\hat{\mathbb E}[(\int_0^T|\eta_s|^2 ds)^{p/2}]\}^{1/p}$, respectively. For each $\eta\in M^1_G(0,T)$, we can define the integrals $\int_0^T\eta_s\,ds$ and $\int_0^T\eta_s\,d\langle B^i,B^j\rangle_s$ for each $i,j=1,\ldots,d$. For each $\eta\in H^p_G(0,T)$ with $p\geq 1$, we can define Itô's integral $\int_0^T\eta_s\,dB^i_s$. Let $S^0_G(0,T)$ be the collection of processes of the form $h(t,B_{t_1\wedge t},\ldots,B_{t_n\wedge t})$ with $h\in C_{b.Lip}$. For $p\geq 1$ and $\eta\in S^0_G(0,T)$, set $\|\eta\|_{S^p_G} := \{\hat{\mathbb E}[\sup_{t\in[0,T]}|\eta_t|^p]\}^{1/p}$. Denote by $S^p_G(0,T)$ the completion of $S^0_G(0,T)$ under the norm $\|\cdot\|_{S^p_G}$.

We consider the following type of G-BSDEs (in this paper, we always use Einstein convention):
$$Y_t = \xi + \int_t^T f(s,Y_s,Z_s)\,ds + \int_t^T g_{ij}(s,Y_s,Z_s)\,d\langle B^i,B^j\rangle_s - \int_t^T Z_s\,dB_s - (K_T - K_t), \tag{13}$$
where $f$ and $g_{ij}$ satisfy the following properties.
(H1) There exists some $\beta>1$ such that $f(\cdot,\cdot,y,z),\ g_{ij}(\cdot,\cdot,y,z)\in M^\beta_G(0,T)$ for any $y,z$.
(H2) There exists some $L>0$ such that
$$|f(t,y,z)-f(t,y',z')| + \sum_{i,j=1}^d |g_{ij}(t,y,z)-g_{ij}(t,y',z')| \leq L\bigl(|y-y'|+|z-z'|\bigr).$$
For simplicity, we denote by $\mathfrak{S}^\alpha_G(0,T)$ the collection of processes $(Y,Z,K)$ such that $Y\in S^\alpha_G(0,T)$, $Z\in H^\alpha_G(0,T)$, and $K$ is a decreasing G-martingale with $K_0=0$ and $K_T\in L^\alpha_G(\Omega_T)$.

Definition 7. Let $\xi\in L^\beta_G(\Omega_T)$ and let $f$ and $g_{ij}$ satisfy (H1) and (H2) for some $\beta>1$. A triplet of processes $(Y,Z,K)$ is called a solution of (13) if for some $1<\alpha\leq\beta$ the following properties hold: (a) $(Y,Z,K)\in\mathfrak{S}^\alpha_G(0,T)$; (b) equation (13) holds.

Theorem 8 (see [15]). Assume that $\xi\in L^\beta_G(\Omega_T)$ and that $f$ and $g_{ij}$ satisfy (H1) and (H2) for some $\beta>1$. Then, (13) has a unique solution $(Y,Z,K)$. Moreover, for any $1<\alpha<\beta$, one has $Y\in S^\alpha_G(0,T)$, $Z\in H^\alpha_G(0,T)$, and $K_T\in L^\alpha_G(\Omega_T)$. We have the following estimates.

Proposition 9 (see [15]). Let $\xi\in L^\beta_G(\Omega_T)$ and let $f$, $g_{ij}$ satisfy (H1) and (H2) for some $\beta>1$. Assume that for some $1<\alpha<\beta$, $(Y,Z,K)$ is a solution of (13). Then, there exists a constant depending on $\alpha$, $T$, $G$, $L$ such that the standard a priori estimate holds.

Proposition 10 (see [15, 20]). Let $\xi\in L^\beta_G(\Omega_T)$ and $1<\alpha<\beta$ be fixed. Then, there exists a constant depending on $\alpha$ and $T$ such that the corresponding moment estimate holds.

Theorem 11 (see [16]). Let $(Y^l, Z^l, K^l)$, $l=1,2$, be the solutions of the following G-BSDEs: with data $\xi^l$, $f^l$, $g^l_{ij}$ satisfying (H1) and (H2) for some $\beta>1$, and with RCLL perturbation processes. If the difference of the data defines an increasing process, then $Y^1_t \geq Y^2_t$ for $t\in[0,T]$.

In this paper, we also need the following assumptions for G-BSDE (13).
(H3) For each fixed $(y,z)$, $t\mapsto f(t,y,z)$ and $t\mapsto g_{ij}(t,y,z)$ are continuous.
(H4) For each fixed $t$, the growth and convergence conditions on $f$ and $g_{ij}$ hold.
(H5) For each $t$, $f(t,\cdot,\cdot)$ and $g_{ij}(t,\cdot,\cdot)$ belong to the admissible class.
Assume that $\xi\in L^\beta_G(\Omega_T)$ and that $f$ and $g_{ij}$ satisfy (H1), (H2), and (H5) for some $\beta>1$. Let $(Y,Z,K)$ be the solution of G-BSDE (13) corresponding to $\xi$, $f$, and $g_{ij}$ on $[0,T]$. It is easy to check that the solutions are consistent on overlapping intervals.
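For orientation, the scalar ($d=1$) case of the G-BSDE (13) reads as follows in the notation of the Hu–Ji–Peng–Song framework, on which this paper builds; this is a sketch of the standard form and may differ in detail from the paper's own display.

```latex
Y_t = \xi + \int_t^T f(s, Y_s, Z_s)\,ds
          + \int_t^T g(s, Y_s, Z_s)\,d\langle B\rangle_s
          - \int_t^T Z_s\,dB_s - (K_T - K_t),
\qquad 0 \le t \le T,
```

with $K$ a decreasing G-martingale satisfying $K_0 = 0$; the extra term $K$ is what distinguishes a G-BSDE from a classical BSDE.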
Following [16], we can define a consistent nonlinear expectation from the solution map.

3. Representation Theorem of Generators of G-BSDEs

We consider the following type of G-FBSDEs, with Lipschitz coefficients. We now give the main result in this section.

Theorem 12. Let $b$, $h_{ij}$, and $\sigma$ be Lipschitz functions and let $f$ and $g_{ij}$ satisfy (H1), (H2), (H3), and (H4) for some $\beta>1$. Then, the representation formula for the generators holds.

Proof. For each fixed $\varepsilon>0$, we write the shorthand notation for simplicity. Thus, by Theorem 8, G-BSDE (22) has a unique solution. Applying Itô's formula, it is easy to verify that the shifted process solves the following G-BSDE. From Proposition 9, the estimates hold for some constant only depending on $\alpha$, $T$, $G$, and $L$. By Proposition 10 and the Lipschitz assumption, we obtain a bound whose constant depends on the data. Noting the moment bounds of the forward process (see [16, 19]), and the following inequality holds together with assumption (H4), we get the first convergence. Now, we prove (23). Let us consider the auxiliary process. It is easy to check the required bound; thus, by (29), we get the claimed estimate. We set the remainder term. By the Lipschitz condition, we can bound it. Noting the estimates (see [16, 19]), we obtain the second convergence. Now, we set the final term. It is easy to deduce the bound. Then, take limits on both sides of the above inequality and use assumption (H4); on the other hand, the reverse inequality holds. Then, we have the representation. The proof is complete.

4. Some Applications

4.1. Converse Comparison Theorem for G-BSDEs

We consider the following G-BSDEs with generators $f^l$, $g^l_{ij}$, $l=1,2$. We first generalize the comparison theorem in [16].

Proposition 13. Let $f^l$ and $g^l_{ij}$ satisfy (H1) and (H2) for some $\beta>1$, $l=1,2$. If the data are ordered, then, for each $t$, one has $Y^1_t\geq Y^2_t$.

Proof. From the above G-BSDEs, we take the difference of the equations. By the assumption, it is easy to check that the extra term is a decreasing process. Thus, using Theorem 11, we obtain the ordering for $t\in[0,T]$.

Remark 14. An ordering of the solutions does not imply an ordering of the generators term by term, as a simple counterexample shows.
Now, we give the converse comparison theorem.

Theorem 15. Let $f^1$, $g^1_{ij}$ and $f^2$, $g^2_{ij}$ satisfy (H1), (H2), (H3), (H4), and (H5) for some $\beta>1$. If the solutions are ordered for each terminal value and each time, then the generators are ordered q.s.

Proof. For simplicity, we take the shorthand notation. For each fixed point, let us consider the associated G-FBSDE. By Theorem 12, we have the representation for each generator. Since the solutions are ordered, take a sequence along which the representation converges. Therefore, the ordering of the generators holds q.s. By the assumptions (H2) and (H3), it is easy to deduce that it holds q.s. everywhere.

In the following, we use the corresponding shorthand notation.

Corollary 16. Let $f^1$, $g^1_{ij}$ and $f^2$, $g^2_{ij}$ be deterministic functions and satisfy (H1), (H2), (H3), and (H5) for some $\beta>1$. If the solutions are ordered for each terminal value, then the generators are ordered.

Proof. Taking the construction as in Theorem 15, since the generators are deterministic, we could get the ordering pointwise. And the proof in Theorem 15 still holds true.

4.2. Some Equivalent Relations

We consider the following G-BSDE with generator $f$, $g_{ij}$ and use the associated nonlinear expectation notation.

Proposition 17. Let $f$ and $g_{ij}$ satisfy (H1), (H2), (H3), (H4), and (H5) for some $\beta>1$ and fix $t$. Then, one has:
(1) positivity of the generators holds if and only if condition (47) holds for each terminal value;
(2) the translation property of the generators holds if and only if condition (48) holds;
(3) sublinearity of the generators holds if and only if the corresponding sublinearity of the expectation holds;
(4) positive homogeneity of the generators holds if and only if the corresponding homogeneity of the expectation holds.

Proof. (1) "⇒" part. For each fixed data, we take the appropriate terminal condition. Then, by Theorem 12 and the hypothesis on the generators, we can obtain the inequality. We choose the parameters suitably, which implies (47).
"⇐" part. Let $(Y,Z,K)$ be the solution of G-BSDE (46) corresponding to the given terminal condition. We claim that the shifted triplet is the solution of G-BSDE (46) corresponding to the shifted terminal condition. For this, we only need to check the martingale property; by (47) we can get the bound which implies (53). The proof of (1) is complete.
(2) "⇒" part. For each fixed data, we consider the two associated solutions. Then, by Theorem 12 and the hypothesis, we obtain the equality. We choose the parameters so that both conditions hold, which implies (48).
"⇐" part. Let
Clear Lake Shores, TX Find a Clear Lake Shores, TX Math Tutor ...While it might sound tedious and superfluous, many of the more complex topics are easily understood from the basics. Finally, it is important to inspire passion and a desire to learn in each student. I do this by transferring my own passion for learning and by applying the material to real life problems. 22 Subjects: including trigonometry, SAT math, photography, public speaking ...I have extensive experience in developing custom applications with MS Access. Importing or linking data from different data sources, writing custom forms. Extensive experience with SQL queries and query building. 14 Subjects: including ACT Math, statistics, differential equations, algebra 1 ...I look forward to working with you and contributing to your success.A real advantage in learning mathematics is that all of your knowledge carries forward. The skills you learn early will be used in all of your future mathematics courses. So once you learn the basics, you will build on them as you progress. 20 Subjects: including calculus, logic, algebra 1, algebra 2 ...My math background and lifelong pursuit of astronomy enable me to tutor in astronomy and algebra with a high degree of expertise. I generally start tutoring by answering specific questions from the student. This takes care of the student's urgent needs. 12 Subjects: including geometry, statistics, algebra 1, trigonometry ...As an accredited public librarian, I qualified each person's individual needs and wants while finding the best materials to help them study, write, or speak effectively. I performed reference queries for children, tweens, teens, college students, and adults using reference materials, subject-int... 
44 Subjects: including ACT Math, reading, Spanish, GED
Laurel, MD Precalculus Tutor Find a Laurel, MD Precalculus Tutor ...This has kept my skills sharp and I've been able to help a number of students with geometry since I started tutoring. One of my more recent students had a D in her geometry course but with help was able to finish with a B. I can help with both algebra and calculus based physics. 19 Subjects: including precalculus, reading, calculus, physics ...My students like me for being friendly and nice. I love kids. My way of learning and teaching is to make things clear and simple. 9 Subjects: including precalculus, physics, geometry, calculus ...I have about 10 years of private teaching experience on these instruments. My music degrees are from the University of Maryland (Bachelor of Music in 2004, Master of Music in 2006). I also graduated with a Bachelor of Science degree in Physics from The College of New Jersey in 1993. Whether a ... 15 Subjects: including precalculus, physics, algebra 1, calculus I recently graduated from UMD with a Master's in Electrical Engineering. I scored a 790/740 Math/Verbal on my SAT's and went through my entire high-school and college schooling without getting a single B, regardless of the subject. I did this through perfecting a system of self-learning and studyi... 15 Subjects: including precalculus, calculus, physics, GRE ...I realize the importance of making sure young students have a total grasp of math concepts, as this is the foundation for future success in the higher level math courses, which will follow them throughout life. As a Mathematics/Computer science major in undergraduate school, I graduated with a 3... 
33 Subjects: including precalculus, reading, English, geometry
Results 1 - 10 of 41

The set A is low for Martin-Löf random if each random set is already random relative to A. A is K-trivial if the prefix complexity K of each initial segment of A is minimal, namely K(A↾n) ≤ K(n) + O(1). We show that these classes coincide. This implies answers to questions of Ambos-Spies and Kučera [2], showing that each low for Martin-Löf random set is Δ⁰₂. Our class induces a natural intermediate Σ⁰₃ ideal in the r.e. Turing degrees (which generates the whole class under downward closure). Answering
Cited by 79 (21 self)

- J. Symbolic Logic, 2005
We compare various notions of algorithmic randomness. First we consider relativized randomness. A set is n-random if it is Martin-Löf random relative to ∅^(n-1). We show that a set is 2-random if and only if there is a constant c such that infinitely many initial segments x of the set are c-incompressible: C(x) ≥ |x| − c. The 'only if' direction was obtained independently by Joseph Miller. This characterization can be extended to the case of time-bounded C-complexity.
Cited by 38 (16 self)

- 2002
Cited by 28 (12 self) — We investigate combinatorial lowness properties of sets of natural numbers (reals). The real A is super-low if A′ ≤_tt ∅′, and A is jump-traceable if the values of J^A(e) can be effectively approximated in a sense to be specified. We investigate those properties, in particular showing that super-lowness and jump-traceability coincide within the r.e. sets but none of the properties implies the other within the ω-r.e. sets. Finally we prove that, for any low r.e. set B, there is a K-trivial set A ≰_T B.

- Proceedings of the Twelfth Workshop of Logic, Language, Information and Computation (WoLLIC 2005). Electronic Lecture Notes in Theoretical Computer Science 143, 2006.

- J. Math. Log. Cited by 21 (7 self) — Abstract. As a natural example of a 1-random real, Chaitin proposed the halting probability Ω of a universal prefix-free machine. We can relativize this example by considering a universal prefix-free oracle machine U. Let Ω^A_U be the halting probability of U^A; this gives a natural uniform way of producing an A-random real for every A ∈ 2^ω. It is this operator which is our primary object of study. We can draw an analogy between the jump operator from computability theory and this Omega operator. But unlike the jump, which is invariant (up to computable permutation) under the choice of an effective enumeration of the partial computable functions, Ω^A_U can be vastly different for different choices of U. Even for a fixed U, there are oracles A =* B such that Ω^A_U and Ω^B_U are 1-random relative to each other. We prove this and many other interesting properties of Omega operators.
We investigate these operators from the perspective of analysis, computability theory, and of course, algorithmic randomness.

- Mathematical Logic Quarterly. Cited by 17 (9 self) — Let ω denote the set of natural numbers. For functions f, g: ω → ω, we say that f is dominated by g if f(n) < g(n) for all but finitely many n ∈ ω. We consider the standard "fair coin" probability measure on the space 2^ω of infinite sequences of 0's and 1's. A Turing oracle B is said to be almost everywhere dominating if, for measure one many X ∈ 2^ω, each function which is Turing computable from X is dominated by some function which is Turing computable from B. Dobrinen and Simpson have shown that the almost everywhere domination property and some of its variant properties are closely related to the reverse mathematics of measure theory. In this paper we exposit some recent results of Kjos-Hanssen, Kjos-Hanssen/Miller/Solomon, and others concerning LR-reducibility and almost everywhere domination. We also prove the following new result: If B is almost everywhere dominating, then B is superhigh, i.e., 0′′ is ...

- J. Logic and Computation, 2007. Cited by 11 (8 self) — We investigate notions of randomness in the space C[2^N] of nonempty closed subsets of {0, 1}^N.
A probability measure is given and a version of the Martin-Löf test for randomness is defined. Π^0_2 random closed sets exist but there are no random Π^0_1 closed sets. It is shown that any random closed set is perfect, has measure 0, and has box dimension log_2(4/3). A random closed set has no n-c.e. elements. A closed subset of 2^N may be defined as the set of infinite paths through a tree, and so the problem of compressibility of trees is explored. If Tn = T ∩ {0, 1}^n, then for any random closed set [T] where T has no dead ends, K(Tn) ≥ n − O(1) but, for any k, K(Tn) ≤ 2^(n−k) + O(1), where K(σ) is the prefix-free complexity of σ ∈ {0, 1}*.

- Mathematical Logic Quarterly, 2004. ... examine the randomness and triviality of reals using notions arising from martingales and prefix-free machines. ...

- CCA 2005, Second International Conference on Computability and Complexity in Analysis, Fernuniversität Hagen, Informatik Berichte 326:103–116, 2005. Cited by 10 (6 self) — The present work investigates several questions from a recent survey of Miller and Nies related to Chaitin's Ω numbers and their dependence on the underlying universal machine. It is shown that there are universal machines for which Ω_U is just Σ_x 2^(1−H(x)). For such a universal machine there exists a co-r.e. set X such that Ω_U[X] = Σ_{p : U(p)↓ ∈ X} 2^(−|p|) is neither left-r.e. nor Martin-Löf random. Furthermore, one of the open problems of Miller and Nies is answered completely by showing that there is a sequence U_n of universal machines such that the truth-table degrees of the Ω_{U_n} form an antichain.
Finally it is shown that the members of a hyperimmune-free Turing degree of a given Π^0_1-class are not low for Ω unless this class contains a recursive set.
Wolfram Demonstrations Project

Galileo's Inclined Plane Experiment

In 1603, Galileo performed a classic experiment in mechanics: he measured the distances covered by a ball rolling on an inclined plane, which slows down the ball compared to its free fall. The distance traveled in successive equal time intervals grows by the increasing ratios 1, 3, 5, 7, … units for any angle, leading to the key observation that the total distance covered is proportional to the time squared (d ∝ t²), with the progression 1, 4, 9, 16, ….
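The odd-number rule is easy to verify numerically: partial sums of 1, 3, 5, 7, … are exactly the perfect squares, which is why the total distance grows as the square of the time. A quick illustrative check (plain Python, not part of the Demonstration):

```python
# Distances covered in successive equal time ticks grow as the odd numbers.
increments = [2 * k - 1 for k in range(1, 7)]          # 1, 3, 5, 7, 9, 11
totals = [sum(increments[:k]) for k in range(1, 7)]    # cumulative distance

# The cumulative distances are the perfect squares: distance ∝ time².
print(increments)  # [1, 3, 5, 7, 9, 11]
print(totals)      # [1, 4, 9, 16, 25, 36]
```

The same pattern holds for any number of intervals, since 1 + 3 + … + (2n − 1) = n².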
Stress Testing Assets or Portfolios

According to the dictionary, stress testing is a form of testing used to determine the stability of a given system or entity. In our case, finance, a stress test measures how much return an asset or a portfolio would deliver in case of a dramatic change in one or several economic factors such as the S&P500, EUROSTOXX50, volatility (VIX), 3m T-bill, 10Y T-note, inflation, Aaa yield or Baa yield. For example:
- What happens if equity markets crash by more than -30% in April 2011?
- What happens if 10Y interest rates go up by at least +1% in March 2011?
- What happens if oil prices rise by 100% in 2011, inflation increases by +3% and gold collapses by -50%, all together?
Some software provides the asset stress return with some probabilities, uses simulation, or uses copulas to describe the relation between assets. Other software is so complex that it takes two hours to stress test one asset, or it does not allow the stress test of several economic factors, or it does not stress the correlation between the economic factors (i.e., correlation coefficients between economic factors increase when the market crashes). A good stress test model should be transparent to the user, easy to explain to a client, allow the stress testing of an asset which has never had a historical negative return, and correctly forecast future asset stress returns.
The following model computes an asset's or portfolio's stressed return from a scenario on multiple economic factors and on their respective correlation coefficients. It uses: the expected beta between the asset and each factor; the expected correlation between the economic factors Fi; and the most significant factor between the asset and each factor Fi (formulas omitted). Computing the stress test for a hedge fund (10-year track record, maximum historical monthly loss of -3.82%) using the above model would give the following results:
1) The green cells represent the economic factors' assumptions.
2) The hedge fund return (i.e., -26.07%) over one month is shown in the last column, assuming the above assumptions are realized.
3) The hedge fund return (i.e., -58.47%) over one month is shown in the last column, assuming the above assumptions are realized and the economic factors' historical correlation coefficients are stressed.
The Stress Test model is available in the AlternativeSoft platform.
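The vendor's exact formulas are not reproduced above, but the general idea — scale each factor shock by the asset's beta to that factor and sum — can be sketched in a few lines. This is a simplified illustration only: it ignores the cross-factor correlation adjustment the text describes, and all function names and sample data are invented:

```python
def mean(xs):
    return sum(xs) / len(xs)

def cov(xs, ys):
    # Sample covariance of two equal-length return series.
    mx, my = mean(xs), mean(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)

def beta(asset, factor):
    # Ordinary regression beta of the asset on one factor.
    return cov(asset, factor) / cov(factor, factor)

def stressed_return(asset, factors, shocks):
    """Toy single-period stress: sum of beta_i * shock_i over the factors.
    A real model would also stress the correlations between the factors."""
    return sum(beta(asset, f) * s for f, s in zip(factors, shocks))
```

For instance, an asset that historically moved twice as much as an equity factor would show roughly a -60% stressed return under a -30% equity shock.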
Executive Committee Vote
Date: 01/21/97 at 21:56:03
From: Keith Cooke
Subject: Probability and Statistics?
124 delegates attend an annual convention at which a new 13-member executive committee will be elected from a list of 26 candidates. Each of the delegates must vote for 10 candidates (i.e., must place an X next to at least 10 names on the list of 26). The question posed is: What is the lowest number of votes a candidate could get and be elected? I'm not sure whether this is a combinatorial, statistical, or impossible question. Under a worst (or best) case scenario, I guess you could be elected with just one vote, but the question then becomes, what is the probability of that happening? I think that a number of practical assumptions would have to be made (e.g., everyone doesn't vote for the same ten candidates - random lists, no spoiled ballots, etc.) in order to figure this out.
Date: 01/26/97 at 22:36:18
From: Doctor Mitteldorf
Subject: Re: Probability and Statistics?
Dear Keith,
It seems to me that the "worst case" from the point of view of democracy would be if there were 10 candidates who were very popular, so that everyone voted for the same 10, and then there would be three more slots on the executive committee for which no candidates had any votes at all. You're right that if this scenario "almost" obtained, it would be possible for all the votes but one to be concentrated in the top 12 candidates, so that the 13th prevailed with just one vote. To calculate the probability of this happening, you need some extra assumptions about human behavior. A very unrealistic assumption, but one that makes for an interesting and challenging statistics problem, would be to assume that every vote is random - that each delegate is equally likely to cast a vote for each candidate. Then there's a complicated combinatorial problem: how many ways can the votes be distributed so that there's a candidate who gets elected with only one vote?
(The total number of ways the votes are distributed should go in the denominator, but that's relatively easy to compute.) If I had to answer this question for some practical purpose, I might decide that the combinatorial problem is too difficult, and I would use a "Monte Carlo simulation." I would program a computer to act like 124 voters selecting 10 candidates from a list of 26. The computer could make random selections, and tally up the votes in a tiny fraction of a second, then repeat the entire election several million times, noticing how many of the trials resulted in this skewed result where someone gets elected by one vote. I'd go off and have lunch, and when I got back, the computer would give me an estimate of the probability that this might happen - under the very unrealistic assumption that each delegate is equally likely to cast a vote for each candidate. If we knew more about the voting behavior of real people in the situation, we could make the Monte Carlo simulation more realistic. (For example, we could say that everyone who votes for candidate A has an 80 percent chance of also voting for B, but only a 10 percent chance of voting for Z.) I can't tell if this is a practical problem for which you're seeking practical guidance, or an abstract mathematical amusement. If it's the latter, you might want to try formulating the problem in the way that makes it most interesting to think about. Try smaller numbers, and try counting the possible distributions of votes, and see if a formula suggests itself that extends to the larger numbers. But if it's a practical answer you're after, the Monte Carlo simulation is the way to go. -Doctor Mitteldorf, The Math Forum Check out our web site! http://mathforum.org/
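The simulation Doctor Mitteldorf describes can be sketched in a few lines; the trial count, seed, and names below are arbitrary choices of mine:

```python
import random

def simulate_elections(trials=500, n_delegates=124, n_candidates=26,
                       votes_each=10, committee_size=13, seed=42):
    """Run `trials` random elections; return the smallest vote total that
    still earned a seat on the 13-member committee across all trials."""
    rng = random.Random(seed)
    weakest_winners = []
    for _ in range(trials):
        tally = [0] * n_candidates
        for _ in range(n_delegates):
            # Each delegate marks 10 distinct candidates uniformly at random.
            for c in rng.sample(range(n_candidates), votes_each):
                tally[c] += 1
        tally.sort(reverse=True)
        weakest_winners.append(tally[committee_size - 1])
    return min(weakest_winners)
```

With uniform voting each candidate expects 124 · 10 / 26 ≈ 48 votes, so a one-vote winner essentially never appears; modeling correlated, realistic voting behavior is what would change that.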
Graph Waveform Array VI
1. Open a blank VI and build the front panel shown in Figure 1.
   1. Place an array, located on the Controls>>All Controls>>Array & Cluster palette, on the front panel.
   2. Label the array Waveform Array.
   3. Place a numeric indicator, located on the Controls>>Numeric Indicators palette, in the array shell.
   4. Place a waveform graph, located on the Controls>>Graph Indicators palette, on the front panel.
Homomorphisms & the isomorphism thm.
April 12th 2010, 01:50 PM
Homomorphisms & the isomorphism thm.
P: G to H is a group homomorphism, L = Image(P) = {h in H | h = P(g) for some g in G}.
A. Show L is a subgroup of H.
Now assume G = $Z_{12}$ under addition and H = $S_{3}$.
B. Show that P can't be onto. Which subgroups of $S_{3}$ are possible for L?
C. Find a specific example of some group homomorphism P from G to H which isn't the trivial map.
my thoughts: for part A, I had that P(g)P(h) = P(gh) for any g, h in G, P($1_G$) = $1_H$ and P($g^{-1}$) = $P(g)^{-1}$, by the definition of a hom, but was told that this is wrong because I need to discuss L. What am I missing here?
for B, I know the possible subgroups of H are <1>, <(123)>, <(12)>, <(23)>, <(13)>, and H, all of order 1, 2, 3 or 6, which all divide the order of G -- from there I don't know how to show there is no onto hom.
for (C), I have no idea how to find a mapping that will work! Thank you so much for any help!
April 12th 2010, 03:12 PM
Is it closed under mult.? Does it contain the identity element? Does it contain inverses?
Why don't you think about this one for a little longer.
April 12th 2010, 05:55 PM
Does this mean that, accordingly, L = the image is an abelian subgroup, so the only possibilities are <1>, <(123)>, <(12)>, <(23)>, <(13)>, and not all of H, so P is not onto?
Then, for part (C), does the following map work?
P(0) --> e
P(1, 3, 5, 7, 9, 11) --> (12)
P(2, 4, 6, 8, 10) --> e
If so, the way I found it was basically by guessing... I guess what I'm asking is, is there a systematic way to find this sort of thing dealing with the orders of elements? I tried to find a map onto the group <(123)>, but I couldn't figure that one out. Thank you!!
April 12th 2010, 08:48 PM
I'm not quite sure what you're asking. The reason that the map in your example can't be surjective is that it would imply that $S_3$ is abelian, since $\mathbb{Z}_{12}$ is.
April 12th 2010, 10:38 PM
That makes sense, thanks. Just to confirm, does that mean that <1>, <(123)>, <(12)>, <(23)>, <(13)> are all possibilities for L, as they are abelian?
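The maps discussed in the thread can be checked mechanically. A small sketch (permutations of {0, 1, 2} written as tuples; this encoding and the names are my own):

```python
from itertools import product

def compose(p, q):
    """Composition in S3: (p ∘ q)(i) = p[q[i]], permutations as 3-tuples."""
    return tuple(p[q[i]] for i in range(3))

def power(p, n):
    result = (0, 1, 2)          # the identity permutation e
    for _ in range(n):
        result = compose(p, result)
    return result

swap = (1, 0, 2)     # the transposition (12), order 2
cycle = (1, 2, 0)    # the 3-cycle (123), order 3

# n -> (12)^n is exactly the poster's map (odd n -> (12), even n -> e);
# n -> (123)^n maps Z12 onto the subgroup <(123)>.  Both work because the
# element's order (2 and 3 respectively) divides 12.
P = lambda n: power(swap, n % 12)
Q = lambda n: power(cycle, n % 12)

for f in (P, Q):
    # Homomorphism check: f(a + b mod 12) = f(a) ∘ f(b) for all a, b.
    assert all(f((a + b) % 12) == compose(f(a), f(b))
               for a, b in product(range(12), repeat=2))

print(sorted(set(Q(n) for n in range(12))))  # the three elements of <(123)>
```

The "systematic way" the poster asks about is just this divisibility condition: sending a generator of $Z_{12}$ to any element of $S_3$ whose order divides 12 always yields a homomorphism.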
Coordinate geometry
October 8th 2009, 06:38 PM #1
Coordinate geometry
How is the equation of a curve going to become the equation of a pair of straight lines passing through the origin when we homogenise it with the equation of a straight line intersecting the curve at two points? (Mention the reasons, if any.)
October 9th 2009, 12:17 AM #2
i can not understand what you really mean by two straight lines, then homogenized; is it parameterized? two lines. . . . i know of a hyperbola in its degenerate case . . . . see this, the dotted lines are the degenerate case
October 9th 2009, 12:32 AM #3
coordinate geometry: "Pair Of Straight Lines | TutorVista.com" -- you can understand by seeing this
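For reference, the standard homogenisation argument the question is about, stated in general form (this worked sketch is not taken from the thread):

```latex
% Curve: a x^2 + 2h xy + b y^2 + 2g x + 2f y + c = 0,  chord: lx + my = 1.
% Substitute 1 = lx + my so that every term has degree two:
a x^2 + 2h xy + b y^2 + 2(gx + fy)(lx + my) + c\,(lx + my)^2 = 0 .
% This equation is homogeneous of degree two in x and y, so it factors into
% two linear forms and represents a pair of lines through the origin.  The
% intersection points P and Q of the chord with the curve satisfy it
% (since lx + my = 1 there), so the pair of lines is OP and OQ.
```

In short: homogenising forces every solution to be closed under scaling, and the only second-degree loci with that property are line pairs through the origin.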
Math Forum Discussions
Topic: Brachistochrone
Replies: 1   Last Post: Mar 13, 2013 8:26 AM
Torsten
Posts: 1,439   Registered: 11/8/10
Re: Brachistochrone
Posted: Mar 13, 2013 8:26 AM
"Melissa" wrote in message <khp06m$seh$1@newscl01ah.mathworks.com>...
> Hey! I'm trying to model a brachistochrone curve given a start and end point (randomly chosen). I'm not sure how to implement the parametric equations of the model with just two points to start off. Any suggestions?
Let A=(x1,y1) and B=(x2,y2) be the two randomly chosen points (where the points are ordered such that y2 > y1, so B is the higher point). Then the parametric brachistochrone curve between the two points is the cycloid
x(t) = x2 + R*(t - sin(t))
y(t) = y2 - R*(1 - cos(t))
The other condition the curve has to fulfill is that it passes through A:
x1 = x2 + R*(t* - sin(t*))
y1 = y2 - R*(1 - cos(t*))
for a certain value t*. This is a system of two equations in two unknowns (R and t*). Once you have determined R and t*, the curve (x(t), y(t)) for t in [0:t*] is the brachistochrone curve you want to determine. You may solve the two-equation system using MATLAB's fsolve, or you may eliminate R and solve the resulting equation for t* using MATLAB's fzero.
Best wishes
Torsten
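The elimination approach can also be sketched outside MATLAB; here is an illustrative Python version that uses bisection in place of fzero (the function name and tolerances are my own, and it assumes B is the higher point with A to its right):

```python
import math

def solve_cycloid(A, B):
    """Find R and t_star so the cycloid x(t) = x2 + R(t - sin t),
    y(t) = y2 - R(1 - cos t), starting at the higher point B, passes
    through A.  Eliminating R gives
        (t - sin t) / (1 - cos t) = (x1 - x2) / (y2 - y1),
    whose left side increases from 0 toward infinity on (0, 2*pi), so
    plain bisection finds t_star."""
    (x1, y1), (x2, y2) = A, B
    target = (x1 - x2) / (y2 - y1)

    f = lambda t: (t - math.sin(t)) / (1 - math.cos(t)) - target
    lo, hi = 1e-6, 2 * math.pi - 1e-6
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    t_star = 0.5 * (lo + hi)
    R = (y2 - y1) / (1 - math.cos(t_star))
    return R, t_star
```

As a sanity check, for A = (π, 0) and B = (0, 2) the answer is the half-arch of a unit cycloid: R = 1 and t_star = π.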
need help with a math formula
January 27th 2013, 05:23 AM
need help with a math formula
i have a problem: Sally's art project is in the shape of a circle. She wants to put a piece of glitter string around the edge of her project. If her project has a radius of 4 inches, how much string will she need? I have done it twice and gotten 2 different answers, one 16 and then 8; just need for someone to show how, thanks
January 27th 2013, 06:26 AM
Re: need help with a math formula
Please tell us exactly what you did and how you got those two answers. You appear to be saying that the string is going around the circumference of the circle. Is that correct? Do you know a formula for the circumference of a circle? (If the string is on the circumference of the circle, neither 16 nor 8 is even close to the correct answer.)
January 27th 2013, 08:47 PM
Re: need help with a math formula
Length of string required is the same as the circumference of the circle = 2πr = 2 × 3.14 × 4 = 25.12 inches.
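With the circumference formula C = 2πr the check is one line; the thread's 25.12 comes from rounding π to 3.14 (illustrative Python):

```python
import math

radius = 4.0                       # inches
circumference = 2 * math.pi * radius
print(round(circumference, 2))     # 25.13 with full-precision pi (25.12 if pi ≈ 3.14)
```

The answers 16 and 8 correspond to 4² and 2·4, suggesting the area and diameter formulas were mixed up with the circumference.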
Calculating Permutations and Job Interview Questions

The permutations of a set are the number of ways that the items in the set can be uniquely ordered. For example, the permutations of the set {1, 2, 3} are {1, 2, 3}, {1, 3, 2}, {2, 1, 3}, {2, 3, 1}, {3, 1, 2} and {3, 2, 1}. For N objects, the number of permutations is N! (N factorial, or 1 * 2 * 3 * ... N). Aside from theoretical interest in set theory, permutations have some practical use. Permutations can be used to define switching networks in computer networking and parallel processing (see Figure 1). Permutations are also used in a variety of cryptographic algorithms.

Figure 1, An Omega permutation network, from Interconnection Networks: an engineering approach, by Duato, Yalamanchili and Ni, Morgan Kaufmann, 2003, Pg. 32

Permutation Algorithms and Job Interview Questions

I wrote this web page as a result of a job interview. This may seem like a strange chain of motivation. Let me explain... If you take a look at my resume you'll see that I've worked at a number of different companies, so I've done a fair amount of interviewing. There was even a time when I interviewed for jobs just to keep in practice. Those interviewing a candidate for a software engineering position understandably would like to discover whether the candidate can write software. One way to determine this is to ask about past projects. What did the project involve? What did the candidate contribute to the project? What was the architecture of the software system? What problems came up and how were they solved? For most of my career, interviews for software engineering positions followed this format. The first time I encountered a radically different interview style was when I interviewed with Microsoft in June of 2000. A Microsoft recruiter looking for people for their compiler group called me up and persuaded me to travel to Redmond, Washington to interview with the Microsoft compiler group.
I spent a day talking to the main compiler group and another half a day talking to Microsoft Research. As you rotate through the members of the group that you are interviewing with at Microsoft, you are asked again and again to go to a white board and write algorithms in the form of C/C++ code. After a morning of this, I was taken out to lunch by a group member. We went to a good Italian restaurant and I thought that I would have a respite from problem solving. I was wrong. Over lunch he asked me one of those famous Microsoft thought puzzles. After lunch it was back to white board problems. The people I talked to at Microsoft asked very little about the work I had done in the past. I don't think quickly on my feet and I get "stage fright". Even for algorithms that I've written many times, like in-order binary tree traversal, I've frozen and been unable to recall the algorithm.

The Microsoft-style interview seems to be spreading. I've encountered this style of interview at nVidia and at two EDA start-up companies (at nVidia a guy who looked a bit like the rocker Joe Jackson walked into the conference room, told me his name and immediately started firing programming problems at me). To date I've never been offered a job after an interview like this. On a number of occasions I've pointed out to the interviewer that if they want to know whether I can write code, they can look at bearcave.com where I've published thousands of lines of C++ and Java code. For whatever reason, the interviewers have been unwilling to accept this as evidence of my software engineering abilities. Perhaps the reason is that, deprived of their white board problems, they don't know what else to ask. Interviewing people for a job is difficult. The interviewer must decide what characteristics are important and what questions will illuminate these characteristics. The interviewer may also have to interact with the interviewee on a person-to-person level, which they may be uncomfortable with.
It is a lot easier asking someone to write an algorithm on the white board. "Write the code on the white board" interviewers must confront the problem of picking an algorithm that is small enough to fit on a white board, complex enough to be difficult, and solvable in the time allotted to the interview. In some cases the result is a "tricky" algorithm. In one case the answer was an algorithm with a time complexity of the summation of N (i.e., 1 + 2 + 3 + ... + N), which one would never use in practice since there were better algorithms which did not meet the artificial constraints of the interviewer's problem. I spent many years working on compiler design and implementation. A modern compiler could be regarded as a collection of complex algorithms, some of which are heuristic. So for years I've been collecting and reading books on software algorithms. This came in handy when, during a phone interview, I was asked what the time complexity of a permutation was (e.g., N!). The interviewer then asked me how I would implement permutation and I said that one would use a recursive swapping algorithm. Since this is difficult to describe over the phone, they asked me to mail them the algorithm. The last algorithm on this web page (the lexicographic permutation algorithm, inspired by Bryan Flamig's algorithm) was the result. The fact that I knew the answer to the questions about permutation does not mean that I am, or am not, a good software engineer. These are remarkably tricky recursive algorithms. They are not easy to figure out and they are not like most algorithms I've worked with. A hiring decision made on such a basis is rather arbitrary. Permutation algorithms are interesting and they are, on rare occasion, useful. But they should not be used as questions in job interviews. Here, for the use and study of the next victim of such pointless questions, are three permutation algorithms, with links to a few more.
Permutation Algorithms

Permutation sets are usually calculated via recursion. Recursive algorithms have several common characteristics: the algorithms are powerful, they can be difficult to understand and, as a result, can be difficult to develop. There are a variety of methods for recursively calculating permutations. I've listed three here:

1. A recursive permutation algorithm closely based on a Java algorithm published on Alexander Bogomolny's web page Counting And Listing All Permutations
2. A permutation algorithm based on a string permutation algorithm from the course notes of the University of Exeter's Computational Physics class (PHY3134) (I was not able to identify an author).
3. An ordered (lexicographic) permutation algorithm. This algorithm is based on a permutation algorithm from the book Practical Algorithms in C++ by Bryan Flamig, John Wiley and Sons, 1995

Additional lexicographic permutation algorithms can be found on Alexander Bogomolny's permutations web page, including a lexicographic permutation algorithm based on one invented by Edsger W. Dijkstra. The following recursive permutation code is based on Alexander Bogomolny's algorithm.

Alexander Bogomolny's unordered permutation algorithm

    #include <stdio.h>

    void print(const int *v, const int size)
    {
       if (v != 0) {
          for (int i = 0; i < size; i++) {
             printf("%4d", v[i] );
          }
          printf("\n");
       }
    } // print

    void visit(int *Value, int N, int k)
    {
       static int level = -1;
       level = level + 1;
       Value[k] = level;

       if (level == N)
          print(Value, N);

       for (int i = 0; i < N; i++)
          if (Value[i] == 0)
             visit(Value, N, i);

       level = level - 1;
       Value[k] = 0;
    }

    int main()
    {
       const int N = 4;
       int Value[N];
       for (int i = 0; i < N; i++) {
          Value[i] = 0;
       }
       visit(Value, N, 0);
       return 0;
    }

The result, for N = 4, is shown below. If the sets are sorted it becomes clear that the result is correct.

University of Exeter algorithm

The Exeter permutations are in close to sorted order.
If a sorting step (using bubble sort) were added to produce lexicographic ordering, the Exeter algorithm may still be faster than mine, which uses a recursive set of rotations.

    #include <stdio.h>

    void print(const int *v, const int size)
    {
       if (v != 0) {
          for (int i = 0; i < size; i++) {
             printf("%4d", v[i] );
          }
          printf("\n");
       }
    } // print

    void permute(int *v, const int start, const int n)
    {
       if (start == n-1) {
          print(v, n);
       }
       else {
          for (int i = start; i < n; i++) {
             int tmp = v[i];
             v[i] = v[start];
             v[start] = tmp;
             permute(v, start+1, n);
             v[start] = v[i];
             v[i] = tmp;
          }
       }
    }

    int main()
    {
       int v[] = {1, 2, 3, 4};
       permute(v, 0, sizeof(v)/sizeof(int));
       return 0;
    }

The Exeter algorithm is building what is, in effect, a temporary set on the recursive stack. The output of this code is shown below; the sets that are out of lexicographic order are marked. It appears that bubble sort would reorder the permutations in N steps.

       1   4   2   3  <==
       2   4   1   3  <==
       4   2   1   3  <==
       4   3   1   2  <==
       4   1   2   3  <==

An Ordered Lexicographic Permutation Algorithm based on one published in Practical Algorithms in C++

The next permutation algorithm produces the permutations in lexicographic (or sorted) order. Here we assume that there is a defined ordering between the elements in the set, which can define a sorting order. For example, in the set {1, 2, 3} the sorting order is 1 < 2 < 3, or in the set {a b c}, a < b < c. The algorithm is diagrammed below for a set of four objects. The algorithm recursively calculates the permutations of larger and larger subsets of the set (e.g., a set of two elements, a set of three elements...). There is a cost incurred when the permutation is produced in lexicographic order. After each recursive step, the subset is rotated back into the original order in the preceding stage. The time complexity of this reordering appears to be similar to the cost that would be incurred by sorting the final permutation set, so this reordering may not be justified.

Figure 2

A C++ implementation of this algorithm is shown below.
The source code can be downloaded here.

    #include <stdio.h>

    void print(const int *v, const int size)
    {
       if (v != 0) {
          for (int i = 0; i < size; i++) {
             printf("%4d", v[i] );
          }
          printf("\n");
       }
    } // print

    void swap(int *v, const int i, const int j)
    {
       int t;
       t = v[i];
       v[i] = v[j];
       v[j] = t;
    }

    void rotateLeft(int *v, const int start, const int n)
    {
       int tmp = v[start];
       for (int i = start; i < n-1; i++) {
          v[i] = v[i+1];
       }
       v[n-1] = tmp;
    } // rotateLeft

    void permute(int *v, const int start, const int n)
    {
       print(v, n);
       if (start < n) {
          int i, j;
          for (i = n-2; i >= start; i--) {
             for (j = i + 1; j < n; j++) {
                swap(v, i, j);
                permute(v, i+1, n);
             } // for j
             rotateLeft(v, i, n);
          } // for i
       }
    } // permute

    int main()
    {
       int v[] = {1, 2, 3, 4};
       permute(v, 0, sizeof(v)/sizeof(int));
       return 0;
    }
The Tom Bearden Website Gabriel Kron and the Negative Resistor At the time of his death, Gabriel Kron was arguably the greatest electrical scientist ever produced by the United States. It appears that the availability of this Heaviside energy component surrounding any portion of the circuit may be the long sought secret to Gabriel Kron's open path that enabled him to produce a true negative resistor in the 1930s, as the chief scientist for General Electric on the U.S. Navy contract for the Network Analyzer at Stanford University. Kron was never permitted to release how he made his negative resistor, but did state that, when placed in the Network Analyzer, the generator could be disconnected because the negative resistor would power the circuit. Since a negative resistor converges surrounding energy and diverges it into the circuit, it appears that Kron's negative resistor gathered energy from the Heaviside component of energy flow as an open path flow of energy connecting together the local vicinities of any two separated circuit components that had been discarded by previous electrodynamicists following Lorentz. Hence Kron referred to it as the open path. Particularly see Gabriel Kron, The frustrating search for a geometrical model of electrodynamic networks, circa 1962. We quote: ...the missing concept of "open-paths" (the dual of "closed-paths") was discovered, in which currents could be made to flow in branches that lie between any set of two nodes. (Previously following Maxwell engineers tied all of their open-paths to a single datum point, the 'ground'). That discovery of open-paths established a second rectangular transformation matrix... which created 'lamellar' currents... A network with the simultaneous presence of both closed and open paths was the answer to the author's years-long search. 
A true negative resistor appears to have been developed by the renowned Gabriel Kron, who was never permitted to reveal its construction or specifically reveal its development. For an oblique statement of his negative resistor success, see Gabriel Kron, Numerical solution of ordinary and partial differential equations by means of equivalent circuits, Journal of Applied Physics, Vol. 16, Mar. 1945a, p. 173. Quoting: "When only positive and negative real numbers exist, it is customary to replace a positive resistance by an inductance and a negative resistance by a capacitor (since none or only a few negative resistances exist on practical network analyzers)." Apparently Kron was required to insert the words "none or" in that statement. See also Gabriel Kron, Electric circuit models of the Schrödinger equation, Phys. Rev. 67(1-2), Jan. 1 and 15, 1945, p. 39. We quote: "Although negative resistances are available for use with a network analyzer, ..." Here the introductory clause states in rather certain terms that negative resistors were available for use on the network analyzer, and Kron slipped this one through the censors. It may be of interest that Kron was a mentor of Sweet, who was his protégé. Sweet worked for the same company, but not on the Network Analyzer project. However, he almost certainly knew the secret of Kron's open path discovery and his negative resistor. Pooh-poohing the Kron negative resistor is just sheer naïveté. Kron was one of the greatest electrical scientists of all time, and applied full general relativity to rotating machines, electrical circuits and generators, etc. Simply go check his papers in the literature. Even today, there are few electrodynamicists really able to fully comprehend his advanced work. And his direct quotations from his own published technical papers in the literature leave no doubt he had made a negative resistor.
Further, other scientists have commented on Kron's discovery of the open path connecting any two points in a circuit, and usable to provide energy transfer at will. The mechanism by which he did this is what Kron was never allowed to reveal. Excerpted from On Extracting Electromagnetic Energy from the Vacuum, IC-2000, by Tom Bearden.
Find an Overbrook Hills, PA Algebra Tutor

...Both have become increasingly independent with their studies, which allows me the time to help other students. I look forward to working with you! Algebra 1 lays the foundation for all higher math classes. This subject requires abstract thinking, which is why so many students struggle.
23 Subjects: including algebra 2, algebra 1, reading, writing

...And as part of a summer camp, I have helped students with geometry to get them ready for their SAT math. While I am certified to teach biology and chemistry, I have a strong math foundation and have helped a summer camp put together curriculum material for math, including algebra. In high school my algebra teacher awarded me a special certificate for getting 100% on her final.
12 Subjects: including algebra 1, algebra 2, geometry, chemistry

...I am passionate about writing, as all facets of life require clear communication. Novels and plays, classic and contemporary, are also passions of mine, for who doesn't like a good story! My certificate and over forty credits at the graduate level qualify me to tutor many topics pertaining to English, writing, and public speaking.
17 Subjects: including algebra 1, reading, writing, English

...It doesn't have to be confusing and stressful. As an elementary teacher, I looked for ways to make math fun, interesting, and relevant for my students. I'm able to bring those same strategies to the students I tutor.
12 Subjects: including algebra 1, English, reading, grammar

...The balance of technical teaching and conceptual guidance will depend on the student's age, prior knowledge, and goals. I'm a lifelong musician. In addition to having a BM in composition, I play multiple instruments, and I have experience doing recording and digital production.
8 Subjects: including algebra 1, algebra 2, prealgebra, Java
Electronic Journal of Differential Equations, Vol. 2001(2001), No. 77, pp. 1-14. Title: Asymptotic behavior of solutions for some nonlinear partial differential equations on unbounded domains Authors: Jacqueline Fleckinger (Univ. Toulouse-1, Toulouse, France) Evans M. Harrell II (Georgia Inst. of Technology, Atlanta, USA) Francois de Thelin (Univ. Paul Sabatier, Toulouse, France) Abstract: We study the asymptotic behavior of positive solutions $u$ of $$ -\Delta_p u({\bf x}) = V({\bf x}) u({\bf x})^{p-1}, \quad p>1;\ {\bf x} \in \Omega,$$ and related partial differential inequalities, as well as conditions for existence of such solutions. Here, $\Omega$ contains the exterior of a ball in $\mathbb{R}^N$ $1
Elliptic cohomology Special and general types Special notions The generalized (Eilenberg-Steenrod) cohomology theory/spectrum called $tmf$ – for topological modular forms – is in a precise sense the union of all elliptic cohomology theories/elliptic spectra ( Hopkins 94). More precisely, $tmf$ is the homotopy limit in E-∞ rings of the elliptic spectra of all elliptic cohomology theories, parameterized over the moduli stack of elliptic curves $\mathcal{M}_{ell}$. That such a parameterization exists, coherently, in the first place is due to the Goerss-Hopkins-Miller theorem. In the language of derived algebraic geometry this refines the commutative ring-valued structure sheaf $\mathcal{O}$ of the moduli stack of elliptic curves to an E-∞ ring-valued sheaf $\mathcal{O}^{top}$, making $(\mathcal{M}_{ell}, \mathcal{O}^{top})$ a spectral Deligne-Mumford stack, and $tmf$ is the E-∞ ring of global sections of that structure sheaf (Lurie). The construction of $tmf$ has motivation from physics (string theory) and from chromatic homotopy theory: 1. from string theory. Associating to a space, roughly, the partition function of the spinning string/superstring sigma-model with that space as target spacetime defines a genus known as the Witten genus, with coefficients in ordinary modular forms. Now, the interesting genera typically appear as the values on homotopy groups (the decategorification) of orientations of multiplicative cohomology theories; for instance the A-hat genus, which is the partition function of the spinning particle/superparticle is a shadow of the Atiyah-Bott-Shapiro Spin structure-orientation of the KO spectrum. Therefore an obvious question is which spectrum lifts this classical statement from point particles to strings. The spectrum $tmf$ solves this: there is a String structure orientation of tmf such that on homotopy groups it reduces to the Witten genus of the superstring (Ando-Hopkins-Rezk 10). 
Mathematically this means for instance that $tmf$-cohomology classes help to detect elements in the string cobordism ring. Physically it means that the small aspect of string theory which is captured by the Witten genus is realized more deeply as part of fundamental mathematics (chromatic stable homotopy theory, see the next point) and specifically of elliptic cohomology. Since the full mathematical structure of string theory is still under investigation, this might point the way: A properly developed theory of elliptic cohomology is likely to shed some light on what string theory really means. (Witten 87, very last sentence) 2. from chromatic homotopy theory. The symmetric monoidal stable (∞,1)-category of spectra (finite spectra) has its prime spectrum parameterized by prime numbers $p$ and Morava K-theory spectra $K (n)$ at these primes, for natural numbers $n$. The level $n$ here is called the chromatic level. In some sense the part of this prime spectrum at chromatic level 0 is ordinary cohomology and that at level 1 is topological K-theory. Therefore an obvious question is what the part at level 2 would be, and in some sense the answer is $tmf$. (This point of view has been particularly amplified in the review (Mazel-Gee 13) of the writeup of the construction in (Behrens 13), which in turn is based on unpublished results based on (Hopkins 02)). For purposes of stable homotopy theory this means for instance that $tmf$ provides new tools for computing more homotopy groups of spheres via an Adams-Novikov spectral sequence. 
(Here $\mathcal{M}_{cub}$ is obtained by furthermore adding also the cuspidal cubic curve, hence we have canonical maps $\mathcal{M}_{ell} \to \mathcal{M}_{\overline{ell}} \to \mathcal{M}_{cub}$.) The Goerss-Hopkins-Miller theorem equips these three moduli stacks with E-∞ ring-valued structure sheaves $\mathcal{O}^{top}$ (and by Lurie (Survey) that makes them into spectral Deligne-Mumford stacks which are moduli spaces for derived elliptic curves etc.) The $tmf$-spectrum is defined to be the $E_\infty$-ring of global sections of $\mathcal{O}^{top}$ (in the sense of derived algebraic geometry, hence the homotopy limit of $\mathcal{O}^{top}$ over the etale site of $\mathcal{M}$). More precisely one sets • $TMF \coloneqq \Gamma(\mathcal{M}_{ell}, \mathcal{O}^{top})$; • $Tmf \coloneqq \Gamma(\mathcal{M}_{\overline{ell}}, \mathcal{O}^{top})$; • $tmf \coloneqq$ the connective cover of $Tmf$ (also $\simeq \Gamma(\mathcal{M}_{\overline{cub}}, \mathcal{O}^{top})$, Hill-Lawson 13, p. 2). Decomposition via arithmetic fracture squares We survey here some aspects of the explicit construction in (Behrens 13); a review is also in (Mazel-Gee 13). The basic strategy here is to use arithmetic squares in order to decompose the problem into smaller, more manageable pieces. Write $\overline{\mathcal{M}_{ell}}$ for the compactified moduli stack of elliptic curves.
In there one finds the pieces $\array{ \overline{\mathcal{M}_{ell}} &\stackrel{\iota_{p}}{\leftarrow}& (\overline{\mathcal{M}_{ell}})_p \\ {}^{\mathllap{\iota_{\mathbb{Q}}}}\uparrow && \\ (\overline{\mathcal{M}_{ell}})_{\mathbb{Q}} && }$ given by rationalization $(\overline{\mathcal{M}_{ell}})_{\mathbb{Q}} = \overline{\mathcal{M}_{ell}} \underset{Spec(\mathbb{Z})}{\times} Spec(\mathbb{Q})$ (hence this is the moduli of elliptic curves over the rational numbers) and by p-completion $(\overline{\mathcal{M}_{ell}})_p = \overline{\mathcal{M}_{ell}} \underset{Spec(\mathbb{Z})}{\times} Spf(\mathbb{Z}_p)$ for any prime number $p$, where $\mathbb{Z}_p$ denotes the p-adic integers and $Spf(-)$ the formal spectrum (hence this is the moduli of elliptic curves over the p-adic integers). This induces the arithmetic square decomposition which realizes $\mathcal{O}^{top}$ as the homotopy fiber product in $\array{ \mathcal{O}^{top} &\to& \prod_p (\iota_p)_\ast \mathcal{O}^{top}_p \\ \downarrow && \downarrow^{\mathrlap{L_{\mathbb{Q}}}} \\ (\iota_{\mathbb{Q}})_\ast \mathcal{O}^{top}_{\mathbb{Q}} &\stackrel{\alpha_{arith}}{\to}& \left( \prod_p (\iota_p)_\ast \mathcal{O}^{top}_p \right)_{\mathbb{Q}} }$ Here $\mathcal{O}^{top}_{\mathbb{Q}}$ can be obtained directly, and to obtain $\mathcal{O}^{top}_p$ one uses in turn another fracture square, now decomposing via K(n)-localization into $K(1)$-local and $K(2)$-local pieces. Stacks from spectra There is a way to "construct" the tmf-spectrum as the E-∞ ring of global sections of a structured (∞,1)-topos whose underlying space is essentially the moduli stack of elliptic curves. We sketch some main ideas of this construction. The context – derived geometry over formal duals of $E_\infty$-rings The discussion happens in the context of derived geometry in the (∞,1)-topos $\mathbf{H}$ over a small version of the (∞,1)-site of formal duals of E-∞ rings (ring spectra). This is equipped with some subcanonical coverage.
For $R \in E_\infty Ring$ we write $Spec R$ for its image under the (∞,1)-Yoneda embedding $(E_\infty Ring)^{op} \hookrightarrow \mathbf{H}$. Because the sphere spectrum $\mathbb{S}$ is the initial object in $E_\infty Ring$, its formal dual $Spec \mathbb{S}$ is the terminal object $*$. Coverings by the Thom spectrum The crucial input for the entire construction is the following statement. This means that $Spec M U$ plays the role of a cover of the point. This allows one to do some computations with ring spectra locally on the cover $Spec M U$. Since $M U^*$ is the Lazard ring, this explains why formal group laws show up all over the place. To see this, first notice that the problem of realizing $R = tmf$ or any other ring spectrum as the ring of global sections on something has a tautological solution: almost by definition (see generalized scheme) there is an $E_\infty$-ring valued structure sheaf $\mathcal{O}_{Spec R}$ on $Spec R$ and its global sections is $R$. So we have in particular $tmf \simeq \mathcal{O}(Spec(tmf)) \,.$ In order to get a less tautological and more insightful characterization, the strategy is now to pass on the right to the $Spec M U$-cover by forming the (∞,1)-pullback $\array{ Spec(tmf) \times Spec(M U) &\to& Spec(tmf) \\ \downarrow && \downarrow \\ Spec(M U) &\to& * \simeq Spec(\mathbb{S}) } \,.$ The resulting Cech nerve is a groupoid object in an (∞,1)-category given by $\cdots \stackrel{\to}{\stackrel{\to}{\to}} Spec(tmf) \times Spec(MU) \times Spec(MU) \stackrel{\to}{\to} Spec(tmf) \times Spec(MU)$ which by formal duality is $\cdots \stackrel{\to}{\stackrel{\to}{\to}} Spec (tmf \wedge MU \wedge MU) \stackrel{\to}{\to} Spec ( tmf \wedge MU)$ where the smash product $\wedge$ of ring spectra over the sphere spectrum $\mathbb{S}$ is the tensor product operation on function algebras formally dual to forming products of spaces. As a groupoid object this is still equivalent to just $Spec(tmf)$.
Decategorification: the ordinary moduli stack of elliptic curves To simplify this we take a drastic step and apply a lot of decategorification: by applying the homotopy group (∞,1)-functor to all the $E_\infty$-rings involved, these are sent to graded ordinary rings $\pi_*(tmf)$, $\pi_*(M U)$ etc. The result is an ordinary simplicial scheme $\cdots \stackrel{\to}{\stackrel{\to}{\to}} Spec (\pi_*(tmf \wedge M U \wedge M U)) \stackrel{\to}{\to} Spec ( \pi_*(tmf \wedge M U)) \,,$ which remembers the fact that its structure rings are graded by being equipped with an action of the multiplicative group $\mathbb{G} = \mathbb{A}^\times$ (see line object). This general Ansatz is discussed in (Hopkins). This simplicial scheme, which is degreewise the formal dual of a graded ring of generalized homology-groups, one can show is in fact a groupoid, hence a stack: effectively the moduli stack of elliptic curves $\mathcal{M}_{ell}$. See (Henriques). In fact, if in this construction one replaced $Spec tmf$ by the point, one would obtain the simplicial scheme $\cdots \stackrel{\to}{\stackrel{\to}{\to}} Spec (\pi_*(M U \wedge M U)) \stackrel{\to}{\to} Spec ( \pi_*(M U))$ which one finds is the moduli stack of formal group laws $\mathcal{M}_{fg}$. Explicit computation of homotopy groups by a spectral sequence Now, a priori these underived stacks remember little about the original derived schemes $Spec tmf$ etc. They may not even carry any $E_\infty$-ring valued structure sheaf anymore (though some of them do). If they do carry an $E_\infty$-ring valued structure sheaf $\mathcal{O}$, one can compute the homotopy groups of its global sections by a spectral sequence $H^p(\mathcal{M}_{ell}, \pi_q(\mathcal{O})) \Rightarrow \pi_{p+q} \mathcal{O}(\mathcal{M}_{ell}) \,.$ But it turns out that even if the derived structure sheaf does not exist, this spectral sequence may still converge and may still compute the homotopy groups of the ring spectrum that one started with.
This gives one way to compute the homotopy groups of $tmf$. For the case of $tmf$ one finds that the homotopy sheaves $\pi_q(\mathcal{O}(\mathcal{M}_{ell}))$ are simple: they vanish in odd degree and are tensor powers $\omega^{\otimes k}$ of the canonical line bundle $\omega$ in even degree $2 k$, where the fiber of $\omega$ over an elliptic curve is the tangent space of that curve at its identity element. A section of $\omega^{\otimes k}$ is a modular form of weight $k$. So the whole problem of computing the homotopy groups of $tmf$ boils down to computing the abelian sheaf cohomology of the moduli stack of elliptic curves with coefficients in these abelian groups of modular forms — and then examining the resulting spectral sequence. This can be done quite explicitly in terms of a long but fairly elementary computation in ordinary algebra. A detailed discussion of this computation is in (Henriques). Inclusion of circle 2-bundles Write $B^2 U(1) \simeq K(\mathbb{Z},3)$ for the abelian ∞-group whose underlying homotopy type is the classifying space for circle 2-bundles. Write $\mathbb{S}[B^2 U(1)]$ for its ∞-group ∞-ring. There is a canonical homomorphism of E-∞ rings $\mathbb{S}[B^2 U(1)] \to tmf \,.$ See (Ando-Blumberg-Gepner 10, section 8).
Maps to K-theory and to Tate K-theory The inclusion of the compactification point (representing the nodal curve but being itself the cusp of $\mathcal{M}_{\overline{ell}}$) into the compactified moduli stack of elliptic curves $\mathcal{M}_{\overline{ell}}$ is equivalently the inclusion of the moduli stack of 1-dimensional tori $\mathcal{M}_{1dtori} = \mathcal{M}_{\mathbb{G}_m}$ (Lawson-Naumann 12, Appendix A) $\mathcal{M}_{\mathbb{G}_m} \simeq \mathbf{B}\mathbb{Z}_2 \longrightarrow \mathcal{M}_{\overline{ell}} \to \mathcal{M}_{FG}$ and pullback of global sections of the Goerss-Hopkins-Miller-Lurie-theorem-wise $E_\infty$-ring valued structure sheaves yields maps $KO \longleftarrow Tmf \longleftarrow \mathbb{S}$ exhibiting KO $= \Gamma(\mathcal{M}_{\mathbb{G}_m}, \mathcal{O}^{top})$. At least after 2-localization, the canonical double cover of the compactification of $\mathcal{M}_{\mathbb{G}_m} \simeq \mathbf{B}\mathbb{Z}_2$ similarly yields under $\Gamma(-,\mathcal{O}^{top})$ the inclusion of $ko$ as the $\mathbb{Z}_2$-homotopy fixed points of $ku$ (see at KR-theory for more on this) $\array{ ku_{(2)} \\ \uparrow \\ ko_{(2)} }$ and combined with the above this comes with maps from $tmf$ by restriction along the inclusion of the nodal curve cusp as $\array{ ku_{(2)} & \longleftarrow & tmf_1(3)_{(2)} \\ \uparrow && \uparrow \\ ko_{(2)} & \longleftarrow & tmf_{(2)} } \,,$ (Lawson-Naumann 12, theorem 1.2), where $tmf_1(3)$ denotes topological modular forms with level-3 structure (Mahowald-Rezk 09). Moreover, including not just the nodal curve cusp but its formal neighbourhood, which is the Tate curve, there is analogously a canonical map of $E_\infty$-rings $tmf \longrightarrow KO[ [ q ] ]$ to Tate K-theory (this is originally asserted in Ando-Hopkins-Strickland 01; details are in Hill-Lawson 13, appendix A).
Witten genus and string orientation The $tmf$-spectrum is the codomain of the Witten genus, or rather of its refinement to the string orientation of tmf with values in topological modular forms $\sigma : M String \to tmf \,.$ The original Witten genus is the value of the composite of this with the map to Tate K-theory on homotopy groups. (Ando-Hopkins-Rezk 10) Chromatic filtration Anderson self-duality The spectrum $tmf$ is self-dual under Anderson duality; more precisely, $tmf[1/2]$ is Anderson-dual to $\Sigma^{21} tmf[1/2]$ (Stojanoska 11, theorem 13.1). Modular equivariant versions See at modular equivariant elliptic cohomology and at Tmf(n). Substructure of the moduli stack of curves and the (equivariant) cohomology theory associated with it via the Goerss-Hopkins-Miller-Lurie theorem: The idea of a generalized cohomology theory with coefficients the ring of topological modular forms providing a home for the refined Witten genus of • Edward Witten, Elliptic Genera And Quantum Field Theory, Commun. Math. Phys. 109, 525 (1987) (Euclid) and produced as a homotopy limit of elliptic cohomology theories over the moduli stack of elliptic curves was originally announced, as joint work with Mark Mahowald and Haynes Miller, in • Michael Hopkins, section 9 of Topological modular forms, the Witten Genus, and the theorem of the cube, Proceedings of the International Congress of Mathematics, Zürich 1994 (pdf) (There the spectrum was still called "$eo_2$" instead of "$tmf$".) The details of the definition then appeared in A central tool that goes into the construction is the Goerss-Hopkins-Miller theorem, see there for references on that.
Expositions include See also An actual detailed account of the construction of $tmf$ (via decomposition by arithmetic squares) is spelled out in A complete account of the computation of the homotopy groups of $tmf$ (following previous unpublished computations) is in • Tilman Bauer, Computation of the homotopy groups of the spectrum $tmf$ (pdf) A survey of how this works is in • Akhil Mathew, The homotopy groups of $TMF$ (pdf) (This presents as an instructive much simpler but analogous case the construction of KO in analogy to the construction of $tmf$, more details on this are in Mathew 13, section 3.) and course notes that go through the construction of tmf and the computation of its homotopy groups are here: The non-connective version of this is discussed in The $\mathbb{Z}_2$-homology of $tmf$ is discussed in The refinement of the Witten genus to a morphism of E-∞ rings to $tmf$, hence the string orientation of tmf is due to • Michael Hopkins, Topological modular forms, the Witten Genus, and the theorem of the cube, Proceedings of the International Congress of Mathematics, Zürich 1994 (pdf) • Matthew Ando, Michael Hopkins, Neil Strickland, Elliptic spectra, the Witten genus and the theorem of the cube, Invent. Math. 146 (2001) 595–687 MR1869850 • Michael Hopkins, Algebraic topology and modular forms, Proceedings of the ICM, Beijing 2002, vol. 1, 283–309 (arXiv:math/0212397) • Matthew Ando, Michael Hopkins, Charles Rezk, Multiplicative orientations of KO-theory and the spectrum of topological modular forms, 2010 (pdf) see also remark 1.4 of • Paul Goerss, Topological modular forms (after Hopkins, Miller and Lurie) (pdf). and for more on the sigma-orientation see Discussion of twisted cohomology with coefficients in $tmf$ is in section 8 of • Matthew Ando, Andrew Blumberg, David Gepner, Twists of K-theory and TMF, in Robert S. 
Doran, Greg Friedman, Jonathan Rosenberg, Superstrings, Geometry, Topology, and $C^*$-algebras, Proceedings of Symposia in Pure Mathematics vol 81, American Mathematical Society (arXiv:1002.3004) Topological modular forms with level N-structure – $tmf(N)$ – is discussed in • Mark Mahowald Charles Rezk, Topological modular forms of level 3, Pure Appl. Math. Quar. 5 (2009) 853-872 (pdf) • Donald Davis, Mark Mahowald, Connective versions of $TMF(3)$ (arXiv:1005.3752) • Vesna Stojanoska, Duality for Topological Modular Forms (arXiv:1105.3968) • Tyler Lawson, Niko Naumann, Strictly commutative realizations of diagrams over the Steenrod algebra and topological modular forms at the prime 2, Int. Math. Res. Not. (2013) (arXiv:1203.1696) • Michael Hill, Tyler Lawson, Topological modular forms with level structure (arXiv:1312.7394) The self-Anderson duality of $tmf$ is discussed in (Stojanoska 11).
Are you ready to retire? Mathematical models estimate the value of pension plans This cartoon relates to pensions and early retirement. Credit: (c) Fran, Jantoo.com There comes a time in each of our lives when we consider starting a pension plan, either on the advice of a friend or relative, or of our own volition. The plan of choice may depend on various factors, such as the age and salary of the individual, the number of years of expected employment, as well as options to retire early or late. One possible plan is a defined benefit pension plan, where the benefit amount is typically based on the employee's number of years of service at the time of retirement and the salary and/or average salary over an employment period. For instance, the employee may receive a fraction of the average salary during a certain number of years. In a paper published last month in the SIAM Journal on Applied Mathematics, authors Carmen Calvo-Garrido, Andrea Pascucci, and Carlos Vázquez present a partial differential equation (PDE) model governing the value of a defined benefit pension plan including the option for early retirement. "The employer bears the liability of the pension and the value of this liability is understood as the value of the pension plan," says author Carlos Vázquez. "It is important to develop mathematical models to compute the value of this liability in order to estimate the financial situation of the institution or company that has the obligation with the pension plan member." The analysis in the paper uses modeling tools similar to those used in quantitative finance, for instance, for pricing American options. The model assumes that the wage or salary of an employee at any given time is governed by a stochastic differential equation, which in turn depends on the time of recruitment, the current salary of the employee, and the age of entry. Uncertainty of the salary is assumed to depend only on volatility, which refers to the uncertainty or risk associated with a value or asset.
"Models need to reproduce the uncertainties associated with the underlying factors of the plan (salary, interest rate and so on) and should allow one to compute the pension plan price in order to reproduce situations in different scenarios," author Andrea Pascucci explains. The authors obtain the value of a defined benefit pension plan including the option for early retirement for the employee, thus computing the value of the pension plan as well as the region of early retirement. "If the pension plan incorporates the option of early retirement by the member, then the additional question arises: when is it optimal to retire? Mathematical modeling tools allow us to pose the problem in terms of partial differential equations," says Vázquez. The optimal retirement problem is a "free boundary problem" for the underlying PDE. Most applications of PDEs involve domains with boundaries, and certain boundary conditions need to be satisfied in order to solve the PDEs. Free boundary problems deal with solving PDEs where part of the boundary is unknown in advance, referred to as a free boundary. Thus, in addition to standard boundary conditions, an additional condition must be imposed at the free boundary. The free boundary in this problem is the optimal retirement boundary between the region where it is optimal to retire and the region where it is optimal to continue working. "The practical solution of the PDE model to obtain pension plan prices from the data requires the use of suitable numerical algorithms to be run on a computer," says author M. Carmen Calvo-Garrido. "From the numerical solutions, we can identify at each date, for a given salary and average salary, if it is optimal to retire or not, and also to obtain the value of the pension plan in any case." Mathematical analysis provides rigorous justification of the correctness of the model, also proving the expected qualitative properties. 
Future directions may involve the application of similar modeling techniques to study the evolution of wages and salaries. "We are working on a more complete model for salaries evolution that includes the possibility of jumps (due to economic crisis, sudden increase or decrease in salaries, etc)," says Vázquez. "PDE problems including realistic, stochastic interest rate models also present a very challenging topic. The calibration of model parameters is an interesting and difficult problem due to the need of suitable real data." More information: Mathematical Analysis and Numerical Methods for Pricing Pension Plans Allowing Early Retirement, M. Carmen Calvo-Garrido, Andrea Pascucci, and Carlos Vázquez, SIAM Journal on Applied Mathematics, 73(5), 1747-1767 (Online publish date: September 4, 2013). epubs.siam.org/doi/abs/10.1137/120864751
Math Resource Studio 4.4.2
Schoolhouse Technologies Inc., in Education / Mathematics

Create professional-quality mathematics worksheets to provide students in grades K to 10 with the skills development and practice they need as part of a complete numeracy program. Over 70 mathematics worksheet activities can be produced to advance and reinforce skills in number operations, number concepts, fractions, numeration, time, measurement, money, problem solving and more.

Using the friendly, intuitive interface, you will be able to see exactly what the worksheet will look like on paper as you design it. You will have complete control over the layout of the worksheets, with customizing features that let you change the number of questions as well as the title, comments, instructions, date, picture, and font. The worksheets that you create with Mathematics Worksheet Factory are not pre-designed but are randomly generated based on a complex set of algorithms corresponding to the specific mathematical structure of each type of worksheet. This allows for virtually unlimited original worksheets. Download Math Resource Studio and give it a trial run before you buy.

File Size: 7.51 MB | License: Demo | Price: $64.00
Platform: Win2000, Win7 x32, WinVista, WinXP | Released: 2010-01-13

Similar Software
Math Studio 2.8.1: math tool for high school and middle school math teaching and studying; 2D and 2.5D function graphs and animations, extrema, roots, tangents, limits, derivatives, integrals.
Math Flight 1.4: practice basic arithmetic with activities and report cards; race against a timer or airplane, print math test papers, hall of fame, youth and adult modes, math games, negative numbers, and set-up options such as number range, number of questions, and focus on a specific number.
Math Calculator 2.5.1: also a derivative calculator, integral calculator, calculus calculator, expression calculator, and equation solver; used to calculate expressions, derivatives, roots, extrema, and integrals.
Math Center Level 2 1.0.2.1: for students studying precalculus and calculus; consists of a scientific calculator and a graphing calculator that extends Graphing Calculator 2D from Math Center Level 1 with hyperbolic functions.
Math Center Level 1 1.0.1.9: for students studying precalculus, and of interest to teachers teaching it; consists of Graphing Calculator 2D, an advanced calculator, and several simple calculators called from the control panel.
The experts list, or: how can a journalist find out how to compute pi to high precision?

A reporter at the Wisconsin State Journal called me the other day with a really good question. He had heard that pi had been computed to ten trillion decimal places. And he wanted to know: how could you possibly measure a circle that precisely?

So how did he know to call me? Because I’m on the experts list, which UW-Madison’s public relations office set up to give journalists the opportunity to consult a Wisconsin professor on just about any subject. Topics of current news interest get promoted to the front: on today’s front page we’ve got the professor who can talk about Kim Jong-Il, the professor who can talk about the Supreme Court’s decision to take up the Arizona immigration law, and the professor who can talk about the Scott Walker recall. (I have a feeling that last guy is going to be on the front page for a while.)

Such a simple idea, but such a good one! The UW-Madison ought to be a resource for Wisconsin journalists — and everybody else in Wisconsin, for that matter. Good for the PR office for making it as easy as possible to reach faculty who want to face the public and share what they know.

Oh: here’s the article about pi in the WSJ, by Dave Tenenbaum. I thought it came out well!

3 thoughts on “The experts list, or: how can a journalist find out how to compute pi to high precision?”

1. Jordan — you fooled me with WSJ. I guess I have been in NYC too long. I think you are being too harsh with the comment: “In geometry, pi (3.14159….) is defined as the ratio of circumference to diameter, and ‘if this is the only definition of pi, then computing it to many decimal places would be impossible,’ said Jordan Ellenberg, professor of mathematics at UW-Madison. Logically, this would amount to ‘garbage in, and garbage out,’ because the result of a calculation cannot be more precise than the starting terms.” Archimedes did pretty well with this method — “On the measurement of the circle” may be one of the four or five greatest math papers ever written. Archimedes worked with an inscribed and circumscribed 96-gon — whatever you think of this method, it is certainly not “garbage in, garbage out”. You can get more decimals of accuracy by continuing to double the number of sides, using his recursion for the side length. This requires the careful approximation of square roots — we have no idea how Archimedes obtained his rational approximations, although perhaps he had derived the basics of the theory of continued fractions from the Euclidean algorithm. Your comment hinges on the meaning of the word “many”. Maybe that’s a good subject for another article in the WSJ!

2. You make a good point! I think I was responding to Tenenbaum’s original question about measuring physical circles. Of course, my definitional fig leaf can be that “the limit of the ratio of perimeter to long diagonal of an n-gon as n goes to infinity” is not LITERALLY the same definition as “ratio of circumference to diameter”…!

3. Dear Dick, I was also momentarily surprised that a writer from the WSJ was using the U of W expert list as a resource! Is it really true that Archimedes’s methods remain a mystery? Best wishes,
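For the curious, the side-doubling recursion mentioned in the first comment is easy to run today. A minimal sketch in Python (modern floating-point arithmetic, not Archimedes' rational square-root bounds):

```python
import math

def archimedes_pi(doublings):
    """Bound pi between the semi-perimeters of inscribed and circumscribed
    regular polygons, starting from a hexagon and doubling the side count
    each iteration (the Archimedes/Borchardt recurrence)."""
    a = 2.0 * math.sqrt(3.0)   # circumscribed hexagon semi-perimeter
    b = 3.0                    # inscribed hexagon semi-perimeter
    for _ in range(doublings):
        a = 2.0 * a * b / (a + b)   # harmonic mean: new circumscribed bound
        b = math.sqrt(a * b)        # geometric mean: new inscribed bound
    return b, a                # (lower bound, upper bound) on pi

# 4 doublings takes the hexagon to a 96-gon, as Archimedes used
lo, hi = archimedes_pi(4)
```

With exact starting values the 96-gon bounds land close to Archimedes' own 223/71 < pi < 22/7; each further doubling shrinks the gap by roughly a factor of four.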
Pine Valley, TX Math Tutor

Find a Pine Valley, TX Math Tutor

...I am a Trinity University graduate and I have over 4 years of tutoring experience. I really enjoy it and I always receive great feedback from my clients. I consider my client's grade as if it were my own grade, and I will do whatever it takes to make sure you get it, and at the same time make sure our sessions are easy and enjoyable.
38 Subjects: including ACT Math, reading, writing, English

...Algebra is not necessarily easy, but it is completely logical. There is nothing you learn early that will be contradicted by later lessons. My approach in working with you on algebra 1 and algebra 2 is first to assess your familiarity and comfort with basic concepts, and explain and clarify the...
20 Subjects: including calculus, logic, algebra 1, algebra 2

...We can beat that monster together. I'm a graduate of Wellesley College (undergrad) and Harvard University (masters), with an SAT of 760V/750M and a GRE score of 800V/770M. I have scored a 5 on both AP English tests, as well as the US History, Government, and Calculus exams (and a 4 on Art Histo...
32 Subjects: including geometry, Bible studies, ISEE, sociology

...I am also proficient in Excel, pivot tables, sensitivity analysis, macros, and VBA. Let me know how I can help you. I am flexible with some schedules during weekends. I am a K-12 certified bilingual teacher.
21 Subjects: including calculus, elementary (k-6th), reading, study skills

...I have been teaching/tutoring Algebra 2 for over 25 years. I use special techniques and "cute memorable sayings" to help students remember certain algebraic skills. I also point out possible mistakes during explanations to help avoid them while doing homework.
6 Subjects: including algebra 1, algebra 2, geometry, precalculus
The Product Aggregate in T-SQL Versus the CLR

Noticeably absent from the SQL Server built-in aggregate functions is product—the multiplication of numbers in a set of values. Perhaps one reason is that it can so often fail; I demonstrate this with a simple loop, meant to simulate the product over 309 values in a column:

DECLARE @product float = 1.79, @loopRowPosition int = 1;
while( @loopRowPosition <= 308 )
    select @product *= 10, @loopRowPosition += 1;

The result is 1.79E+308, the maximum positive number for the float data type. But change the start value to 1.798 and you get an arithmetic overflow. Put another way: multiply small numbers in a small column expression and meltdown. Still, the aggregate is useful in limited situations, and so we'll develop it the expected way—in the CLR—but then also several ways in T-SQL. The ground rule for the T-SQL versions is simple: no loops.

To make the solutions in both environments more flexible, we'll extend them to support a choice over invariance to nulls. In our analysis, topics we'll cover include the following:

• How they work with grouping
• How they rate as to accuracy, safety, and performance
• What happens when we try to make generic aggregate-simulation functions from ad hoc code

This article is as much about good algorithm development as it is about the product aggregate. As our tactics evolve, at any point a wrong turn can harm performance or introduce subtle errors.

The Basic Aggregate in the SQL CLR

Below are the four methods of the product solution in C#.

private SqlDouble product;
private SqlInt16 countOfMultipliers;

public void Init()
{
    product = 1;
    countOfMultipliers = 0;
}

public void Accumulate(SqlDouble multiplier)
{
    product *= (multiplier.IsNull) ? 1 : multiplier;
    countOfMultipliers += (SqlInt16)((multiplier.IsNull) ? 0 : 1);
}

public void Merge(ProductAggregate mergeProductAggregate)
{
    product *= mergeProductAggregate.product;
    countOfMultipliers += mergeProductAggregate.countOfMultipliers;
}

public SqlDouble Terminate()
{
    return (0 == countOfMultipliers) ? SqlDouble.Null : product;
}

The Accumulate method is invoked once for each number in the input column and computes the product. The ternary operator, which tests for null values, gives us our invariance to nulls property—i.e. the ability to ignore nulls and return a result over non-null numbers, as do the built-in aggregates. The Merge method is called when the number set is partitioned and the product computed over multiple threads; it combines the partial results. The result is returned in the Terminate method. The data member countOfMultipliers is incremented during accumulation using the same test for nulls done for the product member, allowing Terminate to return null when all input values are null or the input is empty. Simple, isn't it?

The Aggregate in T-SQL

Remember the rule for our T-SQL versions: no loops. We can devise quicker, more elegant solutions. Here is the second and final rule: the code must always return a value, possibly null, just as the system aggregates do.

The Sample Table

This is the sample table for all code examples. The first five columns are populated from system view sys.messages. Table population is limited to 2,000 rows—much more and arithmetic overflow occurs for my sample values. I added four columns, whose meanings are as follows:

• multiplier. Our multiplicands: random values in the range 0.214-2.382, scale 15.
• groupcol. A computed, persisted column of values either 'a', 'b' or 'c' for group testing.
• yearcol. A computed, persisted column of values '2001' - '2004', also for group testing.
• ID. An IDENTITY surrogate key that aids the performance of one of our query forms.

Three T-SQL Solutions

The three basic T-SQL solutions follow. I'll refer to them throughout by the labels given.

CLR Simulation

The first solution is a kind of mirror of the CLR code: where data member product accumulates the result, we'll employ a scalar variable, also initialized to 1.
DECLARE @product AS float = 1;
SELECT @product *= multiplier
FROM aggr.T_ProductTest;

Hardly a hat trick—until we consider that the product will be one when aggr.T_ProductTest has no rows (we want a null result). Where the CLR solution solves the problem by setting and later testing a second data member, countOfMultipliers, we'll substitute an outer join on a derived table, which I prefer over introducing another variable outside the central query:

select @product *= ( prod.multiplier * onz.one )
from aggr.T_ProductTest prod
RIGHT OUTER JOIN ( select 1 as one ) onz
    on 1 = 1; -- any tautology will do

When the table has rows, each number is multiplied by one. When it doesn't, prod.multiplier isn't even a null value for the multiplication, so the action in the SELECT statement cannot be applied. In this case the right outer join forces the SELECT clause to be evaluated over one row, and @product becomes null because multiplicand prod.multiplier is now null.

Recursion

Starting with our second technique we make a clean break from the CLR approach. The float variable is not needed, although we still need to accommodate an empty result set:

WITH cteRecursiveProduct( level, product ) as
(
    select level = ID, product = multiplier
    from aggr.T_ProductTest
    where ID = ( select max( ID ) from aggr.T_ProductTest )
    UNION ALL
    select level = prodCTE.level - 1,
           product = ( prodCTE.product * prod.multiplier )
    from cteRecursiveProduct prodCTE
    inner join aggr.T_ProductTest prod
        on prodCTE.level - 1 = prod.ID
)
select product = product * onz.one
from cteRecursiveProduct
RIGHT OUTER JOIN ( select 1 as one ) onz
    on isnull( level, 1 ) = onz.one
OPTION ( MAXRECURSION 0 );

What is required in many cases is that we put more than 100 frames on the call stack—the maximum allowed by default—and so we allow unlimited frames with the query hint MAXRECURSION 0. In recursion, there is one row returned for each intermediate product over the multiplicands seen so far, so we get the final product at level 1 in the reverse ID order strategy.
The join condition, on isnull( level, 1 ) = onz.one, is written so that the level 1 value is returned regardless of whether it exists.

EXP LOG

Our third solution involves an arithmetic trick, but shares with the recursive technique the advantage of being able to be placed wholly within a larger query. Let f(x) be a function that transforms each multiplier in a column into a common base number and sums their logs:

f(x) = sum( log( multiplier ) )

In this case, SQL Server system function log() uses the number e as the base by default. If the multipliers are 8 and 10, for example, they would be represented as e^2.0794 and e^2.3026 to four decimal places, and f(x) would return 4.382. Let's extend the composition:

g(f(x)) = e^f(x)

The number e raised to f(x) can of course be represented as a decimal number, and in our example, e^4.382 = 80 (adjusted for rounding error); system function exp(), which is the inverse of log() (exp(log(x)) = log(exp(x)) = x), does this:

product = exp( sum( log( multiplier ) ) )

This all works because of the laws of exponents (multiplication case):

x^a * x^b * … * x^n = x^(a+b+…+n)

If the explanation is a little dense, don't worry. What we do need to worry about is finding a negative number or zero in the input column, because the log for these is undefined. Here is the error you get when you try a log(0) or log(-5) operation:

Msg 3623, Level 16, State 1, Line 2
An invalid floating point operation occurred.
We circumvent the problem by adding the nullif() function to substitute nulls for zeros and the abs() function to ensure that all numbers are positive, but we'll need additional code to get the correct answer—zero whenever zeros occur in the column expression, and a negative value when the count of negative multipliers is odd:

exp( sum( log( nullif( abs( multiplier ), 0 ) ) ) )
* iif( sum( iif( multiplier = 0, 1, null ) ) > 0, 0, 1 )
* iif( sum( iif( multiplier < 0, 1, 0 ) ) % 2 = 1, -1, 1 )

Note that unlike the other solutions, EXP LOG doesn't require the outer join or other strategy to return a null on null input. Now we'll explore an option not available to the built-in aggregates.

Choosing Invariance to Nulls

The popular built-in aggregates count, sum, min, max, avg, and the others are invariant to nulls, meaning that having nulls in the column expressions over which they operate does not affect the outcome. The user cannot change this property. We, however, can, and so let's see how it would be done, starting with the CLR. Here is the line in the Accumulate method from the sample code above that does the computation:

product *= (multiplier.IsNull) ? 1 : multiplier;

The ternary operator throws out nulls from the input, making the implementation invariant to nulls. Remove the operator and the product is null whenever the input has a null value, making it variant to nulls. Because aggregates—system or CLR—don't expose a parameterized constructor, a single struct cannot give the user the option; simply introduce a second struct. Set the property IsInvariantToNulls on the required attribute class SqlUserDefinedAggregateAttribute to true on one and false on the other, keeping in mind that enforcement is up to you.
By contrast, were we to place the T-SQL implementations into aggregate-simulating functions—more on this later—a bit parameter specifying invariance would obviate the need for duplication of code. You may have realized that the code snippets from the previous section differ in variance, the first and second being variant, and the last invariant (because the log() function itself is invariant to nulls). Let's start with the code that flips the property for the CLR simulation:

CLR Simulation

DECLARE @product float = 1;
select @product *= ( isnull( prod.multiplier, 1 ) * niladj.adjustor )
from aggr.T_ProductTest prod
RIGHT OUTER JOIN
(
    select iif( count( * ) > 0, 1, null )
    from aggr.T_ProductTest
    where multiplier is not null
) niladj( adjustor )
    on 1 = 1 -- any tautology will do
select @product;

This rewrite of the derived table is correct over two boundary cases: when aggr.T_ProductTest is empty—as before; and when the multiplier column in all rows has null values. But the code is not optimal, because the count() aggregate requires a full (table or index) scan.

DECLARE @product float = 1, @countOfMultipliers smallint = 0;
select @product *= isnull( multiplier, 1 ),
       @countOfMultipliers += iif( multiplier is not null, 1, 0 )
from aggr.T_ProductTest;
select product = @product * iif( 0 = @countOfMultipliers, null, 1 );

The solution now more closely simulates the CLR implementation. Remove the isnull() function and it becomes an alternate solution for the variant case. Because the recursion strategy doesn't depend upon outside variables, it must use the derived table or equivalent CTE to achieve invariance, and pay the performance penalty.

select level = ID, product = isnull( multiplier, 1 )... -- anchor
select ..., product = ( prodCTE.product * isnull( prod.multiplier, 1 ) ) -- recursive
select product = product * niladj.adjustor
from cteRecursiveProduct
RIGHT OUTER JOIN
(
    select iif( count( * ) > 0, 1, null )
    from aggr.T_ProductTest
    where multiplier is not null
) niladj( adjustor )
    on level = 1

Flipping the property in the opposite direction for EXP LOG means one more multiplier:

exp( sum( log( nullif( abs( multiplier ), 0 ) ) ) )           -- no log( <= 0 )
* iif( sum( iif( multiplier = 0, 1, null ) ) > 0, 0, 1 )      -- 0-adjust
* iif( sum( iif( multiplier < 0, 1, 0 ) ) % 2 = 1, -1, 1 )    -- negative-count adjust
* iif( sum( iif( multiplier is null, 1, 0 ) ) > 0, null, 1 )  -- null-variant

DISTINCT Keyword

Finally, another important option we could implement is the DISTINCT keyword. I won't expand on it, but suffice it to say that the CLR would need more effort: e.g., a data member vector could buffer all numbers in Accumulate, and the vector could be sorted to bypass duplicates during multiplication in Terminate. Extensions to the T-SQL strategies vary in complexity and performance. Do you see the error in this code? (Hint: let the column contain values 5 and -5.)

exp( sum( DISTINCT log( nullif( abs( multiplier ), 0 ) ) ) )

Okay (oh no) Looping

While the looping tactic is verboten, I reference its metrics as the baseline in the section on performance next, so here is one optimal form.

DECLARE @product float = 1.0,
        @countOfMultipliers smallint = 0,
        @next_row smallint = 1,
        @max_row smallint;
select @max_row = max( ID ) from aggr.T_ProductTest;
while( @next_row <= @max_row )
begin
    DECLARE @next_multiplier float;
    select @next_multiplier = multiplier
    from aggr.T_ProductTest
    where ID = @next_row;
    select @product *= isnull( @next_multiplier, 1 ),
           @countOfMultipliers += iif( @next_multiplier is null, 0, 1 );
    SET @next_row += 1;
end
select product = @product * iif( @countOfMultipliers = 0, null, 1 );

For your eyes only. Destroy the code after reading.
A Performance Comparison—and Caveats

The test I performed for all solutions, in their invariant to nulls forms, was a product over the multiplier column for all 2,000 rows in the aggr.T_ProductTest table. I ran each solution 100 times at intervals of 1/20 second to get the logical disk reads and rough averages over CPU and duration from the Profiler. The average times clearly show that the recursive and looping techniques are not viable. The first pair of numbers for their measurements are the values for the code as displayed, and the second, for their safer versions, to be explained. The first cost for recursion is very good, and for looping, exceptional. So why the disconnect between the optimizer's estimates and actual performance?

Execution Plans: CLR, CLR SIMULATION, and EXP LOG

Above is the execution plan for the SQL CLR aggregate. If you add one Compute Scalar operator on each side of the Stream Aggregate, you essentially have the plan for the EXP LOG code; subtract the Stream Aggregate, the CLR SIMULATION. In all cases, the index scan on the clustered primary key is known to the optimizer to return a fixed number of rows—the Estimated Number of Rows = the Actual Number of Rows = 2000—and so it can make accurate estimated costs.

Execution Plans: RECURSION

The graphic above depicts the operators that start one branch of the recursive part in the estimated execution plan, followed by those that start in the actual execution plan. Where the other strategies employ a one-pass index scan, recursion and also looping must seek on the same index to get multipliers from successive IDs, once for each recursion/iteration. This accounts for more page touches in looping; for recursion, the seek must re-fetch all the multipliers seen so far plus the current one for each stack frame, so the reads skyrocket.
By visually inspecting the code, we can see that the anchor gets the multiplier at ID 2000 (the last row in the sample table), and each recursion operates at the next lower contiguous ID, stopping at ID one, for a total of 2000. In fact, in the actual plan, the Actual Number of Rows returned by the seek operator is 1999, as indicated by the much thicker outbound arrow. But the optimizer can't deduce the row count from the recursive definition, and so it puts in a placeholder value of one for Estimated Number of Rows, as indicated by the thin arrow. It is for this reason that the optimizer cannot give a reasonable estimated cost for recursion, or for looping as well. (The estimated branch cost may accurately reflect the effort to get the multipliers at IDs 2000 and 1999, or just one multiplier.)

Recursion, Looping, and Safety

When I introduced the T-SQL solutions, I noted that I added column ID as an int IDENTITY clustered primary key to aid the performance of one of the solutions (the compact natural key is message_id, which otherwise would be clustered). That solution of course is RECURSION (add looping). But for this to work, we must guarantee the following: 1) that the minimum ID is one; and 2) that there are no gaps in the ID sequence, such as those resulting from row deletions and rolled-back transactions. And often this is not the case.

Safe Tactic: CTE

Let's add a CTE that gives us our contiguous IDs starting at one:

WITH cteMultiplierRank( rankNo, multiplier ) as
(
    select CAST( ROW_NUMBER( ) OVER( ORDER BY ID ) as int ), multiplier
    from aggr.T_ProductTest
)

The recursive CTE is rewritten to reference this CTE instead of the sample table, and the execution plan shows that this CTE, as well as the anchor and recursive parts of the recursive CTE, all share the starting operators below.
The graphic is the start of the recursive branch of the actual execution plan:

In the estimated plan, each operator for all branches outputs 2,000 rows, but run-time information shows that the recursive branch operators each produced 4,000,000 actual rows—2,000 sample table rows times 2,000. The operators essentially set up 2,000 groups of all (rankNo, multiplier) pairings, and a Filter operator to come applies the recursive condition to determine the set of multipliers to use for each group. The logical disk reads go from 22,010 to 156,147, the estimated cost balloons to an unacceptable (untrustworthy! but still...) 2.3517, and the user experience degrades proportionately. This next attempt fares better.

Performance Tactic: Table Variable

DECLARE @tblMultiplier TABLE( rowno int IDENTITY PRIMARY KEY, multiplier float NULL );

After we rewrite the recursive CTE to reference the table variable, the execution plan is identical to the original and the cost comes back to a healthy 0.016537. But in practice 85% of the total cost comes from populating the table variable from the sample table, bringing the cost to 0.1077. The second sets of numbers in RECURSION and Looping show the additional cost of this safety. Safer certainly--but is it safe now? Our revised solutions share with the CLR SIMULATION a potential problem inherent in not being expressible in one atomic statement—a problem to be addressed in the section T-SQL Solutions and Aggregate Functions.

Success with EXP LOG

Built-in and user-defined CLR aggregates can be used in SELECT, HAVING, and ORDER BY clauses.
Of our T-SQL solutions, only EXP LOG is a wholly self-contained expression, and so it too can be used in these clauses:

select groupcol, yearcol,
       product = exp( sum( log( nullif( abs( multiplier ), 0 ) ) ) )
from aggr.T_ProductTest
group by groupcol, yearcol
having exp( sum( log( nullif( abs( multiplier ), 0 ) ) ) ) > 0
order by exp( sum( log( nullif( abs( multiplier ), 0 ) ) ) );

I've left off the part of the calculation that adjusts for zeros and negative numbers for brevity.

Aggregate Window Functions: A Simple Fix

product_by_year_exp = sum( exp( sum( log( multiplier ) ) ) ) OVER( PARTITION BY yearcol ),
product_by_year_clr = sum( aggr.PRODUCT( multiplier ) ) OVER( PARTITION BY yearcol )

If we add these column expressions to the select list in the query, exp is rejected by the compiler as a window function because it is not an aggregate or other acceptable function type. But sum() is, and we can use it as the outer function to get the intended result. Function aggr.PRODUCT is also an aggregate—it is the local name for the CLR aggregate—but this too is rejected (for an unknown reason), and so we reuse the trick.

Poor Grouping Choices

Neither of our remaining strategies, RECURSION or the CLR SIMULATION, is suitable for grouping. For our sample query, either we would need to know in advance the (groupcol, yearcol) paired values of interest—or employ more code to get the pairings—and windowing makes no sense. In particular, recursion is not a solution for grouping. The CLR SIMULATION, with its individual variable technique, is marginally better but not necessarily safe.
Here products for several years are set in one SELECT clause, invariant to nulls form; grouping is implied in the SELECT clause:

select
    @product2001 *= iif( '2001' = niladj.yearcol, isnull( multiplier, 1 ), 1 )
                  * iif( '2001' = niladj.yearcol, niladj.adjustor, 1 ),
    @product2002 *= iif( '2002' = niladj.yearcol, isnull( multiplier, 1 ), 1 )
                  * iif( '2002' = niladj.yearcol, niladj.adjustor, 1 ),
    @product2003 *= iif( '2003' = niladj.yearcol, isnull( multiplier, 1 ), 1 )
                  * iif( '2003' = niladj.yearcol, niladj.adjustor, 1 ),
    @product2004 *= iif( '2004' = niladj.yearcol, isnull( multiplier, 1 ), 1 )
                  * iif( '2004' = niladj.yearcol, niladj.adjustor, 1 )
from aggr.T_ProductTest proTest
RIGHT OUTER JOIN
(
    select niladj.yearcol, cnt_year_non_null.cnt
    from
    (
        select yearcol, iif( count( * ) > 0, 1, null )
        from aggr.T_ProductTest
        where multiplier is not null
        group by yearcol
    ) cnt_year_non_null( yearcol, cnt )
    RIGHT OUTER JOIN
    (
        select yearcol, nilAdj
        from ( select [2001] = 1, [2002] = 1, [2003] = 1, [2004] = 1 ) p
        UNPIVOT ( nilAdj FOR yearcol in( [2001], [2002], [2003], [2004] ) ) as unpvt
    ) niladj( yearcol, adjustor )
        on cnt_year_non_null.yearcol = niladj.yearcol
) niladj( yearcol, adjustor )
    on proTest.yearcol = niladj.yearcol;

Oh myyyyyyyyyy! The derived table must now do its own outer join on an unpivot relational operator, or employ a similar strategy (think UNION ALL in the second derived table), to ensure that each year has its own adjustor row (with a possibly null adjustor), not just those years having rows in the sample table. Of course, the more optimal second form for invariance should have been used, but any developer may decide against using one @countOfMultipliers variable per year while not thinking of the derived table problem. Increment the failure-point column.

All product strategies except one agree that the total non-grouped product for the sample data is 6.04851066640848E-310. The exception is EXP LOG, which evaluates to 6.04851066640616E-310.
This is a small difference over a tiny number, clearly a rounding error in exp or log or both. In other testing with very small numbers, including grouping, LOG EXP sometimes matched the others exactly and sometimes did not. With limited testing over small samples and larger numbers, it always agreed. You make the call.

T-SQL Solutions and Aggregate Functions

An aggregate is a scalar function whose input is a column expression of suitable data type. CLR solutions are certainly that, and though we can put any of our T-SQL solutions into scalar functions, they are certainly not:

CREATE TYPE aggr.tblMultiplier AS TABLE( multiplier float NULL );

product_clr = aggr.PRODUCT( multiplier ),
product_tsql = aggr.sf_PRODUCT( cast( multiplier as aggr.tblMultiplier ) )…

In the above, the compiler recognizes the first function, aggr.PRODUCT, as a user-defined aggregate built from a .NET object, but no sleight of hand can make the compiler accept the second function, aggr.sf_PRODUCT, written in T-SQL, as an aggregate, or allow a column to be cast as a user-defined table type. If we want to use the T-SQL function as a generic aggregate, we must, for each product desired, fill a table variable (of type aggr.tblMultiplier) and set a variable to its return in a separate statement. Aside from being inefficient and inelegant, this opens the door to problems arising from unrepeatable reads.

A Read-Write Conflict Example

Let's look at a basic problematic scenario, keeping in mind that the error is less likely to happen with aggregates, system or user-defined in the SQL CLR, because they can always be placed into larger (atomic) statements (and locks are held for the duration of the statement).
-- T1
begin tran;
DECLARE @tblMultiplier aggr.tblMultiplier, @product float, @count int;
INSERT INTO @tblMultiplier
select multiplier from aggr.T_ProductTest where groupcol = 'a';
select @product = aggr.sf_PRODUCT( @tblMultiplier, 1 ); -- 1 is 'invariant to nulls'
<context switch to T2: INSERT a row having groupcol value = 'a'>
select @count = count( * ) from aggr.T_ProductTest where groupcol = 'a';

The non-serializable schedule represents a READ-WRITE transaction conflict, and is demonstrated by the sample code. After T1 reads the rows in aggr.T_ProductTest falling under groupcol 'a', T2 commits a row to the group, making T1's second read a phantom read (a type of unrepeatable read). Without the context switch, (@product, @count) is <1.3322913590615E-104, 675>; with it, the table's values correspond to <3.86364494127789E-104, 676>, while the scalar variables hold <1.3322913590615E-104, 676>, out of sync with each other. To prevent the phantom read, we could raise the transaction isolation level to SERIALIZABLE, or force serialized access to aggr.T_ProductTest alone with an appropriate table hint (TABLOCKX, e.g.). But that tactic, pessimistic locking, potentially decreases concurrency and increases the likelihood of deadlocks. For this particular example, it would be better to get the count from the table variable; in practice, subtle errors are made.

Extending the Module

If we persist in the code-module strategy, we should optionally make it support grouping as well as the HAVING and ORDER BY clauses. For example, if we group by groupcol and yearcol, we would want the result set in one invocation rather than one for each (groupcol, yearcol) pairing, with possible constraints on the groupcol/yearcol groups. Reducing the number of calls lessens the risk of unrepeatable reads but doesn't eliminate it. For it to be generic, it must work for grouping over any column list from any table with a numeric column.
Another desideratum is that it determine the grouping columns without needing a parameter. You may have noticed: the function has morphed into a stored procedure using dynamic SQL. Finally, as per the code sample above, it should have a parameter for specifying invariant-to-nulls behavior. My solution is in the download in the Product Aggregate Generic Procedure folder. Without a mechanism to pass columns as parameters, the problems—all puns intended—multiply, and our attempts to make aggregate-simulating functions from some good ad-hoc T-SQL code are for naught.

The Report Card

If the grading seems arbitrary, think of me as being some of the teachers you had when you were in school. The MSDN library demonstrates CLR SQL Server user-defined aggregates with an example that counts the number of vowels in a column of strings (http://msdn.microsoft.com/en-us/library/91e6taax(v=vs.90).aspx). I've written a T-SQL scalar function that counts the vowels for one input string; the code is in the download. Following is its invocation, which matches the CLR functionality, along with performance metrics when run over the text column from our sample table:

select cntVowels = sum( aggr.sf_CountTheVowels( <some_string_column> ) )...

Function aggr.sf_CountTheVowels approximates the Accumulate method, is easy to write, and doesn't try to be an aggregate (the sum() is the aggregate), making it safe by our standard. This time the optimizer knows up front to expect 2000 rows from the index scan, but, probably because of the logical disk reads involved, gives a better cost to the T-SQL solution even though it runs 10 times slower. As a rule, CLR code gives better performance; in the product aggregate example we may have hit a rare exception. In a solution I did for a recent client, I needed a product aggregate but the client didn't want to enable the CLR, so I used my logarithm-based technique.
But whether we write in a .NET language or T-SQL, rushing in without forethought could be costly. As we well know.

After the article appeared, I verified that the search engines would find it. They do, and they also list another Code Project article (Tip/Trick) that discusses computing the product with logarithms. The tip, by Dr. Alexander Bell, references an earlier work of his that details his research (ours are independent); read the tip's referenced article for a second view of the technique. In that latter article, he discusses the performance-universality dilemma, which means that the more cases handled by a solution, the costlier it is. For EXP LOG that implies addressing nulls, zeros, and negative numbers that may occur in the column expression, which we did. I'll call the code that doesn't handle those cases bare bones. Further analysis verified the added cost: although the query optimizer uses the same execution plan and assigns the same final cost to both the bare-bones and full EXP LOG solutions, and the page reads are the same, EXP LOG adds machine cycles. Using the same testing method as described in the performance section, I found that CPU usage went from an average of one millisecond in bare bones to two milliseconds in EXP LOG, and duration jumped tenfold (by more than 10 milliseconds). So know your data, and take what you need.
Second Order Differential Equations

\[y'' + \frac{ 1 }{ x }y'+ \left( 1-\frac{ n^{2} }{ x^{2} } \right)y = 0\]

How many zeros does the zero-order solution of the Bessel equation above (n = 0) have in each interval of π along the positive x axis?

@ajprincess would you kindly help my brother if you are free....

I am extremely sorry @kryton1212. :( I haven't learnt differential equations yet.

@ajprincess okay, never mind, me either... so can you tell your friends about this question if they learnt it? @gerryliyana sorry...

@hartnn, @experimentX, @unklerhaukus, @apoorvk. Can u all plz help with this question?

hartnn is not online yet...

But @unklerhaukus and @apoorvk are online. They may be able to help. :)

thank you @ajprincess :)

@gerryliyana don't worry about it. Someone may help you :) sorry that I cannot help you...

Can you reword the question? I'm not sure what you are asking.

@sauravshakya would you kindly help my brother if you are free?

@UnkleRhaukus "how much the minimum solution of the equations above (at n=0) in between intervals of π along the positive x axis????"

I don't know what you mean by "how much the ..... solution?" Are you looking for the number of solutions? Are you looking for the solution? Are you looking for the value of the function at one of the solutions?

I'm looking for the least number of solutions, the minimum number of solutions.

Ah, ok.

Even though I'm 78 years old and cannot integrate x with pencil and paper, I have access to the computer program Mathematica 8 Home Edition, which can. The attachment shows the general solution with two constants of integration. A plot of the solution from x = 0 through x = 6 Pi is included.
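The count the thread was after can be checked numerically. The bounded solution for n = 0 is the Bessel function J0, which has the integral representation J0(x) = (1/π) ∫₀^π cos(x sin t) dt; counting sign changes per π-wide interval shows one zero in each interval along the positive axis (asymptotically the zeros of J0 are spaced π apart). A hedged sketch, with all names mine, using only the standard library:

```python
import math

def j0(x, n=200):
    # Bessel J0 via its integral representation, midpoint rule:
    # J0(x) = (1/pi) * integral_0^pi cos(x sin t) dt
    h = math.pi / n
    return sum(math.cos(x * math.sin((k + 0.5) * h)) for k in range(n)) * h / math.pi

def zeros_per_pi_interval(intervals=6, samples=300):
    """Count sign changes of J0 on each interval (k*pi, (k+1)*pi]."""
    counts = []
    for k in range(intervals):
        a, b = k * math.pi, (k + 1) * math.pi
        xs = [a + (b - a) * i / samples for i in range(samples + 1)]
        vals = [j0(x) for x in xs]
        counts.append(sum(1 for u, v in zip(vals, vals[1:]) if u * v < 0))
    return counts
```

Running zeros_per_pi_interval() gives one zero per interval out to 6π, matching the spacing visible in the Mathematica plot mentioned above (the first few zeros of J0 sit at about 2.405, 5.520, 8.654, ...).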
taking QFT

You will need to be very comfortable with Green functions and complex integration. Also, a solid quantum foundation in the harmonic oscillator and Dirac notation. It can be done without a solid background in quantum... but to a certain extent your understanding will be superficial... i.e. you could probably perform the calculations, but the context will be muddled. I'm not too fond of Peskin personally. I think if you used Mandl and Shaw, Srednicki, and Ryder to complement Peskin then you might be ok. Also, Griffiths' Intro to Particle Physics has a nice introductory section on QFT, somewhere around chapter 11 I think. It introduces the QFT Lagrangian for a scalar field and the Euler-Lagrange equations.
Simple quadratic algebra question
May 7th 2010, 11:36 AM #1

Solve for p and q. All values of x work.
Got up to 23 = p^2 + q (used x as 0). Now I'm stuck; tried x as 1 and got no further. Any help?

May 7th 2010, 12:08 PM #2
Math Resource Studio

Math Resource Studio 4.4.2, Schoolhouse Technologies Inc., in Education / Mathematics

Create professional-quality mathematics worksheets to provide students in grades K to 10 with the skills development and practice they need as part of a complete numeracy program. Over 70 mathematics worksheet activities can be produced to advance and reinforce skills in number operations, number concepts, fractions, numeration, time, measurement, money, problem solving and more. Using the friendly, intuitive interface, you will be able to see exactly what the worksheet will look like on paper as you design it. You will have complete control over the layout of the worksheets, with customizing features that let you change the number of questions as well as the title, comments, instructions, date, picture, and font. The worksheets that you create with Mathematics Worksheet Factory are not pre-designed but are randomly generated based on a complex set of algorithms corresponding to the specific mathematical structure of each type of worksheet. This allows for virtually unlimited original worksheets. Download Math Resource Studio and give it a trial run before you buy.

File Size: 7.51 MB | License: Demo | Price: $64.00
Platform: Win2000, Win7 x32, WinVista, WinXP
Downloads: Total: 52 | This Month: 1
Released: 2010-01-13

Similar Software

Math Studio 2.8.1: Math tool for high school math, middle school math teaching and studying. Function graphing and analyzing: 2D, 2.5D function graphs and animations, extrema, root, tangent, limit, derivative, integral, ...

Math Flight 1.4 (Windows): Practice basic arithmetic with activities and report cards. Race against a timer or airplane. Print math fun (test) papers. Hall of fame. Youth & adult modes. Math games, negative numbers and more. Math Flight also includes multiple set-up options: specify number range, number of math questions, focus on a specific number and flight ...

Math Calculator 2.5.1: Math calculator, also a derivative calculator, integral calculator, calculus calculator, expression calculator, equation solver; can be used to calculate expression, derivative, root, extremum, integral.

Math Center Level 2 1.0.2.1: Math software for students studying precalculus and calculus. Consists of a Scientific Calculator, ... a further development of Graphing Calculator 2D from Math Center Level 1. It has extended functionality: hyperbolic functions are added. There are also added ...

Math Center Level 1 1.0.1.9: Math software for students studying precalculus. Can be interesting for teachers teaching precalculus. Consists of Graphing Calculator 2D, Advanced Calculator, Simple Calculator, Simple Rational Calculator, and Simple Integer Calculator, called from the Control Panel. Simple Calculator is a general purpose ...

Popular Software in Education / Mathematics

3D Graph 2.12: Application for displaying 2D and 3D graphs for functions of 3 dimensions.
SimplexNumerica 9.2.9.4: Best data analyzer, 2D/3D plotting, calc and presentation program.
Function Grapher 3.9.1: Graph maker to create 2D, 2.5D, 3D and 4D function graphs and animations.
Octave 3.6.4: High-level interpreted language, primarily intended for numerical computations.
GeoGebra Portable 5.0 Beta: Create constructions with points, vectors, and segments.
Risk Analysis (Monte Carlo Simulation)

Perform Monte Carlo Risk Analysis with any assumptions you choose versus any measure, such as Rate of Return (IRR or MIRR), Net Present Value (NPV), etc. Risk Analysis allows you to investigate how these measures vary with a change in assumptions like Holding Period, Cap Rate at Sale, Renewal Probability, Vacancy, TI's, etc. Risk Analysis provides a one-page table and graph which shows the probability of achieving any level for the chosen measure.

planEASe Risk Analysis is based on a recognized technique in the literature of Operations Research known as "Monte Carlo Simulation". The basic concept of the technique is best described through illustration. We all know that a normal six-sided die with the numbers one through six on its faces has an equal chance of showing any of the six numbers on any one roll. Statisticians would say that the results of a roll are "uniformly distributed" between one and six, and that the result of any one roll of the die represents a "random number" sampled from that uniform distribution. This is very fancy language for a very simple concept, but the language becomes more useful as we get deeper into the Monte Carlo method.

Now, suppose that you wanted to know what the chances were that the numbers on a normal pair of dice would total seven when rolled. While there are many ways to solve this question mathematically, one simple method would be to roll the dice many times and count the proportion of times that the total is actually seven. If, for instance, you rolled them 36 times and they totaled seven on six occasions, you might deduce that the chances of getting a seven were one in six, or 16.7%. As it happens, you would be exactly right in this instance. However, the dice could have shown seven on ten occasions.
In this case, the results of your "simulation" would have been misleading, unless you had the judgment to take the results with a grain of salt. The results of rolling a pair of dice are random, after all, and the chances of rolling a seven the precise six times required here are rather small. If, however, we were interested in the average number shown on the dice in the same 36 rolls, you typically would find that the total of the numbers on the dice divided by the 36 rolls was close to seven. In other words, your simulation was a rather good predictor of the average result, while not necessarily giving you accurate information about the probability of any individual result occurring.

To continue the illustration, suppose that you were interested in describing the "spread" of your simulation results. One common method of doing this is to show the results in what is known as a "bar graph" or "histogram". Suppose that the results of your 36 rolls were:

• 1 two
• 3 threes
• 3 fours
• 3 fives
• 6 sixes
• 6 sevens
• 5 eights
• 3 nines
• 1 ten
• 3 elevens
• 2 twelves

The bar graph of these results is shown to the right of the table. Each bar is proportionally as high as the number of times that the result occurred, so that the bar for the result of three is three times as high as the bar for the result of two. Such a graph is a useful tool to describe the results of your simulation, although you would not believe that it accurately represented the chances of rolling a particular total, since the results are random. One thing you might do, however, is draw a smooth line over the results, as has been done here, and think that such a line might come close to the tops of a bar graph of a "perfect" simulation. In this case, you would be right, since the triangle shape is the actual underlying "probability distribution" of the sum of two dice.
While the bar graph is a good picture of the results of the simulation, statisticians typically use two particular numbers to describe the same thing in summarized form. The first statistic is the "mean" or average result. This is determined by adding all the results and dividing the total by the number of trials. In this case, the average result is 250/36, or 6.94. The second statistic is called the "standard deviation". It is a measure of the "spread" of the distribution, and is, mathematically, the square root of the sum of the squares of the deviations from the mean divided by the number of trials. While that is a confusing definition, the use of the statistic in the context of Risk Analysis is quite simple. Since the standard deviation measures the spread of the results, it is a good measure of the amount of risk in the simulation results.

Although the dice illustration is quite simple, a statistician would say that we have just conducted a Monte Carlo Simulation for 36 trials in order to describe the probability distribution of the total shown on a pair of dice. To do so, we have sampled two random numbers from a uniform probability distribution between one and six, and performed a mathematical operation (adding the two numbers together) on the pair of random numbers. The Risk Analysis of Analytic Associates is performed in exactly this fashion. However, there are a few differences due to the nature of the real-life situation we are simulating. The uniform probability distribution for the number on a die is relatively unusual in real life. First, the sum can only assume integer values, whereas most variables are "continuous": the inflation rate in the economy, for instance, can be 7.033% or 9.445%; it is not restricted to integers. Secondly, the shape itself is unusual, in that most uncertain variables in real life are distributed as some kind of "bell-shaped curve".
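The dice experiment above is easy to reproduce in code; a hedged sketch (function names are mine, not planEASe's) that estimates the chance of a seven plus the mean and standard deviation, exactly the three summary statistics the text describes:

```python
import random
import statistics

def dice_monte_carlo(n_rolls=100_000, seed=1):
    """Roll two fair dice n_rolls times and summarize the totals."""
    rng = random.Random(seed)
    totals = [rng.randint(1, 6) + rng.randint(1, 6) for _ in range(n_rolls)]
    # proportion of sevens, mean, and standard deviation as defined in the text
    frac_seven = sum(t == 7 for t in totals) / n_rolls
    return frac_seven, statistics.mean(totals), statistics.pstdev(totals)
```

With 100,000 rolls the estimates settle near the exact values: P(seven) = 1/6 ≈ 0.167, mean = 7, and standard deviation = sqrt(35/6) ≈ 2.42; with only 36 rolls, as in the illustration, the same code shows the scatter the article warns about.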
There are several bell-shaped curves defined in the mathematics of probability. The most commonly known is the "normal" distribution, shown here. It is "symmetric", in that the left side of the curve is a mirror image of the right side. Another characteristic is that the curve never touches the bottom line, but rather trails away endlessly. In other words, if we were to say that the inflation rate were normally distributed with an average of 7%, we would implicitly be saying that there was a real possibility, however slight, that inflation could be -1,000%. For this reason, a bounded bell-shaped curve, the "beta" distribution, is used instead. The beta distribution curve has its highest point at the "most likely" amount for the random variable, seven percent, and there is no possibility under this distribution of inflation amounts lower than five percent or higher than ten percent. Note that the distribution is slightly "skewed" because the most likely point is not midway between the outside limits.

planEASe Risk Analyses use beta distributions to describe the user's uncertain assumptions, just as the illustration used a uniform distribution to describe the number on the die. Thus, when a random number is sampled from the beta distribution, it is more likely to be close to the most likely value than to the tails of the distribution. The reason that we use beta distributions is not at all mathematical. Quite simply, we believe that this distribution best describes the shape of what the user really means when he says that inflation will be about 7% and certainly between 5% and 10%.

Monte Carlo Simulation for Risk Analysis is conducted almost exactly as in the dice illustration. First, the random numbers are sampled for each of the uncertain assumptions. This is analogous to rolling the dice. Secondly, the random numbers obtained are used together with the other assumption values to perform the basic analysis. This mathematical operation is analogous to totaling the numbers on the dice in the illustration.
The measure requested by the user is then recorded in a table for display in a bar graph, just as we did for the total shown on the dice.

Before discussing Risk Analysis as performed by planEASe, we should define what is meant by the term "risk" itself. Most investors think of the risk in their investments in terms of whether there is a significant chance of losing money. Such an investment is termed "risky". However, in a more general sense, risk relates to the range of possible results of the investment. In this sense, an investment with possible rates of return between 10% and 50% is "riskier" than an investment in a bond with a guaranteed 8% rate of return held to maturity. The purpose of Risk Analysis here is to evaluate the range and probability for the rate of return on the investment, so "risk" is treated here in the more general sense.

As you enter assumptions into a planEASe analysis, there are many whose values are inherently uncertain. For example, look at the TEST assumption set for the RU models shipped with planEASe. Some of the values in this assumption set are shown in the table at the top of the screen below in the "Most Likely" column. For instance, the TEST Assumption Set assumes that the user will hold the Sample Apartments for four years, and then sell the property for five times its Gross Income at that time. However, it would be sheer happenstance if the property were sold for five times the gross income in exactly four years. These assumption values represent educated guesses, not accurate predictions. In the case of the Sample Apartments, the user has recognized this weakness in the Basic Analysis, and has asked for a Risk Analysis to investigate the risk involved in the Rate of Return Before Tax. He has examined his assumption values and selected those which he considers to be subject to uncertainty.
For example, although he thinks that the Gross Income Multiplier assumption value of five times is a good estimate, he believes that the eventual multiple could be anywhere from four times at the lowest to six and a half times at the highest. The list of all of these risk assumptions selected by the user is shown in the upper portion of the screen. The list shows the lowest, most likely, and highest values that the user believes are possible for the assumptions. Implicitly, then, he is also saying that the values for the other assumptions in the analysis are fixed, and will not vary.

While the quantified range of the user's uncertainty for his assumed values is certainly useful information, it does not answer the question with which he is most concerned: what does the uncertainty in the assumptions mean in terms of the ultimate rate of return? Obviously, he would like to combine all of these assumption values in some fashion to see the range of possible rates of return considering those uncertainties. This is where "Monte Carlo Simulation" comes into play. planEASe Risk Analysis uses this technique to project the probability distribution of the rate of return from the assumed values.

The screen below has been obtained after conducting a Monte Carlo Simulation of the Real Estate Investment Analysis for two hundred trials (in progress in the screen above). For each of these trials, the risk analysis process selects a random number from the beta probability distribution for each of the uncertain assumptions. The selection of these random numbers is such that each number can assume any value within the lowest-to-highest range for that assumption, but will more likely be around the most likely value. Thus a bar graph of the two hundred random numbers selected for any one assumption would look like the corresponding beta distribution, subject to the randomness of the process.
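The sampling step can be sketched with the standard library's betavariate. The text gives only (lowest, most likely, highest) triples; mapping those to beta shape parameters below uses the common PERT convention, which is my assumption, not necessarily planEASe's exact parameterization:

```python
import random

def pert_sample(rng, low, mode, high):
    """Sample a beta distribution scaled to [low, high] with its peak at
    `mode` (PERT parameterization -- an assumption, see lead-in)."""
    alpha = 1 + 4 * (mode - low) / (high - low)
    beta = 1 + 4 * (high - mode) / (high - low)
    return low + (high - low) * rng.betavariate(alpha, beta)

def risk_trials(n_trials=20_000, seed=7):
    """Toy risk run over the Gross Income Multiplier range (4, 5, 6.5)
    from the text; returns the samples, their mean, and P(multiple < 4.5)."""
    rng = random.Random(seed)
    samples = [pert_sample(rng, 4.0, 5.0, 6.5) for _ in range(n_trials)]
    mean = sum(samples) / n_trials
    below = sum(s < 4.5 for s in samples) / n_trials
    return samples, mean, below
```

Every sample stays inside [4, 6.5] and bunches around 5; a histogram of a measure computed from such samples is exactly what the planEASe bar graph displays.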
When the random numbers for the uncertain assumptions have been selected for one of the trials, the basic analysis is completed using those assumption values. In this case, the user has requested that the Risk Analysis be performed for the Rate of Return Before Tax (just as with Sensitivity Analysis, Risk Analysis may be conducted for any of the measures in the model). Accordingly, that rate of return is recorded in a table after each of the two hundred trials.

A bar graph of the rates of return obtained in the two hundred trials is shown in this screen. It shows, for example, that there were ten rates of return at or below zero, eight between 0% and 3%, and seven more between 3% and 6%. Some useful statistics for the two hundred trials appear on the right side of the screen. The average rate of return was 16.9%, as opposed to the 15.4% for the same measure in the Basic Analysis. Some of this variation can be ascribed to the randomness of the simulation. Additionally, some of the distributions for the uncertain assumptions are skewed, or asymmetrical. It is a characteristic of such distributions that their means differ from their most likely (highest) points. Mathematically, this means that the average of our two hundred trials will be different from the 15.4%. The last two statistics shown are the lowest and highest rates of return obtained in the two hundred trials: 0.0% and 44.8% in this case.

While these are useful numbers, they should be interpreted with care. For instance, if we had conducted the simulation a thousand times, we almost surely would have obtained rates of return higher than 44.8%. In other words, these two numbers do not show the lowest and highest rates of return possible under the assumptions. Those lowest and highest rates could only be obtained by requesting basic analyses using only the most pessimistic and most optimistic assumption values from the list at the top of the page.
Even then, those rates of return would represent the possible range of rates of return only if the assumption ranges were all correct in actual fact, which is extremely unlikely. In short, the Risk Analysis is not intended to show the entire range of possible investment results, but rather is meant to give you a picture of the probability of those results.

The lowest rate of return obtained in this simulation was 0.0%. planEASe does not compute a rate of return if the sum of the cash flow involved is negative, but rather records zero percent for that case. In this case, six of the trials resulted in the investor not recovering his invested funds. Users interested in how much money was lost by the investor in such cases may request a Risk Analysis for the Net Present Value of the same cash flow using a zero discount rate.

Some useful conclusions may be drawn from the bar graph itself. For instance, there are 25 rates of return less than 6%. One could say, then, that there is about a 12% chance of making less than 6% on the investment. Similarly, there is an even chance of obtaining a rate of return greater than 17%, and a 5% possibility of losing money on the investment. Considering the wide ranges chosen for the assumption values, this analysis should provide considerable comfort for the user who is worried about the possible "downside" risk in the investment.

There are some limitations in Monte Carlo Risk Analysis which should be of concern to you. For instance, Monte Carlo Simulation uses random numbers for the risk assumption values. This causes the results of the simulation to be slightly unreliable. This unreliability becomes smaller and smaller as the number of trials is increased. Experience with the technique indicates that one hundred trials give a good prediction for the mean and standard deviation of the resulting probability distribution, but do not, typically, show the distribution shape or the length of the tails accurately.
Two hundred trials typically result in a smoother distribution, and five hundred trials typically give an extremely smooth distribution with good definition of the tails. Another limitation of the process is that the simulation assumes that all the assumption values and ranges are accurate, and also that the assumptions are independent of one another. Accuracy in all the assumption values is obviously impossible. Independence of the assumption values is also typically questionable. For instance, on any single trial, the simulation could choose a 4% Inflation Rate and a 6.5 sale multiple to go with it. Clearly, selling the property for 6.5 times gross when inflation has been low would be extremely unlikely. While Risk Analysis has some significant limitations due to the technique involved, it is an extremely useful tool for investigating the amount of risk involved in an investment. By quantifying the variability of the results of the investment, it allows you to properly portray the real nature of the investment. It is a truism that real estate investments are "risky", but Risk Analysis allows you to quantify that risk. Perform Monte Carlo Risk Analysis with any assumptions you choose versus any measure, such as Rate of Return (IRR or MIRR), Net Present Value (NPV), etc. Risk Analysis allows you to investigate how these measures vary with a change in assumptions like Holding Period, Cap Rate at Sale, Renewal Probability, Vacancy, TI's, etc. Risk Analysis provides a one-page table and graph which shows the probability of achieving any level for the chosen measure.
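The trial loop described above — draw a value for each uncertain assumption, run the basic analysis, record the chosen measure — can be sketched in a few lines. This is an illustrative sketch, not planEASe's implementation: the `toy_model`, the assumption names, and the triangular distribution are all my own stand-ins for whatever the product actually uses.

```python
import random
import statistics

def risk_analysis(model, assumptions, trials=100, rng=None):
    """Monte Carlo risk analysis: sample each uncertain assumption from
    its (pessimistic, most likely, optimistic) range, run the model,
    and record the chosen measure once per trial."""
    rng = rng or random.Random()
    results = []
    for _ in range(trials):
        drawn = {name: rng.triangular(lo, hi, mode)
                 for name, (lo, mode, hi) in assumptions.items()}
        results.append(model(drawn))
    return results

# Hypothetical one-line "model" standing in for the basic analysis.
def toy_model(a):
    return a["growth"] * a["multiple"] - 0.05

assumptions = {"growth": (0.00, 0.03, 0.06), "multiple": (4.0, 5.5, 7.0)}
returns = risk_analysis(toy_model, assumptions, trials=100,
                        rng=random.Random(42))
mean_return = statistics.mean(returns)
# Read probabilities straight off the trials, as with the bar graph:
p_below_6pct = sum(r < 0.06 for r in returns) / len(returns)
```

Repeating the run with more trials smooths the resulting histogram, which is exactly the hundred-versus-five-hundred-trials effect discussed above.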
{"url":"http://www.planease.com/product/analysis/risktx.aspx","timestamp":"2014-04-20T15:51:20Z","content_type":null,"content_length":"35367","record_id":"<urn:uuid:8ac44cda-aaaa-4de9-8fc5-fe7983a242fb>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00351-ip-10-147-4-33.ec2.internal.warc.gz"}
Calibration of the camera system with a second-order linear model. The non-linear transformation from image pixels to world coordinates is approximated by a second-order, non-symmetric polynomial of the form $x = a_{11}u+a_{12}v+a_{13}+a_{14}u^2+a_{15}v^2+a_{16}uv$ $y = a_{21}u+a_{22}v+a_{23}+a_{24}u^2+a_{25}v^2+a_{26}uv$ with u, v the image pixels, and x, y the coordinates in world space (meters). The twelve parameters of this very simple model are estimated in a least-squares fashion from a sufficiently large number of coordinate pairs. Thus, the software has to be provided with a number of image points whose positions in the real world are known -- for instance by using a pattern of known dimensions. Increasing the number of pairs used in the estimation process over the minimum of six moderates the effect of noisy measurements during calibration. One also observes that the precision of the approximation, and hence the precision of the prediction, is higher in the neighborhood of the training points. In this way, we are able to bias the region within the camera frame where we want to minimize the prediction error. Calib. Points: the path to the XML file containing the calibration points. The file must have the following format: <?xml version="1.0"?> More points can be added like the two in this example. A minimum of 6 non-collinear points is needed to compute the calibration. Last modified on 3 October 2011, at 18:29
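Because the model is linear in its twelve parameters, the estimation described above reduces to an ordinary linear least-squares problem. A minimal sketch (assuming NumPy; the function names are my own, not SwisTrack's):

```python
import numpy as np

def fit_second_order_calibration(uv, xy):
    """Fit the 12-parameter second-order model mapping pixels (u, v)
    to world coordinates (x, y) by linear least squares.
    uv: (n, 2) pixel coordinates; xy: (n, 2) world coordinates."""
    u, v = uv[:, 0], uv[:, 1]
    # Design matrix with one column per model term: u, v, 1, u^2, v^2, u*v
    A = np.column_stack([u, v, np.ones_like(u), u**2, v**2, u * v])
    coeffs, *_ = np.linalg.lstsq(A, xy, rcond=None)  # shape (6, 2)
    return coeffs

def apply_calibration(coeffs, uv):
    """Map pixel coordinates to world coordinates with fitted coeffs."""
    u, v = uv[:, 0], uv[:, 1]
    A = np.column_stack([u, v, np.ones_like(u), u**2, v**2, u * v])
    return A @ coeffs
```

With more than six well-spread, non-collinear point pairs the system is overdetermined and the least-squares solution averages out measurement noise — the moderation effect mentioned above; concentrating the training points in one region biases the fit to be most accurate there.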
{"url":"http://en.m.wikibooks.org/wiki/SwisTrack/Components/CalibrationLinear","timestamp":"2014-04-19T17:07:29Z","content_type":null,"content_length":"16436","record_id":"<urn:uuid:b28b05d6-b33d-4b30-acaf-b4e1ff276403>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00531-ip-10-147-4-33.ec2.internal.warc.gz"}
Quantum measurement theory and the quantum Zeno effect
Gagen, Michael Joseph (1993). Quantum measurement theory and the quantum Zeno effect. PhD Thesis, Physics, University of Queensland.
Author: Gagen, Michael Joseph
Thesis title: Quantum measurement theory and the quantum Zeno effect
School or Centre: Physics
Institution: University of Queensland
Publication date: 1993
Thesis type: PhD Thesis
Supervisor: Prof G. J. Milburn
This is a theoretical thesis in the area of quantum measurement theory. Due to the extensive breadth of this field we choose to narrow our focus to examine a particular problem - the quantum Zeno effect (defined below). Quantum measurement theory is introduced in Chap. 1, using the terminology of effects and operations. This approach allows an operational definition of such terms as a state vector, an ensemble, and a measurement device (for instance), and a consideration of interactions between quantum systems and inaccurate measurement devices. We further introduce the quantum trajectories approach to consider the evolution of an individual quantum system subject to measurement. The quantum Zeno effect is introduced in Chap.
2. Any quantum treatment of a measurement interaction must consider the measurement backaction onto the measured system and this backaction will disrupt the free evolution of the system. The quantum Zeno effect occurs in the strong measurement limit where the measurement backaction totally freezes the evolution of the system, thus rendering the measurement useless. The effect is introduced via projective measurements of two level systems subject to measurement of level populations. At this stage we are able to discuss the main questions addressed by this thesis, and present its structure in Chap. 2. We then develop a new measurement model for the interaction between a system and a measurement device in Chap. 3. Our motivation in doing this is to better model the usual laboratory meter, and in our approach the meter dynamics are such that it relaxes towards an appropriate readout of the system parameter of interest. The irreducible quantum noise of the meter introduces fluctuations that drive the stochastic dynamical collapse of the system wavefunction. In our model, the measured system dynamics (if treated selectively) are described by a stochastic, nonlinear Schroedinger equation. A double well system subject to position measurement provides a natural first application for this model. This is done in Chap. 4 where we monitor the coherent tunnelling of a particle from one well to the other. The advantage afforded by considering this system is that it displays differing regimes where the measurement observable (position) is approximated as possessing, respectively, either a continuous or a discrete eigenvalue structure. Thus, we use this one model to explore the quantum Zeno effect in both measurement regimes. The above treatment is of a theoretical measurement model. In Chap.
5 we turn to consider a recent experimental test of the quantum Zeno effect which examined the dynamics of a two level atom subject to pulsed measurements of atomic level populations. We treat a slightly modified experiment in a fully continuous measurement regime. By first unravelling the optical Bloch equations, and second, using the quantum trajectories approach, we demonstrate the existence of certain measurement regimes where there is a quantum Zeno effect, and other regimes where no measurement of the atomic populations is being effected at all. Through these results we demonstrate the importance of making a full analysis of the system-detector interaction before any conclusions can be made. In the remainder of the thesis we propose further possible tests of the quantum Zeno effect. In Chap. 6 the evolution of a Rydberg atom exchanging one photon with a single cavity mode subject to measurement is examined. The measurement is made by monitoring the photon number occupancy of the cavity mode using a beam of Rydberg atoms configured so as to perform phase sensitive detection. In the limit of frequent monitoring we show that the free oscillation of the atomic inversion is disrupted, and the atom is trapped close to its initial state. This is the quantum Zeno effect. In Chap. 7 we realize the Zeno effect on two possible systems. We consider first, a two level Jaynes-Cumming atom interacting with a cavity mode, and second, two electromagnetic modes configured as a multi-level parametric frequency converter. These systems interact with another cavity mode via a quadratic coupling system based on four wave mixing, and constructed to be a quantum nondemolition measurement of the photon number. This mode is damped to the environment thus effecting a measurement of the system populations. Again we show that this interaction can manifest the quantum Zeno effect.
Our explicit modelling of the system-detector interaction enables us to show how the effect depends on the resolution time of the detector. Finally, we consider a proposed measurement of the square of the quadrature phase of an electromagnetic mode in Chap. 8. Here, a three mode interaction mediated by a second order nonlinear susceptibility is considered. One mode, the pump, is prepared in a feedback generated photon number state to give insight into the role of pump noise. The other two modes are treated as an angular momentum system, and we show that photon counting on the two mode rotation system effects the above mentioned measurement. In addition, this measurement provides a direct measure of the second order squeezing of the signal. With that we finish our investigation of the quantum Zeno effect using the techniques of quantum measurement theory. However, in the epilogue [Chap. 9] we note that no thesis in quantum measurement theory would be complete without some consideration of the "meaning" attributed to the theory. In the epilogue we take a novel historical approach and examine the method by which metaphysical theories are formed to draw conclusions regarding quantum metaphysics.
Keyword: theory
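The "freezing" the abstract describes can already be seen in the simplest projective-measurement picture mentioned for Chap. 2. The sketch below is my own illustration, not the thesis's model: a two-level system Rabi-driven at frequency omega is interrupted by N equally spaced projective measurements of the level population, and the probability of being found in the initial state at every measurement tends to 1 as N grows.

```python
import numpy as np

def survival_probability(omega, total_time, n_measurements):
    """Probability that a two-level system Rabi-driven at frequency
    `omega` is found in its initial state at every one of
    `n_measurements` equally spaced projective measurements over
    `total_time`. Between measurements the amplitude to remain is
    cos(omega*dt/2), so each measurement succeeds with probability
    cos^2(omega*dt/2); successes are independent after projection."""
    dt = total_time / n_measurements
    return float(np.cos(omega * dt / 2.0) ** (2 * n_measurements))
```

With omega*T = pi, a single final measurement finds the system fully transferred (survival probability 0), whereas 100 intermediate measurements pin it near its initial state (survival about 0.976) — the quantum Zeno effect in miniature.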
{"url":"http://espace.library.uq.edu.au/view/UQ:157843","timestamp":"2014-04-19T23:37:06Z","content_type":null,"content_length":"59682","record_id":"<urn:uuid:1c2885b5-7411-433f-9064-0efdb04a9647>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00256-ip-10-147-4-33.ec2.internal.warc.gz"}
Discovering patterns in groups of objects to discover their total number is Tang’s forte, and here he is as engaging as ever, even when his examples don’t necessarily make intuitive—or, for that matter, common—sense. Each two-page spread provides the reader with a dazzlingly colored image of a number of objects—honeycomb cells, jalapeño peppers, ladybug spots—and a little rhyming ditty that sets the scene and provides a hint on how to solve the addition problem. Most often the reader is asked to discern some pattern to make the sum more manageable or how to use subtraction to make finding the sum easier, as when adding rows of starfish with gaps in their ranks: “How many starfish are in view? / This is all you have to do. / Instead of counting one by one, / Just subtract and you’ll be done.” (An answers and explanations page is included.) Tang’s counterintuitive examples are less successful, as in counting raindrops in a rainbow by counting them within the arc of each color group rather than in the more obvious, and simpler, straight lines passing through the arc. Nonetheless, it is another take on how to get the job done—it’s all in the seeing. Best of all, Tang makes play out of math and the problem-solving riddles keep math-suspicious minds from wandering and maybe even from clogging. (Picture book. 7-10)
{"url":"https://www.kirkusreviews.com/book-reviews/greg-tang/math-appeal/print/","timestamp":"2014-04-21T07:48:13Z","content_type":null,"content_length":"4872","record_id":"<urn:uuid:fdb2c171-3d79-4cd9-b5b6-f8a55e67f148>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00081-ip-10-147-4-33.ec2.internal.warc.gz"}
MATH 3703 Summer 2006
Course Name: Geometry for P-8 Teachers
Instructor: Dr. Angela Barlow
Office Number: 322 Boyd Bldg.
Office Hours: MWF 9:00 - 10:00 or by appointment
E-mail: abarlow@westga.edu
Telephone: O: 678-839-4132, H: 770-836-5202
Webpage: www.westga.edu/~abarlow/
Course Objectives: Students will demonstrate:
1. a better understanding of standard vocabulary and symbols of elementary mathematics;
2. an ability to reason logically and to provide justifications and coherent arguments for the plausibility of conjectures;
3. an ability to use geometry in real-world problem solving;
4. well-developed spatial sense including both two- and three-dimensional figures (tessellations, symmetry, congruence, similarity, polygons and other curves, polyhedra);
5. a better understanding of geometry and measurement from a historical perspective;
6. a better understanding of measurement including the metric system;
7. an ability to solve measurement problems involving perimeter, circumference, area, volume, temperature, and mass;
8. a better understanding of synthetic, coordinate, and transformational geometry with an emphasis on problem solving;
9. a better understanding of the uses of a variety of manipulatives, technology, and other materials for the P-8 level;
10. a better understanding of the vision of mathematics education as put forth in NCTM's Principles and Standards (2000);
11. a better understanding of the scope and sequence of elementary school mathematics programs;
12. a knowledge of current professional literature in the field of mathematics education.
Text: A Problem Solving Approach to Mathematics for Elementary School Teachers, Addison-Wesley Publishing Co., Inc., Eighth Ed., 2004. Authors: Billstein, Libeskind, Lott.
Additional Supplies: You will need to have a ruler, a compass, a protractor, a pair of scissors, and a package of 3x5 index cards. Also, a packet of course handouts is available at the bookstore.
Evaluation:
Test 1: 100 points
Test 2: 100 points
Test 3: 100 points
Test 4: 100 points
Portfolio: 180 points
Final: 150 points
Total Possible: 730 points
Grading Policy: A (657 - 730 pts), B (584 - 656 pts), C (511 - 583 pts), D (438 - 510 pts), F (0 - 437 pts)
Late Assignments Policy: Hard copies of all assignments, journal entries, etc. are due at the beginning of class on the specified date (see course schedule). Late assignments will not be accepted unless prior approval has been given by the instructor. In the event that a late assignment is accepted, the grade on the assignment will be lowered.
Attendance Policy: Students are expected to attend all classes. This term a student may withdraw with a grade of W through June 28th, regardless of grades, absences, etc. This deadline has been established by the University. After this deadline, if a student has accumulated more than three absences throughout the semester, he/she will normally receive a grade of WF. (A grade of WF counts as an F.) The three absences should be saved for sickness and other emergencies. Late arrivals and early exits count one-half of an absence. If a student is absent for a test and has an excuse from someone in authority, then the final exam grade will be used for the missed test in the calculation of the final course grade. No make-ups will be given. Students who maintain a perfect attendance record (i.e. no excused or unexcused absences) will have 7 points added to their Total Points at the end of the semester.
Suggested Problems: For each section covered in class there will be a set of problems provided. These are not homework problems in the sense that they will be taken up and graded. Instead, these are problems that are recommended for you to work in order to be successful in the class. If you have questions concerning the suggested problems, you should address these questions to the instructor during office hours, before or after class, or during the review session prior to the test.
Conferences: Conferences can be beneficial and are encouraged. All conferences should occur during the instructor's office hours, whenever possible. If these hours conflict with a student's schedule, then appointments should be made. The conference time is not to be used for duplication of lectures that were missed; it is the student's responsibility to obtain and review lecture notes before consulting with the instructor. The instructor is very concerned about the student's achievement and well-being and encourages anyone having difficulties with the course to come by the office for extra help. Grades will be based on coursework, not on Hope Grant needs, GPA, or any other factors outside the realm of coursework.
{"url":"http://www.westga.edu/~math/syllabi/syllabi/summer06/MATH3703.htm","timestamp":"2014-04-20T18:29:38Z","content_type":null,"content_length":"6556","record_id":"<urn:uuid:aa695bd8-4ca1-4b3b-b20a-4b28ea06aaae>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00131-ip-10-147-4-33.ec2.internal.warc.gz"}
Grundlehren der Mathematischen Wissenschaften, 276. New York etc.: Springer-Verlag. XV, 488 p. DM 196.00 (1985). Interacting particle systems serve as models for things as different as ferromagnetism, spread of infection, or turbulence in liquids. The most interesting phenomena in such systems are the multiplicity of existing phases (various directions or strengths of magnetisation) and the existence of critical values for certain parameters where spontaneous changes in the spectrum of possible phases occur (Curie temperature). Mathematically such a system is a Markov process. The above phenomena translate into statements about invariant measures and ergodicity. Thus the first chapter of the book is about general results on Markov processes. The relation to semigroups and some theorems on existence and uniqueness of invariant measures are stated. Markov processes for spin systems are constructed from a collection of transition measures. Then the martingale approach to Markov processes is discussed. Mutual singularity of measures corresponding to different epochs makes the difference between interacting particle systems and other more common Markov processes. General results being very rare, the theory is an account of tools for the investigation of special systems. These tools are well known from other fields of probability but are often used here in a different manner. Coupling, duality, relative entropy, and reversibility are such tools, introduced in chapter II. A new result on the stability of positive recurrence for Markov chains, without imposing assumptions on the moments, is presented. In chapter III these tools are applied to general spin systems. The analysis of the most important spin systems (Ising model, voter model, contact process, exclusion process, and nearest particle process) is almost complete. They are the simplest examples where the above phenomena occur.
These systems are treated rather independently in chapters IV through VIII. The concept of potentials and Gibbs states is developed. Chapter IX is about linear systems with state space $[0,\infty)^{S}$. Which subjects are not included? These are infinite systems of stochastic differential equations, measure-valued diffusions, shape theory, renormalization theory, and some well-known models such as the classical Heisenberg model or Dyson's hierarchical models. Discrete-time systems are also not mentioned. "Notes and references" and "open problems" at the end of each chapter give a good insight into the state of the art. That is helpful for newcomers mastering the great number of research papers.
60K35 Interacting random processes; statistical mechanics type models; percolation theory
60-02 Research monographs (probability theory)
60Jxx Markov processes
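One of the spin systems the review names, the voter model, can be illustrated with a minimal simulation. The sketch below (my own, not from the book) implements a discrete-time voter model on a cycle — each update a uniformly chosen site copies the opinion of a random neighbour — which is exactly the kind of Markov process whose invariant measures and absorption behaviour the book studies.

```python
import random

def voter_model_step(config, rng):
    """One update of the discrete-time voter model on a cycle:
    a uniformly chosen site adopts the opinion of a random neighbour."""
    n = len(config)
    i = rng.randrange(n)
    j = (i + rng.choice((-1, 1))) % n
    config[i] = config[j]
    return config

def run_to_consensus(n, rng, max_steps=1_000_000):
    """Run from a random 0/1 configuration until all sites agree
    (consensus is an absorbing state of the voter model)."""
    config = [rng.randrange(2) for _ in range(n)]
    for _ in range(max_steps):
        if len(set(config)) == 1:
            return config[0]
        voter_model_step(config, rng)
    raise RuntimeError("no consensus reached within max_steps")
```

On a finite cycle the two consensus configurations are the only extremal invariant states, which is the toy version of the phase-multiplicity questions discussed above.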
{"url":"http://zbmath.org/?q=an:0559.60078&format=complete","timestamp":"2014-04-20T08:50:49Z","content_type":null,"content_length":"23175","record_id":"<urn:uuid:5b3f3ba1-0ca0-468f-a0e7-a09c512f4ad3>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00485-ip-10-147-4-33.ec2.internal.warc.gz"}
the definition of Array
order or arrangement, as of troops drawn up for battle.
an arrangement of a series of terms according to value, as from largest to smallest.
an arrangement of a series of terms in some geometric pattern, as in a matrix.
array (əˈreɪ)
1. an impressive display or collection
2. an orderly or regular arrangement, esp of troops in battle order
3. poetic rich clothing; apparel
4. maths a sequence of numbers or symbols in a specified order
5. maths a set of numbers or symbols arranged in rows and columns, as in a determinant or matrix
6. electronics an arrangement of aerials spaced to give desired directional characteristics, used esp in radar
7. law a panel of jurors
8. the arming of military forces
9. computing a regular data structure in which individual elements may be located by reference to one or more integer index variables, the number of such indices being the number of dimensions in the array
10. to dress in rich attire; adorn
11. to arrange in order (esp troops for battle); marshal
12. law to draw up (a panel of jurors)
[C13: from Old French aroi arrangement, from arayer to arrange, of Germanic origin; compare Old English arǣdan to make ready]
array (ə-rā')
1. Mathematics A rectangular arrangement of quantities in rows and columns, as in a matrix.
2. Numerical data ordered in a linear fashion, by magnitude.
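The computing and mathematics senses above — a structure indexed by integer variables, with as many indices as dimensions, and a rows-and-columns arrangement — can be seen in a few lines of code (an illustrative sketch, not part of the dictionary entry):

```python
# Two integer index variables locate an element, so this is a
# two-dimensional array: a rectangular arrangement in rows and columns.
matrix = [
    [1, 2, 3],
    [4, 5, 6],
]
element = matrix[1][2]  # row index 1, column index 2 (0-based)

# An arrangement of a series of terms according to value,
# as from largest to smallest:
by_magnitude = sorted([3, 1, 4, 1, 5], reverse=True)
```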
{"url":"http://dictionary.reference.com/browse/Array","timestamp":"2014-04-18T01:07:00Z","content_type":null,"content_length":"109181","record_id":"<urn:uuid:ad7abf35-fcc0-4c00-bae1-1e9d4508c47b>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00492-ip-10-147-4-33.ec2.internal.warc.gz"}
Hilbert spaces
October 24th 2009, 08:38 PM
Hilbert spaces
Is it possible that a function u(x), element of C[0,1] with u(0)=u(1)=0, is not an element of $H_0^1(0,1)$, the closure taken in the $H^1$ Hilbert space? I believe it is not possible, but I cannot manage to justify my answer. Thank you
October 24th 2009, 08:45 PM
What is $H_0^1(0,1)$? Is it $H_0 ^1 (0,1):= \overline {C_0 ^{\infty} (0,1)} \subseteq H^1(0,1)$? Are you talking about Sobolev spaces?
Edit: Assuming this is what you meant, the strongest I could find is that if $u \in H^1(0,1) \cap C[0,1]$ and $u(0)=u(1)=0$ then $u\in H_0 ^1 (0,1)$
October 25th 2009, 07:18 AM
What is $H_0^1(0,1)$? Is it $H_0 ^1 (0,1):= \overline {C_0 ^{\infty} (0,1)} \subseteq H^1(0,1)$? Are you talking about Sobolev spaces?
Edit: Assuming this is what you meant, the strongest I could find is that if $u \in H^1(0,1) \cap C[0,1]$ and $u(0)=u(1)=0$ then $u\in H_0 ^1 (0,1)$
Yes, indeed, that was what I meant. That's exactly my reasoning, but I was not sure of one conclusion: even if $u \in C[0,1]$ and not $u \in C^{\infty}[0,1]$, can we draw the same conclusions? The exact question I have is: if $u \in C[0,1]$ and $u(0)=u(1)=0$, then is it possible that $u \notin H_0 ^1 (0,1)$? We never have the hypothesis that $u \in H^1(0,1) \cap C[0,1]$ and $u(0)=u(1)=0$. So, must I conclude that without that stronger hypothesis, we can have a function that is in $C_0 [0,1]$ and not in $H^1(0,1)$?
October 25th 2009, 09:31 AM
Tough one... I don't really know, since working with functions in Sobolev spaces is messy as it is, but maybe trying to characterize these functions in easier terms is the best approach. For example: is a function that is nowhere differentiable weakly differentiable? If the answer is no, then you have the desired function.
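The thread's question — can a continuous function vanishing at the endpoints fail to lie in $H_0^1(0,1)$? — has an affirmative answer, which can be sketched using only the Sobolev embedding on an interval (my addition, not from the thread):

```latex
% Every u in H^1(0,1) satisfies, by Cauchy--Schwarz applied to
% u(x) - u(y) = \int_y^x u'(t)\,dt,
%   |u(x) - u(y)| \le \|u'\|_{L^2(0,1)}\,|x-y|^{1/2},
% so H^1(0,1) \subset C^{0,1/2}[0,1]. Hence any continuous u with
% u(0) = u(1) = 0 that is NOT 1/2-H\"older lies outside H^1(0,1),
% and a fortiori outside H_0^1(0,1). One such function:
u(x) =
\begin{cases}
  -1/\log(x/2),  & 0 < x \le 1/2,\\
  0,             & x = 0,\\
  2(1-x)/\log 4, & 1/2 \le x \le 1.
\end{cases}
% Near x = 0, |u(x)| = 1/\log(2/x) tends to 0 more slowly than any
% power x^{\alpha}, so u is continuous on [0,1] with u(0)=u(1)=0
% but H\"older continuous of no order, hence u \notin H_0^1(0,1).
```

This also answers the last post in a different way: the counterexample need not be nowhere differentiable, only badly behaved at a single point.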
{"url":"http://mathhelpforum.com/differential-geometry/110219-hilbert-spaces-print.html","timestamp":"2014-04-18T22:54:32Z","content_type":null,"content_length":"8930","record_id":"<urn:uuid:5a05a81d-ac85-4961-941a-fc1fdc0279b9>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00356-ip-10-147-4-33.ec2.internal.warc.gz"}
The new version is here.
R version 2.11.0 was released on 2010-04-22. The source code is first available in this directory, and eventually via all of CRAN. Binaries will arrive in due course (see download instructions above).
Poor man's pairs trading…
There is a central notion in Time Series Econometrics: cointegration. Loosely, it refers to finding the long-run equilibrium of two non-stationary series. As the best-known examples of non-stationary series come from finance, cointegration is nowadays a tool for traders (not a common one though!). They use it as the theory behind pairs trading (aka
A von Mises variate…
Inspired by a mail that came along with the previous random generation post, the following question arose: how to draw random variates from the von Mises distribution? First of all let's check the pdf of the probability rule; it is $f(x \mid \mu,\kappa) = \frac{e^{\kappa\cos(x-\mu)}}{2\pi I_0(\kappa)}$, for $x \in [-\pi,\pi]$. Ok, I admit that Bessel functions can be a bit frightening, but
R 2.11.0 due date
This is the announcement as posted in the mailing list: This is to announce that we plan to release R version 2.11.0 on Thursday, April 22, 2010. Those directly involved should review the generic schedule at http://developer.r-project.org/release-checklist.html The source tarballs will be made available daily (barring build troubles) via http://cran.r-project.org/src/base-prerelease/ For the R Core
The distribution of rho…
There was a post here about obtaining non-standard p-values for testing the correlation coefficient. The R-library SuppDists deals with this problem efficiently.
library(SuppDists)
plot(function(x) dPearson(x,N=23,rho=0.7),-1,1,ylim=c(0,10),ylab="density")
plot(function(x) dPearson(x,N=23,rho=0),-1,1,add=TRUE,col="steelblue")
plot(function(x) dPearson(x,N=23,rho=-.2),-1,1,add=TRUE,col="green")
plot(function(x) dPearson(x,N=23,rho=.9),-1,1,add=TRUE,col="red"); grid()
legend("topleft", col=c("black","steelblue","green","red"), lty=1, legend=c("rho=0.7","rho=0","rho=-.2","rho=.9"))
This is how it looks. Now, let's construct a table of critical values for some arbitrary (or not) significance levels.
q=c(.025,.05,.075,.1,.15,.2)
xtabs(qPearson(p=q, N=23, rho
In search of a random gamma variate…
One of the most common exercises given in Statistical Computing, Simulation or relevant classes is the generation of random numbers from a gamma distribution. At first this might seem straightforward in terms of the lifesaving relation that exponential and gamma random variables share. So, it's easy to get a gamma random variate using the fact that
π day!
It's π-day today, so we're gonna have a little fun today with Buffon's needle and of course R. A well-known approximation to the value of $\pi$ is the experiment that Buffon performed using a needle of length $l$. What I do in the next is only to copy from the following file the function
In a nls star things might be different than in the lm planet…
The nls() function has a well-documented (and discussed) different behavior compared to lm()'s. Specifically, you can't just put an indexed column from a data frame as an input or output of the model.
> nls(data ~ c + expFct(data,beta), data = time.data,
+ start = start.list)
Error in parse(text = x) : unexpected
Jobless as I might be, I do have some clients for data analysis. I try not to visit them in their office coz then things get really slow and time-consuming. When I can't escape this, the worst thing is tuning data and software with the client. So, I have a USB with portable versions of my
A quicky..
If you're (and you should be) interested in principal components, then take a good look at this. The linked post will take you by the hand to do everything from scratch. If you're not in the mood, then the following R functions will help you. An example.
# Generates sample matrix of five discrete clusters that have
{"url":"http://www.r-bloggers.com/author/m-parzakonis/","timestamp":"2014-04-16T10:27:23Z","content_type":null,"content_length":"36934","record_id":"<urn:uuid:0bae8057-be2a-4bc6-a61c-123ce6bfa77b>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00119-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum: Teacher2Teacher - Q&A #19028 From: Steve R. (for Teacher2Teacher Service) Date: Nov 12, 2007 at 13:57:49 Subject: Re: slide rule Hi Yevgeniy - The EM "slide rule" is really just a mechanical device to help kids visualize overlapping number lines. It's used for adding and subtracting integers and also for fractions and works the same way in both cases. I'll give you an integer example. First, understand that one of the methods EM teaches for adding and subtracting integers is to "walk" along the number line. The keys are that addition means facing to the right and subtraction means facing to the left, and a positive number means to walk forward (the direction you are facing) while a negative number means to walk backwards (away from the direction you are facing). So, if we want to do 8 + (-3) we can imagine standing on a number line at positive 8, facing to the right (since it's an addition problem) and then moving backwards three spaces (since it's negative three) thus finishing on positive 5. To do -4 - (7) we would start by standing on -4, facing to the left (since it's a subtraction problem) and walking forwards seven spaces (since it's positive seven) thus finishing on -11. You can see how this handles the "subtracting a negative" issue since in that case you face left but walk backwards, moving you in the positive direction. So the "slide rule" consists of two number lines, one of which remains still (the holder) and the other of which moves (the slider). The holder is v-shaped and the slider sits inside it. Both pieces have the numbers to the left of 0 (ie, the negatives) shaded. The holder actually displays those numbers with a negative sign while the slider has the shading but does not include the negative signs. To use the slide rule to do the two problems above, here's what I'd do. For 8 + (-3) I'd move the slider so that the 0 on that scale aligns with +8 on the holder.
That's the starting point and is the same as imagining that I'm standing on 8. Then, since I'm adding negative three, I'd imagine I'm facing right and move backwards three, or three into the shaded zone. Then I'd look at the holder and see that I'm now lined up with positive five. Note that the slider doesn't actually move in this part, I just count spaces on the slider and then look at the holder to see where I am. For -4 - (-7) I'd align the slider so its 0 aligns with -4 on the holder. Then I'd imagine facing left and backing up 7, so moving 7 into the unshaded numbers. Then I'd look down at the holder and see that I'm lined up with positive three. The fraction slide rule works the same way - imagine two rulers aligning with each other, with both calibrated to sixteenths. It only works for halves, quarters, eighths, and sixteenths. So yes, this is based on a linear scale and is really just a way to help kids physically visualize that "walking the number line" approach by using the slider to set the starting point and measure the steps, then viewing the holder to see where you finish up. Does this make sense and help? -Steve R., for the T2T service
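The walking rule Steve describes composes two signs — the facing direction given by the operation and the walking direction given by the operand's sign — and can be stated in a few lines of code (a sketch of the rule as described, not an Everyday Mathematics product):

```python
def walk_number_line(start, op, operand):
    """The 'walking the number line' rule: addition faces right (+1),
    subtraction faces left (-1); a positive operand walks forward (+1),
    a negative one walks backward (-1). The result is the position
    after walking |operand| steps."""
    facing = 1 if op == "+" else -1
    stride = 1 if operand >= 0 else -1
    return start + facing * stride * abs(operand)

# The three worked examples from the discussion:
assert walk_number_line(8, "+", -3) == 5     # 8 + (-3)
assert walk_number_line(-4, "-", 7) == -11   # -4 - (7)
assert walk_number_line(-4, "-", -7) == 3    # -4 - (-7)
```

The subtracting-a-negative case is the one where both signs are -1, so the walker moves in the positive direction, just as in the facing-left-but-backing-up description above.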
The new version is here. R version 2.11.0 has been released on 2010-04-22. The source code is first available in this directory, and eventually via all of CRAN. Binaries will arrive in due course (see download instructions above).

Poor man's pairs trading…
There is a central notion in time series econometrics: cointegration. Loosely, it refers to finding the long-run equilibrium of two non-stationary series. As the best-known examples of non-stationary series come from finance, cointegration is nowadays a tool for traders (not a common one, though!). They use it as the theory behind pairs trading (aka

A von Mises variate…
Inspired by a mail that came along with the previous random generation post, the following question arose: how to draw random variates from the von Mises distribution? First of all, let's check the pdf of the probability rule: it is f(x) = e^{κ cos(x−μ)} / (2π I₀(κ)), for −π ≤ x ≤ π. Ok, I admit that Bessel functions can be a bit frightening, but

R 2.11.0 due date
This is the announcement as posted on the mailing list: This is to announce that we plan to release R version 2.11.0 on Thursday, April 22, 2010. Those directly involved should review the generic schedule at http://developer.r-project.org/release-checklist.html The source tarballs will be made available daily (barring build troubles) via http://cran.r-project.org/src/base-prerelease/ For the R Core

The distribution of rho…
There was a post here about obtaining non-standard p-values for testing the correlation coefficient. The R library SuppDists deals with this problem efficiently.

library(SuppDists)
plot(function(x) dPearson(x, N=23, rho=0.7), -1, 1, ylim=c(0,10), ylab="density")
plot(function(x) dPearson(x, N=23, rho=0), -1, 1, add=TRUE, col="steelblue")
plot(function(x) dPearson(x, N=23, rho=-.2), -1, 1, add=TRUE, col="green")
plot(function(x) dPearson(x, N=23, rho=.9), -1, 1, add=TRUE, col="red"); grid()
legend("topleft", col=c("black","steelblue","red","green"), lty=1,
       legend=c("rho=0.7","rho=0","rho=-.2","rho=.9"))

This is how it looks. Now, let's construct a table of critical values for some arbitrary (or not) significance levels.

q = c(.025, .05, .075, .1, .15, .2)
xtabs(qPearson(p=q, N=23, rho

In search of a random gamma variate…
One of the most common exercises given in Statistical Computing, Simulation or relevant classes is the generation of random numbers from a gamma distribution. At first this might seem straightforward in terms of the lifesaving relation that exponential and gamma random variables share. So, it's easy to get a gamma random variate using the fact that

π day!
It's π-day today, so we're going to have a little fun with Buffon's needle and, of course, R. A well-known approximation to the value of π is the experiment that Buffon performed using a needle of length l. What I do next is only to copy from the following file the function

In a nls star things might be different than in the lm planet…
The nls() function has a well-documented (and discussed) different behavior compared to lm()'s. Specifically, you can't just put an indexed column from a data frame as an input or output of the model.

> nls(data ~ c + expFct(data, beta), data = time.data,
+     start = start.list)
Error in parse(text = x) : unexpected

Jobless as I might be, I do have some clients for data analysis. I try not to visit them in their office coz then things get really slow and time-consuming. When I can't escape this, the worst thing is tuning data and software with the client. So, I have a USB with portable versions of my

A quicky…
If you're (and you should be) interested in principal components, then take a good look at this. The linked post will take you by the hand to do everything from scratch. If you're not in the mood, then the following R functions will help you. An example.

# Generates sample matrix of five discrete clusters that have
Double Fourier Series

April 7th 2009, 10:46 PM — #1 (Junior Member, joined Sep 2007)

Please help me: in calculus I only know Fourier series, but in the real world I need to know double Fourier series. So I want to know where I can find this theory (the basics), and which book discusses it well. Could someone teach me this theory? Thanks very much.

April 8th 2009, 01:44 AM — #2

What exactly do you mean by the expression 'double Fourier series'?
a) a Fourier series in two dimensions
b) a Fourier series in the complex domain
c) something else
Kind regards

April 8th 2009, 02:06 AM — #3 (Junior Member, joined Sep 2007)

I mean this. For example:

$u_{L}(X,Y) = \frac{A_{00}}{2}+\sum_{n=1}^{\infty}\left(A_{0n}\cos(nY) + B_{0n}\sin(nY)\right) + \sum_{m=1}^{\infty}\left(A_{m0}\cos(mX) + B_{m0}\sin(mX)\right) + \sum_{m=1}^{\infty}\sum_{n=\pm 1}^{\pm\infty}\left[A_{mn}\cos(mX+nY) + B_{mn}\sin(mX+nY)\right]$

$A_{mn}+jB_{mn} = \frac{2}{(2\pi)^2}\int_{-\pi}^{\pi}\int_{-\pi}^{\pi} u_{L}(X,Y)\, e^{j(mX+nY)}\, dX\, dY$

Thanks very much!
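The coefficient formula in the last post is easy to try numerically. A sketch (Python used purely for illustration), with the one-harmonic test function u(X, Y) = cos(2X + Y), for which the formula should give A₂₁ = 1 and B₂₁ = 0; the rectangle rule on a full period is exact for trigonometric polynomials, up to floating-point error:

```python
import cmath
import math

def coeff(u, m, n, K=64):
    """Approximate A_mn + j*B_mn = 2/(2*pi)^2 * double integral of
    u(X,Y) * exp(j*(m*X + n*Y)) over [-pi, pi]^2, via the rectangle
    rule (exact for trig polynomials on a full period)."""
    h = 2 * math.pi / K
    total = 0 + 0j
    for i in range(K):
        X = -math.pi + i * h
        for k in range(K):
            Y = -math.pi + k * h
            total += u(X, Y) * cmath.exp(1j * (m * X + n * Y))
    return 2 / (2 * math.pi) ** 2 * total * h * h

u = lambda X, Y: math.cos(2 * X + Y)  # single harmonic at (m, n) = (2, 1)
C = coeff(u, 2, 1)
print(C)  # close to 1+0j, i.e. A_21 = 1, B_21 = 0
```

Coefficients at all other (m, n) with m ≥ 1 come out as (numerically) zero for this test function, matching the series representation.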
How does math guide our ships at sea? - George Christoph

A TED-Ed Original lesson. Artist: Sissy Emmons Hobizal.

Additional resources for you to explore:

John Napier was a famous Scottish theologian and mathematician who lived between 1550 and 1617. He spent his entire life seeking knowledge, and working to devise better ways of doing everything from growing crops to performing mathematical calculations. He is best known as the discoverer of logarithms. He was also the inventor of the so-called "Napier's bones". Napier also made common the use of the decimal point in arithmetic and mathematics. http://www.johnnapier.com/

Napier's bones (or Napier's rods) and logarithms: http://www.youtube.com/watch?v=ShjoKnSm9ds

Logarithms were introduced by John Napier in the early 17th century as a means to simplify calculations. They were rapidly adopted by navigators, scientists, engineers, and others to perform computations more easily, using slide rules and logarithm tables. http://en.wikipedia.org/wiki/Logarithm

A clock is an instrument used to indicate, keep, and co-ordinate time. The word clock is derived ultimately from the Celtic words clagan and clocca, meaning "bell". A silent instrument missing such a mechanism has traditionally been known as a timepiece. http://en.wikipedia.org/wiki/Clock

A sextant is an instrument used to measure the angle between any two visible objects. Its primary use is to determine the angle between a celestial object and the horizon, which is known as the object's altitude. Making this measurement is known as sighting the object, shooting the object, or taking a sight, and it is an essential part of celestial navigation. http://www.mat.uc.pt/~helios/
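The property that made Napier's logarithms so useful to navigators is that they turn multiplication into addition: log(xy) = log x + log y, so a sailor could look up two logs in a table, add them, and look the sum back up. A minimal sketch of that table-lookup workflow:

```python
import math

def multiply_via_logs(x, y):
    """Multiply two positive numbers the way a log table (or slide
    rule) does: add the base-10 logarithms, then take the antilog."""
    return 10 ** (math.log10(x) + math.log10(y))

print(multiply_via_logs(37.5, 2.4))  # close to 90.0, up to float rounding
```

A slide rule does the same addition physically, by laying two logarithmic scales end to end.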
Euclidean volume of the unit ball of matrices under the matrix norm

The matrix norm for an n-by-n matrix A is defined as |A| = max(|Ax|), where x ranges over all vectors with |x| = 1, and the norm on the vectors in R^n is the usual Euclidean one. This is also called the induced (matrix) norm, the operator norm, or the spectral norm. The unit ball of matrices under this norm can be considered as a subset of R^(n^2). What is the Euclidean volume of this set? I'd be interested in the answer even in just the 2-by-2 case.

Tags: ca.analysis-and-odes, matrices

8 Answers

Accepted answer (Armin Straub). Building on the nice answer of Guillaume: the integral
$$ \int_{[-1,1]^n} \prod_{i < j} |x_i^2 - x_j^2|\, dx_1\dots dx_n $$
has the closed-form evaluation
$$ 4^n \prod_{k \leq n} \binom{2k}{k}^{-1}. $$
This basically follows from the evaluation of the Selberg beta integral S[n](1/2, 1, 1/2). Combined with modding out by a typo, we now arrive at the following product formula for the volume of the unit ball of n×n matrices in the matrix norm:
$$ n! \prod_{k\leq n} \frac{ \pi^k }{ \left((k/2)!\right)^2 \binom{2k}{k} } .$$
In particular, we have:
• 2/3 π^2 for n = 2
• 8/45 π^4 for n = 3
• 4/1575 π^8 for n = 4

Answer (Guillaume Aubrun). The volume of the unit ball for the spectral norm on n×n real matrices is given by the formula
$$ c_n \int_{[-1,1]^n} \prod_{i < j} |x_i^2-x_j^2|\, dx_1\dots dx_n $$
where $c_n = n!\, 4^{-n} \prod_{k=1}^n v_k^2$ and $v_k=\pi^{k/2}/\Gamma(1+k/2)$ is the volume of the unit ball in R^k. A much more general formula for calculating all kinds of similar quantities appears e.g. here (Lemma 1). The proof is by applying the SVD decomposition as a change of variables. The first values are
• 2/3 π^2 for 2×2 matrices
• 8/45 π^4 for 3×3 matrices
• 4/1575 π^8 for 4×4 matrices
There might be a closed formula for the integral above. Edit: such a formula appears in Armin's post below!

Comments:
– When using your formula I get 1/3 π^2 for 2×2 matrices, and the values for n=3,4 are different from the ones you give as well. Is there a small typo somewhere (or am I just messing up the calculation)? – Armin Straub Oct 22 '09 at 20:20
– I double-checked and the general formula seems correct (anyway, you can derive it from the paper I quoted). But you are right, there was a typo for n=4 (now corrected). By the way, you should find c_2 = π^2/4; c_3 = π^4/9; c_4 = π^8/144. – Guillaume Aubrun Oct 23 '09 at 21:47
– Looking at the paper I found the typo: c_n = n! 4^{-n} ... Also, you're quite right; the integral does have a nice closed form coming from writing it as a Selberg integral. I put details into a new answer. – Armin Straub Oct 28 '09 at 22:10
– Oops, you're right ... – Guillaume Aubrun Oct 28 '09 at 22:48

Answer (Armin Straub). Concerning the 2×2 case: as Mike points out, you can write down an explicit formula for the norm of the matrix {{a,b},{c,d}}. It takes a good while, but Mathematica can then compute the volume you're asking for:

  Integrate[If[a^2 + b^2 + c^2 + d^2 +
      Sqrt[((b+c)^2 + (a-d)^2) ((b-c)^2 + (a+d)^2)] <= 2, 1, 0],
    {a, -1, 1}, {b, -1, 1}, {c, -1, 1}, {d, -1, 1}]

Its answer is: 2π^2/3. For comparison: the volume of the Euclidean ball in R^4 is π^2/2 (which contradicts Mike's final statement that the matrix norm ball sits inside the Euclidean one).

Comments:
– My apologies, it is actually easy to see that the matrix norm ball does not sit inside the Euclidean one. The identity matrix clearly does the job. – Mike Hartglass Oct 21 '09
– Nice example. At least it sits inside the max norm unit ball (filling it out by an ambitious 41% ...). – Armin Straub Oct 21 '09 at 11:12

Answer (Mike Hartglass). Not that this is too helpful, but in the case of a 2×2 matrix A (with diagonal entries a and d and off-diagonal entries b and c, all real) the norm of the matrix is given by the formula
$$\frac{1}{2}\left(a^{2} + b^{2} + c^{2} + d^{2} + \sqrt{(a^{2} + b^{2} + c^{2} + d^{2})^{2} - 4D}\right)$$
where $D = \det(A^{*}A)$. It is a pretty ugly region, but at least it can be computed in terms of a, b, c, and d, and this unit ball will sit inside the Euclidean ball in R^4.

Comments:
– There should be a square root outside that expression, right? – j.c. Oct 21 '09 at 0:16
– Yes, I forgot the square root (or at least the lack of a square root is a typo in Conway's book). – Mike Hartglass Oct 21 '09 at 2:03

Answer. Yes, O(n) is the n(n-1)/2-dimensional space of orthogonal n×n matrices, and Vol(O(n)) is its volume. The integrand in the answer is simply the Jacobian of the singular value decomposition; {s_i} is just the ordered set of the singular values, and the integration is performed on the subset bounded by 1. I may just have missed a factor of 1/2^n because of the sign ambiguity in the SVD singular values.

Answer. I worked out the answer for the 2×2 case as well. First, when dealing with 2×2 matrices in general, a convenient variable change is
$$w = (a-d)/\sqrt{2},\qquad x = (a+d)/\sqrt{2},\qquad y = (b-c)/\sqrt{2},\qquad z = (b+c)/\sqrt{2}.$$
Then a^2+b^2+c^2+d^2 = w^2+x^2+y^2+z^2, and the determinant (ad-bc) = (1/2)(x^2+y^2-w^2-z^2). (Aside: this set of coordinates lets you see, for instance, that the set of rank-1 matrices in the space of 2D matrices realized as R^4 is a cone over the Clifford torus, since x^2+y^2 = w^2+z^2 on a sphere x^2+y^2+w^2+z^2 = r^2 implies x^2+y^2 = r^2/2 and w^2+z^2 = r^2/2, which are scaled equations for a flat torus.)

Let r1^2 = x^2+y^2 and r2^2 = w^2+z^2. (These are radial coordinates of two cylindrical coordinate systems filling out 4-space.) Then the norm squared is
$$\frac{1}{2}\left(r_1^2+r_2^2 + \sqrt{(r_1^2+r_2^2)^2 - (r_1^2-r_2^2)^2}\right),$$
and since $(r_1^2+r_2^2)^2 - (r_1^2-r_2^2)^2 = 4r_1^2 r_2^2$, this simplifies to $(r_1+r_2)^2/2$. When this is less than one, this corresponds to the region plotted below. Note that each point in the r1, r2 picture corresponds to a different "torus": x^2+y^2 = r1^2, w^2+z^2 = r2^2.

We can now integrate over the shaded-in region, $\int_{\text{region}} dw\, dx\, dy\, dz$. This 4-D integral can be reduced to 2D using r1 and r2, since dx dy = 2π r1 dr1 and dw dz = 2π r2 dr2:
$$(4\pi^2)\int_{\text{region}} r_1 r_2\, dr_1\, dr_2.$$
Now, note that we can rewrite r2 in terms of r1. In particular, after some manipulation of our norm, the shaded-in region is defined by r2^2 ≤ 2 - 2√2 r1 + r1^2 = (√2 - r1)^2. Hence r2 ≤ √2 - r1, and we can evaluate the r2 integral:
$$(4\pi^2)\int_{r_1=0}^{\sqrt{2}} dr_1\, r_1 \int_{r_2=0}^{\sqrt{2}-r_1} r_2\, dr_2 = (4\pi^2)\int_{r_1=0}^{\sqrt{2}} dr_1\, r_1 \frac{(\sqrt{2}-r_1)^2}{2} = (4\pi^2)\cdot\frac{1}{6}.$$
This yields 2π^2/3, as Armin found.

Answer. I had a go at this question, but the method I tried here doesn't quite work out. It does reduce it to upper triangular matrices, although that doesn't seem to be a lot of help for general n. Let your volume be V. By scaling, the volume of the set {|A| ≤ K} is V K^{n^2}. Now let M be a matrix whose entries are independent normal random variables with mean 0 and variance 1. From the density function of the normal distribution, this gives P(|M| ≤ K) ~ (2π)^{-n^2/2} V K^{n^2} in the limit of small K.

I'll now calculate this expression in an alternative way. Use the M = QR decomposition, where Q is orthogonal and R is upper triangular, with diagonal elements λ[n], λ[n-1], …, λ[1], which are the eigenvalues of R. This can be done in such a way that λ[k]^2 has the χ^2[k] distribution (a quick google search gives this, but there's probably a better reference). The upper triangular parts of R have the standard normal density. We need to calculate |R|. I was originally thinking that this is the max eigenvalue, but it's not quite that simple.

Answer. By means of singular value decomposition, I think that the general answer for a real n×n matrix should be:
$$\text{Required volume} = \mathrm{vol}(O(n))^2 \int_{0\leq s_n \leq s_{n-1}\leq \dots \leq s_1\leq 1}\prod_{i < j} (s_i^2 - s_j^2)\, ds_1\cdots ds_n,$$
where O(n) is the n-dimensional orthogonal group.

Comment:
– Is vol(O(n)) the n(n-1)/2-dimensional measure of the set of orthogonal n×n matrices? I would like to see how you came up with this. – Darsh Ranjan Oct 21 '09 at 8:15
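The 2×2 value 2π²/3 ≈ 6.58 is also easy to sanity-check by Monte Carlo (an illustration, not from the thread): sample [-1,1]⁴ uniformly, test the explicit spectral-norm condition discussed above, and multiply the accepted fraction by the cube's volume 2⁴ = 16.

```python
import math
import random

def spectral_norm_sq_2x2(a, b, c, d):
    """Largest singular value squared of [[a, b], [c, d]]:
    (1/2) * (S + sqrt(S^2 - 4 det^2)) with S = a^2+b^2+c^2+d^2."""
    s = a*a + b*b + c*c + d*d
    det = a*d - b*c
    return 0.5 * (s + math.sqrt(max(s*s - 4*det*det, 0.0)))

random.seed(0)
trials, hits = 200_000, 0
for _ in range(trials):
    a, b, c, d = (random.uniform(-1, 1) for _ in range(4))
    if spectral_norm_sq_2x2(a, b, c, d) <= 1.0:
        hits += 1

volume_estimate = 16.0 * hits / trials  # cube volume is 2^4 = 16
print(volume_estimate, 2 * math.pi**2 / 3)
```

With this many samples the statistical error is around ±0.02, comfortably resolving 2π²/3 from, say, π²/2.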
Kristian Woodsend: 'Optimize what you write, satisfy how you write it: Integer linear programming models for text rewriting'

Speaker: Kristian Woodsend (Institute for Language, Cognition and Computation, University of Edinburgh)
Date: Tuesday, 22 January 2013
Time: 10:30
Venue: Room 1.03, ETSI Informática, UNED, c/ Juan del Rosal 16, Ciudad Universitaria, 28040 Madrid

Recent years have witnessed increased interest in data-driven methods for text rewriting, e.g., writing a document in a simpler style, or a sentence in a more concise manner. When performing inference in these natural language tasks, it is frequently the case that the decisions involved are mutually dependent. Local decision makers (such as machine-learning classifiers) have a role to play, but in order to make coherent decisions during inference, it is essential that inference take these interdependencies into account. I will be giving a tutorial on how to develop Integer Linear Programming (ILP) models for inference, using the models that we developed for text rewriting as examples. In these models, we combined the rules and predictions made through data-driven and machine learning methods with declarative knowledge expressed as constraints.

In the second part, I will go on to describe our application of these techniques to two rather old and well-studied text generation problems: simplification and multi-document summarization. Leveraging large-scale corpora such as Wikipedia, we automatically induced a quasi-synchronous tree-substitution grammar, a formalism that can naturally capture structural mismatches and complex rewrite operations. I will then present ILP models that select the most appropriate content from the space of possible rewrites generated by the grammar. Finally, I will present experimental results to show that this approach is able to produce grammatical and meaningful output. Joint work with Mirella Lapata.

Kristian Woodsend is currently a researcher at the University of Edinburgh. He has been working with Prof Mirella Lapata on natural language generation, on tasks such as summarization, generating highlights and captions, and simplification. This work has involved combining machine learning techniques with integer linear programming optimization methods, which are able to explore the whole solution space efficiently and find the global optimum. Previously, he gained his PhD in large-scale numerical optimization methods for training support vector machines. Before that, he spent several years developing software for mobile phones.
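The summarization models described in the abstract typically reduce, at their core, to a 0/1 selection problem: maximize the total score of the chosen units subject to a length budget (plus grammaticality constraints), handed to an ILP solver. As a toy illustration of just the objective-plus-budget structure — hypothetical scores and lengths, and brute-force enumeration standing in for a real ILP solver:

```python
from itertools import combinations

# Hypothetical candidates: (identifier, salience score, length)
candidates = [("s0", 5.0, 3), ("s1", 4.0, 2), ("s2", 3.0, 2)]
budget = 4  # maximum total length of the output

def best_selection(cands, budget):
    """Enumerate all 0/1 selections (an ILP solver explores this space
    far more cleverly) and keep the feasible one with the top score."""
    best, best_score = (), float("-inf")
    for r in range(len(cands) + 1):
        for subset in combinations(cands, r):
            if sum(c[2] for c in subset) <= budget:
                score = sum(c[1] for c in subset)
                if score > best_score:
                    best, best_score = subset, score
    return [c[0] for c in best], best_score

selection, score = best_selection(candidates, budget)
print(selection, score)  # ['s1', 's2'] 7.0
```

Note the global/local tension the talk mentions: the locally best sentence s0 is not part of the globally optimal selection, because taking it leaves no budget for the pair s1 + s2.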
A certain sum with q by the power of binomial (n 2)

Is there a closed form for the following sum,
$$\sum_{n=0}^{\infty} a^n q^{n(n-1)/2},$$
for all $a>0$ and $0<q<1$?

Comments:
– I don't think that there is a closed form, but there is a simple continued fraction; see formula (1.1) in math.sun.ac.za/~hproding/pdffiles/touchard-2011.pdf. – Johann Cigler May 25 '12 at 15:22
– Thanks, but how can I get rid of the element $(-1)^n$ appearing in this reference's formula (1.1)? – guy May 25 '12 at 19:43
– In a deleted answer, "Guy" asks if there is a connection to hypergeometric series. – S. Carnahan♦ May 27 '12 at 9:48

Answer (Will Sawin). There will not be a closed form for this without some special function. The reason is that there would then be a closed form for the Jacobi theta function without special functions. Let your function be $F(a,q)$; then
$$F(e^{2\pi i z+\pi i \tau},\, e^{2\pi i \tau}) = \sum_{n=0}^\infty e^{\pi i n^2 \tau + 2\pi i n z},$$
so
$$\vartheta(z;\tau) = F(e^{2\pi i z+\pi i \tau},\, e^{2\pi i \tau}) + F(e^{-2\pi i z+\pi i \tau},\, e^{2\pi i \tau}) - 1.$$
I don't know if your function can be written in terms of some already-named special function. If it were, it would have to be something somehow related to the Jacobi theta function.

Comments:
– But in the Jacobi theta function the sum starts from $-\infty$, and in the question it starts from 0. Does it still hold? – guy May 25 '12 at 8:03
– You are correct. I have turned around my answer to avoid that mistake. – Will Sawin May 25 '12 at 8:43
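The theta identity in the answer is easy to verify numerically (an illustration, not part of the thread): truncate F(a, q) = Σ_{n≥0} aⁿ q^{n(n−1)/2} after enough terms — the tails are negligible for τ = i — and compare against a direct two-sided sum for ϑ(z; τ) = Σ_{n∈ℤ} exp(πin²τ + 2πinz).

```python
import cmath
import math

def F(a, q, terms=40):
    """Partial theta function: sum_{n>=0} a^n * q^(n(n-1)/2), truncated."""
    return sum(a**n * q**(n*(n-1)//2) for n in range(terms))

def theta(z, tau, terms=40):
    """Jacobi theta: sum over n in Z of exp(pi*i*n^2*tau + 2*pi*i*n*z)."""
    return sum(cmath.exp(1j*math.pi*n*n*tau + 2j*math.pi*n*z)
               for n in range(-terms, terms + 1))

z, tau = 0.3, 1j  # Im(tau) > 0 ensures convergence
a_plus = cmath.exp(2j*math.pi*z + 1j*math.pi*tau)
a_minus = cmath.exp(-2j*math.pi*z + 1j*math.pi*tau)
q = cmath.exp(2j*math.pi*tau)

lhs = theta(z, tau)
rhs = F(a_plus, q) + F(a_minus, q) - 1  # -1 removes the doubled n=0 term
print(abs(lhs - rhs))  # effectively zero
```

The second F-term is exactly the n ≤ 0 half of the theta sum (substitute n → −n), which is why subtracting the shared n = 0 term of 1 makes the two sides agree.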
Legacy of War - OoC

Re: Legacy of War - Recruitment (Closes 10/15)

Ah... Toughness is the exception to being able to buy defenses. You can't buy up Toughness like that.

Hmm... a possible solution: under Equipment, buy your dagger as a Str-based Damage 2 effect for 2 equipment points, then buy light leather armor (Protection 1) for 1 equipment point. With your Str of 1, the dagger damage is still 3. Or possibly just check your calculations of the defenses:

Abilities: 28pts STR:1 STA:0 AGL:4 DEX:3 FGT:2 INT:2 AWE:1 PRS:0
Defenses: 19pts Ddg:7 Par:6 Frt:5 Tgh:5/1 Wil:5

Dodge: 4 Agility + 3 points = 7
Parry: 2 Fighting + 5 points = 7
Fort: 0 Stamina + 5 points = 5
Will: 1 Awareness + 4 points = 5

...is 17 points spent on it.

Take the two spare points, spend one on Stamina, and you get your Fort up to caps [6]. With the point of Stamina and the leather armor from above, you can get to Toughness 5/2 (1 Sta + 1 armor + 3 defensive roll); you can drop a defensive roll and bring your Will back to 6 to meet caps. Just a thought.

Last edited by flynnarrel on Wed Oct 17, 2012 2:35 pm, edited 1 time in total.

Re: Legacy of War - Recruitment (Closes 10/15)
Thanks, flynnarrel. I appreciate the help.

Re: Legacy of War - Recruitment (Closes 10/15)
Background is redone, if you hadn't noticed!

Re: Legacy of War - Recruitment (Closes 10/15)
We seem close enough. I'll review the changes and start the game today. It seems everybody uses Invisible Castle here, so we'll stick to that for rolling traits.

Re: Legacy of War - Recruitment (Closes 10/15)
bi0philia wrote: "background is redone if you hadn't noticed!"
Wow, the Elven family and the Dwarven family were both blacksmiths, with shops... likely rivals. Could be interesting roleplaying.

Re: Legacy of War - Recruitment (Closes 10/15)
flynnarrel wrote: "Wow, the Elven family and the Dwarven family were both blacksmiths, with shops... likely rivals. Could be interesting roleplaying."
I think it more likely the elves would be fletchers or leatherworkers. I guess that just plays to the concept of elves being close with nature and all. To be honest, I don't think a town like this can support two blacksmiths. It's pretty damn small.

Re: Legacy of War - Recruitment (Closes 10/15)
The school of warfare could have been a great customer to both, with the elves being armorers (with their fancy metallurgy) and the dwarves being the weaponsmiths. If the school was as big as you say, it could work. I never wrote as to how successful my father was at blacksmithing. We could make it work.

Re: Legacy of War - Recruitment (Closes 10/15)
I guess that could be a good way to do it.

Re: Legacy of War - Recruitment (Closes 10/15)
So... When do we start playing (assuming my rogue made the cut)?

Re: Legacy of War - Recruitment (Closes 10/15)
I'm going to post the game later tonight.

Re: Legacy of War - Recruitment (Closes 10/15)
Hey, Invisible Castle is not allowing me to log in for some reason... any ideas on how to get it fixed? I sent a message to Invisible Castle's moderators, but have received no response...

Re: Legacy of War - Recruitment (Closes 10/15)
I have no idea. Logging in for the first time now.

Re: Legacy of War - Recruitment (Closes 10/15)
Two things:
1. I'll probably be asleep "later tonight", depending on the time.
2. So I did make the cut! Huzzah!

Re: Legacy of War - Recruitment (Closes 10/15)
Yes, welcome to the team, Harry. I'm completely drained after a long day. I want the first game post to be good, so I'm going to wait to post it until morning. Sorry for the delay. Hopefully it'll be done before most of you read this / wake up.

Re: Legacy of War - Recruitment (Closes 10/15)
Harry Ham Bone wrote: "Two things: 1. I'll probably be asleep 'later tonight', depending on the time. 2. So I did make the cut! Huzzah!"

Count up your Advantages again, please. I think you have one too many.
Count up how much you've spent on Defenses again, please. I think you've spent one point too many. (If I'm right, just take the point from Defenses and put it toward paying for the extra Advantage.)
Your initiative is +4, not +5.
Interval-censored survival data with informative examination times: parametric models and approximate inference. Stat Med. 1999 May 30;18(10):1235-48. Unique Identifier : AIDSLINE We develop parametric methods for analysing interval-censored data when examination and survival times are not independent. The hazard function is modelled by introducing individual frailties related to the frequency of examinations. Model parameters may be obtained by direct maximization of the marginal log-likelihood. We develop a simpler approximate method in which the frailties are estimated by empirical Bayes. The two approaches are equivalent asymptotically as the number of examinations on each individual increases. Simulations suggest that the approximate method is adequate for estimating regression parameters even when the number of examinations on each individual is small. The methods are used to estimate age and period effects on HIV incidence in a cohort of repeat attenders at genito-urinary clinics in London, U.K. JOURNAL ARTICLE Adult Ambulatory Care Facilities/STATISTICS & NUMER DATA Computer Simulation Female Human HIV Infections/EPIDEMIOLOGY/*MORTALITY Incidence Likelihood Functions London/EPIDEMIOLOGY Middle Age Multicenter Studies Regression Analysis *Survival Analysis Survival Rate
{"url":"http://www.aegis.org/DisplayContent/print.aspx?SectionID=351020","timestamp":"2014-04-17T01:51:12Z","content_type":null,"content_length":"3685","record_id":"<urn:uuid:a9b86ff7-ae1e-41ef-8597-c6baf98784a7>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00388-ip-10-147-4-33.ec2.internal.warc.gz"}
Forcing over models without the axiom of choice up vote 10 down vote favorite In the vast majority of papers forcing is always developed over ZFC. Not surprisingly too, since infintary combinatorial principles are often used to prove results based on properties such as chain conditions, closure, and so on. I am looking for a good start on forcing over models of ZF. I have before me two papers which I have yet to read thoroughly, however may not be as useful for this purpose as I am hoping. • Grigorieff, S. Intermediate Submodels and Generic Extensions in Set Theory. The Annals of Mathematics, Second Series, Vol. 101, No. 3 (May, 1975), pp. 447-490 • Monro, G. P. On Generic Extensions Without the Axiom of Choice. The Journal of Symbolic Logic, Vol. 48, No. 1 (Mar., 1983), pp. 39-52 While I do intend to read them either way, it seems that neither develops the theory of forcing in the absolute absence of choice. I am currently looking for references which deal with such situation, or with the relation between forcing theorems proved in ZFC and the amount of choice needed for them to hold. Edit: I probably should have mentioned that I am quite familiar with permutation models of ZFA+embedding theorems and transfer theorems (Jech-Sochor, Pincus' theorem) as well with symmetric I am not looking for ways to develop forcing extensions of ZF without the axiom of choice; rather I am looking for theorems such as c.c.c forcing does not collapse cardinals and similar theorems extended to the choiceless contexts if possible, or the strength of choice needed for these theorems to hold. Consider two examples: 1. Suppose a model of ZF in which the axiom of choice does not hold. Can we, by set forcing add the axiom of choice? If not, can it be done using a machinery similar to a symmetric extension? If we can in fact find such extension, does that mean the model without choice is a symmetric extension between two larger models? 2. 
Suppose A is an infinite Dedekind-finite set, what can we say on a forcing poset based on A (either domain of functions are partial to A or the range is in A)? Can we "collapse" amorphous sets onto ordinals? Can we collapse one amorphous set onto another? And so on. set-theory axiom-of-choice forcing reference-request Well, the canonical reference is books.google.com/…. – Ricky Demer Sep 26 '11 at 15:42 (retracted, was looking at the wrong paper.) – Michael Blackmon Sep 26 '11 at 16:10 @Ricky: Are you sure? This is about consequences of AC, while I am looking for defining forcing in the lack thereof. – Asaf Karagila Sep 26 '11 at 16:32 "or with the relation between theorems proved in ZFC and the amount of choice needed for them to hold." – Ricky Demer Sep 26 '11 at 18:03 Hi Asaf, Only now I see this nice question. Busy life as usual, but I'll add something of substance if time permits. For now, the answer to 1 is no. Gitik's model where all cardinals are singular 1 is a counterexample. Woodin has shown that in the choiceless setting, from very strong large cardinal assumptions (beyond embeddings from V to V) it follows that we can recover choice, but class forcing is needed in general. It would be fabulous if one could provide structure in Woodin's setting by identifying enough of it as a symmetric model. But V is in general a class symmetric extension of HOD. – Andres Caicedo Sep 28 '11 at 6:50 show 2 more comments 4 Answers active oldest votes Arnie Miller's "Long Borel hierarchies" specifically pp 8-12 may be of interest for you. See here up vote 4 down vote Actually this entire paper is of interest to me. I also was wondering about facts true in the Feferman-Levy model. Many many many thanks! – Asaf Karagila Sep 26 '11 at 18:29 add comment The book "Theory of Semisets" by Vopenka and Hajek contains forcing constructions over models that violate AC. 
For one example: Start with the basic Fraenkel model (of ZF with atoms, ZFA --- I'm using here the terminology of Jech's "Axiom of Choice" book); it has an infinite Dedekind-finite set $A$ of atoms. Adjoin an $A$-indexed family of Cohen reals, by forcing with finite partial functions from $A\times\omega$ to 2. The pure part of the resulting model is the basic Cohen model. (In other words, instead of the usual procedure of passing to a symmetric submodel of a forcing extension, you can equivalently start with symmetry in the ground model and then just force.) This is how Vopenka and Hajek introduce the basic Cohen model. Unfortunately, I think the only way to read the Vopenka-Hajek book is straight through from the beginning, because there's a lot of notation and terminology that will make no sense if you up vote just open the book to the chapter you're interested in. 4 down vote Another nice example of forcing over choiceless models of ZFA is that Mostowski's linearly ordered model of ZFA can be obtained from the basic Fraenkel model by adding a generic linear ordering of $A$ with finite conditions. I second Francois's suggestion to look into Eric Hall's work, which builds on ideas like these and takes them a good deal farther. I looked over Eric Hall's work, and it indeed seemed impressive. I wanted in addition some more foundational book about this. As you say, his work builds on ideas like these; so I probably should start with this book. I'll grab it from the library and see what's in there later today. Thanks! – Asaf Karagila Oct 2 '11 at 6:16 add comment Friedman's book "Fine structure and class forcing" develops forcing over ZF, rather than ZFC, in chapter 2. Although chapter 1 is about fine structure, it is not used in chapter 2. up vote 3 Although the rest of his book is well above my level, I find Friedman's exposition of forcing quite helpful. down vote The online catalog of the local library shows the copy is on the shelf somewhere in the library. 
I'll go look at it tomorrow and let you know how 'spot on' this suggestion was. Thanks! – Asaf Karagila Sep 26 '11 at 19:39
Actually, now that I look at it a bit more carefully (and with coffee), I'm not sure this is what you're looking for, although I think you probably will find it interesting. I don't think Friedman spends much time really getting into what happens when AC fails; I think he just works in a little more generality than, say, Kunen. Still, it might be helpful. – Noah S Sep 26 '11 at 23:15

You may want to look at Eric Hall's papers. Regarding your question 1, I think that if $M$ is a model of SVC with $S$ (see Blass, Injectivity, projectivity, and the axiom of choice, TAMS 255), then you can force AC by wellordering the set $S$ using finite functions from $\omega$ into $S$. On the other hand, if you can force AC with a poset $P$ then the original model should satisfy SVC with $P$. So it looks like SVC is the key to forcing AC (unless you allow class forcing).
Regarding your question 2, forcing with finite injections from $\omega$ to any set $A$ which is of greater cardinality than every finite ordinal will force a bijection between $\omega$ and $A$. I suppose you could do the same to force a bijection between any two given sets $A$ and $B$ by using finite partial injections from $A$ into $B$, provided this poset satisfies the obvious density requirements. However, this might accidentally force both sets to lose certain properties.
Thanks for the references. I will be sure to dig through them. As for satisfaction: the theorem is that $M\models AC\implies M[G]\models AC$. Your first paragraph seems to imply the converse as well, which I would believe is not completely true. – Asaf Karagila Sep 27 '11 at 17:57
I think you're misreading the first paragraph. The correct statement is that if $M[G] \vDash AC$ (for some generic $G$ over some forcing poset $P \in M$) then $M \vDash SVC$ (with parameter $P$). – François G.
Dorais♦ Sep 27 '11 at 18:04
I think I should start sleeping full nights, or at least a good schlafstunde! :-) – Asaf Karagila Sep 27 '11 at 18:04
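The finite-partial-injection posets discussed in this thread can be illustrated concretely. Below is a small Python sketch (my own illustration, not from the thread; names are mine) of the poset of finite partial injections from one set to another, ordered by reverse inclusion, together with the density observation that any condition can be extended to cover a given point of the domain (as long as the target is not exhausted):

```python
# Conditions: finite partial injections p from A to B, represented as dicts.
def is_condition(p):
    """A dict is a condition iff it is injective."""
    return len(set(p.values())) == len(p)

def extends(q, p):
    """q is stronger than p (q <= p) iff q extends p as a function."""
    return all(k in q and q[k] == v for k, v in p.items())

def extend_to_cover(p, a, B):
    """Density: any condition extends to one with a in its domain."""
    if a in p:
        return p
    free = [b for b in B if b not in p.values()]
    return {**p, a: free[0]}  # possible whenever B is not exhausted

# Tiny finite stand-ins for the (infinite) sets A and B:
A, B = ["a0", "a1", "a2"], [0, 1, 2, 3]
p = {"a0": 2}
q = extend_to_cover(p, "a1", B)
assert is_condition(q) and extends(q, p)
```

A generic filter for this poset is a union of compatible conditions, and the density sets "a is in the domain" and "b is in the range" are what make that union a surjective injection; the caveat in the answer is that for a Dedekind-finite $A$ the poset may fail such density requirements.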
Symmetric group module decomposition

Let $E$ and $F$ be $n$-dimensional representations of $S_n$ of the form $V\oplus\mathbb{C}$, where $V$ is the standard representation. I just wonder if there is a formula to decompose $S^k(E\otimes F)\otimes\Lambda^m(E\otimes F)$ into irreducible modules of $S_n\times S_n$. Many thanks. rt.representation-theory co.combinatorics

1 Answer

This is very far from a complete answer, but it should give enough results to enable the irreducible constituents to be computed for some small values of $k$, $m$ and $n$. I would be amazed if there was an easy formula for the decomposition.
To clarify the notation, I'll write $\boxtimes$ for the outer tensor product, so if $U$ and $V$ are representations of $S_n$ then $U \boxtimes V$ is a representation of $S_n \times S_n$. The question asks for the inner tensor product of $S^k(E\boxtimes F)$ and $\Lambda^m(E \boxtimes F)$ where $E$ and $F$ are the natural $n$-dimensional permutation representations of $S_n$. The symmetric and exterior powers can be decomposed using the plethystic identities $$ S^k(E \boxtimes F) = \sum_\mu \Delta^\mu(E) \boxtimes \Delta^\mu (F)$$ $$ \Lambda^m (E \boxtimes F) = \sum_\mu \Delta^\mu(E) \boxtimes \Delta^{\mu'}(F)$$ where the sums are over all partitions $\mu$ of $k$ or $m$ respectively, $\Delta^\mu(E)$ is the image of $E$ under the Schur functor for the partition $\mu$, and $\mu'$ is the conjugate partition to $\mu$. One source for these identities is this article by Loehr and Remmel (see bottom of page 191). They are generalizations of the isomorphism of Schur functors used by S. Carnahan in this Mathoverflow answer. We then need to understand $\Delta^\mu(E)$ as a representation of $S_n$. I believe this is very tricky, and only a few special cases are known.
If $\mu = (k)$ then $\Delta^k(E) = S^k E$: in this case the character can be written as a sum of Young permutation characters, and the irreducible constituents determined by Young's rule. (There are also relevant results from invariant theory.) If $\mu = (1^k)$ then $\Delta^k(E) = \Lambda^k(E)$ and the character is well known to be $$ \chi_{\Lambda^k(E)} = \chi^{(n-k+1,1^{k-1})} + \chi^{(n-k,1^k)} $$ using the standard notation.
Assuming that the characters of $S^k(E\boxtimes F)$ and $\Lambda^m(E \boxtimes F)$ have been expressed as sums of irreducible characters of $S_n \times S_n$, the identity $$ (U \boxtimes V) \otimes (U' \boxtimes V') \cong (U \otimes U') \boxtimes (V \otimes V') $$ then reduces the problem to decomposing various inner tensor products of representations of $S_n$ into irreducible representations. This is yet another very tricky problem and few general results are known. This paper by Bessenrodt and Kleshchev is a good introduction. One very special case is tensor products with the $(n-1)$-dimensional standard representation: for this the identity $$ \chi^\lambda \chi^{(n-1,1)} = \mathrm{Ind}^{S_n} \bigl(\mathrm{Res}_{S_{n-1}} \chi^\lambda \bigr) - \chi^\lambda $$ combined with the ordinary branching rule is often useful.
To give a small example, suppose we want to compute the character of $S^2(E \boxtimes F) \otimes (E \boxtimes F)$. The character of $S^2 E$ is the sum of the Young permutation characters $ \pi^{(n-2,2)} + \pi^{(n-1,1)} $ and so $$\chi_{S^2 E} = 2\chi^{(n)} + 2\chi^{(n-1,1)} + \chi^{(n-2,2)}.$$ By the result on exterior powers mentioned above, $\chi_{\Lambda^2 E} = \chi^{(n-1,1)} + \chi^{(n-2,1,1)}$. This gives enough to write $S^2(E \boxtimes F)$ as a sum of irreducible characters of $S_n \times S_n$, and then the restriction / induction trick can be used to finish the calculation.
Many thanks for your help, Mark. I think I can try some examples for small k, m and n now.
– JYQ Apr 14 '12 at 20:40
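As a sanity check on the small cases in the answer, here is a quick Python computation (my own addition, not from the thread) verifying the decompositions of $S^2 E$ and $\Lambda^2 E$ for $n = 3$, using $\chi_{S^2 V}(g) = (\chi(g)^2 + \chi(g^2))/2$ and $\chi_{\Lambda^2 V}(g) = (\chi(g)^2 - \chi(g^2))/2$. For $n = 3$ the partitions $(n-2,2)$ and $(n-2,1,1)$ degenerate, so the expected answers are $S^2 E = 2\chi^{(3)} + 2\chi^{(2,1)}$ and $\Lambda^2 E = \chi^{(2,1)} + \chi^{(1,1,1)}$.

```python
from fractions import Fraction
from itertools import permutations

n = 3
perms = list(permutations(range(n)))

def fix(p):                      # character of the natural permutation rep E
    return sum(1 for i in range(n) if p[i] == i)

def compose(p, q):               # (p o q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(n))

def sign(p):                     # parity via inversion count
    s = 1
    for i in range(n):
        for j in range(i + 1, n):
            if p[i] > p[j]:
                s = -s
    return s

chi_sym = {p: Fraction(fix(p) ** 2 + fix(compose(p, p)), 2) for p in perms}
chi_alt = {p: Fraction(fix(p) ** 2 - fix(compose(p, p)), 2) for p in perms}

irreducibles = {                 # character table of S_3, indexed by partition
    "(3)": lambda p: 1,          # trivial
    "(1,1,1)": sign,             # sign
    "(2,1)": lambda p: fix(p) - 1,  # standard
}

def multiplicity(chi, irr):      # <chi, irr> = (1/|G|) sum_g chi(g) irr(g)
    return sum(chi[p] * irr(p) for p in perms) / len(perms)

sym_decomp = {la: multiplicity(chi_sym, f) for la, f in irreducibles.items()}
alt_decomp = {la: multiplicity(chi_alt, f) for la, f in irreducibles.items()}
# sym_decomp -> {'(3)': 2, '(1,1,1)': 0, '(2,1)': 2}
# alt_decomp -> {'(3)': 0, '(1,1,1)': 1, '(2,1)': 1}
```

This agrees with the formulas in the answer once the terms for invalid partitions drop out.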
Juliann plans on drawing triangle ABC, where the measure of <A can range from 50-60 and the measure of <B can range from 90-100. Given these conditions, what is the correct range of measures possible for <C? [A] 30 to 50 [B] 80 to 90 [C] 120 to 130 [D] 20 to 40

There is an important property of triangles that states: the length of a side of a triangle is less than the sum of the lengths of the other two sides and greater than the difference of the lengths of the other two sides.

Does that help?
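Since the interior angles of a triangle sum to 180°, the range for <C follows directly by interval arithmetic; a quick check (my own addition, not part of the original thread):

```python
# Angle sum: A + B + C = 180, so C = 180 - A - B.
# C is smallest when A and B are at their maxima, and largest at their minima.
a_min, a_max = 50, 60
b_min, b_max = 90, 100

c_min = 180 - a_max - b_max   # 180 - 60 - 100
c_max = 180 - a_min - b_min   # 180 - 50 - 90

print(c_min, c_max)  # 20 40, i.e. answer [D]
```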
Errata for Shafarevich's Basic Algebraic Geometry?

Is there a good errata for Shafarevich's Basic Algebraic Geometry? I don't seem to be able to find one through google. reference-request ag.algebraic-geometry

Don't know of any (is the error rate so high that it's not better to simply read the book and deal with errors as one finds them?), but here's an interesting error: in the proof of existence of enough translation-invariant 1-forms on a group variety $G$ over an alg. closed field, the argument implicitly uses the false fact that the Zariski topology on $G \times G$ is the product topology. Look in "Neron Models" for an elegant rather different (and correct) method of proof of this result (which applies more widely too). – BCnrd Nov 10 '10 at 15:50
In I.6.3, the corollary to theorem 7 is false. The correct version is either stated on the source or restricted to proper maps. This error is used in II.1.4 where the set of singular points is argued to be closed. That can be fixed by using the projective closure of the tangent spaces to get a proper projection map. In IV.1.1 the formula for intersection number seems wrong. In II.5.3 the proof of normalization of curves seems lacunary. In III.2.2, lemma, the point x must be smooth. etc....in spite of missing hypotheses, arguments, this book is excellent and the proofs can be fixed. – roy smith Nov 10 '10 at 16:31
This kind of question is reasonable and comes up periodically on MO, but ideally there would be a more dedicated place to store information on errata for books. As it is, this discussion will very soon be lost in the vast archives out of sight. Apart from that, it's always wise to specify the edition or printing in question. Publishers used to do multiple printings, often with corrections added, as well as new editions. (Not so common with today's technology.) Books in translation pose special problems: serious misprints affecting the mathematics tend to get introduced.
– Jim Humphreys Nov 10 '10 at 20:03
Are any of these errors fixed in the recent 3rd edition? – Marius Kempe Jan 18 at 21:22

2 Answers

Hey, I don't know if it is included in the errata or not: on page 10, in the proof of the representation, he notes that there are only finitely many points for which p(x_0,y_0)/q(x_0,y_0) fails — namely the common roots of the denominators of phi(t) and psi(t) — "for similar reasons", referring to q and f being coprime. But I think this is because the function p/q would become constant on k(X).

Springer printed an Errata supposedly to be included in a study edition of the text to be published in 1977. I purchased the original edition directly from Springer and they mailed me the Errata at a later date. I don't really know what transpired in the intervening years.
I have the 1974 edition and the 12 page errata booklet. They were mostly typos, which do seem to be fixed in the 1994 (2nd edition of the 1977 version), and not mathematical errors. The mathematical errors mentioned above persist through 1994. – roy smith Nov 10 '10 at 17:18
Free Differential Geometry Books

Elementary Differential Geometry: Curves and Surfaces — Asst. Prof. Martin Raussen (PDF, 160 pages)
The purpose of this course note is the study of curves and surfaces, and those are in general curved. The book mainly focuses on geometric aspects of methods borrowed from linear algebra; proofs will only be included for those properties that are important for the future development.

Projective Differential Geometry Old and New: From Schwarzian Derivative to Cohomology of Diffeomorphism Groups — Ovsienko and Tabachnikov (PDF, 281 pages)
This book is addressed to the reader who wishes to cover a greater distance in a short time and arrive at the front line of contemporary research. This book can serve as a basis for graduate topics courses. Exercises play a prominent role while historical and cultural comments relate the subject to a broader mathematical context.

Lectures on Differential Geometry — Wulf Rossmann (PDF, 221 pages)
This note covers the following subtopics of differential geometry: manifolds, connections and curvature, calculus on manifolds, and special topics.

Lectures on Symplectic Geometry — Ana Cannas da Silva (PDF, 225 pages)
This note covers the following subtopics of symplectic geometry: symplectic manifolds, symplectomorphisms, local forms, contact manifolds, compatible almost complex structures, Kahler manifolds, Hamiltonian mechanics, moment maps, symplectic reduction, moment maps revisited, and symplectic toric manifolds.
Notes on Differential Geometry and Lie Groups
Notes on Differential Geometry
Geometry and linear algebra
Notes on Differential Geometry, Lars Andersson
Lecture Notes in Differential Geometry (PS)
Natural Operations in Differential Geometry
Plane Geometry
Natural operations in differential geometry
Projective Geometry
Geometry of Surfaces
Differentiable Manifolds
Minimal surfaces in Euclidean spaces
Differential Geometry Lecture Notes
Differential Geometry: A First Course in Curves and Surfaces
Differential Geometry and Physics
Course of differential geometry
Complex Analytic and Differential Geometry
Topics in Differential Geometry
Functional Differential Geometry
Elementary Differential Geometry Lecture Notes
Differential Geometry, Csikos B.
Quick Introduction to Tensor Analysis
Introduction to Differential Forms
Complex Manifolds and Hermitian Differential Geometry
Differential Geometry Reconstructed: A Unified Systematic Framework
One of my clients recently asked me to modify an Excel model, so that the adoption of products entering the market would follow an S-curve. After some digging and googling, I came across this excellent post by Juan C. Mendez, where he proposes a clean and very practical way to use the logistic function, and calibrate it through 3 input parameters: the peak value, and the times at which the curve reaches 10% and 90% of its peak value. The beauty of his approach is that his function is compact so it can be typed in easily in a worksheet cell, and the inputs are very understandable. However, I found it a bit restrictive: transforming it for values other than 10% and 90% requires some recalibration, and more importantly, it cannot accommodate values that are not "symmetrical" around 50%. So I set to work through a generalized solution to the following problem: find an S-curve that fits any arbitrary value, rather than just 10% and 90%.

The solution

The formula I ended up with is, not surprisingly, quite a bit longer (and unpleasant) than Mendez's solution:

(I broke the formula into 4 pieces to make sure it fit on screen. The formula should be in one piece in a single cell.)

Peak represents the peak market share, i.e. the long-term value of the share of the product (called "Saturation" in Mendez's post). Value1 and Time1 represent the percentage of the Peak share that the product has already reached at time 1, and Value2 and Time2 the percentage of peak share the product has reached at time 2. Time is the time at which the function is to be evaluated.

Illustration: suppose that your product has a long-term market share of 80%, and that it will reach 50% of its peak share (i.e. 50% of 80%, that is, a 40% market share) on April 1st, 2008, and 90% of its peak share (i.e. 90% of 80%, that is, a 72% market share) on July 1st, 2012.
In that case, the parameters would be
Peak: 80%
Time1: 2008.25
Value1: 50%
Time2: 2012.5
Value2: 90%
The Excel sheet attached illustrates the curve in action.
Given how lengthy the formula for the curve is, I would recommend to consider first whether the formula proposed by Juan Mendez is sufficient for your needs, and, if you really want to go ahead with mine, to write it as a user-defined function, so that you won't have to keep such a large formula in your cells.

The math

The equation for the S-curve is given by:

$$f(t) = \frac{1}{1 + e^{-t}}$$

We need to be able to transform this curve so that we control when the growth happens, and its speed. To that effect, we will transform the original curve by adding two parameters Alpha and T0:

$$f(t) = \frac{1}{1 + e^{-\alpha (t - T_0)}}$$

In essence, T0 shifts the timeline of the curve, and alpha stretches or compresses time. The chart below illustrates the impact of these parameters on the curve. The Blue curve corresponds to the original S-Curve, with Alpha = 1 and T0 = 0. The Red curve has a value of T0 of 2, which "moves" the curve by 2 units to the right: it reaches 50% at t=T0, instead of t=0. The Green curve has a value of Alpha = 2; it still crosses 50% at t=0, but its growth happens "twice as fast" as the original curve. Where the original curve takes (roughly) 4 periods to grow from 10% to 90%, the Green curve achieves the same growth in just 2 periods.
Our goal is the following: given two values f1 and f2, and two dates t1 and t2, we want to find the two values Alpha and T0 such that f(t1) = f1 and f(t2) = f2. Playing a bit with the equation f(t1) = f1 yields the following:

$$\alpha (t_1 - T_0) = \ln\left(\frac{f_1}{1 - f_1}\right)$$

Doing the same exercise on f(t2) = f2, we end up with a system of 2 linear equations in the two unknowns Alpha and T0:

$$\alpha t_1 - \alpha T_0 = \ln\left(\frac{f_1}{1 - f_1}\right), \qquad \alpha t_2 - \alpha T_0 = \ln\left(\frac{f_2}{1 - f_2}\right)$$

That system is easily solved and gives us the following values for Alpha and T0:

$$\alpha = \frac{\ln\left(\frac{f_2}{1 - f_2}\right) - \ln\left(\frac{f_1}{1 - f_1}\right)}{t_2 - t_1}, \qquad T_0 = t_1 - \frac{1}{\alpha}\ln\left(\frac{f_1}{1 - f_1}\right)$$

6/6/2008 12:36:38 PM #
Hi Mathias, I've just left a message on Juan's page and saw your generalised solution and thought I'd leave a similar message. I use an interactive Bass curve dashboard to get everyone involved in the 'what if' sensitivity analysis process. There is a link to it on my blog page acasoanalytics.wordpress.com/.../. I'll have a go at doing the same with your approach.
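The closed-form solution for Alpha and T0 translates directly into code. Here is a short Python sketch (my own addition, not part of the original post; function names are mine) computing the two parameters from the calibration points, with Value1/Value2 expressed as fractions of Peak as in the post:

```python
import math

def logit(p):
    """ln(p / (1 - p)), the inverse of the logistic function."""
    return math.log(p / (1.0 - p))

def s_curve(t, peak, time1, value1, time2, value2):
    """Logistic curve reaching value1*peak at time1 and value2*peak at time2."""
    alpha = (logit(value2) - logit(value1)) / (time2 - time1)
    t0 = time1 - logit(value1) / alpha
    return peak / (1.0 + math.exp(-alpha * (t - t0)))

# The example from the post: 80% peak share, 50% of peak in Q2 2008,
# 90% of peak in Q3 2012.
share_2008 = s_curve(2008.25, 0.80, 2008.25, 0.50, 2012.5, 0.90)  # ~0.40
share_2012 = s_curve(2012.5, 0.80, 2008.25, 0.50, 2012.5, 0.90)   # ~0.72
```

By construction the curve passes exactly through both calibration points, matching the 40% and 72% market shares of the worked example.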
I use an interactive Bass curve dashboard to get everyone involved in the ‘what if’ sensitivity analysis process. There is a link to it on my blog page acasoanalytics.wordpress.com/.../. I’ll have a go at doing the same with your approach. 7/30/2012 9:51:28 PM # Hi Mathias: I have been reading this S curve discussion from this site. Thanks for sharing good information. However, I am having difficulty in converting this example for a project schedule. For example, My Y axis on the right will contain percentages 0% to 100% and my x axis will include months e.g Jan, Feb, Mar My data will look like the below, any insights on how to get an S curve will greatly help!! Main Task 1 Start date 1 End Date 1 Main Task 2 Start date 2 End Date 2 Main Task 3 Start date 3 End Date 3 6/15/2008 5:54:38 AM # Pingback from acasoanalytics.wordpress.com Intuitive Bass Diffusion « Acaso Analytics 6/15/2008 6:09:36 AM # Hi Mathias, I've just posted an interactive dashboard of your S curve. The link is here acasoanalytics.wordpress.com/.../ Took me a little while to troubleshoot as the software doesn't support named ranges (only cells) so it kept returning a 'not a number' error. Seems to work fine now. Cheers - Billy 6/26/2008 10:00:15 PM # @ Billy Boyle: very effective (and visually pleasing) way to illustrate the impact of parameters on the shape of the Bass curve. Definitely let me know if you end up creating a dashboard for "my" 7/31/2012 3:40:39 PM # Hi Mathias: I have been reading this S curve discussion from this site. Thanks for sharing good information. However, I am having difficulty in converting this example for a project schedule. For example, My Y axis on the right will contain percentages 0% to 100% and my x axis will include months e.g Jan, Feb, Mar My data will look like the below, any insights on how to get an S curve will greatly help!! 
Main Task 1 Start date 1 End Date 1 Main Task 2 Start date 2 End Date 2 Main Task 3 Start date 3 End Date 3 7/18/2009 5:15:55 AM # Very nice formula to get the S-curve. However these curves do not take initial adoption (adoption at t=0) as an input and probably assumes it to be 0. How should I modify the formula so that it takes the initial adoption rate as input as 7/18/2009 5:50:37 AM # @rana: thanks for the feedback. Strictly speaking, if you are really talking about adoption, the initial value should be 0%, because no one has adopted yet. I can understand your question two ways. First, there could already be a population that adopted, and the "new" adoption is adding to it. In that case, replace the formula by InitialValue + formula, with Peak value replaced by Peak-InitialValue. Now if you want the initial rate of growth to be set to a value you define yourself, that's a more complicated enterprise... 9/1/2009 6:19:24 PM # Hi Mathias, I have used your excel file for a thesis work @ universtiy or Lugano and ETH Zurich, i have been working on Innovation as a theory for a case study on virtual world, i have used your excel file for generating the S-curve. Hope thats alright. Please let me know if possible. Thanks in advance ! 9/1/2009 6:42:25 PM # Hi Tom, I am very flattered! Thank you, and yes you have my blessing to use that file, and... I'd love to hear more about your research. 9/23/2009 3:41:33 PM # Hi Mathias, I found this very useful indeed after spending a few hours googling the S-Curve. I am trying to convert this into a payment profile for a model in the construction industry where saturation would be 100% and adoption 0%. How would you need to change the formula to force saturation at a specified date to replicate a payment profile with a start and end date? 9/24/2009 4:23:23 AM # Hi Bertus, If I understand you correctly, you want a curve which begins at 0% and ends at 100% over time, matching certain levels over time. 
There is one issue here, which is that the logistic curve never reaches 0% or 100%: the curve tends to 0%, and to 100% if you set the peak value to 100% - so you won't be able to fit a curve starting at 0% and ending at 100%. What I would suggest you do is to set the peak value at 100%, and set value1 and value2 close to 0% and close to 100%, at the dates you desire. Hope this is helpful!

10/21/2009 12:24:47 PM #
Thank you so much for this. I started with Juan C. Mendez' function, but struggled with exactly the same things you indicated. Thank you so much for providing this solution, and even more for explaining it clearly so everyone can implement it.

10/21/2009 4:37:53 PM #
Thank you for the encouragement, Mercedes - and I am glad it was helpful!

11/19/2009 5:58:58 AM #
Hello Mathias, by using your Excel program, how can I control/change the x-axis range (time)? You set the range from Q1 2000 to Q4 2010, which gives 41 units on the x-axis; but I wanted more units on the x-axis. Will appreciate your help.

11/27/2009 4:29:06 AM #
Billy Boyle's interactive model is nice, but I do believe that p & q are mislabelled: p is the coefficient of innovation, q is the coefficient of imitation.

11/27/2009 6:36:20 PM #
@ Khalil: my apologies for the late reply. To add more periods, you need to do 2 things: 1) just drag down the last 2 rows of calculations, so that more periods and formulas are added. At that point, you should see an error, because the formulas refer to the named range "Time", which doesn't get automatically extended when you drag down the cells. 2) The "time" range is initially A14:A54: you want to edit that named range, so that it covers all your new values in column A. Hope this helps!

11/27/2009 6:43:54 PM #
@Matt: I believe you are right - you should probably contact him! It's a totally minor issue, but hey, why not aim for perfection...

11/27/2009 11:02:12 PM #
thank you, this is very good information...
and useful.

9/23/2010 11:04:31 AM #
Having a recordset with data that clearly seemed to be distributed in an s-shaped fashion, I started searching on the internet to find a logistic regression method to apply to these data. After having searched in vain for an awful lot of time I finally found this very helpful page. I found especially the second equation from the math section most helpful. In Excel I then used the Solver add-in to estimate the alpha and T0 parameters, using them in a least sum of squares problem. In the end I obtained a correlation of 0.996 between the observed data and the s-curve, which is an extremely satisfying result.

4/18/2011 12:10:16 AM #
Hi Mathias, thanks for this nice post! I also was wondering how to make more units on the x-axis? Any idea?

8/6/2011 10:44:12 PM #
Hey Mathias, my name is Dave and I came across your S-Curve Excel spreadsheet. I am attending the University of Maryland taking a technology management class and would like to have your permission to use your S-Curve Excel spreadsheet.

9/18/2011 9:37:18 PM #
Hey Mathias, I am Vijay. I find your S-curve interesting, and I would like to use it; hope you do not mind. I have the scenario given below: Time1=25, Value1=50, Time2=70, Value2=90, and Peak=100, and I want to see this peak at Time=100; from then on, the value should stabilize at 100. How can I change your curve to fit my constraints? Any suggestions would be appreciated.
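The Solver-based least-squares fit mentioned in the 9/23/2010 comment can also be done in closed form: if the peak is known, logit(y/peak) is linear in t, so alpha and T0 fall out of an ordinary least-squares line fit. A small Python sketch (my own illustration; the function name is mine):

```python
import math

def fit_s_curve(ts, ys, peak):
    """Least-squares fit of y = peak / (1 + exp(-alpha*(t - t0))).

    Uses the linearization logit(y/peak) = alpha*(t - t0), which turns
    the fit into a straight-line regression of logit(y/peak) against t.
    """
    zs = [math.log((y / peak) / (1 - y / peak)) for y in ys]
    n = len(ts)
    mt = sum(ts) / n
    mz = sum(zs) / n
    alpha = (sum((t - mt) * (z - mz) for t, z in zip(ts, zs))
             / sum((t - mt) ** 2 for t in ts))
    t0 = mt - mz / alpha
    return alpha, t0

# Exact synthetic data: the fit should recover alpha = 2, t0 = 3.
ts = [1, 2, 3, 4, 5]
ys = [1 / (1 + math.exp(-2 * (t - 3))) for t in ts]
alpha, t0 = fit_s_curve(ts, ys, peak=1.0)
```

Note that least squares in logit space weights the tails of the curve more heavily than least squares on the raw values, so the two approaches can give slightly different parameters on noisy data.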
5/20/2012 4:31:34 PM #
Hello Mathias, wow... what an awesome resource. I guess I am not clear on where you go to make the change to Time1 and Time0.

5/27/2012 12:42:45 PM #
Thanks for the words of encouragement! I believe you should be able to change T0 and T1 at the top of the spreadsheet without too much problem - or am I misunderstanding your issue?

5/29/2012 5:22:57 PM #
This is beautiful... Keep up the great work!

6/5/2012 11:42:03 AM #
This was a great help. In a much more abstract sense, this is akin to the Pareto principle. In this case, close to 80 percent of market adoption happens during (the middle) 20 percent of the time. Various other factors can be added onto this based on specifics of the product type, market, etc. I used it as a baseline (instead of a linear baseline) for city growth planning to target population 30 years from now.

3/5/2013 6:51:47 AM #
This article was extremely helpful. If my company ever needs this kind of help again, I know just the consulting firm. =) Thanks again.

3/22/2013 3:26:07 PM #
Thanks Quintin - and glad it helped!

8/21/2013 6:33:04 AM #
I am wondering if you have combined Mr. Mendez's approach to modeling the adoption curve on an annual approach, basically showing the time derivative of your total market adoption curve, from the point of view of the input parameters.
I need to model and integrate over time the monthly revenue of a product for a curve defined by the max annual sales and the percentage of max sales at two points in time. Can you tell me if this has been done by someone already?

8/24/2013 4:45:55 AM #
Hi Scott, I don't know of an existing implementation - good luck with your project!
This module provides unification and matching in an Abelian group. In this module, an Abelian group is a free algebra over a signature with three function symbols:
• the binary symbol +, the group operator,
• a constant 0, the identity element, and
• the unary symbol -, the inverse operator.
The algebra is generated by a set of variables. Syntactically, a variable is an identifier such as x and y (see isVar). The axioms associated with the algebra are:
x + y = y + x
(x + y) + z = x + (y + z)
Group Identity: x + 0 = x
x + -x = 0
A substitution maps variables to terms. A substitution s is applied to a term as follows.
• s(0) = 0
• s(-t) = -s(t)
• s(t + t') = s(t) + s(t')
The unification problem is: given terms t and t', find a most general substitution s such that s(t) = s(t') modulo the axioms of the algebra. The matching problem is to find a most general substitution s such that s(t) = t' modulo the axioms. Substitution s is more general than s' if there is a substitution s" such that s' = s" o s.

data Term
A term in an Abelian group is represented by the group identity element, or as the sum of factors. A factor is the product of a non-zero integer coefficient and a variable. No variable occurs twice in a term. For the show and read methods, zero is the group identity, the plus sign is the group operation, and the minus sign is the group inverse.
Instances: Eq Term, Read Term, Show Term

isVar :: String -> Bool
A variable is an alphabetic Unicode character followed by a sequence of alphabetic or numeric digit Unicode characters. The show method for a term works correctly when variables satisfy the isVar predicate.

Equations and Substitutions

newtype Equation
An equation is a pair of terms. For the show and read methods, the two terms are separated by an equal sign.
Instances: Eq Equation, Read Equation, Show Equation

data Substitution
A substitution maps variables into terms.
For the show and read methods, the substitution is a list of maplets, and the variable and the term in each element of the list are separated by a colon.
Instances: Eq Substitution, Read Substitution, Show Substitution

Unification and Matching

unify :: Monad m => Equation -> m Substitution
Given Equation (t0, t1), return a most general substitution s such that s(t0) = s(t1) modulo the equational axioms of an Abelian group.

match :: Monad m => Equation -> m Substitution
Given Equation (t0, t1), return a most general substitution s such that s(t0) = t1 modulo the equational axioms of an Abelian group.
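To make the substitution action described above concrete, here is a small Python sketch (my own illustration, not the agum API) that represents a term as a mapping from variables to non-zero integer coefficients — so 2x - y is {'x': 2, 'y': -1} — and applies a substitution homomorphically:

```python
from collections import Counter

def apply_subst(s, t):
    """Apply substitution s (var -> term) to term t (var -> coefficient).

    Uses s(t + t') = s(t) + s(t') and s(-t) = -s(t): each factor c*v
    becomes c times the image of v, and like factors are collected.
    Coefficients that cancel to 0 are dropped, so {} is the identity 0.
    """
    out = Counter()
    for v, c in t.items():
        image = s.get(v, {v: 1})       # unmapped variables stay fixed
        for w, d in image.items():
            out[w] += c * d
    return {w: c for w, c in out.items() if c != 0}

# s maps x to y + z; then s(2x - y) = 2(y + z) - y = y + 2z
t = {"x": 2, "y": -1}
s = {"x": {"y": 1, "z": 1}}
result = apply_subst(s, t)  # {'y': 1, 'z': 2}
```

The "no variable occurs twice" invariant of the Term representation is exactly the like-term collection done by the Counter here; cancellation to the empty dict corresponds to a term collapsing to the group identity 0.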
Science Buddies: "Ask an Expert"

Re: Hair Elasticity

I lost you at "I need to find area in meters," since area would be in meters squared (m^2), not meters. A bundle of 100 hairs will have 100 times the cross-sectional area of one hair, and since area goes as diameter squared, the bundle of 100 will have 10 times the diameter of one hair. Thus the diameter of one hair = (1/10) diameter of the bundle of 100 = (1/10) * 1 mm = 0.1 mm = 1E-4 m = 100 micrometers.

Re: Hair Elasticity

So in order to find stress I would divide Force (newtons) by Area (0.0000000000785 m), which will give me stress in pascals. If 1 pascal = 1 Pa = 1 N/m^2, is the area 0.0000000000785 m or m^2?

alondra011 wrote: So in order to find stress I would divide Force (newtons) by Area (0.0000000000785 m), which will give me stress in pascals. If 1 pascal = 1 Pa = 1 N/m^2, is the area 0.0000000000785 m or m^2?

If you compute using scientific notation you are less likely to make arithmetical errors. Myself, I cannot decipher expressions like "0.0000000000785".

Re: Hair Elasticity

Thank you so much for everybody's help and support. My science fair came out super great.

Re: Hair Elasticity

alondra011 wrote: So what I did was that I made a bundle of 100 hairs and measured it. It was 1 mm, so I divided 1/100 = 0.01. I need to find the area in meters, so I converted 0.01 mm to meters, which equals 0.00001 m, since we divided 0.01/1000. 0.00001 m is the diameter, but we need the radius to calculate the area, so it equals 0.000005 m. Am I correct?

What justification do you have for dividing by 100 this early in the calculations? If the diameter of a bundle of 100 hairs is 1 mm, the radius of the bundle is 0.5 mm (radius = 1/2 the diameter), and the area of the bundle is pi*(0.5*0.5) = approximately 0.785 sq mm. Now you get to divide this area by 100 hairs to get the cross-sectional area of the average hair in the bundle = 0.00785 sq mm, which is 0.00000000785 sq m (7.85 x 10**-9 sq m). This cross-sectional area in sq m is what you need in the formula.
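As a scientific-notation check of the thread's calculation, here is a small sketch. The function names are mine, not from the thread; the only physics assumed is that the cross-section of a hair is a circle (area = pi * r^2) and that stress = force / area.

```python
import math

def hair_cross_section_area_m2(bundle_diameter_mm: float, n_hairs: int) -> float:
    # Area of the whole bundle (circle: pi * r^2), then split evenly
    # among the hairs; 1 mm^2 = 1e-6 m^2.
    r_mm = bundle_diameter_mm / 2.0
    bundle_area_mm2 = math.pi * r_mm ** 2
    return bundle_area_mm2 / n_hairs * 1e-6

def stress_pa(force_n: float, area_m2: float) -> float:
    # Stress = force / cross-sectional area; 1 Pa = 1 N/m^2.
    return force_n / area_m2
```

A 1 mm bundle of 100 hairs gives roughly 7.85e-9 m^2 per hair, which is the value to plug into the stress formula.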
MathOverflow user Qingyun (Washington University in St Louis; member for 3 years, 6 months; last seen May 1 '13; 265 profile views). Recent activity:

Feb: comment on "system of homogeneous matrix equations": I do not know the background of this equation; the person who asked me this problem is working in algebraic geometry, which I know nothing about. Your answer is very helpful. Are these the only solutions?

Feb: comment on "system of homogeneous matrix equations": I see. As a corollary of your conclusion, there is no solution if $n$ does not divide the size of the matrices, and $xA+yB$ will have distinct eigenvalues if $n$ equals the size of the matrices. Am I right?

Feb: comment on "system of homogeneous matrix equations": Sorry for not making the question clear; I am looking for matrices $A,B$ such that the identity holds for all $x,y$. I guess my terminology is incorrect.

Feb: asked "system of homogeneous matrix equations".

Jan: comment on "Interesting examples of minimal action on torus": @Alain Valette @Michele Triestino Thanks!

Jan: comment on "Interesting examples of minimal action on torus": @Lee Mosher Yes, you are right; thanks for pointing this out. The correct statement should be that the functions $f_i$ are in suitable homotopy classes other than the one containing constant functions. The details are in Theorem 2.1 of Furstenberg's paper "Strict Ergodicity and Transformation of the Torus" and the remark after it.
Jan: asked "Interesting examples of minimal action on torus".

Jan: asked and accepted an answer to "Finite projection in Von Neumann algebra".

Asked and accepted an answer to "Integral interpolation by polynomial".

Apr: comment on "About Turan's problem (inequality) in multivariable": Since F(n) is homogeneous of degree 0, F(n) always has a minimum point. If we take the gradient and set it to 0, we get a system of homogeneous polynomial equations (k equations and k variables); it seems that we should be able to solve it and thereafter find the minimum of F(n), but I've no idea how to deal with such a system.

Mar: accepted an answer to "A Perturbation problem for U(n)".
Lesson Ideas

Dismayed by decimals? Don't be! In this movie, Tim and Moby introduce you to the mysteries of these special numbers. Learn where the word "decimal" comes from, and find out how many different kinds of decimal units you can form out of a whole (hint: it's a lot!). Discover how to count numbers smaller than one, and learn why you'll need a decimal point, and where it goes. Find out six ways you can use decimals in everyday life, as well as two reasons that decimals can make math easier. Finally, learn about three different kinds of decimals, including a very special kind that goes on forever without repeating. You'll never fear decimals again!

In this lesson plan, which is adaptable for grades 3-8, students explore ratios and proportions using an online math game called Ratio Rumble. Students will identify ratios when used in a variety of contextual situations and explain why ratios and rates naturally relate to fractions and decimals. This lesson plan is aligned to Common Core State Standards.

In this lesson plan, which is adaptable for grades 1-5, students will use an online math game to practice creating number combinations (such as whole numbers which add up to 10, or decimals which add up to 1). Students then create their own version of the game using hands-on materials. This lesson plan is aligned to Common Core State Standards.

In this lesson plan, which is adaptable for grades 2-5, students will use BrainPOP Jr. and BrainPOP resources (including an online math game) to practice multiplying whole numbers and/or decimals. Students will identify patterns within a multiplication table and create their own multiplication tables with unique patterns. This lesson plan is aligned to Common Core State Standards.

In this lesson plan, which is adaptable for grades 3 through 8, students use BrainPOP resources and hands-on collaborative activities to order fractions and decimals.
Students then practice and apply fraction and decimal concepts through online interactive game play. This lesson plan is aligned to Common Core State Standards.

In this lesson plan, which is adaptable for grades 3-12, students use BrainPOP resources to explore mathematical concepts such as whole numbers, decimals, and fractions. Students will use interactive game play to understand relationships between numbers and estimate positions on a number line. This lesson plan is aligned to Common Core State Standards.

In this lesson plan, which is adaptable for grades 3 through 8, students use BrainPOP resources to understand and apply the associative property. Students will use an online interactive game to practice rounding strategies and adding numbers in different combinations to create a target amount. This lesson plan is aligned to Common Core State Standards.

In this lesson plan, which is adaptable for grades 3-12, students work collaboratively to research selected math skills. Students then create, play, and assess a math game that is designed to apply and reinforce their selected math concept. This lesson plan is aligned to Common Core State Standards.
Rob Gross
Department of Mathematics
Boston College
Chestnut Hill, MA 02467-3806
(617) 552-3758
Associate Professor of Mathematics

This web page contains my educational history, employment history, information about my books, some publications, information about courses that I'm teaching this year (2013-2014), information about Ideas in Math: The Grammar of Numbers, a course that Michael Connolly and I taught in the spring of 1998, and other useful stuff.

B.A., 1979, Princeton University.
Ph.D., 1986, Massachusetts Institute of Technology. Thesis advisor: Joseph Silverman.
Associate Professor of Mathematics.

Employment history:
• Massachusetts Institute of Technology, Teaching Assistant, 1979-1983
• Northeastern University, Instructor, 1983
• Boston College, Instructor, 1984-6
• Boston College, Assistant Professor, 1986-93
• Boston College, Associate Professor, 1993-present
• Boston University, Visiting Associate Professor, 1993-4, 2000-1

Books:
• Elliptic Tales: Curves, Counting, and Number Theory, with Avner Ash. Princeton University Press, 2012. Reviews and errata.
• Fearless Symmetry: Exposing the Hidden Patterns of Numbers, with Avner Ash. Princeton University Press, 2006, paperback 2009. Reviews and errata.
• Getting Started with Mathematica®, with C-K. Cheung, G.E. Keough, and Charles Landraitis.
• Contributor to Standard Mathematical Tables and Formulæ, Thirty-first Edition, edited by Daniel Zwillinger, CRC Press, 2003, New York.
• Contributor to Standard Mathematical Tables and Formulæ, Thirtieth Edition, edited by Daniel Zwillinger, CRC Press, 1996, New York.

Publications:
• "Frequencies of Successive Pairs of Prime Residues," with Avner Ash, Laura Beltis, and Warren Sinnott, Experimental Mathematics, 20:4, 2011, 400-411. Click here for PDF format.
• "Frequencies of Successive Tuples of Frobenius Classes," with Avner Ash and Brandon Bate, Experimental Mathematics, 18:1, 2009, 55-63. Click here for PDF format.
• "Prime Specialization in Genus 0," with Brian Conrad and Keith Conrad, Transactions of the American Mathematical Society, 360:6, June 2008, 2867-2908. Click here for PDF format.
• "Generalized Non-abelian Reciprocity Laws: A Context for Wiles's Proof," with Avner Ash, Bulletin of the London Mathematical Society, 32, 2000: 385-397. Click here for PDF format.
• "A Generalization of a Conjecture of Hardy and Littlewood to Algebraic Number Fields," with John H. Smith, Rocky Mountain Journal of Mathematics, 30:1, Spring 2000: 195-215. Click here for PDF format.
• "S-Integer Points on Elliptic Curves," with Joseph Silverman, Pacific Journal of Mathematics, 167, 1995: 263-288. Click here for PDF format.
• "On the Integrality of Some Galois Representations," Proceedings of the American Mathematical Society, 123:1, January 1995: 299-301. Click here for PDF format.
• "A Note on Roth's Theorem," Journal of Number Theory, 36:1, September 1990: 127-132. Click here for PDF format.
• "Antigenesis: A Cascade Theoretical Analysis of the Size Distribution of Antigen-Antibody Complexes: Applications of Graphs in Chemistry and Physics," with John Kennedy, Lou Quintas, and Martin Yarmush, Discrete Applied Mathematics, 19:1-3, 1988: 177-194.
• Supplementary notes to the Harvard Calculus Text, covering infinite series. Click here to get the file in PDF format.

Current Courses

MT004.02: Finite Probability and Applications

Not open to students who have completed their Mathematics Core Curriculum Requirement without permission of the Department Undergraduate Vice Chair (except for Psychology majors completing their second mathematics co-requisite). This course, designed for students in the humanities, the social sciences, and the School of Education, is an introduction to finite combinatorics and probability, emphasizing applications. Topics include finite sets and partitions, enumeration, probability, expectation, and random variables. Class home page.
MT180.02: Principles of Statistics for the Health Sciences

This course introduces statistics as a liberal discipline and applies the principles of statistics to problems of interest to health sciences professionals. Students will gain an understanding of statistical ideas and methods, acquire the ability to deal critically with numerical arguments, and gain an understanding of the impact of statistical ideas on the health sciences, public policy, and other areas of application. Class home page.

MT410.02: Differential Equations

Prerequisites: MT202 or equivalent multivariable calculus course, and MT210 or equivalent linear algebra course. This course is an elective intended primarily for the student interested in seeing applications of mathematics. Among the topics covered will be first order differential equations, higher order linear differential equations with constant coefficients, linear systems, and Laplace transforms. If time permits, we will cover stability of solutions of systems of differential equations. Class home page.

MT426.02: Probability

This course provides a general introduction to modern probability theory. Topics include probability spaces, discrete and continuous random variables, joint and conditional distributions, mathematical expectation, the central limit theorem, and the weak law of large numbers. Class home page.

In the spring of 1998, Michael Connolly (the chair of the Department of Slavic and Eastern European Languages) and I taught a course called MT007/SL266 Ideas in Mathematics: The Grammar of Numbers. It had no prerequisites, and was a core mathematics course for non-math and non-science majors. This one-semester course studied the role of numbers, number names, and number symbols in various cultures. Topics include number mysticism, symbolism in religion and the arts, elementary number theory, number representations, and calendars. Texts: The Magic Numbers of Doctor Matrix, Martin Gardner.
Number Words and Number Symbols: A Cultural History of Numbers, Karl Menninger. Click here for more information.

My .emacs file, for use with unix and emacs. Download.

Math Department Home Page
Kirchhoff Rule Sign Conventions, Need Clarity

I have re-ordered your post somewhat, on grounds that it's important to address your last statement first.

"I really need to make sure my use of potential difference and potential is not convoluted, because I tend to misuse the terms."

And then there is potential as well. Potential is analogous to elevation, and we can talk about the potential at a specific location. There is the complication that we can add an arbitrary constant to the potential, as long as we add the same constant at all locations. Imagine asking for the elevation at the top of a building: do you mean relative to the ground just outside the building, or relative to sea level? You'll come up with different values for the elevation depending on which reference point is used. So there has to be an implied or understood reference point if you are going to give a numerical value to the potential or elevation at some specific location.

Potential difference (P.D.) means we are comparing the potentials at two locations. It would be nonsensical to talk about the P.D. at a single location -- unless there is some implied or understood reference point that the potential at the location in question is being compared to, in which case we are really talking about the P.D. between the location of interest and the location of the reference point. (For P.D., the arbitrary-constant issue we have with potential is no longer an issue, because any constant value added to two locations will cancel when we do the subtraction to calculate the P.D. between those locations.)

The potential gradient is simply the rate of change, or slope, of the potential, and we talk about the potential gradient at a single location. (It's equal in magnitude to the electric field, which points toward lower potential.)

"So for 3, because we are going in the direction of the assumed current, current always flows from the high potential to the low potential, ..."

Yes, correct (for resistors anyway).
"... just as when we consider a positive point charge, the potential gradient weakens as we move further away. So the RI term is -RI, which indicates the potential difference at that location is negative."

It's more a matter of how the potential itself, not the gradient, is changing. In a typical resistor, the potential gradient is pretty constant -- i.e., the potential itself has a pretty constant slope -- and current is from higher potential "down the slope" to a lower potential.

"For 4, because we are going opposite the direction of current, we are moving further away from a lower potential difference and closer to a higher potential difference. Like traversing a potential gradient from infinity to the source, it gets stronger and stronger and the potential is higher."

Actually, it is that we are moving from a lower potential to a higher potential. As I mentioned before, the actual potential gradient (i.e., the slope of the potential) is pretty constant within the resistor material.

Hope that helps.
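The sign convention under discussion can be checked numerically. This is my own illustrative snippet, not from the thread: crossing a resistor with the assumed current contributes -I*R to the running sum of potential changes, crossing against it contributes +I*R, and Kirchhoff's voltage law says the sum around a closed loop is zero.

```python
def loop_sum(emf, resistances, current, with_current_flags):
    # Sum of potential changes around one loop: start with the EMF rise,
    # then add -I*R when crossing a resistor in the direction of the
    # assumed current (potential drops) and +I*R when crossing against it.
    total = emf
    for resistance, with_current in zip(resistances, with_current_flags):
        total += -current * resistance if with_current else current * resistance
    return total
```

For a 9 V source driving 2-ohm and 1-ohm resistors in series, the current I = 9/3 = 3 A makes the loop sum vanish, whichever direction the loop is traversed.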
finite, separable

March 1st 2010, 01:43 PM

I'm trying to prove the following theorem. Suppose K, L, M are fields with K a subset of L and L a subset of M, such that M:K (field extension) is algebraic and separable. Then both L:K and M:L are algebraic and separable.

If M:K is algebraic then it is finite. Since M:K is finite, the tower law says that both L:K and M:L are finite and hence algebraic. Since M:K is separable, every element of M is separable over K, so the minimal polynomial of every element of M is separable over K. Since L is a subset of M, every element of L is separable over K also, so L:K is separable.

If I can show that the minimal polynomials over K of these elements of M are also irreducible over L, then I'm done, since then these minimal polynomials will still have no multiple zero. I need help in showing M:L is separable. Also, is what I have done so far correct? Any help appreciated, thanks.

EDIT: Actually, I don't think "if I can show that the minimal polynomials over K of these elements of M are also irreducible over L" is always true.

EDIT 2: I wonder if I can do the following. Suppose $\alpha \in M$ and m(x) is its minimal polynomial over K[x]. If m(x) is irreducible over L[x] then all's good. If not, then $m(x)=m_1(x)m_2(x)m_3(x)\cdots m_n(x)$ where $m_i(x) \in L[x]$; then one of the $m_i(x)$ is the minimal polynomial of $\alpha$ over L, and it has no multiple zero, since if it did, so would m(x).

March 1st 2010, 03:28 PM

"If M:K is algebraic then it is finite": the converse is true, but, for instance, the algebraic closure of a prime field ($\mathbb{F}_p$ or $\mathbb{Q}$) is an algebraic non-finite extension. Your edit 2 is good; perhaps just use that, given an element of $M$, its minimal polynomial over $L$ divides its minimal polynomial over $K$ (this does not need the factoriality of $L[X]$).

March 1st 2010, 04:46 PM

Agreed, I think I was thinking of splitting fields.
So maybe I should say: if every element of M is algebraic over K, then since L is a subset of M, every element of L is algebraic over K. Also, every element of M is algebraic over K, and so it must also be algebraic over L, since K[x] is a subset of L[x]. So it's actually simpler.
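The divisibility hint from the second post can be written out as a short sketch (my wording, following the thread's argument):

```latex
% Sketch: separability of M:K implies separability of M:L.
% Let \alpha \in M, with minimal polynomials m_K \in K[x] and m_L \in L[x].
% Since K[x] \subseteq L[x] and m_K(\alpha) = 0, the minimal polynomial
% m_L divides m_K in L[x].  Because M:K is separable, m_K has no repeated
% roots in a splitting field, hence neither does its factor m_L.
% Thus every \alpha \in M is separable over L, so M:L is separable.
m_L \mid m_K \ \text{in } L[x]
\quad\Longrightarrow\quad
m_K \text{ has no repeated roots}
\ \Rightarrow\
m_L \text{ has no repeated roots}.
```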
In general, a 2-limit is the sort of limit appropriate in a (weak) 2-category. See 2-limit for details. However, when we happen to be in a strict 2-category we also have another notion at our disposal. Since strict 2-categories are just categories enriched over Cat, we can apply the usual notions of weighted limits in enriched categories verbatim. (Historically, these were called 2-limits, while the up-to-equivalence limits were called bilimits.) Because enriched category theory doesn't know anything about the 2-categorical nature of Cat, the resulting limits can have cones that commute strictly and have universal properties expressed by isomorphisms of categories; thus they can be evil. However, such strict limits often turn out to be technically useful even if we are fundamentally only interested in the non-strict notions, since in many strict 2-categories we can use tools of enriched category theory to construct strict limits, and then by considering suitably non-evil strict limits we can construct (non-strict) limits. This is reminiscent of the use of strict structures in homotopy theory as a tool to get at weak ones, and in fact a precise comparison can be made (see below).

By a limit we will mean the fully 2-categorical notion described at 2-limit, in which cones commute up to isomorphism and the universal property is expressed by an equivalence of categories.

It just occurred to me that "strict initial object" conflicts with this. But unlike "weak limit", that doesn't generalise very far.

Heh, you're right. I suppose we could try calling strict initial objects stable initial objects, which would make more sense anyway, since they are really the 0-ary version of a stable coproduct. But there's probably not likely to be any real confusion created by the two uses of "strict".
• A strict 2-limit (or just strict limit) in a strict 2-category is just a Cat-enriched (weighted) limit. This means that its cones must commute strictly (although weakness can be built in via the weighting, see below), and its universal property is expressed by an isomorphism of categories. Note that a strict limit is not necessarily a limit, because it may be evil (cf. the red herring principle).

• A pseudo limit (or strict pseudo limit if it is necessary to emphasize the strictness) is a limit whose cones commute up to coherent 2-cell isomorphism, but whose universal property can still be expressed by an isomorphism of categories. For any weight $W$, there is another weight $W'$ (a cofibrant replacement of $W$) such that pseudo $W$-weighted limits are equivalent to strict $W'$-weighted ones. The idea is that $W'$ includes explicitly all the extra isomorphisms in a pseudo $W$-cone. Since any isomorphism of categories is a fortiori an equivalence of categories, any pseudo limit is also a limit.

• A strict lax limit is a limit whose cones commute only up to a coherent transformation in one direction, but again whose universal property is expressed by an isomorphism. Likewise we have strict oplax limits, where the transformation goes in the other direction. Strict lax and oplax limits can also be rephrased as strict (non-lax) limits for a different weight. As in the pseudo case, any strict (op)lax limit is also an (op)lax limit.

More generally, any non-evil strict limit (one which doesn't demand equality of objects) will also be a limit. Two formal versions of this statement involve flexible limits and the more restrictive PIE-limits. In particular, any strict flexible limit is also a limit. Since pseudo limits are PIE-limits, it follows that any strict 2-category which admits (strict) PIE-limits also admits all limits, even if it fails to admit some (evil) strict limits.
The category of algebras and pseudo morphisms for any 2-monad, such as MonCat, is a good example of a 2-category having strict PIE-limits but not all strict limits.

Pseudo limits and homotopy limits

If there is a model category structure on the 1-category underlying the given strict 2-category $C$, then in addition to whatever 2-categorical notions of limit exist in $C$, there is the notion of homotopy limits in $C$. If $C$ is a model 2-category with the "trivial" or "natural" model structure constructed in (Lack 2006), then these two notions coincide (Gambino 2007). For example, this is the case in Cat and Grpd, so the examples listed at homotopy limit are also examples of pseudo limits. In general, homotopy limits in a model 2-category give (non-strict) limits in its homotopy 2-category.

Any ordinary 1-limit can be made into a strict 2-limit simply by boosting up its ordinary universal property (a bijection of sets) to an isomorphism of hom-categories. Thus we have strict products, strict pullbacks, strict equalizers, and so on. Of these, strict products (including terminal objects) are non-evil (and thus are also limits), while others such as pullbacks and equalizers tend to be evil.

• For example, a strict terminal object is an object 1 such that $K(X,1)$ is isomorphic to the terminal category, for any object $X$.

• Likewise, a strict product of $A$ and $B$ is an object $A\times B$ with projections $p:A\times B\to A$ and $q:A\times B\to B$ such that (1) given any $f:X\to A$ and $g:X\to B$, there exists a unique $h:X\to A\times B$ such that $p h = f$ and $q h = g$ (equal, not isomorphic), and (2) given any $h,k:X\to A\times B$ and $\alpha: p h \to p k$ and $\beta:q h \to q k$, there exists a unique $\gamma:h\to k$ such that $p\gamma = \alpha$ and $q\gamma =\beta$.
As mentioned above, adding pseudo in front of an ordinary limit has a precise meaning: it means that all the triangles in the limit cone now commute up to specified isomorphism, and the universal property is still expressed by an isomorphism of categories. In particular, there is still a specified projection to each object in the diagram. For example:

• The pseudo pullback of a cospan $A \overset{f}{\to} C \overset{g}{\leftarrow} B$ is a universal object $P$ equipped with projections $p:P\to A$, $q:P\to B$, and $r:P\to C$ and 2-cell isomorphisms $f p \cong r$ and $g q \cong r$.

• The pseudo equalizer of a pair of arrows $f,g:A\rightrightarrows B$ is a universal object $E$ equipped with morphisms $h:E\to A$ and $k:E\to B$ and 2-cell isomorphisms $f h \cong k$ and $g h \cong k$.

These are to be distinguished from:

• The iso-comma object of $A \overset{f}{\to} C \overset{g}{\leftarrow} B$ is a universal object $P$ equipped with projections $p:P\to A$ and $q:P\to B$ and a 2-cell isomorphism $f p \cong g q$.

• The iso-inserter of $f,g:A\rightrightarrows B$ is a universal object $E$ equipped with a morphism $e:E\to A$ and a 2-cell isomorphism $f e \cong g e$.

The pseudo pullback, pseudo equalizer, iso-comma object, and iso-inserter are all strict Cat-weighted limits; their universal property is expressed by an isomorphism of categories. Usually the pseudo pullback and iso-comma object are not isomorphic, and likewise the pseudo equalizer and iso-inserter are not isomorphic. However, both the pseudo pullback and iso-comma object are non-evil and represent a pullback; therefore they are equivalent when they both exist. Likewise, the pseudo equalizer and iso-inserter both represent an equalizer, and are equivalent when they both exist. If one is mostly interested in (non-strict) limits, then there is little harm in using "pseudo pullback" to mean "iso-comma object" or "pullback," as is common in the literature. However, with lax limits the situation is more serious.
Speaking precisely, in the lax version of a limit, the triangles in the limiting cone are made to commute up to a specified transformation in one direction, but there are still specified projections to each object in the diagram. For example:

• The strict lax limit of an arrow $f:A\to B$ is a universal object $L$ equipped with projections $p:L\to A$ and $q:L\to B$ and a 2-cell $f p \to q$.

• The strict lax pullback of a cospan $A \overset{f}{\to} C \overset{g}{\leftarrow} B$ is a universal object $P$ equipped with projections $p:P\to A$, $q:P\to B$, $r:P\to C$, and 2-cells $f p \to r$ and $g q \to r$.

In particular, the strict lax pullback is quite different from the following more common limit.

• The comma object of a cospan $A \overset{f}{\to} C \overset{g}{\leftarrow} B$ is a generalization of the comma category in $Cat$; it is a universal object $(f/g)$ equipped with projections $p:(f/g)\to A$ and $q:(f/g)\to B$ and a 2-cell $f p \to g q$.

Even in their non-strict forms, the lax pullback and comma object are distinct. Usually the comma object is the more important one, but calling it a "lax pullback" should be avoided.
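For concreteness, the strict (one-dimensional) universal property of the comma object can be spelled out symbolically; the name $\phi$ for the universal 2-cell is my own notation, not the article's:

```latex
% Comma object (f/g) of A --f--> C <--g-- B, with projections p, q
% and universal 2-cell \phi : f p \Rightarrow g q.
\text{For all } a\colon X\to A,\quad b\colon X\to B,\quad
\mu\colon f a \Rightarrow g b,
\qquad
\exists!\; u\colon X\to (f/g)
\ \text{such that}\
p u = a,\quad q u = b,\quad \phi u = \mu .
```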
{"url":"http://ncatlab.org/nlab/show/strict+2-limit","timestamp":"2014-04-18T03:38:49Z","content_type":null,"content_length":"55738","record_id":"<urn:uuid:8737e541-a137-4301-b15f-d4d734f818bd>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00349-ip-10-147-4-33.ec2.internal.warc.gz"}
How to Build a Hash Function from any Collision-Resistant Function , 2009 "... Many cryptographic applications of hash functions are analyzed in the random oracle model. Unfortunately, most concrete hash functions, including the SHA family, use the iterative (strengthened) Merkle-Damg˚ard transform applied to a corresponding compression function. Moreover, it is well known tha ..." Cited by 20 (2 self) Add to MetaCart Many cryptographic applications of hash functions are analyzed in the random oracle model. Unfortunately, most concrete hash functions, including the SHA family, use the iterative (strengthened) Merkle-Damg˚ard transform applied to a corresponding compression function. Moreover, it is well known that the resulting “structured ” hash function cannot be generically used as a random oracle, even if the compression function is assumed to be ideal. This leaves a large disconnect between theory and practice: although no attack is known for many concrete applications utilizing existing (Merkle-Damg˚ard based) hash functions, there is no security guarantee either, even by idealizing the compression function. Motivated by this question, we initiate a rigorous and modular study of developing new notions of (still idealized) hash functions which would be (a) natural and elegant; (b) sufficient for arguing security of important applications; and (c) provably met by the (strengthened) Merkle-Damg˚ard transform, applied to a “strong enough ” compression function. In particular, we show that a fixed-length compressing random oracle, as well as the currently used Davies-Meyer compression function (the latter analyzed in the ideal cipher model) are “strong enough ” for the two specific weakenings of the random oracle that we develop. These weaker notions, described below, are quite natural and should be interesting in their own right: • Preimage Aware Functions. Roughly, if an attacker found a “later useful ” output y of the function, then it must , 2008 "... 
Abstract. We propose a family of compression functions built from fixed-key blockciphers and investigate their collision and preimage security in the ideal-cipher model. The constructions have security approaching and in many cases equaling the security upper bounds found in previous work of the aut ..." Cited by 18 (5 self) Add to MetaCart Abstract. We propose a family of compression functions built from fixed-key blockciphers and investigate their collision and preimage security in the ideal-cipher model. The constructions have security approaching and in many cases equaling the security upper bounds found in previous work of the authors [24]. In particular, we describe a 2n-bit to n-bit compression function using three n-bit permutation calls that has collision security N^0.5, where N = 2^n, and we describe 3n-bit to 2n-bit compression functions using five and six permutation calls and having collision security of at least N^0.55 and N^0.63. Key words: blockcipher-based hashing, collision-resistant hashing, compression functions, cryptographic hash functions, ideal-cipher model. 1 - EUROCRYPT 2011, volume 6632 of LNCS , 2011 "... We exhibit a hash-based storage auditing scheme which is provably secure in the random-oracle model (ROM), but easily broken when one instead uses typical indifferentiable hash constructions. This contradicts the widely accepted belief that the indifferentiability composition theorem applies to any ..." Cited by 11 (1 self) Add to MetaCart We exhibit a hash-based storage auditing scheme which is provably secure in the random-oracle model (ROM), but easily broken when one instead uses typical indifferentiable hash constructions. This contradicts the widely accepted belief that the indifferentiability composition theorem applies to any cryptosystem. 
We characterize the uncovered limitation of the indifferentiability framework by showing that the formalizations used thus far implicitly exclude security notions captured by experiments that have multiple, disjoint adversarial stages. Examples include deterministic public-key encryption (PKE), password-based cryptography, hash function nonmalleability, key-dependent message security, and more. We formalize a stronger notion, reset indifferentiability, that enables an indifferentiability-style composition theorem covering such multi-stage security notions, but then show that practical hash constructions cannot be reset indifferentiable. We discuss how these limitations also affect the universal composability framework. We finish by showing the chosen-distribution attack security (which requires a multi-stage game) of some important public-key encryption schemes built using a hash construction paradigm introduced by Dodis, Ristenpart, and Shrimpton. 1 "... Abstract. In this paper, we give a security proof for Abreast-DM in terms of collision resistance and preimage resistance. As old as Tandem-DM, the compression function Abreast-DM is one of the most well-known constructions for double block length compression functions. The bounds on the number of q ..." Cited by 6 (3 self) Add to MetaCart Abstract. In this paper, we give a security proof for Abreast-DM in terms of collision resistance and preimage resistance. As old as Tandem-DM, the compression function Abreast-DM is one of the most well-known constructions for double block length compression functions. The bounds on the number of queries for collision resistance and preimage resistance are given by O(2^n). Based on a novel technique using query-response cycles, our security proof is simpler than those for MDC-2 and Tandem-DM. We also present a wide class of Abreast-DM variants that enjoy a birthday-type security guarantee with a simple proof. 1 "... Abstract. 
In this paper, we introduce a new notion of security, called adaptive preimage resistance. We prove that a compression function that is collision resistant and adaptive preimage resistant can be combined with a public random function to yield a hash function that is indifferentiable from a ..." Cited by 4 (1 self) Add to MetaCart Abstract. In this paper, we introduce a new notion of security, called adaptive preimage resistance. We prove that a compression function that is collision resistant and adaptive preimage resistant can be combined with a public random function to yield a hash function that is indifferentiable from a random oracle. Specifically, we analyze adaptive preimage resistance of 2n-bit to n-bit compression functions that use three calls to n-bit public random permutations. This analysis also provides a simpler proof of their collision resistance and preimage resistance than the one provided by Rogaway and Steinberger [19]. By using such compression functions as building blocks, we obtain permutation-based pseudorandom oracles that outperform the Sponge construction [4] and the MD6 compression function [9] both in terms of security and efficiency. , 2009 "... The design of cryptographic hash functions is a very complex and failure-prone process. For this reason, this paper puts forward a completely modular and fault-tolerant approach to the construction of a full-fledged hash function from an underlying simpler hash function H and a further primitive F ..." Cited by 2 (0 self) Add to MetaCart The design of cryptographic hash functions is a very complex and failure-prone process. 
For this reason, this paper puts forward a completely modular and fault-tolerant approach to the construction of a full-fledged hash function from an underlying simpler hash function H and a further primitive F (such as a block cipher), with the property that collision resistance of the construction only relies on H, whereas indifferentiability from a random oracle follows from F being ideal. In particular, the failure of one of the two components must not affect the security property implied by the other component. The Mix-Compress-Mix (MCM) approach by Ristenpart and Shrimpton (ASIACRYPT 2007) envelops the hash function H between two injective mixing steps, and can be interpreted as a first attempt at such a design. However, the proposed instantiation of the mixing steps, based on block ciphers, makes the resulting hash function impractical: First, it cannot be evaluated online, and second, it produces larger hash values than H, while only inheriting the collision-resistance guarantees for the shorter output. Additionally, it relies on a trapdoor one-way permutation, which seriously compromises the use of the resulting hash function for random oracle instantiation in certain scenarios. This paper presents the first efficient modular hash function with online evaluation and short output length. The core of our approach is novel block-cipher based designs for the mixing steps of the MCM approach which rely on significantly weaker assumptions: The first mixing step is realized without any computational assumptions (besides the underlying cipher being ideal), whereas the second mixing step only requires a one-way permutation without a trapdoor, which we prove to be the minimal assumption for the construction of injective random oracles. - In: Information Security and Privacy. Lecture Notes in Computer Science , 2010 "... Abstract. At Crypto 2005, Coron et al. 
introduced a formalism to study the presence or absence of structural flaws in iterated hash functions: If one cannot differentiate a hash function using ideal primitives from a random oracle, it is considered structurally sound, while the ability to differenti ..." Cited by 2 (0 self) Add to MetaCart Abstract. At Crypto 2005, Coron et al. introduced a formalism to study the presence or absence of structural flaws in iterated hash functions: If one cannot differentiate a hash function using ideal primitives from a random oracle, it is considered structurally sound, while the ability to differentiate it from a random oracle indicates a structural weakness. This model was devised as a tool to see subtle real-world weaknesses while in the random oracle world. In this paper we take a practical point of view. We show, using well-known examples like NMAC and the Mix-Compress-Mix (MCM) construction, how we can prove a hash construction secure and insecure at the same time in the indifferentiability setting. These constructions do not differ in their implementation but only on an abstract level. Naturally, this gives rise to the question what to conclude for the implemented hash function. Our results cast doubts about the notion of “indifferentiability from a random oracle” being a mandatory, practically relevant criterion (as e.g., proposed by Knudsen [16] for the SHA-3 competition) to separate good hash structures from bad ones. "... Abstract. In this paper, we study security for a certain class of permutation-based compression functions. Denoted lp231 in [12], they are 2n-bit to n-bit compression functions using three calls to a single n-bit random permutation. We prove that lp231 is asymptotically preimage resistant up to (2 2 ..." Cited by 1 (1 self) Add to MetaCart Abstract. In this paper, we study security for a certain class of permutation-based compression functions. 
Denoted lp231 in [12], they are 2n-bit to n-bit compression functions using three calls to a single n-bit random permutation. We prove that lp231 is asymptotically preimage resistant up to (2^{2n/3}/n) queries, adaptive preimage resistant up to (2^{n/2}/n) queries/commitments, and collision resistant up to (2^{n/2}/n^{1+ɛ}) queries for ɛ > 0. 1 "... Abstract. We revisit the problem of building dual-model secure (DMS) hash functions that are simultaneously provably collision resistant (CR) in the standard model and provably pseudorandom oracle (PRO) in an idealized model. Designing a DMS hash function was first investigated by Ristenpart and Shr ..." Add to MetaCart Abstract. We revisit the problem of building dual-model secure (DMS) hash functions that are simultaneously provably collision resistant (CR) in the standard model and provably pseudorandom oracle (PRO) in an idealized model. Designing a DMS hash function was first investigated by Ristenpart and Shrimpton (ASIACRYPT 2007); they put forth a generic approach, called Mix-Compress-Mix (MCM), and showed the feasibility of the MCM approach with a secure (but inefficient) construction. An improved construction was later presented by Lehmann and Tessaro (ASIACRYPT 2009). The proposed construction by Ristenpart and Shrimpton requires a non-invertible (pseudo-) random injection oracle (PRIO) and the Lehmann-Tessaro construction requires a non-invertible random permutation oracle (NIRP). Despite showing the feasibility of realizing PRIO and NIRP objects in theory, using ideal ciphers and (trapdoor) one-way permutations, these constructions suffer from several efficiency and implementation issues as pointed out by their designers and briefly reviewed in this paper. In contrast to the previous constructions, we show that constructing a DMS hash function does not require any PRIO or NIRP, and hence there is no need for additional (trapdoor) one-way permutations. 
In fact, Ristenpart and Shrimpton posed the question of whether MCM is secure under easy-to-invert mixing steps as an open problem in their paper. We resolve this question in the affirmative in the fixed-input-length (FIL) hash setting. More precisely, we show that one can sandwich a provably CR function, which is sufficiently compressing, between two random "... Recent years have witnessed an exceptional research interest in cryptographic hash functions, especially after the popular attacks against MD5 and SHA-1 in 2005. In 2007, the U.S. National Institute of Standards and Technology (NIST) has also significantly boosted this interest by announcing a publi ..." Add to MetaCart Recent years have witnessed an exceptional research interest in cryptographic hash functions, especially after the popular attacks against MD5 and SHA-1 in 2005. In 2007, the U.S. National Institute of Standards and Technology (NIST) has also significantly boosted this interest by announcing a public competition to select the next hash function standard, to be named SHA-3. Not surprisingly, the hash function literature has since been growing at an extremely fast pace. In this paper, we provide a comprehensive, up-to-date discussion of the current state of the art of cryptographic hash function security and design. We first discuss the various hash function security properties and notions, then proceed to give an overview of how (and why) hash functions evolved over the years, giving rise to the current diverse hash function design approaches. A short version of this paper is in [1]. This version has been thoroughly extended, revised and updated. This
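Several of the abstracts above study the (strengthened) Merkle-Damgård transform. As a concrete reference point, here is a minimal Python sketch of the transform itself. The compression function below is a toy stand-in (FNV-style integer mixing, with no security whatsoever); the whole point of these papers is that the real security must come from a strong compression function, which this sketch deliberately does not provide.

```python
import struct

def toy_compress(h: bytes, block: bytes) -> bytes:
    # Toy 8-byte compression function -- NOT cryptographically secure.
    # It only stands in for the ideal compression function f analyzed above.
    v = int.from_bytes(h, "big")
    m = int.from_bytes(block, "big")
    v = (v * 0x100000001B3 ^ m) & 0xFFFFFFFFFFFFFFFF  # FNV-style mixing
    return v.to_bytes(8, "big")

def merkle_damgard(message: bytes, iv: bytes = b"\x00" * 8) -> bytes:
    """Strengthened Merkle-Damgard: append 0x80, zero padding, and the
    64-bit message length, then iterate the compression function over
    fixed-size (here 8-byte) blocks."""
    padded = message + b"\x80"
    padded += b"\x00" * ((-len(padded) - 8) % 8)     # pad to a block boundary
    padded += struct.pack(">Q", len(message) * 8)     # length strengthening
    h = iv
    for i in range(0, len(padded), 8):
        h = toy_compress(h, padded[i:i + 8])
    return h
```

The length-strengthening step (appending the message length) is what makes the classical collision-resistance preservation proof for Merkle-Damgård go through; it does not, as the first abstract above emphasizes, make the iterated hash behave like a random oracle.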
Subgroup of cyclic normal subgroup is normal April 25th 2010, 12:49 PM #1 Apr 2010 Subgroup of cyclic normal subgroup is normal Let $K \triangleleft G$ where $K$ is cyclic. Show that every subgroup of $K$ is normal in $G$. Been working on this for a while and I know that every subgroup of a cyclic subgroup will be cyclic, and correspondingly abelian, and that we must show $gxg^{-1} \in H,\ \forall x\in H,\ \forall g \in G$, where $x=h^i,\ i \in \mathbb{Z}$, with $h$ the generator of the subgroup of $K$. Totally stuck though on where to go after this. Last edited by HomieG; April 25th 2010 at 03:33 PM. Hint: (1) Definition: a subgroup $H$ of a group $G$ is called characteristic if $\phi(H)=H\,\,,\,\forall \phi\in Aut(G)$, and we write this as $H\, char.\, G$ Theorem: If $K\triangleleft G\,\,\,and\,\,\,H\,char.\,K\,\,\,then\,\,\,H\triangleleft G$ Lemma: Any subgroup of a finite cyclic group is characteristic (hint: there exists only one of order any divisor of the group's order). Solve your problem now. April 25th 2010, 06:13 PM #2 Oct 2009
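For readers who want to sanity-check the theorem in the hint on a concrete case, here is a small brute-force verification in Python (my own illustration, not from the thread): take G = D4, the dihedral group of order 8 acting on the four vertices of a square, K = its cyclic rotation subgroup (which is normal in G), and H = the subgroup of K generated by r^2, then check gxg^{-1} ∈ H directly for all g in G.

```python
def compose(p, q):
    # composition of permutations given as tuples: (p o q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

def generate(gens):
    # closure of the generators under composition
    elems = {tuple(range(4))}
    frontier = set(gens)
    while frontier:
        elems |= frontier
        frontier = {compose(a, b) for a in elems for b in elems} - elems
    return elems

r = (1, 2, 3, 0)               # rotation of the square by 90 degrees
s = (0, 3, 2, 1)               # reflection across a diagonal
G = generate({r, s})           # dihedral group D4, order 8
K = generate({r})              # cyclic normal subgroup <r>, order 4
H = generate({compose(r, r)})  # subgroup of K generated by r^2, order 2

# every conjugate of an element of H stays in H, i.e. H is normal in G
assert all(compose(compose(g, x), inverse(g)) in H for g in G for x in H)
```

This is only a check of one instance, of course; the hint's chain (H characteristic in K, K normal in G, hence H normal in G) is what proves it in general.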
Probability and Statistics Terms
Probability: Likelihood or chance of the occurrence of an event.
Qualitative Data: Data that can be represented with qualitative features.
Quantitative Data: Data that can be represented with numbers.
Categorical Data: Data that can be organized into mutually exclusive groups or categories.
Graph: A diagram representing data or relationship(s) between variables.
Discrete Data: A type of data for which there is only a finite number of possible values.
Continuous Data: A type of data for which there is no possible separation between the possible values.
Univariate Data: Data with only one variable. Not to be confused with data on a unicycle.
Bivariate Data: Data that deals with relationships between two variables.
Mean (Average): The sum of all the data points divided by the number of data points.
Median: The middle value of a list of data points.
Mode: The data point with highest frequency.
Quartiles: A set of three points that divide the data set into four equal parts.
Stem And Leaf Plot: A representation of data where each data point is split into a leaf and a stem. The leaf is usually the last digit, and the stem consists of the other digits.
Bar Graph: A representation of data that uses rectangular bars to show the magnitude of categorical variables. Useful in all sorts of real-life situations.
Histogram: A representation of data that uses rectangular bars to show the magnitude of quantitative variables.
Pie Chart: A circular graph divided into sectors, where the area of each sector is proportional to the relative size of the quantities represented. Also incredibly useful in real life.
Box And Whisker Plot: A representation of data that displays the range and quartiles of the data set. Looks like a kitty cat when you squint and tilt your head to the left.
Interquartile Range: The difference between the third and the first quartile.
Outliers: Data points that are numerically far away from the rest of the data set. The loners of the group, if you will.
Scatter Plot: A graph of points showing the relationship between two variables.
Linear Regression: Fitting a straight line to a set of data points to find the linear relationship between the dependent and independent variables.
Correlation: The measure of the linear relationship between two variables. Is not causation.
Odds: The ratio of the probability that an event will happen to the probability that it will not.
Event: A set of outcomes of an experiment.
Mutually Exclusive Events: Events A and B are mutually exclusive if the occurrence of event A implies event B cannot occur.
Independent Events: Events A and B are independent if the outcome of event A has no effect on the outcome of event B. It's grown up! It can do what it wants!
Factorial: n! = n × (n − 1) × (n − 2) × . . . × 2 × 1. The most excited of all key terms.
Permutation: One of all possible rearrangements of a collection of objects.
Combination: One of all possible ways of choosing objects out of a larger group where order does not matter. Not Judge Judy's favorite math concept, that's for sure.
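Several of the terms above (mean, median, mode, quartiles, interquartile range) map directly onto Python's standard statistics module. A small sketch, using a made-up data set of my own choosing:

```python
import statistics

data = [2, 4, 4, 4, 5, 5, 7, 9]   # a made-up sample

mean = statistics.mean(data)      # sum of points / number of points -> 5.0
median = statistics.median(data)  # middle value of the sorted list -> 4.5
mode = statistics.mode(data)      # most frequent value -> 4

# Quartiles divide the sorted data into four equal parts;
# quantiles(n=4) returns the three cut points Q1, Q2, Q3.
q1, q2, q3 = statistics.quantiles(data, n=4)
iqr = q3 - q1                     # interquartile range: Q3 minus Q1

print(mean, median, mode)
```

Note that Q2, the middle cut point, coincides with the median, which is a quick consistency check on the definitions above.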
Chris Umans [Home][Research][Teaching][Theory links][Other] I am a professor of Computer Science in the Computing and Mathematical Sciences department at Caltech, and a member of the Theory Group. My research interests are in theoretical computer science, especially computational complexity. Specifically, I am interested in derandomization, explicit constructions, algebraic complexity and algorithms, and hardness of approximation. I received my Ph.D. in Computer Science from Berkeley in 2000. From 2000-2002 I was a postdoc in the Theory Group at Microsoft Research. I joined Caltech in 2002. Chris Umans Computer Science, MC 305-16 California Institute of Technology 1200 E. California Blvd. Pasadena, CA 91125 Office: Annenberg 311 (626) 395-5725 umans@cs.caltech.edu
Noncommutative differential geometry, and the matrix representations of generalised algebras The underlying algebra for a noncommutative geometry is taken to be a matrix algebra, and the set of derivatives the adjoint of a subset of traceless matrices. This is sufficient to calculate the dual 1-forms, and show that the space of 1-forms is a free module over the algebra of matrices. The concept of a generalised algebra is defined and it is shown that this is required in order for the space of 2-forms to exist. The exterior derivative is generalised for higher-order forms and these are also shown to be free modules over the matrix algebra. Examples of mappings that preserve the differential structure are given. Also given are four examples of matrix generalised algebras, and the corresponding noncommutative geometries, including the cases where the generalised algebra corresponds to a representation of a Lie algebra or a q-deformed algebra.
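The "derivatives as adjoints of traceless matrices" in the abstract can be made concrete: for a matrix X, the adjoint action ad_X(A) = [X, A] = XA − AX is a derivation on the matrix algebra, i.e. it obeys the Leibniz rule ad_X(AB) = ad_X(A)B + A·ad_X(B). A quick numerical check in Python; the specific 2×2 matrices are arbitrary choices of mine, used only to exercise the identity.

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def add(A, B):
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

def sub(A, B):
    return [[A[i][j] - B[i][j] for j in range(2)] for i in range(2)]

def ad(X):
    # adjoint action ad_X(A) = [X, A] = XA - AX, the "derivative"
    # associated with the matrix X in the abstract's setting
    return lambda A: sub(matmul(X, A), matmul(A, X))

X = [[1, 2], [3, -1]]   # a traceless 2x2 matrix: trace = 1 + (-1) = 0
A = [[0, 1], [1, 0]]
B = [[2, 0], [0, 5]]

d = ad(X)
lhs = d(matmul(A, B))                      # d(AB)
rhs = add(matmul(d(A), B), matmul(A, d(B)))  # d(A)B + A d(B)
assert lhs == rhs  # Leibniz rule: ad_X is a derivation on the algebra
```

The check is exact because the entries are integers; algebraically, [X, AB] = XAB − ABX = (XA − AX)B + A(XB − BX), so the assertion holds for any choice of matrices.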
From DocForge Haskell is a standardized purely functional programming language with non-strict semantics, named after the logician Haskell Curry. [edit] History Following the release of Miranda, in 1985, functional languages proliferated. By 1987, there existed more than a dozen non-strict, purely functional programming languages. Of these Miranda was the most widely used, but was proprietary. At the conference on Functional Programming Languages and Computer Architecture (FPCA '87) in Portland, Oregon, a meeting was held during which strong consensus was found among the participants that a committee should be formed to define an open standard for such languages. This would have the express purpose of consolidating the existing languages into a common one that would serve as a basis for future research in language design.^[1] The first version of Haskell ("Haskell 1.0") was defined in 1990.^[2] The committee's efforts resulted in a series of language definitions, which in late 1997, culminated in Haskell 98, intended to specify a stable, minimal, portable version of the language and an accompanying standard library for teaching, and as a base for future extensions. The committee expressly welcomed the creation of extensions and variants of Haskell 98 via adding and incorporating experimental features. In January 1999, the Haskell 98 language standard was originally published as "The Haskell 98 Report". In January 2003, a revised version was published as "Haskell 98 Language and Libraries: The Revised Report".^[3] The language continues to evolve rapidly, with the Hugs and GHC implementation (see below) representing the current de facto standard. 
In early 2006, the process of defining a successor to the Haskell 98 standard, informally named Haskell′ ("Haskell Prime"), was begun.^[4] This process is intended to produce a minor revision of Haskell 98.^[5]

[edit] Features and extensions

Characteristic features of Haskell include pattern matching, currying, list comprehensions,^[6] guards, definable operators, and single assignment. The language also supports recursive functions and algebraic data types, as well as lazy evaluation. Unique concepts include monads and type classes. The combination of such features can make functions which would be difficult to write in a procedural programming language almost trivial to implement in Haskell. Several variants have been developed: parallelizable versions from MIT and Glasgow University, both called Parallel Haskell; more parallel and distributed versions called Distributed Haskell (formerly Goffin) and Eden; a speculatively evaluating version called Eager Haskell; and several object-oriented versions: Haskell++, O'Haskell and Mondrian. There is also a Haskell-like language, Concurrent Clean, that offers a new method of support for GUI development. Its biggest deviation from Haskell is its use of uniqueness types instead of monads for I/O.

[edit] Applications

Although Haskell has a comparatively small user community, its strengths have been well applied to a few projects. Audrey Tang's Pugs is an implementation of the forthcoming Perl 6 language with an interpreter and compilers that proved useful already after just a few months of its writing; similarly, GHC is often a testbed for advanced functional programming features and optimizations. Darcs is a revision control system with several innovative features. Linspire GNU/Linux chose Haskell for system tools development.^[7] Xmonad is a window manager for the X Window System, written entirely in Haskell. 
[edit] Examples

A simple example that is often used to demonstrate the syntax of functional languages is the factorial function for positive integers, shown in Haskell:

 fac :: Integer -> Integer
 fac 0 = 1
 fac n | n > 0 = n * fac (n-1)

Or in one line:

 fac n = if n > 0 then n * fac (n-1) else 1

This describes the factorial as a recursive function, with one terminating base case. It is similar to the descriptions of factorials found in mathematics textbooks. Much of Haskell code is similar to standard mathematical notation in facility and syntax. The first line of the factorial function shown is optional, and describes the types of this function. It can be read as: the function fac (fac) has type (::) from integer to integer (Integer -> Integer). That is, it takes an integer as an argument, and returns another integer. The type of a definition is inferred automatically if the programmer didn't supply a type annotation. The second line relies on pattern matching, an important feature of Haskell. Note that parameters of a function are not in parentheses but separated by spaces. When the function's argument is 0 (zero) it will return the integer 1 (one). For all other cases the third line is tried. This is the recursion, and executes the function again until the base case is reached. A guard protects the third line from negative numbers, for which a factorial is undefined. Without the guard this function would recurse through all negative numbers without ever reaching the base case of 0. As it is, the pattern matching is not complete: if a negative integer is passed to the fac function as an argument, the program will fail with a runtime error. A final case could check for this error condition and print an appropriate error message instead. The "Prelude" is a number of small functions analogous to C's standard library. Using the Prelude and writing in the point-free style of unspecified arguments, it becomes:

 fac = product . enumFromTo 1

The above is close to mathematical definitions such as f = g ∘ h (see function composition), with the dot acting as the function composition operator; indeed, it is not an assignment of a numeric value to a variable. In the Hugs interpreter, you often need to define the function and use it on the same line separated by a where or let..in, meaning you need to enter this to test the above examples and see the output 120:

 let { fac 0 = 1; fac n | n > 0 = n * fac (n-1) } in fac 5
 fac 5 where fac = product . enumFromTo 1

The GHCi interpreter doesn't have this restriction and function definitions can be entered on one line and referenced later.

[edit] More complex examples

A simple RPN calculator expressed with the higher-order function foldl whose argument f is defined in a where clause using pattern matching and the type class Read:

 calc :: String -> [Float]
 calc = foldl f [] . words
   where
     f (x:y:zs) "+" = y+x:zs
     f (x:y:zs) "-" = y-x:zs
     f (x:y:zs) "*" = y*x:zs
     f (x:y:zs) "/" = y/x:zs
     f xs y = read y : xs

The empty list is the initial state, and f interprets one word at a time, either matching two numbers from the head of the list and pushing the result back in, or parsing the word as a floating-point number and prepending it to the list. The following definition produces the list of Fibonacci numbers in linear time:

 fibs = 0 : 1 : zipWith (+) fibs (tail fibs)

The infinite list is produced by corecursion: the latter values of the list are computed on demand starting from the initial two items 0 and 1. This kind of a definition is an instance of lazy evaluation and an important part of Haskell programming. For an example of how the evaluation evolves, the following illustrates the values of fibs and tail fibs after the computation of six items and shows how zipWith (+) has produced four items and proceeds to produce the next item:

 fibs        = 0 : 1 : 1 : 2 : 3 : 5 : ...
                   +   +   +   +   +   +
 tail fibs   = 1 : 1 : 2 : 3 : 5 : ...
                   =   =   =   =   =   =
 zipWith ... = 1 : 2 : 3 : 5 : 8 : ...
 fibs        = 0 : 1 : 1 : 2 : 3 : 5 : 8 : ...

The same function, written using GHC's parallel list comprehension syntax (GHC extensions must be enabled using a special command-line flag '-fglasgow-exts'; see GHC's manual for more):

 fibs = 0 : 1 : [ a+b | a <- fibs | b <- tail fibs ]

The factorial we saw previously can be written as a sequence of functions:

 fac n = (foldl (.) id [\x -> x*k | k <- [1..n]]) 1

A remarkably concise function that returns the list of Hamming numbers in order:

 hamming = 1 : map (*2) hamming # map (*3) hamming # map (*5) hamming
   where
     xxs@(x:xs) # yys@(y:ys)
       | x==y = x : xs#ys
       | x<y  = x : xs#yys
       | x>y  = y : xxs#ys

Like the various fibs solutions displayed above, this uses corecursion to produce a list of numbers on demand, starting from the base case of 1 and building new items based on the preceding part of the list. In this case the producer is defined in a where clause as an infix operator represented by the symbol #. Apart from the different application syntax, operators are like functions whose name consists of symbols instead of letters. Each vertical bar | starts a guard clause, with a guard before the equals sign and the corresponding definition after the equals sign. Together, the branches define how # merges two ascending lists into one ascending list without duplicate items.

[edit] Criticism

While Haskell has many advanced features not found in many other programming languages, some of these features have been criticized as making the language too complex or difficult to understand. In addition, there are complaints stemming from the purity of Haskell and its theoretical roots. 
Jan-Willem Maessen, in 2002, and Simon Peyton Jones, in 2003, discussed problems associated with lazy evaluation while also acknowledging the theoretical motivation for it,^[8]^[9] in addition to purely practical considerations such as improved performance.^[10] They note that, in addition to adding some performance overhead, laziness makes it more difficult for programmers to reason about the performance of their code (specifically with regard to space usage). Bastiaan Heeren, Daan Leijen, and Arjan van IJzendoorn in 2003 also observed some stumbling blocks for Haskell learners: "The subtle syntax and sophisticated type system of Haskell are a double edged sword -- highly appreciated by experienced programmers but also a source of frustration among beginners, since the generality of Haskell often leads to cryptic error messages."^[11] To address these, they developed an advanced interpreter called Helium which improved the user-friendliness of error messages by limiting the generality of some Haskell features, and in particular removing support for type classes.

[edit] Implementations

The following all comply fully, or very nearly, with the Haskell 98 standard, and are distributed under open source licenses. There are currently no proprietary Haskell implementations.
• The Glasgow Haskell Compiler compiles to native code on a number of different architectures—as well as to ANSI C—using C-- as an intermediate language. GHC is probably the most popular Haskell compiler, and there are quite a few useful libraries (e.g. bindings to OpenGL) that will only work with GHC.
• Gofer was an educational version of Haskell, developed by Mark Jones. It was supplanted by Hugs (see below).
• HBC is another native-code Haskell compiler. It has not been actively developed for some time, but is still usable.
• Helium is a newer dialect of Haskell. The focus is on making it easy to learn by providing clearer error messages. 
It currently lacks typeclasses, rendering it incompatible with many Haskell programs.
• Hugs, the Haskell User's Gofer System, is a bytecode interpreter. It offers fast compilation of programs and reasonable execution speed. It also comes with a simple graphics library. Hugs is good for people learning the basics of Haskell, but is by no means a "toy" implementation. It is the most portable and lightweight of the Haskell implementations.
• Jhc is a Haskell compiler written by John Meacham emphasising speed and efficiency of generated programs as well as exploration of new program transformations.
• nhc98 is another bytecode compiler, but the bytecode runs significantly faster than with Hugs. Nhc98 focuses on minimizing memory usage, and is a particularly good choice for older, slower machines.
• Yhc, the York Haskell Compiler, is a fork of nhc98, with the goals of being simpler, more portable, more efficient, and integrating support for Hat, the Haskell tracer.

[edit] See also
[edit] References
[edit] External links
[edit] Tutorials

Additional copyright notice: Some content of this page is a derivative work of a Wikipedia article under the CC-BY-SA License and/or GNU FDL. The original article and author information can be found
ODE Fall 2012, A. Donev, CIMS

Using the symbolic algebra package Maple to solve ODEs analytically and numerically

> ODE := diff(theta(t),t,t) + gamma*diff(theta(t),t) + omega^2*sin(theta(t))=0; # Equations
> ICs := theta(0)=Theta, D(theta)(0)=Omega; # Initial conditions
> dsolve({ODE, ICs}, theta(t)); # Try to compute closed-form solution (no answer is returned)

1. Linearized ODE

The linearized ODE in which sin(theta) is replaced by theta can be solved analytically using the methods we discussed in class. Maple knows all of the recipes we discussed and can do the calculations without making mistakes:

> LinearODE := diff(theta(t),t,t) + gamma*diff(theta(t),t) + omega^2*theta(t)=0; # Linearization for small theta
> solution:=simplify(dsolve({LinearODE, ICs}, theta(t)));

Now let's plug in some specific numbers for which complex numbers will be required:

> numbers:={gamma=1, omega=sqrt(5/4), Theta=Pi/4, Omega=2};

Plug these numbers into the solution:

> eval(solution,numbers);

By default Maple does not perform complex number simplifications; we need to use the evalc (evaluate using complex arithmetic) function, and then simplify:

> approx_solution:=simplify(evalc(eval(solution,numbers)));

Let us now plot the approximate (linearized) solution theta(t):

> p1:=plot(eval(theta(t),approx_solution), t=0..15, color=red):

2. Phase Plot

Let us now plot the trajectory in phase space, where the coordinates are [theta(t), theta'(t)]:

> y1:=eval(theta(t),approx_solution); # Position (angle) of pendulum theta(t)
> y2:=simplify(eval(diff(theta(t),t), approx_solution)); # Velocity of pendulum theta'(t)
> P1:=plot([y1, y2, t=0..15], color='red', labels=["theta(t)","theta'(t)"]):

3. Numerical Solution

Finally, let's solve the true nonlinear ODE numerically using Maple's default numerical method (called "RK4"):

> num_solution:=dsolve(eval({ODE, ICs}, numbers), theta(t), type='numeric', range=0..15);
> p2:=odeplot(num_solution, color='blue', style='point'):

Let's compare the numerical solution to the approximate (linearized) solution. And let's also compare the true phase plot to the approximate one:

> P2:=odeplot(num_solution, [theta(t), diff(theta(t), t)], color='blue', style='point'):
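For readers without Maple, the classical RK4 scheme the worksheet invokes is easy to write by hand. The sketch below (plain Python; the function names are mine, not Maple's) integrates the nonlinear pendulum with the worksheet's numbers, gamma = 1, omega^2 = 5/4, theta(0) = Pi/4, theta'(0) = 2:

```python
import math

def rk4_step(f, t, y, h):
    """One classical 4th-order Runge-Kutta step for the system y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k1)])
    k3 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k2)])
    k4 = f(t + h, [yi + h * ki for yi, ki in zip(y, k3)])
    return [yi + h / 6 * (a + 2 * b + 2 * c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

def pendulum(g, w2):
    # theta'' + g*theta' + w2*sin(theta) = 0 rewritten as a first-order system
    return lambda t, y: [y[1], -g * y[1] - w2 * math.sin(y[0])]

f = pendulum(1.0, 1.25)
y = [math.pi / 4, 2.0]          # theta(0) = Pi/4, theta'(0) = 2
n, h = 1500, 0.01               # integrate up to t = 15, as in the worksheet
for i in range(n):
    y = rk4_step(f, i * h, y, h)
# with this much damping the pendulum has essentially settled at theta = 0 by t = 15
```

With gamma = 1 the linearized decay rate is e^(-t/2), so by t = 15 the angle and angular velocity are both far below plotting resolution, which is what the Maple phase plot shows as the trajectory spiralling into the origin.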
Copyright © University of Cambridge. All rights reserved. Demi, Hannah, Izobel, Celia, Joseph and Michael from All Saints C of E Junior School sent us their solution to this challenge. They said: First we divided the grid into $12$ compartments each of $9$ squares ($3$x$3$). Then we covered a large dice in paper and stuck it with sellotape. Next we drew the top left corner pattern onto a side of the dice. Then we rolled it down once, then drew on that pattern. After doing lots of trial and error we found out a route: We started at the top left corner and then went down $1$ space, right $1$, and then up $1$. Next we went to the right and down $3$. Then we went left $2$, up $1$ and $1$ to the right! (the finishing point!) I've drawn a rough sketch to show the route they describe: They also sent in a net of the cube: Niharika from Leicester High Schools for Girls told us: I saw the cube turn in the air. It was hard but I enjoyed it. Mathematicians might call that 'visualising'. Niharika sent in another solution which is the reverse of the route above. Niharika also tackled the second part of the challenge which involved a cube with coloured squares painted on it. Firstly, she labelled the grid: She then went on to describe the route: (1, 1) --- (2, 1) --- (3, 1) --- (3, 2) --- (2, 2) --- (2, 3) --- (3, 3) --- (3, 4) --- (2, 4) --- (1, 4) --- (1, 3) --- (1, 2) Niharika explained that she thought carefully about the symmetry of each of the faces and how an odd or even number of 'tips' might affect each face. Fantastic! Will the reverse of Niharika's route work too, do you think?
Physics Forums - helix and radius of curvature

Jonny_trigonometry Oct14-05 10:30 PM
helix and radius of curvature

I was wondering how to find the radius of curvature of a helix. If it's circling around the z axis, its projection onto the xy plane is a circle of radius r. Let one full cycle of the helix around the z-axis cover a distance d along the z-axis, then what is R, the radius of curvature of the helix in terms of d and r? I know it must be larger than d + r... Is there a handy formula for this?

amcavoy Oct14-05 10:38 PM
Quote by Jonny_trigonometry
I was wondering how to find the radius of curvature of a helix. If it's circling around the z axis, its projection onto the xy plane is a circle of radius r. Let one full cycle of the helix around the z-axis cover a distance d along the z-axis, then what is R, the radius of curvature of the helix in terms of d and r? I know it must be larger than d + r... Is there a handy formula for this?
Hmm. From what I know about these, the equations are in the form of: [tex]\vec{r}=\left<r\cos{t},r\sin{t},\alpha t\right>[/tex] You know the radius projected onto the x-y plane, and also that d is proportional to the period. Assuming you know the formula for the radius of curvature:

Jonny_trigonometry Oct15-05 08:29 PM
hmm, ya. The parametric curve looks good, but what is kappa? Forget the "radius of curvature", what I mean is radius... I guess what I really want to know is the radius R of the circle that is made from the length of a string that is wound around a cylinder with radius r as it spans a distance d (along the longitudinal axis of the cylinder) to make one cycle around the cylinder. If I have to integrate the parametric curve to find the length, then I guess that's what I have to do... I just don't like the complexity involved in doing so, and I figured someone has already done that and found a relationship between the variables R, d and r.
amcavoy Oct15-05 10:05 PM
Quote by Jonny_trigonometry
hmm, ya. The parametric curve looks good, but what is kappa? Forget the "radius of curvature", what I mean is radius... I guess what I really want to know is the radius R of the circle that is made from the length of a string that is wound around a cylinder with radius r as it spans a distance d (along the longitudinal axis of the cylinder) to make one cycle around the cylinder. If I have to integrate the parametric curve to find the length, then I guess that's what I have to do... I just don't like the complexity involved in doing so, and I figured someone has already done that and found a relationship between the variables R, d and r.
I might be doing this wrong, but this is what it looks like: [tex]2\pi R=\int_{0}^{2\pi}\sqrt{r^{2}+\alpha^{2}}\,dt=2\pi\sqrt{r^{2}+\alpha^{2}}=2\pi\sqrt{r^{2}+\frac{d^{2}}{4\pi^{2}}}[/tex] Which would represent the length of the helix (I calculated that by the definition of arc length). Now you know that the length above (circumference) is really 2piR where R is the radius of the circle you want. Is this what you were getting at or did I misinterpret your question?

Jonny_trigonometry Oct16-05 04:16 PM
thanks! This is exactly what I was looking for. I reviewed arc length in 3d and checked your solution. It must be correct. I didn't think it would be that easy; I thought there would be a triple integral for some reason. Eh, I got a C in calc 3, so I'm not proficient enough in doing problems like this. Now that I think of it, triple integrals really don't show up unless you're calculating volume, and doubles are usually for area, or to simplify a more difficult single integral... thanks a lot

Helix Radius
I've seen variants of formulas such as amcavoy suggests in his second post. They do the job, but it bothered me that a Pythagorean approach was used when trig should offer a streamlined version.
This is what I formulated: R = r(cos)^2 where the cos is derived from the slope of the curve around the cylinder. I recognize that amcavoy did in fact introduce trig into his forms, suggested in his first post, but without squaring the cos, the value for t is unattainable.
Regards, Bob

Re: Helix Radius
Quote by bobb513 (Post 1525797)
I've seen variants of formulas such as amcavoy suggests in his second post. They do the job, but it bothered me that a Pythagorean approach was used when trig should offer a streamlined version. This is what I formulated: R = r(cos)^2 where the cos is derived from the slope of the curve around the cylinder. I recognize that amcavoy did in fact introduce trig into his forms, suggested in his first post, but without squaring the cos, the value for t is unattainable. Regards, Bob
I've seen that result quoted before in a text book; unfortunately the derivation wasn't given, and so far it eludes me. Any chance you could provide a step by step explanation of how the R = r(cos)^2 result was obtained?

adriank Nov30-08 06:03 PM
Re: helix and radius of curvature
Well, the curvature of a curve in R^3 is [tex]\kappa = \frac{\lvert \vec r' \times \vec r'' \rvert}{\lvert \vec r' \rvert^3}[/tex], and using [tex]R = \frac{1}{\lvert \kappa \rvert}[/tex] should give you the radius of curvature.

Re: helix and radius of curvature
Quote by adriank (Post 1981789)
Well, the curvature of a curve in R^3 is [tex]\kappa = \frac{\lvert \vec r' \times \vec r'' \rvert}{\lvert \vec r' \rvert^3}[/tex], and using [tex]R = \frac{1}{\lvert \kappa \rvert}[/tex] should give you the radius of curvature.
Thanks, I don't mean to sound ungrateful, but I was particularly hoping to avoid using vectors, and was hoping for a solution using ordinary algebra and trigonometry. A previous poster, bobb513, appears to be saying he reached his result that way, where the angle involved is the slope of the curve around the cylinder.
I would appreciate any help in reaching the R = r(cos)^2 result just using the trig functions and simple algebra if possible. I would just add, I don't need this for any specific purpose, other than personal curiosity. It is a result I've seen stated several times, but so far I have never seen it derived in a way I could follow.

Re: helix and radius of curvature
Ok, I've found a website that has allowed me to find the solution I wanted. From the result given on that site for R, and using the fact that cos(pitch) can be found from the geometry given, it is easy to show that: r = R cos^2 (pitch), which was the result I wanted to be able to find. However, there is still a slight catch. I can follow the math on that page, and I was even able to extend it to reach the trigonometric result. However, I can't see why the opening statement is true: Helix_Length = C * c/Helix_Length. I can't think of a justification for that statement; can anyone here see what I'm missing?
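For completeness, the closed-form answer the thread converges on follows in a few lines from the curvature formula adriank quoted. Writing the helix as $\vec r(t) = (r\cos t,\ r\sin t,\ \alpha t)$ with $\alpha = d/2\pi$, and letting $\varphi$ denote the pitch angle (the slope of the curve around the cylinder):

```latex
\lvert \vec r\,'(t) \rvert = \sqrt{r^{2}+\alpha^{2}}, \qquad
\kappa \;=\; \frac{\lvert \vec r\,' \times \vec r\,'' \rvert}{\lvert \vec r\,' \rvert^{3}}
\;=\; \frac{r\sqrt{r^{2}+\alpha^{2}}}{\left(r^{2}+\alpha^{2}\right)^{3/2}}
\;=\; \frac{r}{r^{2}+\alpha^{2}}, \qquad
R \;=\; \frac{1}{\kappa} \;=\; \frac{r^{2}+\alpha^{2}}{r}.

\tan\varphi = \frac{\alpha}{r}
\;\Longrightarrow\;
\cos^{2}\varphi = \frac{r^{2}}{r^{2}+\alpha^{2}}
\;\Longrightarrow\;
r = R\cos^{2}\varphi.
```

This matches the $r = R\cos^{2}(\text{pitch})$ form reached at the end of the thread; note that the earlier statement "R = r(cos)^2" has R and r interchanged, since the radius of curvature R is always at least as large as the cylinder radius r.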
Reply to comment
Submitted by Anonymous on April 13, 2011.

Dear Bill Casselman,
I worked on trisecting a square. Perigal was the first one to find a minimal 6-piece solution, probably around 1835-1840, but he only published it later, in 1891. Thus Phillip Kelland was probably the first one who published this technique for a gnomon, in 1855. I've uploaded to Wikisource his full publication for "Geometric dissections and transpositions". Here it is:
Moreover, at the end of this paper: "L. J. Rogers (1897). Biography of Henry Perigal in appendix of On certain Regular Polygons in Modular Network. Proceedings London Mathematical Society. Volume s1-29, pp. 732-735.", I've found an interesting biography of Perigal (look at the four last pages):
Best regards,
Christian Blanvillain
Help on solving differential equations...

October 18th 2008, 10:58 AM #1
Oct 2008

I am doing a project on projectiles in sport and have set up the following differential equations when investigating the projectile of a golf ball with air resistance: mx'' = -kx' and my'' = -mg - ky' with initial conditions: Could someone help me solve these differential equations and show me the method in which it is done. I am really not sure how to do them.

has well known solutions of $x(t)=C_{1}\cos(\sqrt{\frac{k}{m}}t)+C_{2}\sin(\sqrt{\frac{k}{m}}t)$ Now, find x'(t) and use your initial conditions to find $C_{1}, C_{2}$

Sorry, can you explain that please? Not sure what you mean. Could you go through solving each equation step by step if possible please; I have attempted them and must have gone wrong at one of the early steps...
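For reference (not part of the thread): both equations as posted are linear with constant coefficients and can be solved by integrating the velocity equation first. With generic initial conditions $x(0)=x_0$, $\dot x(0)=v_{x0}$, $y(0)=y_0$, $\dot y(0)=v_{y0}$ (the thread's initial conditions were omitted above), a standard calculation gives:

```latex
m\ddot x = -k\dot x
\;\Longrightarrow\;
\dot x(t) = v_{x0}\,e^{-kt/m}
\;\Longrightarrow\;
x(t) = x_0 + \frac{m v_{x0}}{k}\left(1 - e^{-kt/m}\right),

m\ddot y = -mg - k\dot y
\;\Longrightarrow\;
y(t) = y_0 + \frac{m}{k}\left(v_{y0} + \frac{mg}{k}\right)\left(1 - e^{-kt/m}\right) - \frac{mg}{k}\,t.
```

As $t \to \infty$ the vertical velocity approaches the terminal value $-mg/k$. Note also that the sinusoidal solution quoted in the reply solves $m\ddot x = -kx$ (a spring force), not the drag equation $m\ddot x = -k\dot x$ that was posted.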
Patent application title: IMAGE PROJECTION METHOD AND PROJECTOR An image projection method and a projector are provided to be able to change aspect ratio or resolution of a projection image, without setting of keystone correction again. The projector sets a displayable region with a shape according to an aspect ratio and a resolution of the projection image. The projector converts coordinates of the displayable region into coordinates in a screen coordinate system, and generates a rectangular area with a desired aspect ratio in a region corresponding to the displayable region in the screen coordinate system and a portion included in a projecting range. The projector converts the coordinates of a rectangular area into coordinates of a range corresponding to the rectangular area in the panel coordinate system, forms an image in a range corresponding to the rectangular area, and projects the projection image on the screen.
An image projection method using a projector, which forms an image by a planar image formation panel and projects light from said planar image formation panel forming the image to an external projection plane, thereby projecting a projection image to said external projection plane, the method comprising:a step for setting a projection range where the projection image can be projected on said projection plane;a step for obtaining a method for mutual conversion between a panel coordinate system that defines a position on said image formation panel and a projection plane coordinate system that defines a position on said projection plane, based on coordinates of a range on said image formation panel corresponding to said projection range by projection and coordinates of said projection range in said projection plane coordinate system;a step for setting a displayable region where an original image for the projection image can be formed, on said image formation panel, according to an aspect ratio of the projection image to be projected;a step for calculating coordinates of said set up displayable region in said panel coordinate system;a step for converting said coordinates of said displayable region in said panel coordinate system to coordinates of a region corresponding to said displayable region in said projection plane coordinate system;a step for setting a rectangular area with a same aspect ratio as the projection image to be projected in a region corresponding to said displayable region and a portion included in said projection range in said projection plane coordinate system;a step for converting coordinates of said rectangular area in said projection plane coordinate system to coordinates of a range corresponding to said rectangular area in said panel coordinate system;a step for forming said original image for the projection image in a range corresponding to said rectangular area on said image formation panel represented by the converted coordinates; anda step 
for projecting light from said image formation panel forming said image to said projection plane. An image projection method using a projector, which forms an image by a planar image formation panel and projects light from said image formation panel forming the image to an external projection plane, thereby projecting a projection image to said projection plane, the method comprising:a step for setting a projection range where the projection image can be projected on said projection plane;a step for obtaining a method for mutual conversion between a panel coordinate system that defines a position on said image formation panel and a projection plane coordinate system that defines a position on said projection plane, based on coordinates of a range on said image formation panel corresponding to said projection range by projection and coordinates of said projection range in said projection plane coordinate system;a step for setting a displayable region where an original image for the projection image can be formed on said image formation panel in a shape and size according to an aspect ratio and a resolution of the projection image to be projected;a step for calculating coordinates of said set displayable region in said panel coordinate system;a step for converting said coordinates of said displayable region in said panel coordinate system to coordinates of a region corresponding to said displayable region in said projection plane coordinate system;a step for setting a rectangular area with a same aspect ratio as an aspect ratio of the projection image to be projected, in a portion included said projection range and in a region corresponding to said displayable region in said projection plane coordinate system;a step for converting coordinates of said rectangular area in said projection plane coordinate system to coordinates of a range corresponding to said rectangular area in said panel coordinate system;a step for forming said original image for the projection image 
in a range corresponding to said rectangular area on said image formation panel represented by the converted coordinates; anda step for projecting light from said image formation panel forming said image to said projection plane. A projector, which comprises a planar image formation panel and projection means for projecting a projection image to an external projection plane by projecting light from said image formation panel forming an image to said projection plane, the projector comprising:projection range setting means for setting a projection range where the projection image can be projected to said projection plane; means for calculating coordinates of a range, in a panel coordinate system which defines a position on said image formation panel, corresponding to said projection range on said projection plane by projection;means for setting coordinates of said projection range in a projection plane coordinate system that defines a position on said projection plane;means for calculating a conversion parameter that is required for a predetermined transformation formula to mutually convert a position in said panel coordinate system and a position in said projection plane coordinate system, based on coordinates corresponding to said projection range in said panel coordinate system and coordinates of said projection range in said projection plane coordinate system;means for setting a displayable region where an original image for the projection image can be formed, on said image formation panel, according to an aspect ratio of the projection image to be projected;means for calculating coordinates of said displayable region set by said means in said panel coordinate system;means for converting said coordinates of said displayable region in said panel coordinate system to coordinates of a region corresponding to said displayable region in said projection plane coordinate system, with the use of said conversion parameter;rectangular area setting means for setting a 
rectangular area with a same aspect ratio as the projection image to be projected, in a region corresponding to said displayable region and a portion included in said projection range in said projection plane coordinate system;means for converting coordinates of said rectangular area in said projection plane coordinate system to coordinates of a range corresponding to said rectangular area in said panel coordinate system, with the use of said conversion parameter; andimage forming means for forming said original image for the projection image in a range corresponding to said rectangular area on said image formation panel represented by the coordinates converted by said means. The projector according to claim 3, wherein said rectangular area setting means comprises:means for setting a central point of said projection range in said projection plane coordinate system;means for generating a first straight line passing said center point whose inclining angle is the same as an aspect ratio of said projection image and a second straight line passing said center point whose inclining angle is a negative value of said aspect ratio of the projection image, in said projection plane coordinate system;means for obtaining an intersecting point that has a shortest distance from said central point of all intersecting points where said first straight line and said second straight line intersect with edges of a region corresponding to said displayable region or with edges of said projection range, in said projection plane coordinate system; andmeans for defining said rectangular area with four corners constituted of points on said first straight line or said second straight line whose distances from said center point are equal to a distance between said center point and said intersecting point, in said projection plane coordinate system. 
The projector according to claim 3, wherein said projection range setting means is configured to set as said projection range a range occupied by a projection image on said projection plane, when said projection image with a specific aspect ratio is projected on said projection plane; the projector further comprising:means for deforming said original image for the projection image so that said original image falls within a range on the image forming panel corresponding to said projection range and keystone correction is performed, when a aspect ratio of the projection image to be projected is the same as said specific aspect ratio; andmeans for forming said original image deformed by said means in said range on said image forming panel. The projector according to claim 3, wherein said image forming means comprises:means for obtaining a parameter required for a predetermined transformation formula that deforms said original image for the projection image so that said original image falls within a range on said image formation panel corresponding to said rectangular area and keystone correction is performed, based on the coordinates of a range corresponding to said rectangular area in said panel coordinate system;means for deforming said original image with the use of said parameter set by said means; andmeans for forming said image deformed by said means in a range corresponding to said rectangular area on said image formation panel. BACKGROUND [0001] 1. Technical Field This invention relates to a projector which projects a rectangular image to an external projection plane, such as a screen. More specifically, this invention relates to an image projection method for projecting the image, while correcting a shape of the image projected on the projection plane, and a projector which adopts the method. 2. 
Description of Related Art In a field of presentation or video projection, a projector is used that accepts image data from outside and carries out extended image projection to a projection plane, such as an external screen or a wall, based on the accepted image data. Such a projector is provided with a planar image formation panel consisting of a liquid crystal panel or a DMD (Digital Micromirror Device), which forms an image. The projector carries out image projection by projecting to the external projection plane light reflected by the image formation panel, or light which penetrates the image formation panel. Thereby, the image is projected on the projection plane. Hereafter, the image projected on the projection plane is called a projection image, and the image formed by the image formation panel is called a panel image. Generally, a shape of the projection image that the projector should project is a rectangle. When the projector is arranged so that the light can be perpendicularly projected to the projection plane, the projector can form the panel image to be rectangle shape and projects the light, so that the projector can project the rectangular projection image on the projection plane. However in many cases, the projector cannot be arranged to project light perpendicularly to the projection plane. In such a case, the projector may project light in an oblique direction to the projection plane, from the upper or lower side of the projection plane, or from the right or left side thereof. When the projector projects the light in the oblique direction to the projection plane after forming a rectangular panel image, a traveling distance of light to the projection place differs at both ends of the image, and a magnification of the image differs. Then, the shape of the projection image is distorted from the rectangle. The distortion of this projection image is called keystone distortion. 
For these reasons, a projector requires a function of keystone correction which corrects the keystone distortion in order to form the projection image in a rectangle, by projecting light after changing beforehand the shape of a portion of a panel image corresponding to the projection image from the rectangle. Conventionally, various methods have been proposed for performing the keystone correction with the projector. An exemplary method contains a step that detects the shape of the projection image or the distance between the projector and the projection plane, and adjusts the shape of the panel image based on the detection result, thereby correcting the shape of the projection image automatically. An example of this art is described in Japanese Patent Application Laid-Open No. 2003-029714. In the case of using the art of automatic keystone correction, although the projector needs a sensor to detect data required for the keystone correction, the keystone correction can be performed easily, without making the user take time and effort. An exemplary method is also used which contains a step that the projector projects an outer frame or points at four corners which show the projection range on the projection plane, and specifies the position of the outer frame or the points of four corners by the user's operation in order to make the shape of the projection range be a rectangle with a predetermined aspect ratio, and by adjusting the shape of a portion of the panel image corresponding to the projection range. Thus, the shape of the projection image is corrected. In the case of performing the keystone correction according to the user's operation, although the user needs to take time and effort for the operation, it is possible to use the area of the screen that is the projection plane to its limit and to adjust the shape of the projection image to a certain extent freely. By the way, it is known to use various values as the aspect ratio of an image.
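The four-corner keystone correction described above is, mathematically, the estimation of a planar projective transform (a homography) relating the panel coordinate system to the projection-plane coordinate system, which is one way to realize the "conversion parameter" the patent refers to. A minimal sketch of that estimation from four corner correspondences (plain Python; the function names and corner values are illustrative, not taken from the patent):

```python
def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def homography(src, dst):
    """8-parameter homography mapping four src points onto four dst points."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        # u = (h0 x + h1 y + h2) / (h6 x + h7 y + 1), similarly for v
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    return solve(A, b) + [1.0]  # fix h8 = 1

def apply_h(h, x, y):
    w = h[6] * x + h[7] * y + h[8]
    return ((h[0] * x + h[1] * y + h[2]) / w,
            (h[3] * x + h[4] * y + h[5]) / w)

# corners of the panel range -> measured corners of the projection range on the screen
panel = [(0, 0), (1024, 0), (1024, 768), (0, 768)]
screen = [(0, 0), (100, 10), (95, 80), (5, 70)]
H = homography(panel, screen)
```

Once H is known, positions can be converted in either direction (the inverse map is the inverse homography), which is exactly the mutual conversion between the panel and projection-plane coordinate systems that the claims describe.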
A horizontal to vertical ratio of 4:3 and 16:9 are used generally. Many conventional projectors comprise the image formation panel whose aspect ratio is set to be 4:3 or 16:9. In either case, the projector can project both of a projection image whose aspect ratio is 4:3 and a projection image whose aspect ratio is 16:9. FIG. 10 (a) and FIG. 10 (b) are schematic views showing how to project a projection image whose aspect ratio is 4:3 by a projector which has an image formation panel whose aspect ratio is 4:3. FIG. 10 (a) shows an input image with an aspect ratio of 4:3 that is input to the projector to project the image whose aspect ratio is 4:3. The projector stores data of the input image, converts the data by keystone correction, deforms the input image, and forms a panel image containing the deformed input image with the image formation panel. FIG. 10 (b) shows the panel image containing the deformed input image. An image region shown in FIG. 10 (b) is a region of the input image deformed through keystone correction. The image region with an aspect ratio of 4:3 corresponds to the projection image projected on a projection plane. Portions of the panel image other than the imaging range are projected with black, for example. FIG. 11 (a) and FIG. 11 (b) are schematic views showing how to project an image whose aspect ratio is 16:9 with a projector which has an image formation panel whose aspect ratio is 4:3. FIG. 11 (a) shows an input image with an aspect ratio of 16:9 that is input to the projector to project the image. FIG. 11 (b) shows a panel image. Since the aspect ratio of the image formation panel is 4:3, the projector forms a panel image with the aspect ratio of 4:3, and the formed panel image includes the input image whose aspect ratio is 16:9. In this case, as shown in FIG. 11 (b), the projector sets offset regions that do not include the input image in the upper and lower portions of the panel image whose aspect ratio is 4:3. 
Then, the projector generates a displayable region whose aspect ratio is 16:9 in the panel image whose aspect ratio is 4:3, and keeps an input image deformed through keystone correction in the displayable region. An image region shown in FIG. 11 (b) is a region of the input image deformed through the keystone correction. The image region with an aspect ratio of 16:9 corresponds to the projection image projected on a projection plane. For example, portions of the panel image other than the displayable region are projected in black, and the offset regions are not projected. Thus, the projector with the image formation panel whose aspect ratio is 4:3 can project either a projection image whose aspect ratio is 4:3 or a projection image whose aspect ratio is 16:9 while performing the keystone correction. Similarly, a projector with an image formation panel whose aspect ratio is 16:9 can project a projection image whose aspect ratio is 4:3, by setting offset regions in the right and left portions of a panel image whose aspect ratio is 16:9. Furthermore, when a resolution of an image which should be projected is lower than a resolution of an image formation panel, a projector can project a desired projection image, by setting at the upper, lower, left and right portions of a panel image offset regions which correspond to the pixel that become unnecessary due to the reduction of the resolution. SUMMARY Problems to be Solved by the Invention [0010] The projector can accept data of an input image from various devices, such as television tuner or personal computer (PC). Therefore, the aspect ratio or resolution of the image that should be projected may be changed. For example, in the case of projecting televised image, programs of movies may be broadcasted with an aspect ratio of 16:9, and the other programs may be broadcasted with an aspect ratio of 4:3. Furthermore, the aspect ratio or resolution of the projection image may be changed arbitrarily by preference of the user. 
However, when the aspect ratio or resolution of the projection image is changed after the input image has been set to be deformed by keystone correction, the imaging range in the panel image formed by the image formation panel, corresponding to the projection image whose aspect ratio or resolution is changed, is deformed. Furthermore, the offset regions change, and with them the extent of the displayable region. As a result, the imaging range may become larger than the displayable region in the panel image, so that the projector cannot project the projection image. For this reason, the setting for keystone correction must be repeated so that the imaging range corrected by keystone correction is included in the displayable region of the panel image. In other words, every time the aspect ratio or resolution of the projection image is changed by the projector, the setting for keystone correction must be repeated. This situation prevents quick alteration of the aspect ratio or resolution of the projection image. In particular, when keystone correction is set according to the user's operation, the user's time and effort are increased. The present invention is proposed in view of the above problems. One object of the present invention is to provide an image projection method for projecting a projection image whose aspect ratio or resolution can be changed without repeating the setting for keystone correction, by adjusting the location and shape of the image which is an origin for the projection image and is formed on the image formation panel, so that the image which is an origin for the projection image is included in a range on the image formation panel corresponding to a projection range determined when setting keystone correction and in a displayable region.
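The failure mode described above is, in effect, a containment test: after an aspect-ratio change, the keystone-corrected imaging range (a quadrilateral on the panel) may no longer fit inside the new displayable region. A minimal sketch, with hypothetical names:

```python
def fits_in_region(quad_corners, region):
    """quad_corners: iterable of (x, y) panel coordinates of the deformed
    image; region: (x, y, w, h) displayable region. Returns True if every
    corner lies inside the region, which is sufficient for a convex
    quadrilateral inside an axis-aligned rectangle."""
    rx, ry, rw, rh = region
    return all(rx <= x <= rx + rw and ry <= y <= ry + rh
               for x, y in quad_corners)

# A quadrilateral set up for a full 4:3 panel no longer fits once the
# displayable region shrinks to a 16:9 band (y from 96 to 672).
quad = [(10, 100), (1000, 120), (990, 700), (20, 680)]
print(fits_in_region(quad, (0, 96, 1024, 576)))  # False: bottom corners spill out
```

When this check fails, the keystone setting would have to be redone, which is exactly the repetition the invention aims to avoid.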
Another object of the present invention is to provide a projector which can implement the method. Another object of the present invention is to provide a projector which can project a projection image at a magnitude easily viewable for the user, by projecting the projection image whose aspect ratio or resolution is changed to be as large as possible. Means for Solving the Problems [0013] An image projection method according to the present invention is an image projection method using a projector, which forms an image by a planar image formation panel and projects light from said planar image formation panel forming the image to an external projection plane, thereby projecting a projection image to said external projection plane, the method comprising: a step for setting a projection range where the projection image can be projected on said projection plane; a step for obtaining a method for mutual conversion between a panel coordinate system that defines a position on said image formation panel and a projection plane coordinate system that defines a position on said projection plane, based on coordinates of a range on said image formation panel corresponding to said projection range by projection and coordinates of said projection range in said projection plane coordinate system; a step for setting a displayable region where an original image for the projection image can be formed, on said image formation panel, according to an aspect ratio of the projection image to be projected; a step for calculating coordinates of said set displayable region in said panel coordinate system; a step for converting said coordinates of said displayable region in said panel coordinate system to coordinates of a region corresponding to said displayable region in said projection plane coordinate system; a step for setting a rectangular area with a same aspect ratio as the projection image to be projected in a region corresponding to said displayable region and a portion included
in said projection range in said projection plane coordinate system; a step for converting coordinates of said rectangular area in said projection plane coordinate system to coordinates of a range corresponding to said rectangular area in said panel coordinate system; a step for forming said original image for the projection image in a range corresponding to said rectangular area on said image formation panel represented by the converted coordinates; and a step for projecting light from said image formation panel forming said image to said projection plane. An image projection method according to the present invention is an image projection method using a projector, which forms an image by a planar image formation panel and projects light from said image formation panel forming the image to an external projection plane, thereby projecting a projection image to said projection plane, the method comprising: a step for setting a projection range where the projection image can be projected on said projection plane; a step for obtaining a method for mutual conversion between a panel coordinate system that defines a position on said image formation panel and a projection plane coordinate system that defines a position on said projection plane, based on coordinates of a range on said image formation panel corresponding to said projection range by projection and coordinates of said projection range in said projection plane coordinate system; a step for setting a displayable region where an original image for the projection image can be formed on said image formation panel in a shape and size according to an aspect ratio and a resolution of the projection image to be projected; a step for calculating coordinates of said set displayable region in said panel coordinate system; a step for converting said coordinates of said displayable region in said panel coordinate system to coordinates of a region corresponding to said displayable region in said projection plane coordinate system;
a step for setting a rectangular area with a same aspect ratio as an aspect ratio of the projection image to be projected, in a portion included in said projection range and in a region corresponding to said displayable region in said projection plane coordinate system; a step for converting coordinates of said rectangular area in said projection plane coordinate system to coordinates of a range corresponding to said rectangular area in said panel coordinate system; a step for forming said original image for the projection image in a range corresponding to said rectangular area on said image formation panel represented by the converted coordinates; and a step for projecting light from said image formation panel forming said image to said projection plane. A projector according to the present invention comprises a planar image formation panel and projection means for projecting a projection image to an external projection plane by projecting light from said image formation panel forming an image to said projection plane, the projector comprising: projection range setting means for setting a projection range where the projection image can be projected to said projection plane; means for calculating coordinates of a range, in a panel coordinate system which defines a position on said image formation panel, corresponding to said projection range on said projection plane by projection; means for setting coordinates of said projection range in a projection plane coordinate system that defines a position on said projection plane; means for calculating a conversion parameter that is required for a predetermined transformation formula to mutually convert a position in said panel coordinate system and a position in said projection plane coordinate system, based on coordinates corresponding to said projection range in said panel coordinate system and coordinates of said projection range in said projection plane coordinate system; means for setting a displayable region where an
original image for the projection image can be formed, on said image formation panel, according to an aspect ratio of the projection image to be projected; means for calculating coordinates of said displayable region set by said means in said panel coordinate system; means for converting said coordinates of said displayable region in said panel coordinate system to coordinates of a region corresponding to said displayable region in said projection plane coordinate system, with the use of said conversion parameter; rectangular area setting means for setting a rectangular area with a same aspect ratio as the projection image to be projected, in a region corresponding to said displayable region and a portion included in said projection range in said projection plane coordinate system; means for converting coordinates of said rectangular area in said projection plane coordinate system to coordinates of a range corresponding to said rectangular area in said panel coordinate system, with the use of said conversion parameter; and image forming means for forming said original image for the projection image in a range corresponding to said rectangular area on said image formation panel represented by the coordinates converted by said means. 
A projector according to the present invention is characterized in that said rectangular area setting means comprises: means for setting a central point of said projection range in said projection plane coordinate system; means for generating a first straight line passing said central point whose inclining angle is the same as an aspect ratio of said projection image and a second straight line passing said central point whose inclining angle is a negative value of said aspect ratio of the projection image, in said projection plane coordinate system; means for obtaining an intersecting point that has a shortest distance from said central point of all intersecting points where said first straight line and said second straight line intersect with edges of a region corresponding to said displayable region or with edges of said projection range, in said projection plane coordinate system; and means for defining said rectangular area with four corners constituted of points on said first straight line or said second straight line whose distances from said central point are equal to a distance between said central point and said intersecting point, in said projection plane coordinate system. A projector according to the present invention is characterized in that said projection range setting means is configured to set as said projection range a range occupied by a projection image on said projection plane, when said projection image with a specific aspect ratio is projected on said projection plane; the projector further comprising: means for deforming said original image for the projection image so that said original image falls within a range on the image forming panel corresponding to said projection range and keystone correction is performed, when an aspect ratio of the projection image to be projected is the same as said specific aspect ratio; and means for forming said original image deformed by said means in said range on said image forming panel.
A projector according to the present invention is characterized in that said image forming means comprises: means for obtaining a parameter required for a predetermined transformation formula that deforms said original image for the projection image so that said original image falls within a range on said image formation panel corresponding to said rectangular area and keystone correction is performed, based on the coordinates of a range corresponding to said rectangular area in said panel coordinate system; means for deforming said original image with the use of said parameter obtained by said means; and means for forming said image deformed by said means in a range corresponding to said rectangular area on said image formation panel. In the present invention, a projector projects a projection image to an external projection plane, such as a screen, by setting on an image formation panel a displayable region with a shape according to an aspect ratio of the projection image to be projected, making said image formation panel form an image, and projecting light externally from said image formation panel forming said image. The projector sets a projection range on the projection plane, and then converts coordinates of the displayable region in a panel coordinate system according to an aspect ratio of the projection image to be projected, to coordinates in a projection plane coordinate system. The projector generates a rectangular area with a desired aspect ratio in a region corresponding to the displayable region in the panel coordinate system and a portion included in the projection range, and then converts coordinates of the rectangular area in the projection plane coordinate system to coordinates of a range corresponding to the rectangular area in the panel coordinate system. The projector makes the image formation panel form an image in a range corresponding to the rectangular area.
By projecting the image formed in a range corresponding to the rectangular area on the image formation panel, the projector projects the projection image so as to fit in a rectangular area contained in the projection range on the projection plane. Furthermore, in the present invention, a projector projects a projection image to an external projection plane, by setting on an image formation panel a displayable region with a shape and a size according to an aspect ratio and a resolution of the projection image to be projected, making said image formation panel form an image, and projecting light externally from said image formation panel forming said image. The projector sets a projection range on the projection plane, and then converts coordinates of the displayable region in a panel coordinate system according to an aspect ratio and a resolution of the projection image to be projected, to coordinates in a projection plane coordinate system. The projector generates a rectangular area with a desired aspect ratio in a region corresponding to the displayable region in the panel coordinate system and a portion included in the projection range, and then converts coordinates of the rectangular area in the projection plane coordinate system to coordinates of a range corresponding to the rectangular area in the panel coordinate system. The projector makes the image formation panel form an image in a range corresponding to the rectangular area. By projecting the image formed in a range corresponding to the rectangular area on the image formation panel, the projector projects the projection image in a rectangular area with a size according to a desired resolution.
Furthermore, in the present invention, when setting the rectangular area in the projection plane coordinate system, the projector sets a rectangular area with a desired aspect ratio, so that one of the points at the four corners centering around the central point of the projection range contacts an edge of a region corresponding to the displayable region or an edge of the projection range. Thus, the projector can set the rectangular area with the desired aspect ratio, with the largest possible size, at the center of the projection range. Furthermore, in the present invention, when setting a projection image on the projection plane, the projector sets the projection image with a specific aspect ratio. The projector makes the image formation panel form a deformed image so that the deformed image falls within a range corresponding to the projection range, in the case that the aspect ratio of the projection image is not changed from the specific aspect ratio. The projection image after keystone correction is projected to the projection range on the projection plane, by projecting the formed image. Moreover, in the present invention, the projector deforms an original image for projection so that the original image falls within a range corresponding to the rectangular area on the image formation panel, makes the image formation panel form a panel image including the deformed original image, and then projects the projection image. Therefore, the projector can project the projection image in the rectangular area with a desired aspect ratio on the projection plane.
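The centered-rectangle construction summarized above (diagonal lines through the center of the projection range, corners placed at the distance of the nearest edge intersection) can be sketched as follows. This is an illustrative reading, not the patent's implementation; it assumes the projection range and the region corresponding to the displayable region are convex polygons that contain the center.

```python
# Diagonal lines through the center have direction (+aspect_w, +aspect_h)
# and (+aspect_w, -aspect_h); the nearest intersection with any region edge
# fixes the corner distance.

def cross(a, b):
    return a[0] * b[1] - a[1] * b[0]

def nearest_hit(center, direction, polygons):
    """Smallest positive ray parameter t at which center + t*direction or
    center - t*direction crosses an edge of any polygon."""
    best = None
    for sign in (1, -1):
        d = (sign * direction[0], sign * direction[1])
        for poly in polygons:
            for i in range(len(poly)):
                p, q = poly[i], poly[(i + 1) % len(poly)]
                e = (q[0] - p[0], q[1] - p[1])
                pc = (p[0] - center[0], p[1] - center[1])
                den = cross(d, e)
                if abs(den) < 1e-12:
                    continue          # ray parallel to this edge
                t = cross(pc, e) / den
                s = cross(pc, d) / den
                if t > 0 and 0.0 <= s <= 1.0:
                    best = t if best is None else min(best, t)
    return best

def centered_rectangle(center, aspect_w, aspect_h, polygons):
    """Largest rectangle with aspect ratio aspect_w:aspect_h centered at
    `center` whose corners stay inside every polygon."""
    t = min(nearest_hit(center, (aspect_w, aspect_h), polygons),
            nearest_hit(center, (aspect_w, -aspect_h), polygons))
    cx, cy = center
    return [(cx + sx * t * aspect_w, cy + sy * t * aspect_h)
            for sx, sy in ((-1, -1), (1, -1), (1, 1), (-1, 1))]
```

For a 1024×768 screen coordinate system with a 16:9 displayable band from y = 96 to y = 672, this construction recovers the full band as the largest centered 16:9 rectangle.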
EFFECTS OF THE INVENTION [0024] In the present invention, a rectangular area with a desired aspect ratio is set according to an aspect ratio of a projection image to be projected, so that the rectangular area is included in a projection range and in a region corresponding to a displayable region on the image formation panel; then a projector makes the image formation panel form an image and projects a projection image so that the projection image is located in the rectangular area. Therefore, even when the projector changes the aspect ratio of a projection image, the projector can change the aspect ratio of the projection image easily and then project the projection image without re-execution of the setting for keystone correction. Furthermore, in the present invention, a rectangular area is set with a desired aspect ratio and size according to an aspect ratio and a resolution of a projection image to be projected, so that the rectangular area is included in a projection range and in a range corresponding to a displayable region on the image formation panel; then a projector forms an image with the image formation panel and projects a projection image so that the projection image is located in the rectangular area. Therefore, even when the projector changes the aspect ratio and the resolution of a projection image, the projector can change them easily and project the projection image without performing the setting for keystone correction again. In particular, because the user need not take time and effort to operate the projector for setting keystone correction even when changing the aspect ratio and the resolution of the projection image, the shape of the projection image is changed automatically even when the aspect ratio or resolution of an image corresponding to the input image data is changed.
Besides, the projector can change the aspect ratio or the resolution of the projection image easily, and thus provides improved usability. Furthermore, in the present invention, because the projector can set the rectangular area with a desired aspect ratio, with the largest possible size, at the center of the projection range, the projector can avoid unnecessary reduction of the size of the projection image projected to the rectangular area, and thus the projector can project projection images with various aspect ratios at an easy-to-see size. Furthermore, in the present invention, the projector can project a projection image corrected by keystone correction to have a rectangular shape, on the entire projection range set to have a rectangular shape with a specific aspect ratio, without performing setting of a rectangular area, in the case that the aspect ratio of the projection image is not changed from the specific aspect ratio. Moreover, in the present invention, the projector deforms an original image for projection so that the original image falls within a range corresponding to the rectangular area on the image formation panel, makes the image formation panel form a panel image including the deformed original image, and projects the projection image formed by the image formation panel. Therefore, the projector can project the projection image in the rectangular area with a desired aspect ratio on the projection plane, and thus can project the projection image with a desired aspect ratio quickly. The present invention brings beneficial effects such as described above. BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWINGS [0029] FIG. 1 is a block diagram illustrating functions in a projector of the present invention; FIG. 2 is a flowchart illustrating procedures of keystone correction performed by a projector of the present invention; FIG.
3 shows exemplary schematic views illustrating correspondence between a projecting range on a screen and a pattern image formed by an image formation panel; FIG. 4 is a flowchart illustrating procedures of image projection using keystone correction performed by a projector of the present invention; FIG. 5 is a flowchart illustrating procedures to change the details of keystone correction performed by a projector of the present invention; FIG. 6 is a flowchart illustrating procedures to change the details of keystone correction performed by a projector of the present invention; FIG. 7 shows schematic views illustrating procedures to change the details of keystone correction in a panel coordinate system and in a screen coordinate system; FIG. 8 shows schematic views illustrating procedures to project an image with an aspect ratio of 16:9 using an image formation panel whose aspect ratio is 4:3; FIG. 9 shows schematic views illustrating procedures to project an image whose resolution is lower than that of a projecting range; FIG. 10 shows schematic views illustrating a method for projecting a projection image with an aspect ratio of 4:3 by using a projector which has an image formation panel with an aspect ratio of 4:3; FIG. 11 shows schematic views illustrating a method for forming an image with an aspect ratio of 16:9 by using a projector which has an image formation panel with an aspect ratio of 4:3. DESCRIPTION OF THE NUMERALS [0040] 1 projector 10 remote controller 11 control processing part 13 pattern image generation part 14 remote receiving part 15 operating part 16 input part 21 image formation panel 23 keystone processing part S screen (projection plane) DETAILED DESCRIPTION [0050] Hereinafter, based on the drawings illustrating embodiments of the present invention, the present invention will be specifically described. FIG. 1 is a block diagram showing the functions inside a projector according to the present invention.
The projector 1 comprises a control processing part 11 composed of a processor that carries out arithmetic operations and a RAM that stores information for the operations. The control processing part 11 is connected to a ROM 12 storing control programs, and performs processing to control the entire operation of the projector 1 according to the control programs stored in the ROM 12. The control processing part 11 is connected to a remote receiving part 14, which receives a signal sent, e.g. by infrared radiation or radio waves, from a remote controller (remote) 10 operated by the user, and to an operating part 15, which is composed of various kinds of switches and accepts various kinds of instructions for processing through the user's operation. The remote receiving part 14 and the operating part 15 are configured to accept various kinds of instructions for processing and to perform processing according to the accepted instructions. Moreover, the projector 1 comprises a planar image formation panel 21 which is composed of a liquid crystal panel, a DMD, or the like. The image formation panel 21 has plural pixels constituted of liquid crystal, minute mirrors, etc., to form a panel image with a predetermined resolution defined by the number of pixels. The projector 1 projects light to the image formation panel 21, using a light source and an optical system which are not shown. Furthermore, the projector 1 comprises a projector lens 3 which projects to the outside the light projected to the image formation panel 21 and reflected by the image formation panel 21. The white arrow in FIG. 1 shows light. The light from the projector lens 3 is projected to a screen (projection plane) S external to the projector 1, and then a projection image is projected on a surface of the screen S. In addition, the projector 1 may be configured to project a projection image by projecting light passing through the image formation panel 21 that has formed a panel image.
The projector 1, furthermore, comprises an input part 16 to input image data from external devices, such as a television tuner or a PC. The input part 16 is connected to a scaling processing part 24 to scale an image created from input image data input to the input part 16, in accordance with the resolution of the image formation panel 21. The scaling processing part 24 is connected to a keystone processing part 23 to perform keystone correction for the image scaled by the scaling processing part 24. The keystone processing part 23 is connected to a panel image generation part 22 to generate a panel image including the image keystone-corrected by the keystone processing part 23. The panel image generation part 22 is connected to the image formation panel 21 to form the panel image generated by the panel image generation part 22. The image formation panel 21 is, moreover, connected to a pattern image generation part 13 to generate a pattern image constituted of an outer frame or points at the four corners, which shows a range of the projection image. The projector 1 is configured to project an image showing a range of the projection image to the screen S, by making the image formation panel 21 form the pattern image generated by the pattern image generation part 13. The input part 16 is connected to the control processing part 11, and information on the input image data is input into the control processing part 11. Moreover, the pattern image generation part 13, the keystone processing part 23, the panel image generation part 22, and the image formation panel 21 are connected to the control processing part 11, and their operations are controlled by the control processing part 11. Next, a description will be given of an image projection method according to the present invention, which is performed by the projector 1 comprising the configuration described above.
The control processing part 11 sets an offset region and a displayable region in a range where the image formation panel 21 forms an image, according to an aspect ratio and a resolution of a projection image to be projected based on the image data input to the input part 16. The offset region is a region where an original image for the projection image is not formed, and the displayable region is a region where an original image for the projection image can be formed. The whole area of the image formation panel 21 is configured to be the displayable region when the image formation panel 21 is configured to have an aspect ratio of 4:3, the aspect ratio of the projection image is 4:3, and the resolution of the projection image is higher than the resolution of the image formation panel 21. On the occasion when the projector 1 first projects the projection image, the image formation panel 21 forms a panel image that is the original image for projection displayed on the whole area of the displayable region. It is often the case that the screen S is previously configured to have a predetermined aspect ratio, such as 4:3 or 16:9, in accordance with the aspect ratio of the projection image. When a projection image with the aspect ratio of 4:3 is projected to the screen S with the aspect ratio of 4:3, the projection image can be projected on the whole area of the screen S. However, the projection image projected by the projector 1 to the screen S is projected in a shape distorted from a rectangle, because the projector 1 is rarely put just in front of the screen S. Therefore, it is required, first of all, to correct the keystone distortion of the projection image by keystone correction. FIG. 2 is a flowchart showing procedures for setting the keystone correction performed by the projector 1.
The control processing part 11 waits for acceptance of instructions to set a projection range, which shows a range where the projection image can be projected on the screen S (S11); the instructions are given by a predetermined user operation on the operation part 15, or by a predetermined user operation on the remote 10 that makes the remote receiving part 14 receive a predetermined signal sent from the remote 10. When the control processing part 11 does not accept the instructions to set the projection range (S11: NO), the control processing part 11 keeps on waiting for acceptance of instructions. When the control processing part 11 accepts the instructions to set a projection range (S11: YES), the control processing part 11 makes the pattern image generation part 13 generate a pattern image showing a range of the projection image with a specific aspect ratio, such as 4:3, and makes the image formation panel 21 form the generated pattern image. The image showing a range of the projection image is projected to the screen S, by projecting light reflected by the image formation panel 21 from the projector lens 3 (S12). At that time, the projector 1 projects an image consisting of four luminescent spots corresponding to the four corners of the projection range to the screen S. In addition, the projector may be configured to perform processing to project an image consisting of luminescent lines corresponding to the outer frame of the projection range. FIG. 3 shows exemplary schematic views illustrating correspondence between a projecting range on the screen S and a pattern image formed by the image formation panel 21. FIG. 3 (a) shows an image projected on the screen S, in the case that the pattern image is formed on the whole area of the image formation panel 21. FIG. 3 (b) shows the image formation panel 21 forming the pattern image on the whole area of the image formation panel 21.
Filled circles in the figures show luminescent spots corresponding to the four corners of the projection range. As shown in FIG. 3 (a), generally, a range where the whole area of the image formation panel 21 is projected on the screen S does not correspond to a range where an image can be projected on the screen S, and the shape of the range is distorted from a rectangle. When visually recognizing such an image as shown in FIG. 3 (a), the user inputs instructions to deform the projection range by displacing the respective luminescent spots corresponding to the four corners of the projection range, through operation of the operation part 15 or the remote 10. After the step S12 is completed, the control processing part 11 waits for acceptance of instructions to deform the projection range (S13). When the control processing part 11 accepts the instructions to deform the projection range (S13: YES), the control processing part 11 displaces positions of the luminescent spots projected on the screen S, and thereby deforms the projection range, by displacing each position of the luminescent spots included in the pattern image generated by the pattern image generation part 13 (S14). When the control processing part 11 does not accept instructions to deform the projection range at the step S13 (S13: NO), or when the step S14 is completed, the control processing part 11 waits for acceptance of instructions to establish a projection range, to be given by the user through operation of the operation part 15 or the remote 10 (S15). When the control processing part 11 does not accept instructions to establish the projection range (S15: NO), the control processing part 11 returns the procedure to the step S13. When the control processing part 11 accepts instructions to establish the projection range (S15: YES), the control processing part 11 establishes a projection range corresponding to the pattern image generated by the pattern image generation part 13 (S16). FIG.
3 (c) shows a projection range established on the screen S. The user operates the projector 1 to give the projection range a rectangular shape, as shown in the figure, while visually recognizing the projected image. When the aspect ratio of the projection range is the same as the aspect ratio of the screen S, the whole area of the screen S can be utilized as the projection range, by displacing the luminescent spots corresponding to the four corners of the projection range to the four corners of the screen S, as shown in the figure. FIG. 3 (d) shows the image formation panel 21 forming a pattern image corresponding to the projection range established on the screen S. The luminescent spots on the image formation panel 21 corresponding to the four corners of the projection range move, in accordance with the displacement of the luminescent spots corresponding to the four corners of the projection range on the screen S, to form the projection range into a rectangular shape. The range framed by the four luminescent spots on the image formation panel 21 is a range corresponding to the projection range on the image formation panel 21, and an image formed in this range is projected to the projection range on the screen S. The control processing part 11 then acquires coordinates of a range corresponding to the established projection range, on a panel coordinate system that defines a position of a spot on the image formation panel 21 (S17). The control processing part 11 previously defines a panel coordinate system. For example, when the image formation panel 21 is configured with 1024×768 pixels, the control processing part 11 defines the coordinate of the pixel at the upper left corner of the image formation panel 21 as (0, 0), and the coordinate of each pixel on the image formation panel 21 as (x, y) where 0≦x≦1024 and 0≦y≦768.
The control processing part 11 acquires coordinates of the range corresponding to the established projection range, by acquiring coordinates of the pixels at the positions of the luminescent spots on the image formation panel 21 corresponding to the four corners of the projection range, on the defined panel coordinate system. The control processing part 11 then acquires coordinates of the established projection range on a screen coordinate system (projection plane coordinate system) that defines the position of a spot on the screen S (S18). At this time, to define the screen coordinate system, the control processing part 11 sets the aspect ratio and the resolution of the projection range to the same values as the aspect ratio and the resolution of the displayable region of the image formation panel 21 at the time the projection range is set. For example, when the whole area of an image formation panel with 1024×768 pixels is the displayable region, the control processing part 11 defines the screen coordinate system so that the respective coordinates of the four corners of the projection range are (0, 0), (0, 768), (1024, 0) and (1024, 768). In addition, any given coordinate system may be set as the screen coordinate system. The control processing part 11 can acquire the coordinates of the projection range on the screen coordinate system by acquiring the coordinates of the four corners of the projection range as just described. The control processing part 11 calculates conversion parameters for a predetermined conversion formula for mutually converting a position on the panel coordinate system and a position on the screen coordinate system, by projecting a position on the image formation panel 21 to a surface on the screen S (S19). This conversion is a coordinate conversion based on projective transformation between the panel coordinate system and the screen coordinate system.
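The text does not spell out the conversion formula itself. As a sketch of how the conversion parameters of steps S17-S19 could be computed, the following estimates a 3×3 homography from the four corner correspondences by the standard direct linear transform; the function names and the use of NumPy are illustrative assumptions, not part of the original disclosure.

```python
import numpy as np

def homography_from_corners(panel_pts, screen_pts):
    """Solve for the 3x3 projective transform H with screen ~ H @ panel,
    given four corner correspondences (direct linear transform)."""
    A = []
    for (x, y), (u, v) in zip(panel_pts, screen_pts):
        # Each correspondence contributes two linear constraints on H.
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    # The homography is the null vector of A (up to scale).
    _, _, vt = np.linalg.svd(np.array(A, dtype=float))
    return vt[-1].reshape(3, 3)

def apply_h(H, pt):
    """Map a point through H with the projective division."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return (x / w, y / w)
```

The same routine, with the point sets swapped, yields the parameters for the reverse conversion from the screen coordinate system to the panel coordinate system.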
Known formulas include a common conversion formula to convert coordinates from a panel coordinate system to a screen coordinate system, and a common conversion formula to convert coordinates from a screen coordinate system to a panel coordinate system, using the projective transformation. The control program stored in ROM includes these conversion formulas. The control processing part 11 calculates conversion parameters for the conversion formulas, based on the correspondence between the coordinates on the panel coordinate system corresponding to the four corners of the projection range and the coordinates on the screen coordinate system. The control processing part 11 stores the calculated conversion parameters and completes the process for the projection range setup. After the completion of the projection range setup, the projector 1 performs keystone correction for a projection image to project the projection image within the set up projection range, on the condition that the aspect ratio and the resolution of the projection image are not changed. FIG. 4 is a flowchart showing the procedures by which the projector 1 performs keystone correction and projects an image. When image data is input from an external source to the input part 16 (S21), the scaling processing part 24 scales an image based on the input image data, according to the resolution of the image formation panel 21 (S22). For example, when an image with 1280×720 resolution is input, the scaling processing part 24 reduces the image to an image with 1024×768 resolution, corresponding to the image formation panel 21. The control processing part 11 then makes the keystone processing part 23 deform the image scaled by the scaling processing part 24, so that the image falls into the range on the image formation panel 21 corresponding to the projection range (S23). The keystone processing part 23, as shown in FIG. 10 (b), generates a sub-image including the deformed image (S24).
The panel image generation part 22 generates a panel image including a portion corresponding to the offset region of the image formation panel 21 and the sub-image generated by the keystone processing part 23 (S25). In the case that the aspect ratio of the projection image is the same as the aspect ratio of the image formation panel 21, the image formation panel 21 does not have an offset region, the whole area of the image formation panel 21 is the displayable region, and the panel image matches the sub-image. The control processing part 11 makes the image formation panel 21 form the panel image generated by the panel image generation part 22 (S26), and projects the keystone-corrected projection image to the screen S, by projecting from the projector lens 3 the light reflected by the image formation panel 21 which has formed the panel image (S27). Thus, the projector 1 completes the process, and repeats the process every time image data is input into the input part 16. The projector 1, according to the present invention, performs processing to project a projection image with changed details of keystone correction, when it is required to project the projection image with a different aspect ratio or a different resolution from the aspect ratio and the resolution of the set up projection range. This applies to the case that the input image data represents an image with a different aspect ratio from the aspect ratio of the set up projection range, the case that the input image data represents an image with a lower resolution than the resolution of the set up projection range, and the case that the projector 1 accepts instructions from the user to change the aspect ratio or the resolution of the projection image. FIG. 5 and FIG. 6 are flowcharts showing the procedures to change the details of the keystone correction performed by the projector 1.
The control processing part 11 waits for input, into the input part 16, of image data that makes the aspect ratio or the resolution of the projection image different from that of the set up projection range (S301). When there is no input of image data with a different aspect ratio or a different resolution (S301: NO), the control processing part 11 waits for acceptance of instructions to change the aspect ratio or the resolution of the projection image, given by a predetermined operation of the user on the operation part 15 or the remote 10 (S302). When there is no acceptance of the instructions (S302: NO), the control processing part 11 returns the process to the step S301. When there is an input of image data with a different aspect ratio or a different resolution at the step S301 (S301: YES), or when there is acceptance of instructions to change the aspect ratio or the resolution of the projection image (S302: YES), the control processing part 11 makes the image formation panel 21 set an offset region and a displayable region, corresponding to the changed aspect ratio or resolution of the projection image (S303). The control processing part 11 then acquires coordinates for the set displayable region in the panel coordinate system (S304). FIG. 7 is a schematic view showing the processing to change the details of keystone correction in a panel coordinate system and a screen coordinate system. The figures show an example where the aspect ratio of a projection image is changed from 4:3 to 16:9. In FIG. 7 (a), broken lines show the range of the image formation panel 21 on the panel coordinate system, and solid lines show the range of the displayable region. As the aspect ratio is changed from 4:3 to 16:9, the offset regions are set at the upper and lower portions of the image formation panel 21, and a displayable region with the aspect ratio of 16:9, which is smaller than the whole area of the image formation panel 21, is set.
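As a minimal sketch of how the offset/displayable split of step S303 could be computed (the function name and the integer conventions are assumptions, not taken from the text):

```python
def displayable_region(panel_w, panel_h, aspect_w, aspect_h):
    """Largest centered region of the panel with the requested aspect
    ratio; the remainder becomes the offset regions (step S303)."""
    if panel_w * aspect_h >= panel_h * aspect_w:
        # Panel is at least as wide as the target aspect: full height,
        # offset columns at the left and right.
        w, h = panel_h * aspect_w // aspect_h, panel_h
    else:
        # Panel is taller than the target aspect: full width,
        # offset rows at the top and bottom, as in the 4:3 -> 16:9 example.
        w, h = panel_w, panel_w * aspect_h // aspect_w
    return w, h, (panel_w - w) // 2, (panel_h - h) // 2
```

For a 1024×768 panel and a 16:9 projection image this gives a 1024×576 displayable region with 96-pixel offset rows at the top and bottom, matching the letterboxing described in the example.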
The control processing part 11 acquires coordinates of the displayable region by acquiring, for example, coordinates of the four corners of the displayable region. The displayable region set in this process corresponds to the shape of the image before keystone correction. The control processing part 11 then converts the coordinates of the displayable region in the panel coordinate system to coordinates in the screen coordinate system, with the use of the conversion formula and the conversion parameters for converting coordinates from the panel coordinate system to the screen coordinate system (S305). In FIG. 7 (b), broken lines show the projection range in the screen coordinate system, and solid lines show the range corresponding to the displayable region. Because of the offset regions, the range corresponding to the displayable region is smaller in the up-and-down direction than the projection range. The control processing part 11 then acquires the coordinate of the center point of the projection range on the screen coordinate system (S306). At this time, the control processing part 11 averages the coordinates of the four corners of the projection range in the screen coordinate system to obtain the coordinate of the center point. The control processing part 11 then generates a first straight line passing through the center point with an inclining angle whose value is the same as the changed aspect ratio of the projection image, and a second straight line passing through the center point with an inclining angle whose value is the negative number obtained by multiplying the changed aspect ratio of the projection image by -1 (S307). In FIG. 7 (c), the first straight line and the second straight line are shown. When the changed aspect ratio of the projection image is 16:9, the inclining angle of the first straight line is 9/16 and the inclining angle of the second straight line is -9/16.
The control processing part 11 then calculates coordinates of the intersecting points where the first straight line and the second straight line intersect with the edges of the range corresponding to the displayable region on the screen coordinate system (S308). The control processing part 11 then calculates distances on the screen coordinate system from the center point to each intersecting point whose coordinates have been calculated (S309), and selects the one intersecting point whose distance is the shortest of all the calculated distances (S310). The control processing part 11 then calculates the distances from the center point to the edges of the projection range along the first straight line and the second straight line, in the screen coordinate system (S311), and judges whether the distance from the center point to the selected intersecting point is equal to or shorter than the distance from the center point to the edge of the projection range (S312). When the distance from the center point to the selected intersecting point is equal to or shorter than the distance from the center point to the edge of the projection range (S312: YES), the control processing part 11 acquires four points on the first straight line and the second straight line whose distances to the center point are equal to the distance from the center point to the selected intersecting point, on the screen coordinate system (S313). When the distance from the center point to the selected intersecting point is longer than the distance from the center point to the edge of the projection range (S312: NO), the control processing part 11 acquires the four points, on the screen coordinate system, where the edges of the projection range intersect with the first straight line and the second straight line (S314). After the step S313 or the step S314 is completed, the control processing part 11 specifies a rectangular area with the acquired four points as its four corners in the screen coordinate system (S315).
In FIG. 7 (d), dashed lines show a range of the rectangular area. The aspect ratio of the rectangular area is equal to the changed aspect ratio of the projection image, because the rectangular area has four corners consisting of four points, whose distances from the center point are equal to each other, on the first straight line with the changed aspect ratio of the projection image as the inclining angle and on the second straight line with the negative value of the changed aspect ratio of the projection image as the inclining angle. Furthermore, the rectangular area falls in a range corresponding to the displayable region and the projection range, because the rectangular area includes at its four corners the points whose distances from the center point are the shortest of all the intersecting points where the first straight line and the second straight line intersect with the edges of a range corresponding to the displayable region or with the edges of the projection range. Therefore, it is possible to project the projection image with the changed aspect ratio, by projecting an image within the specified rectangular area on the screen S. The control processing part 11, then, converts coordinates of the rectangular area on the screen coordinate system to coordinates on the panel coordinate system, with the use of the conversion formula and the conversion parameter for converting coordinates from the screen coordinate system to the panel coordinate system (S316). FIG. 7 (e) shows ranges corresponding to the displayable region and the rectangular area, in the panel coordinate system. Dashed lines show a range corresponding to the rectangular area on the panel coordinate system. It is possible to form an original image for the projection image within this range, because the range corresponding to the rectangular area falls in the displayable region even on the panel coordinate system. 
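The construction of steps S306-S315 can be sketched numerically. The version below assumes both ranges are supplied as quadrilaterals in screen coordinates, and takes the global minimum distance over both sets of line/edge intersections, which is equivalent to the two-stage comparison of steps S310-S312; all names are hypothetical and NumPy is used only for convenience.

```python
import numpy as np

def seg_line_intersections(quad, c, m):
    """Points where the line through c with slope m crosses the edges of
    the quadrilateral quad (steps S308 and S311)."""
    cx, cy = c
    pts = []
    for i in range(4):
        p = np.asarray(quad[i], dtype=float)
        d = np.asarray(quad[(i + 1) % 4], dtype=float) - p
        denom = d[1] - m * d[0]
        if abs(denom) < 1e-12:
            continue  # edge parallel to the line
        t = (m * (p[0] - cx) - (p[1] - cy)) / denom
        if 0.0 <= t <= 1.0:
            pts.append(tuple(p + t * d))
    return pts

def centered_rect(proj_range, display_range, aspect_w, aspect_h):
    """Steps S306-S315: the largest rectangle with the changed aspect
    ratio, centered in the projection range, fitting inside both the
    projection range and the range corresponding to the displayable
    region."""
    c = np.mean(np.asarray(proj_range, dtype=float), axis=0)  # S306
    slopes = (aspect_h / aspect_w, -aspect_h / aspect_w)      # S307
    cands = []
    for m in slopes:
        cands += seg_line_intersections(display_range, c, m)  # S308
        cands += seg_line_intersections(proj_range, c, m)     # S311
    r = min(np.hypot(p[0] - c[0], p[1] - c[1]) for p in cands)
    diag = np.hypot(aspect_w, aspect_h)
    dx, dy = r * aspect_w / diag, r * aspect_h / diag
    # S313-S315: four symmetric corners on the two straight lines.
    return [(c[0] - dx, c[1] - dy), (c[0] + dx, c[1] - dy),
            (c[0] + dx, c[1] + dy), (c[0] - dx, c[1] + dy)]
```

With a 1024×768 projection range and a 16:9 target, this yields a 1024×576 centered rectangle, consistent with the example of FIG. 7.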
The control processing part 11 then acquires the position of the range corresponding to the rectangular area within the displayable region after subtracting a portion corresponding to the offset regions of the image formation panel 21 from the coordinates in the panel coordinate system, and then calculates a deformation parameter required for a predetermined deformation formula that deforms the shape of an image corresponding to the whole displayable region into the shape of the range corresponding to the rectangular area for keystone correction (S317). This deformation formula is similar to the conversion formula that converts a position on the screen coordinate system to a position on the panel coordinate system, but differs from the conversion formula in the value of the parameter. It is possible to deform an image so as to fit the original image for the projection image within the range corresponding to the rectangular area and to perform keystone correction, by converting points on an image before keystone correction to points within the region corresponding to the rectangular area, utilizing the deformation formula and the deformation parameter. The control processing part 11 stores the calculated deformation parameter, and completes the processing to change the details of keystone correction. After changing the details of keystone correction, the projector 1 projects an image according to the changed details of keystone correction, through the same process shown by the flowchart in FIG. 4. FIG. 8 shows schematic views illustrating procedures to project an image with an aspect ratio of 16:9 using an image formation panel 21 whose aspect ratio is 4:3. FIG. 9 shows schematic views illustrating procedures to project an image whose resolution is lower than that of the projection range. Hereinafter, the process by which the projector 1 projects an image after changing the details of keystone correction is explained with the use of FIG. 8, FIG. 9 and FIG. 4. FIG.
8 (a) shows the condition of the image formation panel 21 when the aspect ratio of the projection image is 16:9. Offset regions are set at the upper and lower portions of the image formation panel 21, and a displayable region is set with the aspect ratio of 16:9. For example, when the resolution of the image formation panel 21 is 1024×768, the resolution of the displayable region is 1024×576, obtained by subtracting the offset regions from the image formation panel 21. FIG. 9 (a) shows the condition of the image formation panel 21 when the resolution of the projection image is reduced. Offset regions are set at the peripheral portions of the image formation panel 21, and the displayable region is set with the number of pixels corresponding to the reduced resolution. For example, when the resolution of the projection image is 768×576, the displayable region is set with 768×576 pixels. In FIG. 8 (a) and FIG. 9 (a), broken lines show the range corresponding to the rectangular area, which is acquired by the process shown by the flowcharts in FIG. 5 and FIG. 6. Image data is input from an external source to the input part 16 (S21). The scaling processing part 24 scales an image based on the input image data, according to the resolution of the displayable region of the image formation panel 21 (S22). FIG. 8 (b) and FIG. 9 (b) show the scaled image, according to the resolution of the displayable region. In FIG. 8 (b), the aspect ratio of the image is 16:9. In FIG. 9 (b), the aspect ratio of the image is 4:3, which is reduced in accordance with the lower resolution. The control processing part 11 then makes the keystone processing part 23 deform the scaled image with the use of the deformation formula and the deformation parameter (S23). The keystone processing part 23 generates a sub-image including the deformed image (S24). FIG. 8 (c) and FIG. 9 (c) show exemplary sub-images.
The image regions shown in the figures correspond to the image deformed by keystone correction, and finally correspond to the projection region. The panel image generation part 22 generates a panel image including a portion corresponding to the offset region and the sub-image (S25). FIG. 8 (d) and FIG. 9 (d) show exemplary panel images. The panel images include portions corresponding to the offset regions and the sub-image, and locate the image region at the range corresponding to the rectangular area. The control processing part 11 then makes the image formation panel 21 form the panel image generated by the panel image generation part 22 (S26), and projects the projection image, which is keystone-corrected in accordance with the changed details, to the screen S, by projecting from the projector lens 3 the light reflected by the image formation panel 21 (S27). FIG. 8 (e) and FIG. 9 (e) show exemplary projection images. In the figures, broken lines show the projection ranges. In FIG. 8 (e), a projection image is projected with the aspect ratio of 16:9, within the projection range. In FIG. 9 (e), a scaled projection image is projected with reduced resolution, within the projection range. As described above, in this invention, the projector 1 sets the projection range with a given aspect ratio on the screen S and acquires the range corresponding to the projection range on the image formation panel 21, when the projector 1 projects the projection image to the screen S. Furthermore, the projector 1 performs keystone correction by forming an original image for projection within the range acquired on the image formation panel 21, and projects the rectangular projection image to the screen S.
When the aspect ratio and the resolution of the projection image are not changed, the projector 1 deforms the image to fall within the range corresponding to the projection range on the image formation panel 21, makes the image formation panel 21 form the panel image including the deformed image, and projects the projection image. By projecting the deformed image in this way, it is possible to project the projection image, adjusted to a rectangular shape by keystone correction, in the whole area of the projection range with the given aspect ratio. Moreover, in the present invention, the projector 1 obtains the conversion parameters required for the conversion formulas for mutual conversion between the panel coordinate system and the screen coordinate system, based on the relationship between the coordinates of the projection range in the screen coordinate system and the coordinates of the range corresponding to the projection range in the panel coordinate system. When the aspect ratio and the resolution of the projection image are changed, the projector 1 converts the coordinates of the displayable region in the panel coordinate system, set according to the new aspect ratio and resolution, to coordinates in the screen coordinate system, and generates the rectangular area with the desired aspect ratio within the portion of the range corresponding to the displayable region that is included in the projection range in the screen coordinate system. Furthermore, the projector 1 converts the coordinates of the rectangular area in the screen coordinate system to the coordinates of the range corresponding to the rectangular area in the panel coordinate system, and makes the image formation panel 21 form the image within the range corresponding to the rectangular area. By projecting the image formed within the range corresponding to the rectangular area on the image formation panel 21, it is possible to project the projection image within the rectangular area on the screen S.
The projector 1 deforms the image so that the original image for projection falls within the range corresponding to the rectangular area on the image formation panel 21, makes the image formation panel 21 form the panel image including the deformed image, and projects the projection image. Thus, it is possible to project the projection image within the rectangular area on the screen S. Therefore, it is possible to project the projection image with the changed aspect ratio and with a size corresponding to the changed resolution. Thus, even when the aspect ratio or the resolution of the projection image is changed after setting keystone correction, the projector 1 can promptly project the projection image with the desired aspect ratio and with a size according to the desired resolution, without performing the setting for keystone correction again. In particular, the shape of the projection image is changed automatically, without the user taking the time and effort to operate the projector to perform the setting of keystone correction, even when the aspect ratio or the resolution of the image corresponding to the input image data is changed. Besides, the aspect ratio and the resolution of the projection image can be changed easily, and thus improved usability of the projector can be provided. Moreover, in the present invention, when setting the rectangular area within the region corresponding to the displayable region and the portion included in the projection range in the screen coordinate system, the projector 1 sets the rectangular area with the desired aspect ratio so that one of the four corner points, centered around the center point of the projection range, contacts an edge of the region corresponding to the displayable region or an edge of the projection range. Therefore, the rectangular area with the desired aspect ratio can be set, with the largest possible size, at the center of the projection range.
Thus, the projector 1 can project projection images with various aspect ratios at a size that remains easily visible to the user, without reducing the size of the projection image to be unduly smaller than required. In addition, while the projector according to the present invention is configured in the embodiments to set the projection range on the screen S in response to the user's operation for setting keystone correction, the present invention is not limited thereto; the projector may be configured to pick up a pattern image projected on the screen S, and to set the projection range automatically so that the picked-up pattern image becomes rectangular. Even in this case, the projector according to the present invention can project the projection image with a desired aspect ratio and a desired resolution within the set projection range, without performing the setting for keystone correction again every time the aspect ratio or the resolution of the projection image is changed. Moreover, in the embodiments, it is illustrated that the projector 1 projects the projection image to the screen S, but even if the projection plane according to the present invention has a different configuration, such as a house wall, the projector 1 according to the present invention is able to project the keystone-corrected projection image by performing the same processing as described above. Moreover, in the embodiments, illustration is given only for the case that the displayable region on the image formation panel 21 is reduced in response to the change of the aspect ratio or the resolution of the projection image, but even if the displayable region on the image formation panel 21 is enlarged in response to such a change, the projector 1 according to the present invention is able to perform similar processing as described above.
Even in this case, the projector according to the present invention can project a projection image with a desired aspect ratio and a desired resolution within the set projection range. Patent applications in class: For projection axis inclined to screen.
8. CMB PHYSICS AND MAGNETIC FIELDS
The vacuum fluctuations of the gauge fields present during the inflationary stage of expansion may be amplified if conformal invariance is broken. It is then plausible that magnetic inhomogeneities are amplified not only at the scale of protogalactic collapse but also at larger length-scales, so that magnetic fields can be generated over all compatible physical scales. Large-scale magnetic fields may then have various interesting implications for the physics of the CMB and of its anisotropies. The following possible effects have been discussed through the years:
• distortion of the Planckian spectrum;
• shift of the polarization plane (provided the CMB is linearly polarized);
• shift in the position of the first Doppler peak;
• generic increase of the amount of the (primary) anisotropy.
On top of these effects, magnetic fields can also modify the evolution of the tensor fluctuations of the geometry for typical length scales much smaller than the ones probed by CMB anisotropy experiments. The possible distortions of the Planckian spectrum of the CMB are usually discussed in terms of a chemical potential which is bounded, by experimental data, to be |µ| < 9 × 10^-5. Magnetic field dissipation at high red-shift implies the presence of a chemical potential. Hence bounds on the distortion of the Planckian spectrum of the CMB can be turned into bounds on the magnetic field strength at various scales. In particular [287], the bounds obtained are such that B < 3 × 10^-8 G for comoving coherence lengths between 0.4 kpc and 500 kpc. Large-scale magnetic fields can also affect the position of the Doppler peak. In [288] this analysis has been performed in a non-relativistic approximation where the scalar perturbations of the geometry obey linearized Newtonian equations of motion. It has been found that, in this approximation, the effect of the presence of the magnetic fields is an effective renormalization of the speed of sound of the baryons.
Reply to comment Submitted by Anonymous on April 13, 2011. Dear Bill Casselman, I worked on trisecting a square. Perigal was the first one to find a minimal 6-piece solution, probably around 1835-1840, but he only published it later, in 1891. Thus Philip Kelland was probably the first one to publish this technique for a gnomon, in 1855. I've uploaded to Wikisource his full publication of "Geometric dissections and transpositions". Here it is: Moreover, at the end of this paper: "L. J. Rogers (1897). Biography of Henry Perigal in appendix of On certain Regular Polygons in Modular Network. Proceedings London Mathematical Society. Volume s1-29, pp. 732-735.", I've found an interesting biography of Perigal (look at the last four pages): Best regards, Christian Blanvillain
Stanford Undergraduate Research Institute in Mathematics
June 23-August 29, 2014
The Stanford Undergraduate Research Institute in Mathematics is a ten-week program that provides Stanford undergraduates the opportunity to work on mathematical problems in an extra-curricular context. Most students will work on interesting mathematical problems in a collaborative environment. A number will work one-on-one with a faculty member. Summer funding will be available for some students, thanks to VPUE; others can obtain course credit in the fall quarter for participating. You can find the SURIM 2012 website here and the SURIM 2013 website here.
Individual Research with a Faculty Member
Students working individually with a faculty member will decide on a project and the dates in consultation with their faculty mentor. Note that it is the duty of the student to find a faculty member interested in and willing to work with them. A short project proposal will be requested with the application.
Collaborative Research
The remaining students will take part, full-time, in the ten-week program that will run from Monday, June 23 through Friday, August 29.
Goals of the program
At SURIM, students will be exposed to questions that are of interest in current mathematics, as well as the research and exploration aspects that accompany such questions. With their mentor's assistance, students will study the prerequisite materials to understand their program's topic and will then participate in exploration of their questions about the subject. The emphasis will be on self-discovery of examples and properties. In addition to knowledge of their subject and an understanding of what it means to explore a research question, participants will practice the ability to present mathematics in a formal seminar setting, use software such as LaTeX to typeset mathematics, use other programming languages to study mathematical questions, and interact with peers, graduate students and faculty.
All Stanford students who will be enrolled full/part-time during the Fall of 2014 are eligible to apply.
Format for the ten-week program
Students will be divided into groups depending on their mathematical interests and background. Each group will work closely with graduate students.
A typical week
There will be a couple of formal meetings with mentors each week. At the start, the mentors will lay out the beginning of the project, and the groups will decide how best to begin. Each group will prepare presentations to the entire institute each week, giving a status report to those working on other problems. (Practice with getting across ideas is essential to doing mathematics!) Much of the week will be spent working individually and in groups, and in informal discussions with mentors. There will be roughly two additional events per week. Some will be introductions to research tools (from writing with LaTeX to the use of various software packages). Others will be lectures from researchers in academia and industry on what research is actually about --- how it is done, how to do it, and what it is like. Please check back for the schedule. The SURIM group will also have access to various classrooms during the summer, which will be listed later in the year.
Stanford-Berkeley Joint Conference
We expect to be holding a joint conference with the Berkeley REU program. Details to come later.
We are still working on choosing the projects for the summer. Below are two of the projects students will be working on in 2014.
Random Walks On Finite Groups - Card Shuffling (mentor: Evita Nestoridi)
One of the main questions concerning a random walk on a finite group is finding the order of the mixing time of the walk. In particular, in card shuffling we are really interested in finding out how many shuffles are required to get the deck "perfectly" shuffled. In this project, we are going to learn techniques for bounding the mixing time and play with a lot of examples.
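As a taste of the kind of computation this project involves, the Bayer-Diaconis formula gives the exact total variation distance to uniform after k Gilbert-Shannon-Reeds riffle shuffles in terms of Eulerian numbers. This particular formula is one possible starting point, not something prescribed by the announcement:

```python
from math import comb, factorial

def eulerian(n, r):
    """Number of permutations of n cards with exactly r rising
    sequences (the Eulerian number <n, r-1>)."""
    return sum((-1) ** j * comb(n + 1, j) * (r - j) ** n
               for j in range(r + 1))

def riffle_tv(n, k):
    """Total variation distance to the uniform distribution after k
    GSR riffle shuffles of an n-card deck (Bayer-Diaconis)."""
    m = 2 ** k
    return 0.5 * sum(
        eulerian(n, r) * abs(comb(m + n - r, n) / m ** n - 1 / factorial(n))
        for r in range(1, n + 1))
```

For a standard 52-card deck this reproduces the famous "seven shuffles suffice" picture: the distance stays near 1 through five shuffles and then drops sharply.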
We will try to actually solve particular problems and examples, either from card shuffling or from a group of matrices over a finite field (or perhaps another finite group that we might find interesting), and ideally come up with new techniques for bounding the mixing time.

What happens when we iterate polynomials on the projective line? (mentor: Niccolo' Ronchetti) Suppose you take a polynomial f(z), say with rational coefficients. You can try to iterate the polynomial: z -> f(z) -> f(f(z)) -> ? What happens to the orbit of some z? Is it periodic, or maybe dense? Does it contain infinitely many primes? More generally, what arithmetic properties does the set of periodic points have? There are plenty of interesting questions that one can ask and try to figure out. We will try to explore them and learn plenty of exciting math in the process (for example we'll learn about Galois groups, Diophantine problems, local fields?)

What is the shape of molecule space? Can we develop topological OCR? (mentor: Ryan Lewis) Topological data analysis attempts to extract a topological understanding of scientific data from finite sets of samples. Usually data analysis assumes that the input is a point cloud and comes from some underlying geometric space. Topological data analysis focuses on the recovery of the lost topology of this underlying space. For this project we are looking for students to do topological data analysis. We will use a new computational topology library to analyze data sets. For those less interested in using and writing software, more mathematical problems can be solved.

Please submit the following information, by email, to Nancy Rodriguez at nrodriguez@math.stanford.edu, by March 1, 2014 with subject line ''SURIM application''. Please include in the email the following information:
• (a) Name, Stanford ID number, and year.
• (b) If you have a faculty member who has agreed to work one-on-one with you, please let us know. (This is not necessary to apply.)
If this is the case, please include a short proposal, developed in consultation with your intended mentor.
• (c) Name of one or two professors who are familiar with you (ideally in mathematics).
• (d) Mathematical background and interests.
• (e) For those not working individually with a faculty member, which of the possible projects appeal to you? (we are still working on project 3…if geometry and topology interest you, state that on your application)
• (f) Do you need funding in order to take part? Would you like course credit? (Note: it is not possible to get both funding and credit.)
• (g) Curriculum vitae and unofficial Stanford transcript.

Notification Deadline: Students will be notified of their acceptance by March 15, 2014. Directors: Gunnar Carlsson and Nancy Rodriguez. Assistant Director: Ravi Vakil. If you have any questions, or are even just curious about the program, please contact Dr. Nancy Rodriguez (nrodriguez@math.stanford.edu). She will also be available to chat during iDeclare week.
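The polynomial-iteration project described above can be previewed with a toy computation. The polynomial f(z) = z^2 - 1 and the helper below are my own illustrative choices, not part of the program:

```python
# Hypothetical example for the iteration project: track the orbit of a
# starting value z under repeated application of a polynomial f.
def orbit(f, z, n):
    """Return [z, f(z), f(f(z)), ...] with n iterations."""
    points = [z]
    for _ in range(n):
        z = f(z)
        points.append(z)
    return points

f = lambda z: z * z - 1      # a sample polynomial with rational coefficients
print(orbit(f, 0, 5))        # [0, -1, 0, -1, 0, -1] : a periodic orbit
print(orbit(f, 2, 4))        # [2, 3, 8, 63, 3968]   : an unbounded orbit
```

Even this tiny experiment shows the dichotomy the project asks about: some starting points are periodic, others escape to infinity.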
[Numpy-discussion] np.finfo().maxexp confusing Matthew Brett matthew.brett@gmail.... Tue Oct 11 13:39:48 CDT 2011 I realize it is probably too late to do anything about this, but: In [72]: info = np.finfo(np.float32) In [73]: info.minexp Out[73]: -126 In [74]: info.maxexp Out[74]: 128 minexp is correct, in that 2**(-126) is the minimum value for the exponent part of float32. But maxexp is not correct, because 2**(127) is the maximum value for the float32 exponent part: There is the same maxexp+1 feature for the other float types. Is this a sufficiently quiet corner of the API that it might be changed in the future with suitable warnings? More information about the NumPy-Discussion mailing list
Life of Pi New York Times Bestseller * Los Angeles Times Bestseller * Washington Post Bestseller * San Francisco Chronicle Bestseller * Chicago Tribune Bestseller "A story to make you believe in the soul-sustaining power of fiction."—Los Angeles Times Book Review After the sinking of a cargo ship, a solitary lifeboat remains bobbing on the wild blue Pacific. The only survivors from the wreck are a sixteen-year-old boy named Pi, a hyena, a wounded zebra, an orangutan—and a 450-pound royal bengal tiger. The scene is set for one of the most extraordinary and beloved works of fiction in recent years. Universally acclaimed upon publication, Life of Pi is a modern classic. Yann Martel's imaginative and unforgettable Life of Pi is a magical reading experience, an endless blue expanse of storytelling about adventure, survival, and ultimately, faith. The precocious son of a zookeeper, 16-year-old Pi Patel is raised in Pondicherry, India, where he tries on various faiths for size, attracting "religions the way a dog attracts fleas." Planning a move to Canada, his father packs up the family and their menagerie and they hitch a ride on an enormous freighter. After a harrowing shipwreck, Pi finds himself adrift in the Pacific Ocean, trapped on a 26-foot lifeboat with a wounded zebra, a spotted hyena, a seasick orangutan, and a 450-pound Bengal tiger named Richard Parker ("His head was the size and color of the lifebuoy, with teeth"). It sounds like a colorful setup, but these wild beasts don't burst into song as if co-starring in an anthropomorphized Disney feature. After much gore and infighting, Pi and Richard Parker remain the boat's sole passengers, drifting for 227 days through shark-infested waters while fighting hunger, the elements, and an overactive imagination. 
In rich, hallucinatory passages, Pi recounts the harrowing journey as the days blur together, elegantly cataloging the endless passage of time and his struggles to survive: "It is pointless to say that this or that night was the worst of my life. I have so many bad nights to choose from that I've made none the An award winner in Canada (and winner of the 2002 Man Booker Prize), Life of Pi, Yann Martel's second novel, should prove to be a breakout book in the U.S. At one point in his journey, Pi recounts, "My greatest wish--other than salvation--was to have a book. A long book with a never-ending story. One that I could read again and again, with new eyes and fresh understanding each time." It's safe to say that the fabulous, fablelike Life of Pi is such a book. --Brad Thomas Parsons
Posts by Amal
Total # Posts: 33
36 banners
[translated from Arabic] Second-year university English: questions on the novel Gulliver's Travels
240/600 x 100 = 40%
0.05 M
6 electric lamps are connected in parallel at 100 volts. Explain how they are connected to operate at 200 volts without spoiling them. Find the total current intensity in the circuit if the resistance of each lamp is 240 ohms.
(x+1)^4/4 + c
4x=33+3y x=33+3y/4 x=-4-25 33+3y/4=-4-25 33+3y=-16-100 19y=-133 y=-7 x=3
y=12-2x in eq.2.....y=3x-13 12-2x=3x-13 5x=25 x=5...y=2
yes it can, as there are some equal to 180
x=y+3/5 in eq.(2)...y=2(y+3/5)+6 y=2y+6/5 +6 y-6=2y+6/5 5y-30=2y+6 3y=36 y=12 & so x=3
y=12 & x=3
Draw a piece of yarn; 3 counters.
A fire hose can fill a certain tank with water during 1 hour. A 2nd fire hose can fill the same tank in half an hour. A 3rd fire hose can fill the same tank in a quarter hour. Find the time necessary to fill the same tank by the 3 fire hoses together.
Calculate the momentum of a toy car weighing 200 grams and moving with a velocity of 5 meters/second?
Oh! A play is a move! I get it! No need to answer this one!
I am confused about this one. It is about football. "In a football game the ball was advanced 5 yards from the Juniors' 25-yard line towards the Seniors' goal, then 6 yards, then 8 yards back, 13 yards forward, 5 yards back, and then 11 yards forward. What was the ...
Here, I am dealing with a Science formula for momentum. The formula is: M = mv. So, I have: m = 10 grams (.01 kilograms), v = 5,000 centimeters a second (50 m/s). It says to find M and also solve the formula for M. Please help!
Please help! I'm still waiting!
The answer differs from yours.
We know man 1 drives 50 miles per hour; man 2 goes 55 mph.
A man (we'll refer to him as man 1) starts from his home at 8 A.M. and drives at a steady rate of 50 MPH. An hour later a second man (man 2) starts to follow him. Draw a graph to represent these facts. From the graph find when man 2 overtakes man 1. Okay, I imagined at 9 A.M... I really would appreciate some help!
Stuck on 3d - Math
Suppose you're thinking about buying a used car, but you've become worried about the selection problems. In this market, the seller of each car knows the true value, but the buyer can't determine the value until after purchase. Assume that there are 6 sellers wi...
3rd grade math: Standard and scientific notation. 7.8*10^5 in standard notation.
Find the number of permutations of 5 out of 9. Then find 6 out of 6.
Horizontal components of the forces are as follows: T(sin Θ) = ma --- Equation 1, and the vertical components of the forces are T(cos Θ) = mg --- Equation 2, where T = tension in the string, Θ = angle that string makes with the vertical, m = mass of the object, a = a...
How do you round these numbers to the nearest tenth: 84, 96, 71
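The three-fire-hoses question in the list above is a standard combined-rates problem: individual rates add, so the joint time is the reciprocal of the summed rates. A quick sanity check (my own working, not from the thread):

```python
# Fire-hose problem: the hoses fill the tank in 1, 1/2, and 1/4 hour respectively.
times = [1.0, 0.5, 0.25]            # hours per tank for each hose alone
rates = [1.0 / t for t in times]    # tanks per hour: 1 + 2 + 4 = 7
together = 1.0 / sum(rates)         # hours to fill one tank jointly
print(together)                     # 1/7 hour, about 8.6 minutes
```

The same add-the-rates pattern answers any "working together" question of this kind.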
East Boston ACT Tutor
Find an East Boston ACT Tutor

...True learning, and true teaching, is always humane, always exciting, always transformative - never mechanical. Nevertheless, I don't pretend that concrete results aren't important. I give meaningful test-taking tips (which you won't find in any book) and teach with example problems constantly i...
47 Subjects: including ACT Math, chemistry, English, reading

...This translates to the way I have always approached teaching situations, which is to guide students to the answer, but allow them to arrive at it themselves. In this way, they are able to fully grasp the concepts behind the problem and gain a deeper understanding of the subject. Feel free to shoot me an email if you’re interested in studying with me, or if you have any questions.
38 Subjects: including ACT Math, chemistry, English, reading

...AP Physics C Mechanics: 5, AP Physics C Electricity and Magnetism: 5, AP Calculus BC: 5, AP Computer Science AB: 5, AP Biology: 5, AP Chemistry: 5. MCAT Biological: 15, MCAT Physical: 14. In addition, I'm a fluent Persian (Farsi) speaker and I have been tutoring it for 3 years. I have dual degrees in ...
28 Subjects: including ACT Math, chemistry, calculus, algebra 1

I am a certified math teacher (grades 8-12) and a former high school teacher. Currently I work as a college adjunct professor and teach college algebra and statistics. I enjoy tutoring and have tutored a wide range of students - from middle school to college level.
14 Subjects: including ACT Math, geometry, algebra 1, statistics

...I have tutored middle and high school students for 14 years who were overwhelmed by or uninterested in homework and/or studying for and taking tests, and provided them with methods, strategies, and skills to address issues of time management, organization, studying habits, note taking, effective reading an...
34 Subjects: including ACT Math, reading, calculus, English
Classical Mechanics Is Lagrangian; It Is Not Hamiltonian; The Semantics of Physical Theory Is Not Semantical Curiel, Erik (2009) Classical Mechanics Is Lagrangian; It Is Not Hamiltonian; The Semantics of Physical Theory Is Not Semantical. [Preprint] One can (for the most part) formulate a model of a classical system in either the Lagrangian or the Hamiltonian framework. Though it is often thought that those two formulations are equivalent in all important ways, this is not true: the underlying geometrical structures one uses to formulate each theory are not isomorphic. This raises the question whether one of the two is a more natural framework for the representation of classical systems. In the event, the answer is yes: I state and prove two technical results, inspired by simple physical arguments about the generic properties of classical systems, to the effect that, in a precise sense, classical systems evince exactly the geometric structure Lagrangian mechanics provides for the representation of systems, and none that Hamiltonian mechanics does. The argument not only clarifies the conceptual structure of the two systems of mechanics, their relations to each other, and their respective mechanisms for representing physical systems. It also provides a decisive counter-example to the semantical view of physical theories, and one, moreover, that shows its crucial deficiency: a theory must be, or at least be founded on, more than its collection of models (in the sense of Tarski), for a complete semantics requires that one take account of global structures defined by relations among the individual models. The example also shows why naively structural accounts of theory cannot work: simple isomorphism of theoretical and empirical structures is not rich enough a relation to ground a semantics.
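For readers new to the distinction the abstract draws, the standard bridge between the two frameworks is the Legendre transform (this sketch is background context, not part of the paper's argument):

```latex
p_i = \frac{\partial L}{\partial \dot{q}^i}, \qquad
H(q, p) = \sum_i p_i \, \dot{q}^i - L(q, \dot{q})
```

Passing from L to H (and back) in this way already presupposes that the fiber derivative is invertible, i.e. that L is hyperregular; that such a condition is needed at all is one elementary hint that the tangent-bundle structure of Lagrangian mechanics and the cotangent-bundle symplectic structure of Hamiltonian mechanics are not automatically interchangeable.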
OpenMx - Advanced Structural Equation Modeling

I'm developing a model, but I run into a couple of problems. Since the OpenMx documentation did not give the answer I was looking for, I want to try it here. I'm having a hard time getting started with SEM and with OpenMx because there is no real good overview of SEM methodology in my opinion. I'm trying to run the following code:

model <- mxModel(
    manifestVars = c("A", "B", "C", "D", "E", "F"),
    latentVars = c("var1", "var2", "var3", "var4"),
    mxPath(from="var1", to=c("A", "B", "C")),
    mxPath(from="var2", to=c("B", "D", "E")),
    mxPath(from="var3", to=c("F", "E")),
    mxPath(from="var4", to=c("C", "D")),
    mxPath(from="one", to=c("A", "B", "C", "D", "E", "F",
                            "var1", "var2", "var3", "var4"))
)
fit <- mxRun(model)

I'm getting the error message "Expected covariance matrix is not positive-definite in data row 126 at iteration 0". Can anybody tell me what this error message means? I read on the errors page that it means that you should change your "starting values", but where should I do that? And besides that, what are "starting values" anyway? Somewhere it also mentions "free variables", but I cannot find a clear explanation what a "free variable" is exactly. Who can give me the exact code I need to add to modify these starting values? I also read somewhere that it has to do with matrices for which no inverse can be calculated and that a Cholesky decomposition could help. Is anybody familiar with this method? And if so, what exactly do I need to change in the code above to make this work?

Tue, 06/28/2011 - 10:24

Hi Steven, I'm going to have to answer your questions out of order. OpenMx fits models by comparing an expected or model implied mean and covariance structure to the mean and covariance structure in your data. This expected covariance matrix is defined by the various free and fixed parameters in your model. Fixed parameters are numbers you assign as features of the model that don't change (i.e.
you didn't assign a covariance between "var1" and "var2", so they have a fixed covariance of zero), while free parameters are variables that you change or estimate to make the model fit the data better (i.e., you're trying to find values for your factor loadings that best fit the data in your example). The process for finding these estimates is as follows:
- take the current values for all parameters and evaluate model fit by -2 log likelihood (a major iteration)
- vary all of the free parameters a little bit in either direction and get their -2LLs (minor iterations)
- use the results of the minor iteration to vary the free parameters and improve fit.
- take the new parameter estimates from step 3 and go back to step 1.
When you can't make the fit any better (i.e., the parameter estimates from step 3 are about the same as what you started with in step 1), you're done! To kick this process off, you have to provide values for the first iteration. Because it's all done in the OpenMx backend code that's written in C, and C counts up from zero, you have to supply the values for all parameters for iteration zero. Your error is caused by the fact that you didn't specify any starting values at all, so everything was zero. This means that your expected covariance matrix can't be inverted (inversion is the matrix algebra analog to division. A covariance matrix with a zero variance or a perfect correlation can't be inverted), and thus can't be used as a set of starting values. You'll have to add "values=???" to your mxPath statements to assign numeric values to each path you want to create. Other programs guess at what your starting values should be, but OpenMx does exactly what you tell it to and nothing more. This keeps the program from making false assumptions about your model, but means you have to be explicit about every part of your model. I also notice that none of your variables have variances.
You'll have to add variances for all of your variables, as well as any covariances (say, between your latent variables) that you want. You'll also have to identify the scale of the latent variables by fixing either the variance or a single loading for each factor to a non-zero constant (usually to the number 1). Hope this helps, and let us know what other questions we can answer,

Tue, 06/28/2011 - 10:20

In short, starting values are how the maximum likelihood estimation procedure gets started. You can add starting values for each parameter estimate, or you can add a single value to each mxPath statement and OpenMx will use that starting value for the set of parameters referenced in the statement. Or, you can specify a different starting value for each parameter referenced in the mxPath statement. For some models, not specifying any starting values leads to a model implied matrix that is ill formed (specifically, filled with zeros because no starting values have been specified). But specifying starting values will allow the model implied matrix to start off being invertible. You can add "values=1" to each of your mxPath statements so that OpenMx has something other than a null matrix to start with. I suggest trying that and seeing what happens.

Tue, 06/28/2011 - 12:14

I tried to add "values=1" to each of the mxPath statements but it doesn't work, still the same error message. Should I try it with other values, or values for each connection separately? Or should I maybe use the Cholesky decomposition? I'm reading the book "Principles and Practices of Structural Equation Modeling" by Rex Kline, but the book goes into detail much too soon so I get lost pretty early. Are there any introductory texts available on this topic?
Tue, 06/28/2011 - 12:22

Just setting 'values=1' for all of your factor loadings and means isn't enough, as you still don't have any variance terms in your model. I think John Loehlin's 'Latent Variable Modeling' book is a fantastic introduction to the range of confirmatory and exploratory methods. If you're looking for something free, take a look at Steve Boker's course, posted here on the forums (http://openmx.psyc.virginia.edu/forums/openmx-help/teaching-sem-using-op...) and the other materials available on our resources page (http://
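The error in the original post can be reproduced outside OpenMx: an all-zero model-implied covariance matrix (which is what you get when every free parameter starts at zero) fails the positive-definiteness check, while a matrix with nonzero variances on the diagonal passes. A minimal NumPy sketch of the idea (not OpenMx code):

```python
import numpy as np

def is_positive_definite(m):
    """Cholesky succeeds exactly when m is symmetric positive-definite."""
    try:
        np.linalg.cholesky(m)
        return True
    except np.linalg.LinAlgError:
        return False

# No starting values -> all free parameters are 0 -> zero expected covariance.
print(is_positive_definite(np.zeros((3, 3))))  # False: "not positive-definite"

# Nonzero starting values for the variances make the matrix invertible.
print(is_positive_definite(np.eye(3)))         # True
```

This is why the advice in the thread is not just "values=1" on the loadings, but also variance terms for every variable: the diagonal of the expected covariance must be strictly positive before the optimizer can even take its first step.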
If P is a set of integers and 3 is in P, is every positive multiple of 3 in P? (01 Oct 2003)

(1) For any integer in P, the sum of 3 and that integer is also in P.
(2) For any integer in P, that integer minus 3 is also in P.

stolyar, shouldn't it be E? For any integer in the set, where 3 is a member of the set, the sum of 3 and that integer is in P.

sudzpwc: Let's say the number in the set is 5. Then statement 1 says 5 and 8 are in the set. Why do we assume that all the members in the set are multiples of 3 like '3'? It's given that 3 is a member of the set; it doesn't say that all the numbers in the set are multiples of 3. Just curious???

am1974: I agree with Stolyar. The answer should be D.
(1) For any integer in P, the sum of 3 and that integer is also in P: It is already given in the question stem that 3 is in the set P. So this statement sets the trigger. That means 3, 6, 9, ... onward will be part of the set P. So the answer to the question "is every positive multiple of 3 in P" is affirmative. SUFFICIENT.
(2) For any integer in P, that integer minus 3 is also in P: It is already given in the question stem that 3 is in the set P. So this statement sets the trigger. That means 0, -3, -6, -9, ... onward will be in set P. So the answer to the question is negative. SUFFICIENT.
Answer D.

sudzpwc: But why assume that 3 is the only member in the set? That's my problem.

am1974: sudzpwc, we are not assuming that 3 is the only member in the set P. There could be other integers in the set P. But in the question stem (not in the statements), it is given that 3 is part of P. So you have to accept that 3 is a member of P and then consider each statement. Hope this helps.

jaydi8: A? Because using 2 we can say that every multiple of 3 might be or might not be in the set.

am1974: jaydi8, YES. I think you are right. Based on statement 2 alone we can not say anything definitely. Stolyar, do you agree with this? I think the answer should be A. Thanks jaydi8.

sudzpwc: Am, I get your point now. Thanks for the help, appreciate it. Good luck.

Soumala: The question asked "Are all positive multiples of 3 in P?". With statement (II), 3, 0, -3, -6 ..... i.e. every positive multiple of 3 isn't in P. Hence the statement is sufficient. IMO, D is the correct choice.

gmatblast: Soumala, this is tricky. Here you have unknowingly assumed that 3 is the starting point. Now for the time being imagine that set P contains infinite integers in such a way that it fulfills the condition of statement II. For example, start from 999. Then 999, 996, 993, ..., 3, 0, -3, -6, ..... all are in set P. Here I have used 999 as a starting point just as an example. It could be an infinite number. In that case the answer to the question would be YES. So statement II can result in YES as well as NO. NOT SUFF. IMO, the answer should be A. Guys, please let me know what you think.

AkamaiBrah (GMAT Instructor, replying to Soumala): No. You only know what MUST be in, but not what actually is. Suppose P is the set of ALL integers? Then every positive multiple of 3 IS in P.

Let's try again. If P is a set of integers and 3 is in P, is every positive multiple of 3 in P?
(1) For any integer in P, the sum of 3 and that integer is also in P. 3 is in, so 6, 9, 12, ... and so on are in as well --- SUFF.
(2) For any integer in P, that integer minus 3 is also in P. 3 is in, so 0, -3, -6, -9, ... and so on are in as well --- can we say something different about POSITIVE multiples? They can be in P, and they can be not.
D is not correct, my fault. It looks like A.
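The thread's conclusion (A) can be checked mechanically: statement (1) forces every positive multiple of 3 into P, while statement (2) is consistent with sets that disagree about, say, 6. A small sketch (the bound of 300 is an arbitrary choice of mine):

```python
# Statement (1): 3 is in P and P is closed under x -> x + 3,
# so closure forces every positive multiple of 3 (up to any bound) into P.
forced = {3}
x = 3
while x + 3 <= 300:
    x += 3
    forced.add(x)
print(set(range(3, 301, 3)) <= forced)   # True: (1) is sufficient

# Statement (2): closure under x -> x - 3 only forces 3, 0, -3, ...
# Both of these sets satisfy (2), yet they disagree about 6:
P_minimal = set(range(3, -301, -3))      # exactly what the closure forces
P_all     = set(range(300, -301, -3))    # every multiple of 3 in range
print(6 in P_minimal, 6 in P_all)        # False True: (2) is not sufficient
```

This is the same point AkamaiBrah makes in prose: statement (2) tells you what must be in P, not what is excluded from it.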
partial derivative... (March 9th 2009, 01:52 PM)

a) Assume we have a cylinder (with a top) of height h and base radius r. If the top, bottom and wall costs per square meter are a, b and w dollars respectively, find the cost function K(h, r, a, b, w) that gives the total material cost as a function of h, r, a, b, w.
b) Find all 5 partial derivatives of the function K.

How do I begin this question?
surface area = 2(pi)rh + 2(pi)r^2
top = a = (pi)r^2
bottom = b = (pi)r^2
wall = w = 2(pi)rh
dk/dr = 2(pi)h + 4(pi)r
dk/dh = 2(pi)r
TC = a(dk/dr) + b(dk/dr) + w(dk/dr)(dk/dh)?
Help please

(March 9th 2009, 05:06 PM) Is this how it's done?
a) Total cost = a(pi)r^2 + b(pi)r^2 + 2w(pi)rh
dk/dr = 2a(pi)r + 2b(pi)r + 2w(pi)h
dk/da = (pi)r^2
dk/db = (pi)r^2
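For what it's worth, the cost function and its partials can be checked numerically. With top cost a, bottom cost b, and wall cost w per square meter, K = a*pi*r^2 + b*pi*r^2 + 2*pi*r*h*w; the sketch below compares the analytic dK/dr against a finite difference (the sample point is arbitrary):

```python
import math

def K(h, r, a, b, w):
    # cost = top disk * a + bottom disk * b + wall area * w
    return a * math.pi * r**2 + b * math.pi * r**2 + 2 * math.pi * r * h * w

def numeric_partial(f, args, i, eps=1e-6):
    # central finite difference in the i-th argument
    lo, hi = list(args), list(args)
    lo[i] -= eps
    hi[i] += eps
    return (f(*hi) - f(*lo)) / (2 * eps)

pt = (2.0, 1.0, 3.0, 4.0, 5.0)         # (h, r, a, b, w), an arbitrary point
h, r, a, b, w = pt
dK_dr = 2 * math.pi * r * (a + b) + 2 * math.pi * h * w   # analytic partial
print(abs(numeric_partial(K, pt, 1) - dK_dr) < 1e-4)      # True
```

The remaining analytic partials are dK/dh = 2*pi*r*w, dK/da = dK/db = pi*r^2, and dK/dw = 2*pi*r*h, each verifiable the same way.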
Slides Then / Slides Now May 28th, 2009 by Dan Meyer a/k/a Redesigned: Dan Meyer Something I have been completely wrong about is the best way to use slide software in a math class. A few years ago I wrote a design series explaining how I use color theory, grid systems, etc., to clarify complex procedures, but the whole thing turns out to be simultaneously a) a lot more fun and b) a lot less time-consuming than that. My reversal in slide design reflects a shift in my math pedagogy also. Far more important to me now than "developing fluency with complex procedures" is "developing a strong framework for interpreting unfamiliar mathematics and the world." I'm not trying to set up a false dichotomy here. We do both. Both are important. But all too often slides like that first one, with the classroom dialogue and solution method predetermined, cordon off classroom dialogue and student reflection onto very narrow paths. That kind of pedagogy does nothing to unify mathematics, tending, instead, to position complex procedures in isolation from each other, which is a very confusing way to learn math and a very laborious way to teach it. Instead, I want my students to focus without distraction on a) how new questions are similar to old questions, b) how tougher questions demand tougher procedural skills, asking themselves c) which of their older tools can they adapt to these tougher questions? For example, I put six equations on separate slides, equations we have seen. I asked, "how many answers are there?" One. Two. Zero. Etc. Then I put up an inequality, tweaking the problem slightly, and quickly. They told me there were lots of answers. I asked my students to start listing them. "7, 6, 5, 4.2, 4.1, 4," etc. This became tiresome quickly and made the introduction of a graph — a picture of all those answers — clear and necessary.
Slide software lets me sequence these mathematical objects quickly, from anywhere on the globe, from photos and videos I take, from movies my students watch, from textbooks too. Graphic design is useful to mathematics, but I am happy to have discovered certain constraints on that usefulness and, simultaneously, higher-hanging fruit elsewhere. It is the curation of this mathematical media that interests me now, though I reserve the right to return to this space shortly and reverse myself again.

9 Responses to "Slides Then / Slides Now"

1. on 28 May 2009 at 6:24 am, David Petersen

If you are keeping it as simple as this, what is the point of using slides anyways? Especially if you are reorganizing on the fly or responding to student questions that might not go in quite the order planned, why not just use the old chalk and slate method? Maybe I'm just short-sighted, but I can see why the computer might help with Slide 1 (Then) since you could show the numbers being plugged in to the equation. The rest of what you've shown seems (to me) to be just using tech to use tech (or save the planet by not using chalk? or not getting hands dirty? or allow you to move around the classroom?). I do understand your point of having a more organic-type discussion of the problem rather than algorithmic lecturing. I also see that with the right questions you can lead the discussion in helpful directions, but when a student has an idea you wish to explore a little before getting back on track, slides seem somewhat limiting to me.

2. on 28 May 2009 at 8:25 am, josh g.

I'm still a student teacher / newbie, but I've had this kind of design shift on my mind after doing a lesson with an Info Tech 9 class on some basic PowerPoint design principles.
Quick web research on people's recommended do's and don'ts kept coming up with seemingly extreme ideas like "no more than 6 words per slide", and using a separate handout if you need your audience to retain details (as opposed to a printout of your slides). At first I wrote this off as appropriate for marketing but not for teaching; now I'm not so sure. It's cool to see that someone is making this transition based on classroom experience.

David: I can still see some advantages to using slides like this. It's quicker to switch back and forth, and you can focus attention on one equation at a time more easily. (If I were doing this on the board, I'd end up with 4-5 things written on the board at once and try to point to one or the other – still workable but not as visually obvious to the student.) You could also use a mix; keep the big ideas and main focus in concise slides, but use the whiteboard when you want to demonstrate rough work. (I'm not sure I like the idea of working out solutions ahead of time on a slide anyway; it doesn't model the actual process very well.)

3. on 28 May 2009 at 8:43 am

Now this is what teaching is for me. Getting 'them' to discuss and think about mathematics. I had a great lesson Monday when I asked my pupils 'How will division work for complex numbers?' (we started the lesson with how to multiply them, which was very easy). No slides, just a blank blackboard. I did have a computer with Geogebra ready, prepared for if they wanted to investigate the geometrical 'meaning' of multiplication. My job was moderating the discussion and asking questions about their suggestions ("Sure?" (yes, even when they are right), "Can you explain to your neighbour?", "Can you rephrase in mathematics?", "Can you think of an exception?"). Leaving long pauses to let them think it through, and talk it through. The one most uneasy with the silence and the pace was me. And they did it.
It took them 2 hours, but they did everything from introducing the distance to the origin and the angle to finding a formula for z^{-1} if z = a+bi. They were great, and both teacher and pupils left the class with a smile, eager to tackle the next problem: powers and roots. And I cannot see how I could have used slides. Don't they limit you? Where do they help?

4. on 28 May 2009 at 9:21 am

I wish I had a projector and computer to present slides. All I have is a whiteboard. Slides allow the teacher to focus students' attention on really important information rather than being distracted by something else on the board. I also agree with Josh G. about how slides can be quickly accessed if needed. However, can something be said for effective use of board work, where important ideas are maintained on the board for students to refer to while thinking about the problem at hand? For instance, suppose the problem was |x| < 5: if the solution of |x| = 5 and the solution of x < 5 and the solution of x < -5 are on the board, then perhaps students can use those ideas to create a whole new idea for the problem at hand. Again, the pieces are there on the board for students to synthesize, without the prompting of the teacher to use such useful information, necessarily. How can this opportunity be done using a slide presentation program? Since I don't use it as regularly as others, like Dan, I call on such people to share their experience. Great pedagogy topic!

5. on 28 May 2009 at 10:41 am, Nate M

For myself, I use a Wacom wireless tablet combined with PowerPoint to become my digital whiteboard. I use slides similar to Dan's to focus the students and begin discussion. As students propose ideas through discussion, I pass them the tablet and have them solve it on the slide.
The great thing about using PowerPoint in this way is I can print the slides for missing students, save them to pull up later in review for tests, or post them on the classroom blog for students to reference on their own. I understand the "using tech for the sake of using tech" argument, but in some cases simple changes like this can take the learning to places a heavy piece of slate on a wall cannot go.

6. I feel I should warn you, Dan, that I find this blog post interesting and entirely pleasing in tone. I think you must be doing something wrong ;-)

7. on 28 May 2009 at 9:18 pm

Technical question: how do you get 'them' to discuss it? Is it a whole-group conversation or do you use pair-shares or something else? I like to facilitate these types of discussions too, as best I can, but often it feels like the high-skilled kids take over and the lower-skilled kids disengage…

8. David: If you are keeping it as simple as this, what is the point of using slides anyways? Especially if you are reorganizing on the fly or responding to student questions that might not go in quite the order planned, why not just use the old chalk and slate method?

Josh has this mostly right. It's speed. It's the ability to preserve, shuffle, and re-organize a sequence of mathematical objects instantaneously (rather than scrawling one out, erasing it, and then starting again). Slide software highlights the minute changes in consecutive mathematical objects (an equality changing to an inequality, in this case) in a way I couldn't really reproduce in my overhead transparency days. I can also pull these mathematical objects from anywhere in any form. I don't know how to reproduce any of my WCYDWT? lessons with chalk and slate, for instance. And: because I project onto a whiteboard, I have never had to tell a curious student, "Sorry, we can't talk about that, it isn't on the slide."

Peter: And I cannot see how I could have used slides. Don't they limit you? Where do they help?
I have no idea how slide software could improve your lesson on complex numbers. Your review sounds great. Slide software, like dynamic geometry software, is just one tool in a complete pedagogical arsenal. How can I quickly sequence five videos, three images, and an equation, for instance, using only dynamic geometry software?

Chuck: Technical question: how do you get 'them' to discuss it? Is it a whole-group conversation or do you use pair-shares or something else? I like to facilitate these types of discussions too, as best I can, but often it feels like the high-skilled kids take over and the lower-skilled kids disengage…

My experience is the same. Typically, I ask the students to engage with the mathematical object on a piece of notepaper and to talk to a neighbor, all while I float around, tune in, offer an observation, a question, or a challenge, and tune out. It's far from perfect and I'm open to suggestions.

9. Hi Dan: Your slides are awesome — have you used the TI-83/4 as a complementary learning tool for this content?
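As an aside, the division formula Peter's pupils derived in comment 3 is easy to check numerically. For z = a + bi, the formula they found amounts to z^{-1} = (a - bi) / (a² + b²), and a quick sketch (not part of the lesson itself) confirms that multiplying z by it gives 1:

```python
# Checking the pupils' formula from comment 3: for z = a + bi,
# z^{-1} = (a - bi) / (a^2 + b^2), so z * z^{-1} should equal 1.
def inverse(a, b):
    d = a * a + b * b          # squared distance to the origin
    return (a / d, -b / d)     # real and imaginary parts of z^{-1}

a, b = 3.0, 4.0
x, y = inverse(a, b)
product = complex(a, b) * complex(x, y)
print(product)
```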
A Game Theoretic Formulation for Strategic Sensing in Cognitive Radio Networks: Equilibrium Analysis and Fully Distributed Learning

(IJCSIS) International Journal of Computer Science and Information Security, Vol. 11, No. 6, June 2013

...relation to the competitive multi-armed bandit problem. The arm bandit problem is well understood for a single CR, which wishes to opportunistically exploit the availability of resources in the spectrum. For multiple users, however, the outcomes are not clear. This difficulty is in particular due to the interactive behavior of the underlying processes of decision making in a dynamic environment. The authors proposed a Bayesian approach to identify a trade-off metric between exploring the availability of other free channels/time slots and exploiting the opportunities identified. In [10], game theoretic learning and pricing have been proposed. In the above references, stochastic formulations of the medium access problem are examined. These formulations often lead to intractable configurations. The authors in [19] proposed a new two-step game where sensing and opportunistic access are jointly considered. A full characterization of the Nash equilibria and an analysis of the optimal pricing policy, from the network owner's view, for both the centralized setting and the decentralized setting, are also provided. Next, a combined learning algorithm that is fully distributed and allows the cognitive users to learn their optimal payoffs and their optimal strategies in both symmetric and asymmetric cases is proposed.

B. Contribution

In this paper, we propose to associate game theory with learning strategies in cognitive medium access, to find the equilibrium sensing time. To the best of our knowledge, this is the first paper devoted to analyzing distributed sensing. Meanwhile, the related literature usually considers the sensing time from an optimization perspective [18] [13] [5].
In contrast to the classical literature on medium access games, which does not focus on the random nature of cognitive radios, we propose a fully distributed strategic learning to learn the equilibrium payoff and the associated equilibrium strategies. Moreover, we provide many insightful results to understand the possible relationship between sensing time and transmit probability. Next, we analyze the impact of the starting point and the speed of learning on convergence to Nash equilibrium. Finally, we propose a comparison in terms of sensing time and throughput between the proposed solution and a centralized one.

C. Organization of the paper

This paper is organized as follows: In Section II, the system model, the main notations, and spectrum sensing preliminaries are presented. In Section III, we describe the utility function of the game and the equilibrium analysis; in Section IV, we propose a distributed learning algorithm; and in Section V, a comparison between the proposed solution and a centralized one. Performance evaluation and results analysis are provided in Section V.

II. SYSTEM MODEL

A. System model

We consider a secondary network that coexists with a primary network where each PU is licensed to transmit whenever he/she wishes most of the time, except when the channel is occupied by another PU. The duration of a primary frame is denoted T. We consider that we have N SUs trying to access the spectrum of the PU. Throughout this work the following considerations are taken into account:

- Energy-based spectrum sensing: the primary network activity is determined by measuring the signal strength traveling over the channel. If the received signal power exceeds some given threshold, the channel is declared busy; it is declared idle in the other case.
- Imperfect sensing, in the sense that SUs may declare a busy channel while it is idle (false alarm).
- Random access for data transmission of CRs.
Here, we consider that SUs follow a slotted Aloha-like protocol to transmit data. During primary user activity, each SU i receives some given signal. SU i samples the received signal at sampling frequency f_s; without loss of generality, we assume that all SUs use the same sampling frequency. The discrete received signal at SU i can be represented as:

    y_i(t) = n_i(t)                 Hypothesis H_0
    y_i(t) = h_i s(t) + n_i(t)      Hypothesis H_1          (1)

where h_i is the channel gain experienced by SU i and n_i(t) is a circular symmetric complex Gaussian noise with mean 0 and variance E[|n_i(t)|^2] = sigma^2. The channel state is considered as the binary hypothesis test H_0 versus H_1.

B. Energy based spectrum sensing

Spectrum sensing is often considered as a detection problem. Many techniques were developed in order to detect the holes in the spectrum band. Focusing on each narrow band, existing spectrum sensing techniques are widely categorized into energy detection [6] and feature detection [7]. Although this is not a restriction of our work, we will use energy detection throughout the paper. Let tau be the sensing time and N the number of considered samples; thus, we have N = [tau * f_s]. It follows that the average energy detected by SU i is:

    T_i(y) = (1/N) * sum_{t=1}^{N} |y_i(t)|^2

C. Imperfect sensing

We consider throughout this work a scenario where the spectrum sensor has imperfect detection performance. In other terms, each SU i has a false alarm probability, i.e., the probability that the channel is sensed to be busy while it is actually idle. Let epsilon denote the threshold which specifies the collision tolerance bound of PUs. Then:

    P_{f,i}(tau) = Pr( T_i(y) > epsilon | H_0 )
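The energy detector and its false alarm probability described above can be sketched numerically. All parameter values below (sample count, threshold, number of trials) are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Sketch of an energy detector under hypothesis H0 (idle channel): the SU
# observes circular symmetric complex Gaussian noise only, and declares the
# channel busy whenever the average energy exceeds a threshold eps.
rng = np.random.default_rng(0)
N = 1000            # samples per sensing window, N = [tau * f_s]
sigma2 = 1.0        # noise variance E[|n_i(t)|^2]
eps = 1.1 * sigma2  # detection threshold epsilon (assumed value)
trials = 2000       # number of sensing windows simulated

# Circular symmetric complex Gaussian noise with per-sample variance sigma2.
n = rng.standard_normal((trials, N)) + 1j * rng.standard_normal((trials, N))
n *= np.sqrt(sigma2 / 2)

# Test statistic: average energy over the window, T_i(y) = (1/N) sum |y_i(t)|^2.
T = np.mean(np.abs(n) ** 2, axis=1)

# Empirical false alarm probability P(T > eps | H0). T concentrates around
# sigma2 with standard deviation roughly sigma2 / sqrt(N), so with this
# threshold false alarms are rare but not impossible.
p_fa = float(np.mean(T > eps))
print(T.mean(), p_fa)
```

Raising eps drives the false alarm probability down at the cost of missed detections under H_1, which is the trade-off the sensing-time game formalizes.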
1. Call Input family symbol 1,1,1,1,1, calculate the groups, choose file name 11111 and print all groups. The Bravais group <-I[5]> is written to file '11111'. Note, each Bravais group of degree 5 contains this group.

2. Call Bravais_inclusions 11111 -S > all
Now the file 'all' contains a list of the names of all Bravais groups of degree 5, more precisely representatives of the Z-classes. (grep Symbol all | wc would tell us that there are 189 Z-classes of Bravais groups of degree 5.)

3. Call Input family symbol 5-1, calculate the groups, choose file name 51 and print all groups. All Bravais groups in family 5-1 are now listed in file '51'. There are three Bravais groups in file '51' now. By irreducibility all three Bravais groups are maximal finite. Later we want to omit their proper Bravais subgroups from the file 'all'. To prepare this, edit the file '51' and split it up into three files '51a', '51b' and '51c' containing one Bravais group each.

4. Call Input family symbol 5-2, calculate the groups, choose file name 52 and print all groups. All Bravais groups in family 5-2 are now listed in file '52'. There are four Bravais groups in file '52' now. By irreducibility all four Bravais groups are maximal finite. Later we want to omit their proper Bravais subgroups from the file 'all'. To prepare this, edit the file '52' and split it up into four files '52a', '52b', '52c' and '52d' containing one Bravais group each.

5. Call
Bravais_inclusions 51a > notmax
Bravais_inclusions 51b >> notmax
Bravais_inclusions 51c >> notmax
Bravais_inclusions 52a >> notmax
Bravais_inclusions 52b >> notmax
Bravais_inclusions 52c >> notmax
Bravais_inclusions 52d >> notmax
to write the names of the Bravais subgroups of the seven maximal finite groups known so far to file 'notmax'.

6. Call
grep Symbol notmax > compare
sort -u compare > notmax
Now the file 'notmax' contains the names of the seven maximal groups and of their proper Bravais subgroups, each one listed just once.
By editing the file 'notmax', one writes the lines corresponding to maximal groups to a new file 'MAX'. (These are the last seven lines.)

7. Call
sort all > allsort
diff allsort notmax | grep Symbol > all
Now the file 'all' contains a complete list of Bravais groups which are not contained in one of the groups of the file 'MAX'.

8. The last lines of the file 'all', more precisely the ones involving a symbol of the form 4-x;1, are the following:
< Symbol: 4-1;1 homogeneously d.: 2 zclass: 1
< Symbol: 4-2';1 homogeneously d.: 1 zclass: 1
< Symbol: 4-2;1 homogeneously d.: 1 zclass: 1
< Symbol: 4-2;1 homogeneously d.: 2 zclass: 1
< Symbol: 4-3';1 homogeneously d.: 1 zclass: 1
< Symbol: 4-3;1 homogeneously d.: 1 zclass: 1
< Symbol: 4-3;1 homogeneously d.: 2 zclass: 1
By using Bravais_inclusions -S one can rule out the two groups involving a ' as maximal finite groups. For the remaining five groups it is clear now that they are maximal finite. Repeating the above computations with these five groups leads us to a new file 'MAX' containing 7+5 groups and a new file 'all' containing all Bravais groups not contained in any of the groups in 'MAX' (up to Z-equivalence). It turns out that the process terminates after the next step with 'MAX' looking like
5-1a 5-1c 5-2b 5-2d
5-1b 5-2a 5-2c
4-112 4-211 4-212 4-311 4-312
32-11 32-13 32-21 32-22 32-23
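The sort/diff/grep pipeline in step 7 is really a set difference on the Symbol lines: diff marks lines unique to the sorted first file with a leading '<', and grep keeps only the Symbol lines among them. A tiny self-contained illustration (with made-up symbol names, not real CARAT output):

```shell
# Miniature of step 7: list the Symbol lines of 'all' that do not occur
# in 'notmax'. The symbol names here are invented for the example.
dir=$(mktemp -d)
cd "$dir"
printf 'Symbol: B\nSymbol: A\nSymbol: C\n' > all
printf 'Symbol: B\n' > notmax
sort all > allsort
diff allsort notmax | grep Symbol > remaining
cat remaining
```

Note that 'notmax' must also be sorted for this idiom to behave as a set difference, which is exactly what the sort -u in step 6 guarantees.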
Journal of the Optical Society of America B

We derive the explicit analytical results of low-lying eigenenergies, eigenstates, momentum distributions, and all the two-order spatial correlation functions for a Bose–Hubbard model on a ring in the strong interaction limit by means of first-order perturbation theory. We show explicitly that the ground and the low-lying excited states are all quantum entangled states in the incommensurate filling case and that certain correlation functions in some of these states, the ground state in particular, violate the Schwarz inequality, another indication of their entanglement.

© 2006 Optical Society of America

OCIS Codes
(020.7010) Atomic and molecular physics : Laser trapping
(190.0190) Nonlinear optics : Nonlinear optics
(270.0270) Quantum optics : Quantum optics

ToC Category: Nonlinear Optics
Original Manuscript: November 28, 2005
Revised Manuscript: March 29, 2006
Manuscript Accepted: April 18, 2006

Ying Wu and Xiaoxue Yang, "Bose-Hubbard model on a ring: analytical results in a strong interaction limit and incommensurate filling," J. Opt. Soc. Am. B 23, 1888-1893 (2006)
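For reference, the Bose–Hubbard model named in the title is conventionally written as follows (standard notation assumed; the paper's own symbols may differ), with hopping amplitude $J$, on-site interaction $U$, number operators $n_i = b_i^\dagger b_i$, and periodic boundary conditions on an $L$-site ring:

```latex
% Standard Bose--Hubbard Hamiltonian on an L-site ring, b_{L+1} \equiv b_1.
H = -J \sum_{i=1}^{L} \left( b_i^{\dagger} b_{i+1} + b_{i+1}^{\dagger} b_i \right)
  + \frac{U}{2} \sum_{i=1}^{L} n_i \left( n_i - 1 \right)
```

The strong interaction limit treated in the abstract corresponds to $U \gg J$, where the hopping term can be handled by first-order perturbation theory.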