A Review of Piecewise Linearization Methods
Mathematical Problems in Engineering
Volume 2013 (2013), Article ID 101376, 8 pages
Review Article
^1Department of Information Technology and Management, Shih Chien University, No. 70 Dazhi Street, Taipei 10462, Taiwan
^2Program in Industrial and Systems Engineering, University of Minnesota, 111 Church Street SE, Minneapolis, MN 55455, USA
^3School of Information Management and Engineering, Shanghai University of Finance and Economics, Shanghai 200433, China
^4School of Management, Tokyo University of Science, 500 Shimokiyoku, Kuki, Saitama 346-8512, Japan
^5Department of Business Management, National Taipei University of Technology, Section 3, No. 1 Chung-Hsiao E. Road, Taipei 10608, Taiwan
Received 3 July 2013; Accepted 9 September 2013
Academic Editor: Yi-Chung Hu
Copyright © 2013 Ming-Hua Lin et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any
medium, provided the original work is properly cited.
Various optimization problems in engineering and management are formulated as nonlinear programming problems. Because of the nonconvex nature of such problems, no efficient general-purpose approach is available for deriving their global optima, and locating a globally optimal solution of a nonlinear programming problem remains an important issue in optimization theory. In the last few decades, piecewise linearization methods have been widely applied to convert a nonlinear programming problem into a linear programming problem or a mixed-integer convex programming problem in order to obtain an approximate globally optimal solution. In the transformation process, extra binary variables, continuous variables, and constraints are introduced to reformulate the original problem, and these extra variables and constraints largely determine the solution efficiency of the converted problem. This study therefore provides a review of piecewise linearization methods and analyzes the computational efficiency of various piecewise linearization methods.
1. Introduction
Piecewise linear functions are frequently used in various applications to approximate nonlinear programs with nonconvex functions in the objective or constraints by adding extra binary variables,
continuous variables, and constraints. They naturally appear as cost functions of supply chain problems to model quantity discount functions for bulk procurement and fixed charges. For example, the
transportation cost, inventory cost, and production cost in a supply chain network are often constructed as a sum of nonconvex piecewise linear functions due to economies of scale [1]. Optimization
problems with piecewise linear costs arise in many application domains, including transportation, telecommunications, and production planning. Specific applications include variants of the minimum
cost network flow problem with nonconvex piecewise linear costs [2–7], the network loading problem [8–11], the facility location problem with staircase costs [12, 13], the merge-in-transit problem [
14], and the packing problem [15–17]. Other applications also include production planning [18], optimization of electronic circuits [19], operation planning of gas networks [20], process engineering
[21, 22], engineering design [23, 24], appointment scheduling [25], and other network flow problems with nonconvex piecewise linear objective functions [7].
Various methods of piecewisely linearizing a nonlinear function have been proposed in the literature [26–39]. Two well-known mixed-integer formulations for piecewise linear functions are the
incremental cost [40] and the convex combination [41] formulations. Padberg [35] compared the linear programming relaxations of the two mixed-integer programming models for piecewise linear functions
in the simplest case when no constraint exists. He showed that the feasible set of the linear programming relaxation of the incremental cost formulation is integral; that is, the binary variables are
integers at every vertex of the set. He called such formulations locally ideal. On the other hand, the convex combination formulation is not locally ideal, and it strictly contains the feasible set
of the linear programming relaxation of the incremental cost formulation. Then, Sherali [42] proposed a modified convex combination formulation that is locally ideal. Alternatively, Beale and Tomlin
[43] suggested a formulation for the piecewise linear function similar to convex combination, except that no binary variable is included in the model and the nonlinearities are enforced
algorithmically, directly in the branch-and-bound algorithm, by branching on sets of variables, which they called special ordered sets of type 2 (SOS2). It is also possible to formulate piecewise
linear functions similar to incremental cost but without binary variables and enforcing the nonlinearities directly in the branch-and-bound algorithm. Two advantages of eliminating binary variables
are the substantial reduction in the size of the model and the use of the polyhedral structure of the problem [44, 45]. Keha et al. [46] studied formulations of linear programs with piecewise linear
objective functions with and without additional binary variables and showed that adding binary variables does not improve the bound of the linear programming relaxation. Keha et al. [47] also
presented a branch-and-cut algorithm for solving linear programs with continuous separable piecewise-linear cost functions. Instead of introducing auxiliary binary variables and other linear
constraints to represent SOS2 constraints used in the traditional approach, they enforced SOS2 constraints by branching on them without auxiliary binary variables.
Due to the broad applications of piecewise linear functions, many studies have conducted related research on this topic. The main purpose of these studies is to find a better way to represent a
piecewise linear function or to tighten the linear programming relaxation. A superior representation of piecewise linear functions can effectively reduce the problem size and enhance the
computational efficiency. However, for expressing a piecewise linear function of a single variable with m + 1 break points, most of the methods in textbooks and in the literature require adding m extra binary variables and a linear number of constraints, which may cause a heavy computational burden when m is large. Recently, Li et al. [48] developed a representation method for piecewise linear functions with fewer binary variables compared to the traditional methods. Although their method needs only a logarithmic number of extra binary variables to piecewisely linearize a nonlinear function with m + 1 break points, the approximation process still requires extra constraints, nonnegative continuous variables, and free-signed continuous variables. Vielma et al. [39] presented a note on Li et al.'s paper and showed that the two representations for piecewise linear functions introduced by Li et al. [48] are both theoretically and computationally inferior to standard formulations for piecewise linear functions. Tsai and Lin [49] applied the
Vielma et al. [39] techniques to express a piecewise linear function for solving a posynomial optimization problem. Croxton et al. [31] indicated that most models of expressing piecewise linear
functions are equivalent to each other. Additionally, it is well known that the numbers of extra variables and constraints required in the linearization process for a nonlinear function obviously
impact the computational performance of the converted problem. Therefore, this paper focuses on discussing and reviewing the recent advances in piecewise linearization methods. Section 2 reviews the
piecewise linearization methods. Section 3 compares the formulations of various methods with the numbers of extra binary/continuous variables and constraints. Section 4 discusses error evaluation in
piecewise linear approximation. Conclusions are made in Section 5.
2. Formulations of Piecewise Linearization Functions
Consider a general nonlinear function f(x) of a single variable x, where f(x) is a continuous function and x lies within the interval [a_0, a_m]. Most commonly used textbooks of nonlinear programming [26–28] approximate the nonlinear function by a piecewise linear function as follows.
Firstly, denote a_0, a_1, ..., a_m as the break points of f(x), with a_0 < a_1 < ... < a_m. Figure 1 indicates the piecewise linearization of f(x).
f(x) can then be approximately linearized over the interval [a_0, a_m] as
f(x) ≈ Σ_{k=0}^{m} f(a_k) t_k,  x = Σ_{k=0}^{m} a_k t_k,  Σ_{k=0}^{m} t_k = 1,  t_k ≥ 0,
in which only two adjacent t_k's are allowed to be nonzero. A nonlinear function is then converted into the following.
Method 1. Consider
f(x) ≈ Σ_{k=0}^{m} f(a_k) t_k,  x = Σ_{k=0}^{m} a_k t_k,  Σ_{k=0}^{m} t_k = 1,
t_0 ≤ y_1,  t_k ≤ y_k + y_{k+1} for k = 1, ..., m − 1,  t_m ≤ y_m,
Σ_{k=1}^{m} y_k = 1,
where t_k ≥ 0, y_k ∈ {0, 1}, and y_k = 1 exactly when x lies in [a_{k−1}, a_k].
The above expressions involve m new binary variables y_1, ..., y_m. The number of newly added 0-1 variables for piecewisely linearizing a function equals the number of breaking intervals (i.e., m). If m is large, it may cause a heavy computational burden.
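As a quick illustration of the interpolation that Method 1 encodes, the following Python sketch evaluates the piecewise linear approximant directly; the function and break points are hypothetical examples, and this shows the underlying interpolation, not the MILP model itself.

```python
# Evaluate the piecewise linear interpolant that Method 1 represents: on the
# active interval [a_k, a_{k+1}], L(x) is a convex combination of f(a_k) and
# f(a_{k+1}), and only those two adjacent weights are nonzero.

def piecewise_linear(f, breaks, x):
    """Evaluate the piecewise linear interpolant of f at x."""
    for a, b in zip(breaks, breaks[1:]):
        if a <= x <= b:
            lam = (b - x) / (b - a)      # weight t_k on the left break point
            return lam * f(a) + (1.0 - lam) * f(b)
    raise ValueError("x lies outside the break point range")

breaks = [1.0, 1.5, 2.0, 3.0]                             # hypothetical break points
approx = piecewise_linear(lambda t: t * t, breaks, 2.5)   # chord of t^2 on [2, 3]
```

At x = 2.5 the chord between (2, 4) and (3, 9) gives 6.5, whereas f(2.5) = 6.25; that gap is the approximation error discussed in Section 4.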
Li and Yu [33] proposed another global optimization method for nonlinear programming problems where the objective function and the constraints might be nonconvex. A univariate function f(x) is initially expressed by a piecewise linear function with a summation of absolute terms. Denote s_k (k = 1, ..., m) as the slopes of the line segments between a_{k−1} and a_k, expressed as s_k = (f(a_k) − f(a_{k−1}))/(a_k − a_{k−1}). f(x) can then be written as follows:
L(x) = f(a_0) + s_1 (x − a_0) + Σ_{k=2}^{m} ((s_k − s_{k−1})/2)(|x − a_{k−1}| + x − a_{k−1}).
L(x) is convex over [a_0, a_m] if s_1 ≤ s_2 ≤ ... ≤ s_m; otherwise L(x) is a nonconvex function, and the absolute terms with s_k < s_{k−1} need to be linearized by adding extra binary variables. By linearizing the absolute terms, Li and Yu [33] converted the nonlinear function into a piecewise linear function as shown below.
Method 2. Consider
L(x) = f(a_0) + s_1 (x − a_0) + Σ_{k=2}^{m} ((s_k − s_{k−1})/2)(z_{k−1} + x − a_{k−1}),
z_{k−1} ≥ x − a_{k−1},  z_{k−1} ≥ a_{k−1} − x,
z_{k−1} ≤ x − a_{k−1} + M_{k−1}(1 − u_{k−1}),  z_{k−1} ≤ a_{k−1} − x + M_{k−1} u_{k−1},
where z_{k−1} represents |x − a_{k−1}|, M_{k−1} are upper bounds of |x − a_{k−1}|, and u_{k−1} ∈ {0, 1} are extra binary variables used, together with the last two constraints, only to linearize the nonconvex terms with s_k < s_{k−1}.
Comparing Method 2 with Method 1, Method 1 uses m binary variables to linearize f(x) over the whole interval, whereas the binary variables used in Method 2 are applied only to linearize the nonconvex parts of f(x). Method 2 therefore uses fewer 0-1 variables than Method 1. However, for f(x) with h intervals in its nonconvex parts, Method 2 still requires h binary variables to linearize f(x).
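The absolute-term expression can be checked numerically. The sketch below (with hypothetical break points of my own) evaluates the Li–Yu form and shows that it reproduces the ordinary chord interpolant.

```python
# Evaluate the Li-Yu absolute-term form of the piecewise linear function:
#   L(x) = f(a_0) + s_1 (x - a_0)
#          + sum_{k>=2} (s_k - s_{k-1}) * (|x - a_{k-1}| + x - a_{k-1}) / 2.
# Each summand is zero to the left of its break point and switches the new
# slope on to the right of it.

def li_yu_form(f, breaks, x):
    a = breaks
    # slopes s_1..s_m of the chords between consecutive break points
    s = [(f(a[k]) - f(a[k - 1])) / (a[k] - a[k - 1]) for k in range(1, len(a))]
    val = f(a[0]) + s[0] * (x - a[0])
    for k in range(1, len(s)):           # slope change at interior point a[k]
        val += (s[k] - s[k - 1]) * (abs(x - a[k]) + (x - a[k])) / 2.0
    return val

val = li_yu_form(lambda t: t * t, [0.0, 1.0, 2.0], 1.5)   # chord value 2.5
```

For f(t) = t² on break points {0, 1, 2}, the chord on [1, 2] runs from (1, 1) to (2, 4), so the form evaluates to 2.5 at x = 1.5, matching direct interpolation.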
Another general form of representing a piecewise linear function is proposed in the articles of Croxton et al. [31], Li [32], Padberg [35], Topaloglu and Powell [36], and Li and Tsai [38]. The expressions are formulated as shown below.
Method 3. Consider
L(x) ≥ f(a_{k−1}) + s_k (x − a_{k−1}) − M(1 − u_k),  k = 1, ..., m,
L(x) ≤ f(a_{k−1}) + s_k (x − a_{k−1}) + M(1 − u_k),  k = 1, ..., m,
a_{k−1} − M(1 − u_k) ≤ x ≤ a_k + M(1 − u_k),  k = 1, ..., m,
Σ_{k=1}^{m} u_k = 1,
where u_k ∈ {0, 1}, s_k = (f(a_k) − f(a_{k−1}))/(a_k − a_{k−1}), and M is a large constant.
The above expressions require m extra binary variables and a number of constraints linear in m, where m + 1 break points are used to represent a piecewise linear function.
From the above discussions, we can see that Methods 1, 2, and 3 require numbers of extra binary variables and extra constraints linear in m to express a piecewise linear function. To approximate a nonlinear function by a piecewise linear function, the numbers of extra binary variables and constraints significantly influence the computational efficiency: if fewer binary variables and constraints are used to represent a piecewise linear function, then less CPU time is needed to solve the transformed problem. For decreasing the number of extra binary variables involved in the approximation process, Li et al. [48] developed a representation method for piecewise linear functions with a number of binary variables logarithmic in m. Consider the same piecewise linear function discussed above, where x is within the interval [a_0, a_m] and m + 1 break points exist within [a_0, a_m]. Let θ be the smallest integer such that m ≤ 2^θ, so that each interval index k ∈ {1, ..., m} can be expressed by the binary expansion
k − 1 = Σ_{h=1}^{θ} 2^{h−1} b_h(k),  b_h(k) ∈ {0, 1}.
Let S(h) be the set composed of all indices k such that b_h(k) = 1. For instance, with m = 4, S(1) = {2, 4} and S(2) = {3, 4}.
Denote |S(h)| to be the number of elements in S(h); in the same instance, |S(1)| = |S(2)| = 2.
To approximate a univariate nonlinear function by using a piecewise linear function, the following expressions are deduced by the Li et al. [48] method.
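The logarithmic encoding of interval indices can be sketched in a few lines; the helper name below is my own, not from [48].

```python
# Compute theta = ceil(log2(m)) and the index sets S(h) = {k : bit h of k-1
# equals 1}, which the logarithmic encoding uses to tie each of the m interval
# indices to a distinct assignment of theta binary variables.

import math

def index_sets(m):
    theta = max(1, math.ceil(math.log2(m)))       # number of binary variables
    sets = {h + 1: {k for k in range(1, m + 1) if (k - 1) >> h & 1}
            for h in range(theta)}                # S(1), ..., S(theta)
    return theta, sets

theta, S = index_sets(4)    # m = 4 intervals -> theta = 2 binary variables
```

For m = 4 this yields S(1) = {2, 4} and S(2) = {3, 4}, matching the instance above; for m = 1024 only ten binary variables are needed.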
Method 4. Consider the formulation of Li et al. [48], in which θ binary variables u_1, ..., u_θ encode, through the index sets S(h), which interval is active, together with auxiliary continuous variables that carry the value and slope of the active segment; two of these continuous variables are free-signed, the rest are nonnegative, and all the other symbols are the same as defined before.
The expressions of Method 4 for representing a piecewise linear function with m + 1 break points use θ = ⌈log₂ m⌉ binary variables, 2 free-signed continuous variables, and numbers of constraints and nonnegative continuous variables linear in m. Compared with Methods 1, 2, and 3, Method 4 indeed reduces the number of binary variables used, so the computational efficiency is improved. Although Li et al. [48] developed a way of expressing a piecewise linear function with fewer binary variables, Vielma et al. [39] showed that this representation for piecewise linear functions is theoretically and computationally inferior to standard formulations for piecewise linear functions. Vielma and Nemhauser [50] recently developed a novel piecewise linear expression requiring fewer variables and constraints than the existing piecewise linearization techniques to approximate univariate nonlinear functions. Their method needs a logarithmic number of binary variables and constraints to express a piecewise linear function. The formulation is described as shown below.
Let m be the number of line segments and θ = ⌈log₂ m⌉. Choose an injective function B : {1, ..., m} → {0, 1}^θ such that the vectors B(k) and B(k + 1) differ in at most one component for all k = 1, ..., m − 1; a reflected binary Gray code satisfies this property. For each break point index v ∈ {0, 1, ..., m}, the segments adjacent to v are segments v and v + 1 (only one of them at the boundaries v = 0 and v = m). Some notation is introduced below.
S⁺(l): the set composed of all break point indices v such that every segment k adjacent to v has B(k)_l = 1.
S⁰(l): the set composed of all break point indices v such that every segment k adjacent to v has B(k)_l = 0.
The linear approximation of a univariate f(x), x ∈ [a_0, a_m], by the technique of Vielma and Nemhauser [50] is formulated as follows.
Method 5. Denote L(x) as the piecewise linear function of f(x), where a_0, a_1, ..., a_m are the break points of f(x). L(x) can be expressed as
L(x) = Σ_{v=0}^{m} f(a_v) λ_v,  x = Σ_{v=0}^{m} a_v λ_v,  Σ_{v=0}^{m} λ_v = 1,  λ_v ≥ 0,
Σ_{v ∈ S⁺(l)} λ_v ≤ y_l,  Σ_{v ∈ S⁰(l)} λ_v ≤ 1 − y_l,  y_l ∈ {0, 1},  l = 1, ..., θ.
Method 5 uses ⌈log₂ m⌉ binary variables, m + 1 continuous variables, and a logarithmic number of constraints to express a piecewise linearization function with m line segments.
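The injective labeling B can be realized by a reflected binary Gray code. The sketch below builds one and checks the one-component-difference property that Method 5 relies on (my own helper, not the authors' code).

```python
# Label the m segments with 0/1 vectors of length ceil(log2(m)) such that
# consecutive labels differ in exactly one component (reflected Gray code:
# the code of i is i XOR (i >> 1), unpacked into bits).

import math

def gray_labels(m):
    width = max(1, math.ceil(math.log2(m)))
    return [tuple((i ^ (i >> 1)) >> b & 1 for b in range(width))
            for i in range(m)]

labels = gray_labels(8)
# number of differing components between consecutive labels
diffs = [sum(a != b for a, b in zip(u, v)) for u, v in zip(labels, labels[1:])]
```

With m = 8 segments the labels use only 3 binary components, all eight labels are distinct, and every consecutive pair differs in exactly one component.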
3. Formulation Comparisons
The comparison results of the above five methods in terms of the numbers of binary variables, continuous variables, and constraints are listed in Table 1. The number of extra binary variables of Methods 1 and 3 is linear in the number of line segments. Methods 4 and 5 have a number of extra binary variables logarithmic in the number of line segments, and the number of extra binary variables of Method 2 is equal to the number of concave piecewise line segments. In deterministic global optimization for a minimization problem, inverse, power, and exponential transformations generate nonconvex expressions that need to be linearly approximated in the reformulated problem. As shown in Table 1, Methods 4 and 5 are superior to Methods 1, 2, and 3 in terms of the numbers of extra binary variables and constraints. Moreover, Method 5 has fewer extra continuous variables and constraints than Method 4 in linearizing a nonlinear function.
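The gap between the linear and logarithmic binary-variable counts in Table 1 grows quickly with the number of segments; a small schematic sketch (counts are orders only, not the exact Table 1 entries):

```python
# Compare the order of extra binary variables: roughly m for the linear
# formulations (Methods 1 and 3) versus ceil(log2(m)) for the logarithmic
# ones (Methods 4 and 5), for m line segments.

import math

def binary_counts(m):
    return {"linear": m, "logarithmic": math.ceil(math.log2(m))}

counts = binary_counts(1024)   # 1024 segments: 1024 vs 10 binaries
```

At 1024 segments the logarithmic formulations need two orders of magnitude fewer binary variables, which is why they dominate as the approximation is refined.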
Till et al. [51] reviewed the literature on the complexity of mixed-integer linear programming (MILP) problems and summarized that the computational complexity grows with the number of constraints and, much more sharply, with the number of binary variables. Therefore, reducing constraints and binary variables has a greater impact on the computational efficiency of solving MILP problems than reducing continuous variables. When finding a global solution of a nonlinear programming problem by a piecewise linearization method, if the linearization method generates a large number of additional constraints and binary variables, the computational efficiency will decrease and heavy computational burdens will result. According to the above discussions, Method 5 is more computationally efficient than the other four methods. Experimental results from the literature [39, 48, 49] also support this statement.
Beale and Tomlin [43] suggested a formulation for piecewise linear functions by using continuous variables in special ordered sets of type 2 (SOS2). Although no binary variables are included in the
SOS2 formulation, the nonlinearities are enforced algorithmically and directly in the branch-and-bound algorithm by branching on sets of variables. Since the traditional SOS2 branching schemes have
too many dichotomies, the piecewise linearization technique in Method 5 induces an independent branching scheme of logarithm depth and provides a significant computational advantage [50]. The
computational results in Vielma and Nemhauser [50] show that Method 5 outperforms the SOS2 model without binary variables.
The factors affecting the computational efficiency in solving nonlinear programming problems include the tightness of the constructed convex underestimator, the efficiency of the piecewise
linearization technique, and the number of the transformed variables. An appropriate variable transformation constructs a tighter convex underestimator and makes fewer break points required in the
linearization process to satisfy the same optimality tolerance and feasibility tolerance. Vielma and Nemhauser [50] indicated that the formulation of Method 5 is sharp and locally ideal and has
favorable tightness properties. They presented experimental results showing that Method 5 significantly outperforms other methods, especially when the number of break points becomes large. Vielma et
al. [39] explained that the formulation of Method 4 is not sharp and is theoretically and computationally inferior to standard MILP formulations (convex combination model, logarithmic convex
combination model) for piecewise linear functions.
4. Error Evaluation
For evaluating the error of piecewise linear approximation, Tsai and Lin [49, 52] and Lin and Tsai [53] utilized the difference between the original nonlinear function and its piecewise linear approximation at the derived solution to estimate the error indicated in Figure 2. If f_0 is the objective function, f_j is the jth constraint, and x* is the solution derived from the transformed program, then the linearization does not require to be refined until e_0 ≤ ε_0 and e_j ≤ ε_j for all j, where e_0 is the evaluated error in the objective, ε_0 is the optimality tolerance, e_j is the error in the jth constraint, and ε_j is the feasibility tolerance.
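One simple way to estimate such an error numerically is to sample the gap between f and its piecewise linear interpolant on a dense grid; this is a sketch only, since the cited papers evaluate the error at the derived solution rather than by sampling.

```python
# Estimate max |f(x) - L(x)| over the break point range by dense sampling,
# where L is the piecewise linear interpolant of f on the given break points.

def max_error(f, breaks, samples=1000):
    lo, hi = breaks[0], breaks[-1]
    worst = 0.0
    for i in range(samples + 1):
        x = lo + (hi - lo) * i / samples
        for a, b in zip(breaks, breaks[1:]):   # find the interval containing x
            if a <= x <= b:
                lin = f(a) + (f(b) - f(a)) * (x - a) / (b - a)
                break
        worst = max(worst, abs(f(x) - lin))
    return worst

err = max_error(lambda t: t * t, [0.0, 1.0, 2.0])   # worst gap is 0.25
```

For f(t) = t² on unit intervals the worst gap occurs at the interval midpoints, where the chord overshoots the curve by 0.25; shrinking the intervals shrinks this error quadratically.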
The accuracy of the linear approximation significantly depends on the selection of break points, and more break points can increase the accuracy of the linear approximation. Since adding numerous break points leads to a significant increase in the computational burden, break point selection strategies can be applied to improve the computational efficiency in solving optimization problems by the deterministic approaches. Existing break point selection strategies are classified into three categories as follows [54]:
(i) add a new break point at the midpoint of each interval of existing break points;
(ii) add a new break point at the point with the largest approximation error in each interval;
(iii) add a new break point at the previously obtained solution point.
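The first of these strategies can be sketched in a few lines; note that bisecting every interval doubles the segment count per iteration (the helper name is my own):

```python
# Break point refinement strategy (i): insert the midpoint of every existing
# interval, doubling the number of line segments per iteration.

def refine_midpoints(breaks):
    refined = []
    for a, b in zip(breaks, breaks[1:]):
        refined.extend([a, (a + b) / 2.0])
    refined.append(breaks[-1])
    return refined

refined = refine_midpoints([0.0, 1.0, 2.0])   # 2 segments -> 4 segments
```

Each call grows the break point list geometrically, which is exactly the computational burden that motivates the more selective strategies (ii) and (iii).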
According to the deterministic optimization methods for solving nonconvex nonlinear problems [29, 33, 38, 39, 48, 49, 53–56], inverse or logarithmic transformations are required to be approximated by piecewise linearization functions, and such terms must be piecewisely linearized using an appropriate break point selection strategy. If a new break point is added at the midpoint of each interval of existing break points, or at the point with the largest approximation error in each interval, the number of line segments doubles in each iteration. If a new break point is added at the previously obtained solution point, only one break point is added in each iteration. How to improve the computational efficiency through a better break point selection strategy still needs more investigation and experiments to reach concrete conclusions.
5. Conclusions
This study provides an overview of some of the most commonly used piecewise linearization methods in deterministic optimization. From the formulation point of view, the numbers of extra binary variables, continuous variables, and constraints have been decreasing in the most recently developed methods, especially the number of extra binary variables, which may cause heavy computational burdens. Additionally, a good piecewise linearization method must consider tightness properties such as being sharp and locally ideal. Since an effective break point selection strategy is important for enhancing the computational efficiency of linear approximation, more work should be done to study the optimal positioning of break points. Although a logarithmic piecewise linearization method with good tightness properties has been proposed, it is still too time consuming for finding an approximate global optimum of a large-scale nonconvex problem. Developing an efficient polynomial-time algorithm for solving nonconvex problems by piecewise linearization techniques remains a challenging question. Obviously, this contribution gives only a few preliminary insights and might point toward issues deserving additional research.
Acknowledgments
The research is supported by Taiwan NSC Grants NSC 101-2410-H-158-002-MY2 and NSC 102-2410-H-027-012-MY3.
References
1. J. F. Tsai, "An optimization approach for supply chain management models with quantity discount policy," European Journal of Operational Research, vol. 177, no. 2, pp. 982–994, 2007.
2. E. H. Aghezzaf and L. A. Wolsey, "Modelling piecewise linear concave costs in a tree partitioning problem," Discrete Applied Mathematics, vol. 50, no. 2, pp. 101–109, 1994.
3. A. Balakrishnan and S. Graves, "A composite algorithm for a concave-cost network flow problem," Networks, vol. 19, no. 2, pp. 175–202, 1989.
4. K. L. Croxton, Modeling and Solving Network Flow Problems with Piecewise Linear Costs, with Applications in Supply Chain Management [Ph.D. thesis], Operations Research Center, Massachusetts Institute of Technology, Cambridge, Mass, USA, 1999.
5. L. M. A. Chan, A. Muriel, Z. J. Shen, and D. Simchi-Levi, "On the effectiveness of zero-inventory-ordering policies for the economic lot-sizing model with a class of piecewise linear cost structures," Operations Research, vol. 50, no. 6, pp. 1058–1067, 2002.
6. L. M. A. Chan, A. Muriel, Z. J. M. Shen, et al., "Effective zero-inventory-ordering policies for the single-warehouse multiretailer problem with piecewise linear cost structures," Management Science, vol. 48, no. 11, pp. 1446–1460, 2002.
7. K. L. Croxton, B. Gendron, and T. L. Magnanti, "Variable disaggregation in network flow problems with piecewise linear costs," Operations Research, vol. 55, no. 1, pp. 146–157, 2007.
8. D. Bienstock and O. Günlük, "Capacitated network design polyhedral structure and computation," INFORMS Journal on Computing, vol. 8, no. 3, pp. 243–259, 1996.
9. V. Gabrel, A. Knippel, and M. Minoux, "Exact solution of multicommodity network optimization problems with general step cost functions," Operations Research Letters, vol. 25, no. 1, pp. 15–23, 1999.
10. O. Günlük, "A branch-and-cut algorithm for capacitated network design problems," Mathematical Programming A, vol. 86, no. 1, pp. 17–39, 1999.
11. T. L. Magnanti, P. Mirchandani, and R. Vachani, "Modeling and solving the two-facility capacitated network loading problem," Operations Research, vol. 43, no. 1, pp. 142–157, 1995.
12. K. Holmberg, "Solving the staircase cost facility location problem with decomposition and piecewise linearization," European Journal of Operational Research, vol. 75, no. 1, pp. 41–61, 1994.
13. K. Holmberg and J. Ling, "A Lagrangean heuristic for the facility location problem with staircase costs," European Journal of Operational Research, vol. 97, no. 1, pp. 63–74, 1997.
14. K. L. Croxton, B. Gendron, and T. L. Magnanti, "Models and methods for merge-in-transit operations," Transportation Science, vol. 37, no. 1, pp. 1–22, 2003.
15. H. L. Li, C. T. Chang, and J. F. Tsai, "Approximately global optimization for assortment problems using piecewise linearization techniques," European Journal of Operational Research, vol. 140, no. 3, pp. 584–589, 2002.
16. J. F. Tsai and H. L. Li, "A global optimization method for packing problems," Engineering Optimization, vol. 38, no. 6, pp. 687–700, 2006.
17. J. F. Tsai, P. C. Wang, and M. H. Lin, "An efficient deterministic optimization approach for rectangular packing problems," Optimization, 2013.
18. R. Fourer, D. M. Gay, and B. W. Kernighan, AMPL—A Modeling Language for Mathematical Programming, The Scientific Press, San Francisco, Calif, USA, 1993.
19. T. Graf, P. van Hentenryck, C. Pradelles-Lasserre, and L. Zimmer, "Simulation of hybrid circuits in constraint logic programming," Computers & Mathematics with Applications, vol. 20, no. 9-10, pp. 45–56, 1990.
20. A. Martin, M. Möller, and S. Moritz, "Mixed integer models for the stationary case of gas network optimization," Mathematical Programming, vol. 105, no. 2-3, pp. 563–582, 2006.
21. M. L. Bergamini, P. Aguirre, and I. Grossmann, "Logic-based outer approximation for globally optimal synthesis of process networks," Computers and Chemical Engineering, vol. 29, no. 9, pp. 1914–1933, 2005.
22. M. L. Bergamini, I. Grossmann, N. Scenna, and P. Aguirre, "An improved piecewise outer-approximation algorithm for the global optimization of MINLP models involving concave and bilinear terms," Computers and Chemical Engineering, vol. 32, no. 3, pp. 477–493, 2008.
23. J. F. Tsai, "Global optimization of nonlinear fractional programming problems in engineering design," Engineering Optimization, vol. 37, no. 4, pp. 399–409, 2005.
24. M. H. Lin, J. F. Tsai, and P. C. Wang, "Solving engineering optimization problems by a deterministic global approach," Applied Mathematics and Information Sciences, vol. 6, no. 7, pp. 21S–27S, 2012.
25. D. Ge, G. Wan, Z. Wang, and J. Zhang, "A note on appointment scheduling with piecewise linear cost functions," Working paper, 2012.
26. M. S. Bazaraa, H. D. Sherali, and C. M. Shetty, Nonlinear Programming—Theory and Algorithms, John Wiley & Sons, New York, NY, USA, 2nd edition, 1993.
27. F. S. Hillier and G. J. Lieberman, Introduction to Operations Research, McGraw-Hill, New York, NY, USA, 6th edition, 1995.
28. H. A. Taha, Operations Research, Macmillan, New York, NY, USA, 7th edition, 2003.
29. C. A. Floudas, Deterministic Global Optimization—Theory, Methods, and Applications, Kluwer Academic Publishers, Dordrecht, The Netherlands, 1999.
30. S. Vajda, Mathematical Programming, Addison-Wesley, New York, NY, USA, 1964.
31. K. L. Croxton, B. Gendron, and T. L. Magnanti, "A comparison of mixed-integer programming models for nonconvex piecewise linear cost minimization problems," Management Science, vol. 49, no. 9, pp. 1268–1273, 2003.
32. H. L. Li, "An efficient method for solving linear goal programming problems," Journal of Optimization Theory and Applications, vol. 90, no. 2, pp. 465–469, 1996.
33. H. L. Li and C. S. Yu, "Global optimization method for nonconvex separable programming problems," European Journal of Operational Research, vol. 117, no. 2, pp. 275–292, 1999.
34. H. L. Li and H. C. Lu, "Global optimization for generalized geometric programs with mixed free-sign variables," Operations Research, vol. 57, no. 3, pp. 701–713, 2009.
35. M. Padberg, "Approximating separable nonlinear functions via mixed zero-one programs," Operations Research Letters, vol. 27, no. 1, pp. 1–5, 2000.
36. H. Topaloglu and W. B. Powell, "An algorithm for approximating piecewise linear concave functions from sample gradients," Operations Research Letters, vol. 31, no. 1, pp. 66–76, 2003.
37. S. Kontogiorgis, "Practical piecewise-linear approximation for monotropic optimization," INFORMS Journal on Computing, vol. 12, no. 4, pp. 324–340, 2000.
38. H. L. Li and J. F. Tsai, "Treating free variables in generalized geometric global optimization programs," Journal of Global Optimization, vol. 33, no. 1, pp. 1–13, 2005.
39. J. P. Vielma, S. Ahmed, and G. Nemhauser, "A note on 'a superior representation method for piecewise linear functions'," INFORMS Journal on Computing, vol. 22, no. 3, pp. 493–497, 2010.
40. H. M. Markowitz and A. S. Manne, "On the solution of discrete programming problems," Econometrica, vol. 25, no. 1, pp. 84–110, 1957.
41. G. B. Dantzig, "On the significance of solving linear programming problems with some integer variables," Econometrica, vol. 28, no. 1, pp. 30–44, 1960.
42. H. D. Sherali, "On mixed-integer zero-one representations for separable lower-semicontinuous piecewise linear functions," Operations Research Letters, vol. 28, no. 4, pp. 155–160, 2001.
43. E. L. M. Beale and J. A. Tomlin, "Special facilities in a general mathematical programming system for nonconvex problems using ordered sets of variables," in Proceedings of the 5th International Conference on Operations Research, J. Lawrence, Ed., pp. 447–454, Tavistock Publications, London, UK, 1970.
44. I. R. de Farias Jr., E. L. Johnson, and G. L. Nemhauser, "Branch-and-cut for combinatorial optimization problems without auxiliary binary variables," The Knowledge Engineering Review, vol. 16, no. 1, pp. 25–39, 2001.
45. T. Nowatzki, M. Ferris, K. Sankaralingam, and C. Estan, Optimization and Mathematical Modeling in Computer Architecture, Morgan & Claypool Publishers, San Rafael, Calif, USA, 2013.
46. A. B. Keha, I. R. de Farias, and G. L. Nemhauser, "Models for representing piecewise linear cost functions," Operations Research Letters, vol. 32, no. 1, pp. 44–48, 2004.
47. A. B. Keha, I. R. de Farias, and G. L. Nemhauser, "A branch-and-cut algorithm without binary variables for nonconvex piecewise linear optimization," Operations Research, vol. 54, no. 5, pp. 847–858, 2006.
48. H. L. Li, H. C. Lu, C. H. Huang, and N. Z. Hu, "A superior representation method for piecewise linear functions," INFORMS Journal on Computing, vol. 21, no. 2, pp. 314–321, 2009.
49. J. F. Tsai and M. H. Lin, "An efficient global approach for posynomial geometric programming problems," INFORMS Journal on Computing, vol. 23, no. 3, pp. 483–492, 2011.
50. J. P. Vielma and G. L. Nemhauser, "Modeling disjunctive constraints with a logarithmic number of binary variables and constraints," Mathematical Programming, vol. 128, no. 1-2, pp. 49–72, 2011.
51. J. Till, S. Engell, S. Panek, and O. Stursberg, "Applied hybrid system optimization: an empirical investigation of complexity," Control Engineering Practice, vol. 12, no. 10, pp. 1291–1303, 2004.
52. J. F. Tsai and M. H. Lin, "Global optimization of signomial mixed-integer nonlinear programming problems with free variables," Journal of Global Optimization, vol. 42, no. 1, pp. 39–49, 2008.
53. M. H. Lin and J. F. Tsai, "Range reduction techniques for improving computational efficiency in global optimization of signomial geometric programming problems," European Journal of Operational Research, vol. 216, no. 1, pp. 17–25, 2012.
54. A. Lundell, Transformation Techniques for Signomial Functions in Global Optimization [Ph.D. thesis], Åbo Akademi University, 2009.
55. A. Lundell and T. Westerlund, "On the relationship between power and exponential transformations for positive signomial functions," Chemical Engineering Transactions, vol. 17, pp. 1287–1292.
56. A. Lundell, J. Westerlund, and T. Westerlund, "Some transformation techniques with applications in global optimization," Journal of Global Optimization, vol. 43, no. 2-3, pp. 391–405, 2009.
Independent spanning trees of product graphs and their construction, 1997
We show that the independent spanning tree conjecture on digraphs is true if we restrict ourselves to line digraphs. Also, we construct independent spanning trees with small depths in iterated line digraphs. From the results, we can obtain independent spanning trees with small depths in de Bruijn and Kautz digraphs that improve the previously known upper bounds on the depths. Keywords: independent spanning trees, line digraphs, vertex-connectivity, de Bruijn digraphs, Kautz digraphs, interconnection networks, broadcasting. 1 Introduction. Unless stated otherwise, each digraph of this paper is finite and may have loops but not multiarcs. Let G be a digraph. Then V(G) and A(G) denote the vertex set and the arc set of G, respectively. Let (u, v) ∈ A(G). Then we say that u is adjacent to v, and v is adjacent from u. Also, it is said that (u, v) is incident to v and incident from u. Let (v, w) ∈ A(G). Then we say that (u, v) is adjacent to (v, w), and (v, w) is adjacent from (u, v). Let ...
A set of spanning trees rooted at vertex r in G is called independent spanning trees (IST) if for each vertex v in G, v ≠ r, the paths from v to r in any two trees are different and vertex-disjoint. If the connectivity of G is k, the IST problem is to construct k IST rooted at each vertex. The IST problem has found applications in fault-tolerant broadcasting, but it is still open for general graphs with connectivity greater than four. Obokata et al. [IEICE Trans. Fundamentals of Electronics, Communications and Computer Sciences E79-A (1996) 1894–1903] have proved that the IST problem can be solved on multidimensional tori. However, their construction algorithm forbids the possibility of parallel processing. In this paper, we shall propose a parallel algorithm that is based on the Latin square scheme to solve the IST problem on multidimensional tori.
Two spanning trees of a graph G are said to be independent if they are rooted at the same vertex r, and for each vertex v ≠ r in G, the two different paths from v to r, one path in each tree, are internally disjoint. A set of spanning trees of G is independent if they are pairwise independent. A recursive circulant graph G(N, d) has N = cd^m vertices labeled from 0 to N − 1, where d ≥ 2, m ≥ 1, and 1 ≤ c < d, and two vertices x, y ∈ G(N, d) are adjacent if and only if there is an integer k with 0 ≤ k ≤ ⌈log_d N⌉ − 1 such that x ± d^k ≡ y (mod N). In this paper, we propose an algorithm to construct multiple independent spanning trees on recursive circulant graphs G(cd^m, d) under the condition d ≥ 3, where the number of independent spanning trees matches the connectivity of G(cd^m, d).
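The adjacency rule in this last abstract is concrete enough to check in a few lines. The Python sketch below (my own illustration, not the paper's parallel IST algorithm; the function name is mine) builds the edge set of G(N, d) straight from the definition:

```python
def recursive_circulant_edges(c, d, m):
    """Edge set of the recursive circulant graph G(N, d), N = c * d**m:
    vertices x and y are adjacent iff x +/- d**k == y (mod N) for some
    integer k with 0 <= k <= ceil(log_d N) - 1, as in the definition above."""
    n = c * d ** m
    # kmax = ceil(log_d n), computed with integers to avoid float rounding
    kmax, p = 0, 1
    while p < n:
        p *= d
        kmax += 1
    edges = set()
    for x in range(n):
        for k in range(kmax):
            for y in ((x + d ** k) % n, (x - d ** k) % n):
                if x != y:
                    edges.add((min(x, y), max(x, y)))
    return n, edges

# G(8, 2) = G(1 * 2**3, 2): offsets +/-1, +/-2, +/-4, so degree 5 everywhere
n, edges = recursive_circulant_edges(1, 2, 3)
```

For G(8, 2) the admissible offsets are ±1, ±2 and ±4 (mod 8), so the graph is 5-regular with 20 edges.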
Generating Contour Plots Using Multiple Sensor Platforms
Fumin Zhang and Naomi Ehrich Leonard
Proc. IEEE Swarm Intelligence Symposium, 2005.
We prove a convergent strategy for a group of mobile sensors to generate contour plots, i.e., to automatically detect and track level curves of a scalar field in the plane. The group can consist
of as few as four mobile sensors, where each sensor can take only a single measurement at a time. The shape of the formation of mobile sensors is determined to minimize the least mean square
error in the estimates of the scalar field and its gradient. The algorithm to generate a contour plot is based on feedback control laws for each sensor platform. The control laws serve two
purposes: to guarantee that the center of the formation moves along one level curve at unit speed; and to stabilize the shape of the formation. We prove that both goals can be achieved
asymptotically. We show simulation results that illustrate the performance of the control laws in noisy environments.
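The estimation step described in the abstract can be illustrated with a small least-squares sketch. This is not the authors' formation-shape optimization or their control laws, only the basic idea of fitting a local linear model of the scalar field from a handful of single-point measurements; the function name, the diamond formation and the test field are my own illustrative choices:

```python
import numpy as np

def estimate_field_and_gradient(positions, readings):
    """Least-squares fit of a local linear model T(p) ~ a + g . (p - pc),
    where pc is the center of the sensor formation. Returns the estimated
    field value a and gradient g at the center."""
    positions = np.asarray(positions, dtype=float)
    readings = np.asarray(readings, dtype=float)
    pc = positions.mean(axis=0)
    # design matrix: intercept column plus offsets from the formation center
    X = np.hstack([np.ones((len(positions), 1)), positions - pc])
    coef, *_ = np.linalg.lstsq(X, readings, rcond=None)
    return coef[0], coef[1:]

# four single-measurement sensors in a diamond formation around the origin,
# sampling a known linear test field T(p) = 2 + 3*p_x - p_y
pos = [[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0], [0.0, -1.0]]
vals = [2.0 + 3.0 * px - 1.0 * py for px, py in pos]
a, g = estimate_field_and_gradient(pos, vals)
```

For a truly linear field the fit is exact, which is why four sensors taking one measurement each suffice to recover both the field value and its gradient at the formation center.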
Machine Learning and Automated Trading
Instead of focusing on predicting price direction and price volatility with nonlinear models derived with machine learning methods, an alternative would be to try and discover exploitable price
relationships between assets of the same class and react (=trade) when mispricing happens, in other words, do statistical arbitrage. In a sense this is somehow ‘easier’ than attempting to forecast
prices, since the only thing one has to do is to find a relatively stable, linear or non-linear relationship between a group of at least two assets and assume that, from the time of its detection,
that relationship will carry on for some time into the future. Trading under this assumption is then very much a reactive process that is triggered by price movements that diverge significantly from
the modeled relationship. Traditional Pair Trading and trading of assets in a VECM (Vector Error Correction Model) relationship are good examples of statarb using linear models. So why not use a simple one-layer neural network or even an RBM to discover a non-linear price relationship between two non-cointegrated assets and, if this discovery process is successful, trade it in a similar way to a classical pair? Things become even more interesting when groups with more than just two assets are considered. This would then be the non-linear equivalent of a VECM.
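The classical linear pair case mentioned above can be sketched in a few lines, here in Python rather than R. The lookback, the entry threshold and the function name are illustrative assumptions, not tuned or recommended values:

```python
import numpy as np

def pair_signal(p1, p2, lookback=50, entry=2.0):
    """Classical linear pair-trading signal: regress p1 on p2 over the
    lookback window, z-score the latest residual spread, and return
    -1 (short the spread), +1 (long the spread) or 0 (no entry).
    Lookback and threshold are placeholders, not tuned values."""
    y = np.asarray(p1, dtype=float)[-lookback:]
    x = np.asarray(p2, dtype=float)[-lookback:]
    beta, alpha = np.polyfit(x, y, 1)
    spread = y - (beta * x + alpha)
    z = (spread[-1] - spread.mean()) / spread.std()
    if z > entry:
        return -1   # p1 rich relative to p2: short p1, long p2
    if z < -entry:
        return 1    # p1 cheap relative to p2: long p1, short p2
    return 0

# toy example: p1 tracks 2 * p2 exactly, except the latest print is rich
x = np.arange(50.0)
y = 2 * x
y[-1] += 10.0
```

With the last price pushed well above the fitted relationship, the residual z-score exceeds the entry band and the signal is to short the spread; this is exactly the "reactive" trading triggered by divergence from the modeled relationship.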
Feature Selection – Breadth vs. Depth
Let's say we have a univariate time series prediction target that can either be of type regression or classification, and we have to decide which input features to select. More concretely, we have a large universe of time series that we can use as inputs and we would like to know how many we should pick (breadth) and also how far back in time we want to look for each one (depth). There is a two-dimensional space of choices, delimited by the following four extreme cases, under the assumption that we have a total of N series and we can, at the most, look back K timesteps: (1) pick only one series and look back one timestep, (2) pick only one series and look back K timesteps, (3) pick N series and look back one timestep, (4) pick N series and look back K timesteps. The optimal choice will likely not be any of those, since (1) and (2) may not contain enough predictive information and (3) and especially (4) will either not be feasible due to computing constraints or contain too much random noise. The suggested way of approaching this is to start small at (1), see what performance you get, and then increase the size of the input space, either breadth- or depth-wise, until you have reached satisfactory prediction performance or until you have exhausted your computing resources and need to either abandon the whole approach :( or buy a new (farm of) desktop(s) :)
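The "start small and grow" procedure can be sketched as a greedy search over the (breadth, depth) grid. Here `score` stands in for whatever validation metric you use; the function name and the stopping rule are my own illustrative choices, not a prescription:

```python
def grow_feature_window(score, max_breadth, max_depth, tol=1e-6):
    """Greedy version of the 'start small' procedure: begin at
    (breadth, depth) = (1, 1); at each step evaluate adding one more input
    series or one more lag, keep whichever improves the user-supplied
    validation metric score(b, d) most, and stop when neither helps
    (or the budget is exhausted). Returns (b, d, best_score)."""
    b, d = 1, 1
    best = score(b, d)
    while True:
        candidates = []
        if b < max_breadth:
            candidates.append((score(b + 1, d), b + 1, d))
        if d < max_depth:
            candidates.append((score(b, d + 1), b, d + 1))
        if not candidates:
            break
        s, nb, nd = max(candidates)
        if s <= best + tol:
            break               # no direction improves: stop growing
        best, b, d = s, nb, nd
    return b, d, best

# toy metric with a known optimum at breadth 3, depth 2
b, d, best = grow_feature_window(lambda b_, d_: -(b_ - 3) ** 2 - (d_ - 2) ** 2,
                                 max_breadth=10, max_depth=10)
```

A greedy climb like this can of course stall in a local optimum of the validation score; it is a budget-friendly heuristic for exploring the breadth/depth trade-off, not an exhaustive search.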
Using Stacked Autoencoders and Restricted Boltzmann Machines in R
Stacked Autoencoders (SAs) and Restricted Boltzmann Machines (RBMs) are very powerful models for unsupervised learning. Unfortunately, at the time of writing it looks as if there are no direct R
implementations available, which is surprising since both model types have been around for a while and R has implementations for many other machine learning model types. As a workaround, SAs could be implemented fairly quickly using one of several neural network packages in R (nnet, AMORE); for RBMs, well, someone would have to write a good R implementation first. But given that training both
model types requires a lot of computational resources, we also want an implementation that can make use of GPUs. So at the moment the simplest solution we seem to have is to use Theano. It can use
GPUs and it provides implementations of stacked (denoising) autoencoders and RBMs. In addition Python/Theano code for several other more exotic Boltzmann Machine variants is floating around the net
as well. We can use rPython to call these Python functions from R, but the challenge is the data: getting large datasets back and forth between R and Python without using the ASCII serialization that rPython implements (too slow) needs to be solved. An at least equally potent implementation of autoencoders that supports GPU use is available via the Torch7 framework (demo). However, Torch7 functions are called using Lua, and calling them from within R instead will require some work at the C level.
In conclusion: use Theano (Python) or Torch7 (Lua) for training models with GPU support and write the trained models to file. In R, import the trained model from file and use it for prediction.
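For readers who want to see what such a library computes under the hood, here is a toy CPU-only sketch of a single-hidden-layer autoencoder with tied weights in plain NumPy. It is deliberately minimal (no minibatching, no denoising corruption, no GPU) and is only an illustration of the math, not a substitute for the Theano or Torch7 implementations discussed above:

```python
import numpy as np

rng = np.random.default_rng(0)

def train_autoencoder(X, n_hidden, lr=0.05, epochs=500):
    """Single-hidden-layer autoencoder with tied weights: encode with a
    sigmoid layer, decode linearly with the transposed weight matrix, and
    minimize mean squared reconstruction error by full-batch gradient
    descent. Returns the learned parameters and the final training MSE."""
    n, d = X.shape
    W = rng.normal(scale=0.1, size=(d, n_hidden))
    b = np.zeros(n_hidden)          # hidden bias
    c = np.zeros(d)                 # output bias
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    for _ in range(epochs):
        H = sigmoid(X @ W + b)      # encode
        R = H @ W.T + c             # decode (tied weights)
        E = R - X                   # reconstruction error
        dH = (E @ W) * H * (1 - H)  # backprop through the sigmoid
        gW = X.T @ dH + E.T @ H     # tied weights: both paths contribute
        W -= lr * gW / n
        b -= lr * dH.sum(axis=0) / n
        c -= lr * E.sum(axis=0) / n
    return W, b, c, float(np.mean(E ** 2))

# toy data: 5-dimensional observations lying on a 2-dimensional subspace
X = rng.normal(size=(100, 2)) @ rng.normal(size=(2, 5))
W, b, c, mse = train_autoencoder(X, n_hidden=2)
```

Because the data has only two underlying degrees of freedom, two hidden units are enough to drive the reconstruction error well below the variance of the raw data; everything a real library adds on top of this (stochastic minibatches, corruption noise, GPU kernels) is machinery around the same gradient computation.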
What Frequencies to Trade?
When trying to find exploitable market patterns that one could trade as a retail trader, one of the first questions is: What trading frequencies to look at? Monthly? Weekly? Daily? Or intraday, anywhere between 5 seconds and 1 hour? With limited time available for conducting research at all of these timescales, this becomes an important question to answer. I and others have observed that
there seems to be a simple relationship between trading frequency and amount of effort needed to find a profitable strategy that is purely quantitative and has acceptable risk. In short:
The lower (=slower) the frequency you want to trade at, the ‘smarter’ your profitable strategy needs to be.
As an example, one could look at the (very) high frequency end of the spectrum, where market-making strategies based on really very simple mathematics can be very profitable, if you manage to be close enough to the market center.
Taking a big jump down to the daily frequency realm, it becomes much harder to find quantitative strategies that are profitable while still being based on rather simple mathematics.
Trading in weekly and monthly intervals, using simple quantitative methods or ‘technical’ indicators only is a very good recipe for disaster.
So, assuming for a moment that this relationship is indeed true and also considering that we can and want to use sophisticated machine learning techniques in our trading strategies, we could start
with a weekly frequency window and work our way towards higher frequencies.
Weekly trading does not have to be automated at all and can be done from any web-based brokerage interface. We could develop a bag of strategies, using publicly available historical data in
combination with our favourite learning algorithm to find tradeable market patterns and then execute the strategy manually. At this scale, all the effort should go into finding and fine-tuning the
quantitative strategy and very little thought needs to be put into trade execution. Trade automation effort: 0%. Strategy smartness required: 100%
Daily trading should be automated, unless you can really dedicate a fixed portion of your day to monitoring the markets and executing trades. Integrating machine learning algorithms with automated daily trading is not a trivial task, but it can be done. Trade automation effort: 20%, Strategy smartness required: 80%
On intraday timescales, ranging from minutes and seconds to sub-seconds, the effort you will have to undertake to automate your trades can lie anywhere in the range between 20% and 90%. Fortunately, the smaller the timescale becomes, the ‘dumber’ your strategy can be, but ‘dumb’ is of course a relative concept here. Trade automation effort: 80%, Strategy smartness required: 20%
What features to use? Hand-crafted vs. learned
At one point in the design of a (machine) learning system you will inevitably ask yourself what features to feed into your model. There are at least two options. The first is to use hand-crafted features. This option will normally give you good results if the features are designed well (that of course is a tautology, since you would only call them well designed if they gave you good results...). Designing hand-crafted features requires expert knowledge about the field to which the learning system will be applied, e.g. audio classification, image recognition or, in our case, trading. The
problem here is that you may not have any of that expert knowledge (yet) and it will be very difficult to come by or take a lot of time or most likely both. So the alternative is to learn the
features from the data or in other words, use unsupervised learning to obtain them. One requirement here is that you really need lots of data. Much more of it than you would need for hand-crafted
features, but then again it doesn’t have to be labeled. The benefit however is clear. You don’t really need to be an expert in the specific field you design the system for, i.e. trading and finance.
So while you still need to figure out which subset of the learned features will be best for your learning system, that is also something you would have to do with the hand-crafted features. My
suggestion: Try designing some hand-crafted features by yourself. If they don’t perform and you have good reasons to believe that it is possible to have better results than the ones you are getting,
use unsupervised learning methods to learn features. You can even create a hybrid system that uses designed and learned features together.
Why I use Open Source tools for building trading applications
When I first started to look into doing my own automated trading, I had three requirements for the set of tools that I wanted to use. 1) They should cost as little as possible to get me started, even if that meant that I had to do a lot of programming and customization myself (i.e., it would cost time). 2) There should be a community of like-minded people out there using these same tools for a
similar purpose. 3) The tools should allow me to go as deep into the entrails of the system as necessary, even if in the very beginning my aim was more to discover the basics. I did not want to find
myself in a situation where two years down the line I would need to switch to a different set of tools, just because the ones I had started out with did not allow me to do what I wanted because of
problems with closed sources and restrictive licensing.
As a result I came to choose ‘R’ as my language of choice for developing trading algorithms, and I started using Interactive Brokers since they provide an API for interfacing with their brokerage.
While there are many nice trading tools that connect to the IB Trader Workstation and some can be used for automated trading, none of these offer the same power, flexibility and community support
that the ‘R’ project has. In addition, R has a really amazing repository of free and very advanced statistical and machine learning packages, something that is essential if you want to create trading algorithms.
On Gödel's Philosophy of Mathematics,
5.) Truth Criteria for Higher Axioms.
Before attempting an explication of Gödel's truth criteria, it must be mentioned that the word 'truth' as employed in this context is intended to be in accord with Gödel's remark that the axioms of
set theory "force themselves upon us as being true."[38] One could replace 'true' by 'correct', 'acceptable', 'tenable', 'plausible', or other seemingly neutral terminology. In particular, we can
think of an axiom being true if and only if it is satisfied in the principal interpretation of the theory in question.[39] Hence, a mathematician who states that he "believes an axiom to be true" is
actually indicating what he considers to be the principal interpretation of the theory. It is hoped that the logistic method allows us to recast ontologically suggestive terminology into a form which
is meaningful and acceptable to mathematicians of varying philosophical viewpoints, thus ridding ourselves of the nuisance of fruitless, inconclusive debate. This is especially important in the case
of Gödel, whose pronounced realism can be seen to have heuristic value (as in the case of "thing" language describing mathematical entities) even for nominalists and formalists, as well as others who
do not find it a tenable ontological position. We have avoided discussing Gödel's realism as much as possible in considering his methodology primarily to show that his methodology, which is crucial,
can be accepted without accepting his realism.
Gödel argues that the axioms of set theory, as usually considered today, constitute only a partial description of the whole of mathematics. We certainly do not wish to view mathematics as a completed
whole since new results and new methods are constantly being developed. It is necessary then to revise and amend our axiom systems, as well as our methods of reasoning, in order to incorporate new
achievements. When we add new axioms to the usual ones, we find that not only are certain issues seemingly related to the new axiom decided by the new axioms, but in addition, many questions in lower
systems are decided as well. The example of number theory discussed above is particularly important. The integers are trusted by most philosophical viewpoints. Hence we can expect that mathematicians
will be particularly sensitive to number-theoretic consequences of higher axioms.
Gödel calls an axiom a "weak extension" if it "has a model which can be defined and proved to be a model in the original (unextended) system."[40] A strong extension, then, would be an axiom which
possesses no inner model. He calls an extension "fruitful" if it yields consequences not otherwise obtainable in the lower system, "sterile" otherwise. A fruitful extension which yields
number-theoretic consequences is of importance because these consequences can often be confirmed to a degree by computation up to any given integer. Gödel calls this process a "verification,"
although 'confirmation' would be better terminology since computations are more closely allied with inductive rather than deductive methods. What is crucial however is the fact that these
computations are the most tangible evidence available to us. Indeed, one can doubt Peano's Axioms, but it is unreasonable to doubt calculations, the most basic of mathematical facts.
Gödel distinguishes between "plausible" and "implausible" consequences of an axiom. This is perhaps the weakest aspect of his methodology because one can always object to whatever decision is
reached. It is difficult to state precisely why the consequences of an axiom are implausible in terms other than mathematicians tend to regard these consequences as untenable. There is no concrete
way to resolve differences of opinion, and mathematicians are not always in agreement, as the history of the subject indicates. Today we are accustomed to irrational numbers, imaginary numbers,
continuous functions without derivatives, transcendental numbers, and the actual infinite. However, the remarks made by some mathematicians when these concepts were first introduced are not only
humorous, but are indicative of the subjectivity which is possible in mathematics.[41] Gödel however feels that most mathematicians are in agreement because the intuition is objective, not
subjective, as the intuitionists believe.[42] Moreover, an axiom might have both plausible and implausible consequences, as the axiom of choice is often regarded.[43] But Gödel argues that
mathematics has always progressed in this manner, weighing the plausible against the implausible, and notes that conclusive evidence may take centuries to gather. Critics of Gödel's plausibility
criteria may be challenged to produce a more decisive method of evaluation. It appears that a thorough knowledge of a mathematical discipline is the only credential for responsible decision-making,
and beyond this it does not seem possible to just list what makes a consequence plausible or implausible, other than the fact that it is so regarded by those deeply embedded in this area of research.
The value of Gödel's appeal to higher axioms is now apparent. If there is a question of plausibility which is unresolved, the issue may be decided by an axiom which does have universally acceptable
consequences. Thus one arrives at a decision by assenting to an axiom which resolves the issue. The axiom is accepted because its consequences are considered desirable. Admittedly, this proposal has
its drawbacks, but it must be regarded as a positive approach to the problem, rather then a dismissal, as Gödel views the intuitionists' rejection of the theory of Alephs.
We can summarize Gödel's truth criteria, although it must be mentioned that this list is not any more complete than mathematics is itself. As new problems arise, new criteria will have to be developed.
(i) A fruitful extension is to be preferred over a sterile extension, provided that the fruitful extension does not create implausible consequences in the lower system.
(ii) An extension which yields theorems about integers, thus confirmable by computation up to any given integer, is to be preferred over an extension which is sterile with respect to number theory.
(iii) The needs of applied mathematics are to be taken into consideration, but the fact that an axiom system is employed in applied mathematics does not mean that it must therefore be employed in
pure mathematics, because of the essential differences of the two disciplines.
(iv) A question, shown to be undecidable in a lower system, is to be evaluated with respect to the value of its consequences, the value of its negation's consequences, as well as its relationship to
the value of other axioms known to decide it.
The third and fourth criteria require some explanation. In the third, we find that Gödel believes that the needs of physics and other areas where mathematics is applied cannot supply an answer to
mathematical questions not related to these disciplines. For example, the higher axioms of infinity do not seem to be directly relevant to physics at present,[44] although it is possible that they
may yield results, say in partial differential equations, which would be of value to physicists. The fourth criterion points out the importance of searching for tenable higher axioms to decide open
questions otherwise unresolvable. Whether an axiom system admits one and only one principal interpretation, or many principal interpretations depends on the nature of the theory in question. For
example, the various geometries seem to be equally acceptable interpretations of those axioms common to each,[45] e.g. "two points determine a line uniquely" holds in each interpretation. In the case
of axiomatic set theory, Gödel feels that the principal interpretation is indeed unique:
...the set-theoretical concepts and theorems describe some well-determined reality, in which Cantor's conjecture must be either true or false.[46]
We shall investigate this view when we discuss Gödel's realism.
6.) Some Concluding Remarks on Gödel's Methodology.
Barring any unforeseen catastrophe within the bounds of classical mathematics, one can safely assume that Gödel will patently reject any severe limitation or restrictive modification on the
procedures and content of classical mathematics. Indeed the major shortcomings of restrictive methodologies in general revolve around their inability to develop an adequate theory of real numbers.
Gödel then, as well as most mathematicians, regards classical analysis as fundamentally embedded in the core of mathematics, and any restrictive principle of reason which inhibits the adequate
development of classical analysis is a fortiori pathological. Mathematics for Gödel is boundless, having its beginning in the rudiments of logic, extending up to classical analysis, the higher axioms
of infinity, and beyond to bolder, richer but as yet undiscovered theories.
Banach, S. and A. Tarski. "Sur la décomposition des ensembles de points en parties respectivement congruentes," Fundamenta Mathematicae, 1924, 6:244–277.
Borel, E. Éléments de la Théorie des Ensembles (Paris, 1949), pp. 200–239.
----- Note VII "Les paradoxes de l'axiome du choix," Leçons sur la Théorie des Fonctions, 4th ed. (Paris, 1950), pp. 287–291.
----- Chapter 11, Elements of the Theory of Probability, translated by John E. Freund (Englewood Cliffs, 1965), pp. 109–117.
Sierpinski, W. Leçons sur les Nombres Transfinis (Paris, 1928).
----- Chapter VI "The axiom of choice. Controversy about it," Cardinal and Ordinal Numbers (Warsaw, 1958), pp. 88–131.
Suppes, P. Chapter 8, Axiomatic Set Theory (Princeton, 1960), pp. 239–252.
In Section 2, Schlegel employs the Continuum Hypothesis in identifying the next transfinite number beyond a given cardinal.[47]
In Section 3, "The Cardinality of Atom-Spaces,"[48] there is an intricate cardinality argument which blends physical theory with transfinite arithmetic. Gödel, in the "Supplement to the Second
Edition" of "What is Cantor's Continuum Problem?" remarked that a physical interpretation could not decide open questions of set theory, i.e. there was (at the time of his writing) no "physical set
theory" although there is a physical geometry:
As far as the epistemological situation is concerned, it is to be said that by a proof of undecidability a question loses its meaning only if the system of axioms under consideration is
interpreted as a hypothetico-deductive system; i.e. if the meanings of the primitive terms are left undetermined. In geometry, e.g., the question of whether Euclid's fifth postulate is true
retains its meaning if the primitive terms are taken in a definite sense, i.e., as referring to the behavior of rigid bodies, rays of light, etc. The situation in set theory is similar, the
difference is only that, in geometry, the meaning usually adopted today refers to physics rather than to mathematical intuition and that, therefore, a decision falls outside the range of
mathematics. On the other hand, the objects of transfinite set theory...clearly do not belong to the physical world and even their indirect connection with physical experience is very loose
(owing primarily to the fact that set-theoretical concepts play only a minor role in the physical theories of today.)[49]
Gödel himself has performed research in cosmology but we do not know if he has criticized Schlegel's article. It would be remarkable if physicists were able to employ set theory, especially
transfinite arithmetic, in their work. Historically, physics has employed much of available mathematics, e.g. classical analysis, probability and statistics, group theory, and geometry, with set
theory a notable exception. Such an intimate relationship is not easily explained when one considers the apparent divergence of the experimental nature of physics from the intuition inherent in
mathematical thought. Moreover, it would tend to indicate that physical theory may actually depend on mathematical discovery; that the question of whether mathematics is being interpreted in physics
or physics is being interpreted in mathematics is not as clear-cut as some who would use physical theory as a criterion of mathematical truth seem to indicate.
Cf. also Schlegel's Completeness In Science (New York, 1967), and the review of it by Edward H. Madden in Philosophy of Science, 1967, 34:386-388.
Copyright (c) 1968, 1998 Harold Ravitch, Ph.D. All Rights Reserved
[FOM] About Paradox Theory
Rob Arthan rda at lemma-one.com
Sat Sep 24 08:27:24 EDT 2011
On 18 Sep 2011, at 23:27, Rob Arthan wrote:
> On 17 Sep 2011, at 16:05, hdeutsch at ilstu.edu wrote:
>> Here is the argument concerning the "paradox of grounded classes" to save people from having to look it up:
>> The following argument is first-order valid:
>> AyEzAx(F(xz) <--> x=y). Therefore,
>> -EwAx(F(xw) <--> Au([F(xu) --> Ey(F(yu) & -Ez{F(zu) & F(zy)])]).
> I think something has gone missing in your transcription here: AyEzAx(F(xz) <--> x=y) is not true for every interpretation of F (e.g., if F is identically false).
In case this is still confusing anyone, it became clear from later posts that ". Therefore." here is intended to be read as the horizontal line between succedent and antecedent in an object level inference, and not as a meta level modality connecting two meta level truths.
Words Avoiding a Reflexive Acyclic Relation
Let ${\cal A}\subseteq {\bf [n]}\times{\bf [n]}$ be a set of pairs containing the diagonal ${\cal D} = \{(i,i)\,|\, i=1,\ldots,n\}$, and such that $a\leq b$ for all $(a,b) \in {\cal A}$. We study
formulae for the generating series $F_{\cal A} ({\bf x}) = \sum_w {\bf x}^w$ where the sum is over all words $w \in {\bf [n]}^*$ that avoid ${\cal A}$, i.e., $(w_i,w_{i+1})\notin {\cal A}$ for $i=1,\
ldots,|w|-1$. This series is a rational function, with denominator of the form $1-\sum_{T}\mu_{{\cal A}}(T){\bf x}^T$, where the sum is over all nonempty subsets $T$ of $[n]$. Our principal focus is
the case where the relation ${\cal A}$ is $\mu$-positive, i.e., $\mu_{\cal A}(T)\ge 0$ for all $T\subseteq {\bf [n]}$, in which case the form of the generating function suggests a cancellation-free
combinatorial encoding of words avoiding ${\cal A}$. We supply such an interpretation for several classes of examples, including the interesting class of cycle-free (or crown-free) posets.
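The claimed rational structure of $F_{\cal A}$ is easy to sanity-check by brute force on tiny alphabets. The Python sketch below is not part of the paper; it simply counts words of a given length avoiding a relation ${\cal A}$. Taking $n=2$ with ${\cal A}$ the diagonal forces words to alternate letters, so every positive length admits exactly two words.

```python
from itertools import product

def count_avoiding_words(n, A, length):
    """Count words of the given length over the alphabet {1..n}
    all of whose consecutive pairs avoid the relation A."""
    if length == 0:
        return 1  # the empty word always avoids A
    count = 0
    for w in product(range(1, n + 1), repeat=length):
        if all((w[i], w[i + 1]) not in A for i in range(length - 1)):
            count += 1
    return count

# n = 2, A = diagonal {(1,1),(2,2)}: words must strictly alternate,
# so the counts by length are 1, 2, 2, 2, ...
A = {(1, 1), (2, 2)}
print([count_avoiding_words(2, A, L) for L in range(5)])  # [1, 2, 2, 2, 2]
```

These counts are the coefficients of the generating series $F_{\cal A}$ at ${\bf x} = (x, x)$, so small cases like this can be checked directly against the stated denominator form.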
NEST 2014 Physics Syllabus
General : Units and dimensions, dimensional analysis; least count, significant figures. Methods of measurement ( direct, indirect, null, etc. ) and measurement of length, time, mass, temperature, electrical potential difference, current and resistance.
Design of some simple experiments, Identification of independent, dependent and control variables, Identification of sample size, range and interval. Identification of appropriate measurement
techniques and instruments.
Graphical representation, interpretation and analysis of data. Errors in the measurements and error analysis.
Mechanics : Kinematics in one and two dimensions ( Cartesian coordinates only ), projectiles; Uniform Circular motion; Relative velocity. Newton’s laws of motion. Inertial and uniformly accelerated
frames of reference. Static and dynamic friction. Kinetic and potential energy; Work and power; Conservation of linear momentum and mechanical energy.
Systems of Particles; Centre of mass and its motion. Impulse. Elastic and inelastic collisions.
Law of gravitation; Gravitational potential and field; Acceleration due to gravity; Motion of planets and satellites in circular orbits. Escape Velocity.
Rigid Body, moment of inertia, parallel and perpendicular axes theorems, moment of inertia of uniform bodies with simple geometrical shapes. Angular momentum, Torque. Conservation of angular
momentum. Dynamics of rigid bodies with fixed axis of rotation. Rolling without slipping of rings, cylinders and spheres. Equilibrium of rigid bodies. Collision of point masses with rigid bodies.
Linear and angular simple harmonic motions. Hooke’s law, Young’s modulus.
Pressure in a fluid; Pascal’s law; Buoyancy; Surface energy and surface tension, capillary rise; Viscosity – Stoke’s and Poiseuille’s law, Terminal velocity. Streamline flow, equation of continuity,
Bernoulli’s theorem.
Plane wave motion ( plane waves only ), longitudinal and transverse waves, superposition of waves. Progressive and stationary waves. Vibration of strings and air columns. Resonance; Beats. Speed of
sound in gases. Doppler effect ( in sound ).
Thermal Physics : Thermal expansion of solids, liquids and gases. Calorimetry, latent heat. Heat conduction in one dimension. Elementary concepts of convection and radiation; Newton’s law of cooling;
Ideal gas laws; Specific heats ( C[v] and C[p] for monoatomic and diatomic gases ). Isothermal and adiabatic processes, bulk modulus of gases. Equivalence of heat and work; First and second law of
thermodynamics and its applications ( only for ideal gases ). Entropy. Blackbody Radiation – absorptive and emissive powers; Kirchhoff’s law; Wien’s displacement law, Stefan’s law.
Electricity and Magnetism : Coulomb’s law. Electric field and potential. Electrical potential energy of a system of point charges and of electrical dipoles in a uniform electrostatic field. Electric
field lines; Flux of electric field. Gauss’s law and its application in simple cases, such as finding the field due to an infinitely long straight wire, a uniformly charged infinite plane sheet and a uniformly
charged thin spherical shell.
Capacitance – Calculation of capacitance with and without dielectrics. Capacitors in series and parallel. Energy stored in a capacitor. Electric current. Ohm’s law. Series and parallel arrangements
of resistances and cells. Kirchhoff’s laws and simple applications. Heating effect of current.
Biot – Savart’s law and Ampere’s law; Magnetic field near a current carrying straight wire, along the axis of a circular coil and inside a long straight solenoid; Force on a moving charge and on a
current carrying wire in a uniform magnetic field.
Magnetic moment of a current loop; Effect of a uniform magnetic field on a current loop; Moving coil galvanometer, voltmeter, ammeter and their conversions.
Electromagnetic induction : Faraday’s law, Lenz’s law; Self and mutual inductance; RC, LR and LC circuits with DC and AC sources.
Optics : Rectilinear propagation of light. Reflection and refraction at plane and spherical surfaces, Deviation and dispersion of light by a prism; Thin lenses. Magnification.
Wave nature of light – Huygens’ principle, interference limited to Young’s double slit experiment. Elementary idea of diffraction – Rayleigh criterion. Elementary idea of polarization – Brewster’s law and the law of Malus.
Modern Physics : Atomic nucleus. Alpha, beta and gamma radiations; Law of radioactive decay; Decay constant; Half – life and mean life. Binding energy and its calculation. Fission and fusion
processes. Energy calculation in these processes.
Photoelectric effect. Bohr’s theory of hydrogen like atoms; Characteristic and continuous X – rays, Moseley’s law. de Broglie wavelength of matter waves. Heisenberg’s uncertainty principle.
FOM: replies to Hersh on logicism and certainty
M. Randall Holmes M.R.Holmes at dpmms.cam.ac.uk
Mon Oct 5 05:27:32 EDT 1998
(Hersh says)
You have made a solid case. Logicism lives.
Who's arguing? When did I say it was dead?
I said it had been unable to achieve its original goals.
That's not saying it's dead.
(I reply)
If you read my recent posts, you will realize that I admit no such thing.
I describe precisely how I think that mathematics can successfully be
founded on logic, and I support the proposition that properly proved
mathematical statements are certainly true.
(Hersh says)
I proposed and advocated a different kind of philosophy,
based on mathematical practise as it really is, not as
it's supposed to be.
That's proposing a competitor to logicism. It's not
saying logicism is dead.
(I reply)
No such competing viewpoint is necessary, as mathematics is really
as it is supposed to be (as well as we can manage with our admittedly
erratic faculties :-) )
(Hersh says)
We agree. Logicism lives.
(I reply)
We agree on the truth of that statement, yes.
(Hersh closes)
Reuben Hersh
(Hersh replies further (re indubitability))
You say one cannot have a usable or worthwhile notion of partial
or incomplete rigor or proof without having in advance a notion
of perfect rigor or certainty.
I don't think this necessity has ever been demonstrated.
(I reply)
I think this is obvious, and attested in the actual behavior
of mathematicians (probably including yourself in unreflective moments).
(Hersh continues)
Certainly it works the other way--*if* one could have a meaningful,
valid criterion or notion or test of absolute certainty or perfect
proof or rigor, then indeed that would be useful in dealing with
imperfect or incomplete proofs.
(I reply)
Imperfect or incomplete what, exactly? The imperfect falls short of ...
the perfect :-)
(Hersh continues)
However, it may be that the search for perfect or absolute certainty or
proof or rigor is actually made by successively refining and making more
satisfactory our standards of incomplete proof.
(Hersh continues)
History does show successive refinements and strengthening of proof
(betweenness in geometry, uniform convergence in analysis, and others
in set theory which you are better qualified than I to tell about).
You can say that those advances were made possible on the basis of
an existing notion of absolute certainty. I would say that our
notion of absolute certainty was developed in the course of more
careful and critical incomplete proofs.
(I reply)
How can you tell that the standards of proof are refined or strengthened
if you do not know what an adequate proof is? The notion of the absolute
certainty of mathematics is as old as Pythagoras and Euclid; it was not
a brainstorm of 19th century mathematicians.
(Hersh continues)
I emphasize that "incomplete proofs" means virtually all proofs
as presented to seminars or colloquia or printed in journals and treatises
or broadcast on the world wide web. Harvey recently gave examples
of typical ways in which gaps are consciously left in published
proofs. Complete proofs, for most problems
and theorems of any substance, would be far too long and tedious to be
read or published.
(I reply)
I emphasize this point very strongly myself. How can you recognize
a gap if you don't know what a complete proof would look like? The
experience of the Automath project, by the way, suggests that the
increase in size of completely formal proofs expressed in a suitable
notation is by a constant factor and not perhaps by such a large factor
as one might suppose (though the tedium of such proofs is undeniable).
I think the recent experience of Mizar supports this (is anyone in the
Mizar group on this list?)
(Hersh closes)
Reuben Hersh
(I close (there is mercy in the world!))
Sincerely, M. Randall Holmes
holmes at math.idbsu.edu or mrh29 at dpmms.cam.ac.uk
Boise State University and the University of Cambridge
must be held harmless for any silly thing I may say.
"And God posted an angel with a flaming sword at the gates
of Cantor's paradise, that the slow-witted
and the deliberately obtuse might not glimpse
the wonders therein." (Holmes, with apologies to Hilbert)
Glenwood Landing Math Tutor
Find a Glenwood Landing Math Tutor
My name is Anthony, and my math tutoring career goes all the way back to my high school days, when I tutored middle schoolers with my National Honor Society program in Lindenhurst. During my years
studying applied math in college, I continued this practice independently, assisting students in middl...
23 Subjects: including precalculus, differential equations, proofreading, SAT reading
...I think I'm pretty personable and down to earth, but also knowledgeable and structured. I believe that everybody learns at his or her own pace and with his or her own style and that one's
instructor needs to be cognizant of this. Although I have taken several upper level engineering courses, I ...
16 Subjects: including prealgebra, precalculus, reading, trigonometry
...These experiences have helped me to understand the subject matters in depth. As a summa cumlaude graduate, I am very enthusiastic to share my expertise and experience to make students succeed.
I am qualified to teach GRE math, SAT math, elementary science, math, microbiology, molecular biology, immunology, cell biology, genetics and organic chemistry.
16 Subjects: including algebra 1, chemistry, trigonometry, SAT math
...After reviewing the basic concepts of Geometry, including lines, angles, and triangles, the focus of the course is proof. An essential concept is that of congruence, and methods of proving that
triangles are congruent are introduced. Applications are made to quadrilaterals, including parallelograms, rectangles, squares and trapezoids.
6 Subjects: including algebra 1, algebra 2, calculus, geometry
...When I am not busy with school or leisure, I will be helping you or your child grasp a difficult subject or improve a test score. I encourage my students to talk me through their thought
process when tackling a problem, so that I can pinpoint exactly what is preventing them from arriving at the ...
22 Subjects: including precalculus, ACT Math, SAT math, algebra 1
If this quantity (the ratio of the actual mass density of the Universe to the critical density) exceeds one, the Universe curves back on itself to form a closed space of finite volume, but without boundary. In such a space the sum of the angles in a triangle would exceed 180 degrees, and a starship traveling on a straight line would eventually return to its point of origin. If the quantity above is less than one, the Universe is an open space in which triangles contain less than 180 degrees. If the quantity is exactly one, the space is Euclidean, which is also called flat.
Physics of the False Vacuum
THE FALSE VACUUM arises naturally in any theory that contains scalar fields, that is, fields that resemble electric or magnetic fields except that they have no direction. The Higgs fields of the Standard Model of particle physics or the more speculative grand unified theories are examples of scalar fields. It is typical of Higgs fields that the energy density is minimal not when the field vanishes, but instead at some nonzero value of the field, so the energy-density curve has its true minimum away from zero field, with a higher local minimum elsewhere (the diagram is not reproduced here). The energy density is zero when the field sits at this nonzero minimizing value, so that condition corresponds to the ordinary vacuum of empty space. In this context it is usually called the "true" vacuum. The state in which the scalar field is caught at the higher local minimum of the energy density is called the false vacuum.
The peculiar properties of the false vacuum stem from its pressure, which is large and negative (see box on the right). Mechanically such a negative pressure corresponds to a suction, which does not
sound like something that would drive the Universe into a period of rapid expansion. The mechanical effects of pressure, however, depend on pressure differences, so they are unimportant if the
pressure is reasonably uniform. According to general relativity, however, there is a gravitational effect that is very important under these circumstances. Pressures, like energy densities, create
gravitational fields, and in particular a positive pressure creates an attractive gravitational field. The negative pressure of the false vacuum, therefore, creates a repulsive gravitational field,
which is the driving force behind inflation.
There are many versions of inflationary theories, but generically they assume that some small patch of the early Universe somehow came to be in a false vacuum state. Various possibilities have been
discussed, including supercooling during a phase transition in the early Universe, or a purely random fluctuation of the fields. A chance fluctuation seems reasonable even if the probability is low,
since the inflating region will enlarge by many orders of magnitude, while the non-inflating regions will remain microscopic. Inflation is a wildfire that will inevitably take over the forest, as
long as there is some chance that it will start.
Once a patch of the early Universe is in the false vacuum state, the repulsive gravitational effect drives the patch into an inflationary period of exponential expansion. To produce a universe with
the special features of the Big Bang discussed above, the expansion factor must be at least about 10^25. There is no upper limit to the amount of expansion. Eventually the false vacuum decays, and
the energy that had been locked in it is released. This energy produces a hot, uniform, soup of particles, which is exactly the assumed starting point of the traditional Big Bang theory. At this
point the inflationary theory joins onto the older theory, maintaining all the successes for which the Big Bang theory is believed.
In the inflationary theory the Universe begins incredibly small, perhaps as small as 10^-24 cm, a hundred billion times smaller than a proton. The expansion takes place while the false vacuum maintains
a nearly constant energy density, which means that the total energy increases by the cube of the linear expansion factor, or at least a factor of 10^75. Although this sounds like a blatant violation
of energy conservation, it is in fact consistent with physics as we know it.
The resolution to the energy paradox lies in the subtle behavior of gravity. Although it has not been widely appreciated, Newtonian physics unambiguously implies that the energy of a gravitational field is always negative, a fact which holds also in general relativity. The Newtonian argument closely parallels the derivation of the energy density of an electrostatic field, except that the answer
has the opposite sign because the force law has the opposite sign: two positive masses attract, while two positive charges repel. The possibility that the negative energy of gravity could balance the
positive energy for the matter of the Universe was suggested as early as 1932 by Richard Tolman, although a viable mechanism for the energy transfer was not known.
During inflation, while the energy of matter increases by a factor of 10^75 or more, the energy of the gravitational field becomes more and more negative to compensate. The total energy - matter plus
gravitational - remains constant and very small, and could even be exactly zero. Conservation of energy places no limit on how much the Universe can inflate, as there is no limit to the amount of
negative energy that can be stored in the gravitational field.
This borrowing of energy from the gravitational field gives the inflationary paradigm an entirely different perspective from the classical Big Bang theory, in which all the particles in the Universe
(or at least their precursors) were assumed to be in place from the start. Inflation provides a mechanism by which the entire Universe can develop from just a few ounces of primordial matter.
Inflation is radically at odds with the old dictum of Democritus and Lucretius, "Nothing can be created from nothing." If inflation is right, everything can be created from nothing, or at least from
very little. If inflation is right, the Universe can properly be called the ultimate free lunch.
Pressure of the False Vacuum
THE PRESSURE OF THE FALSE VACUUM can be determined by a simple energy-conservation argument. Imagine a chamber filled with false vacuum, as shown in the diagram below.
For simplicity, assume that the chamber is small enough so that gravitational effects can be ignored. Since the energy density of the false vacuum is fixed at some value u[f], the energy inside the
chamber is U=u[f]V, where V is the volume. Now suppose the piston is quickly pulled outward, increasing the volume by dV. If any familiar substance were inside the chamber, the energy density would
decrease. The false vacuum, however, cannot rapidly lower its energy density, so the energy density remains constant and the total energy increases. Since energy is conserved, the extra energy must
be supplied by the agent that pulled on the piston. A force is required, therefore, to pull the piston outward, implying that the false vacuum creates a suction, or negative pressure p. Since the
change in energy is dU = u[f]dV, which must equal the work done, dW = -pdV, the pressure of the false vacuum is given by
p = -u[f].
The pressure is negative, and extremely large. General relativity predicts that the gravitational field which slows the expansion of the universe is proportional to u[f] + 3p, so the negative
pressure of the false vacuum overcomes the positive energy density to produce a net repulsive gravitational field.
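The piston argument above can be traced numerically. The sketch below (Python, arbitrary units, not from the article) simply restates the bookkeeping dU = u[f] dV and dW = -p dV.

```python
# Energy conservation for a chamber of false vacuum (arbitrary units).
u_f = 2.5          # false-vacuum energy density, held fixed by assumption
dV = 0.25          # small outward displacement of the piston

dU = u_f * dV      # energy inside the chamber grows: dU = u_f * dV
p = -dU / dV       # the work dW = -p * dV must supply dU, so p = -u_f
print(p == -u_f)   # True: the pressure is negative (a suction)

# The gravitational source term in general relativity goes like u + 3p:
print(u_f + 3 * p) # -5.0: negative, hence a net repulsive gravitational field
```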
The solution to the horizon problem. The green line shows the radius of the region that evolves to become the presently observable Universe, as described by the traditional Big Bang theory. The black
line shows the corresponding curve for the inflationary theory. Due to the spectacular growth spurt during inflation, the inflationary curve shows a much smaller Universe than in the standard theory
for the period before inflation. The uniformity is established at this early time, and the region is then stretched by inflation to become large enough to encompass the observed Universe. Note that
the numbers describing inflation are illustrative, as the range of possibilities is very large.
Just Rakudo It
A Word for the Slow
January 20, 2011
So, my solution for Masak’s p1 has the distinction of being by far the least efficient working solution. Which is a shame, because I think this one was my favorite solution of the contest. It may be
slow, and I’d never advise anyone to use it for anything practical (given the other, much more efficient solutions), but in my opinion it’s a charming piece of code.
The key organizational piece for my solution is the Ordering class. (BTW, I didn’t like that name at all, it’s probably my least favorite part of my solution. My theory is Masak was too awestruck by
my inefficiency to quibble about the name.) I was casting about for how to represent a series of matrix multiplications, and hit on the idea of using a very simple stack-based language to do it. The
language has two operations: an Int represents putting the matrix with that index on the stack. The string "*" represents popping the top two matrices on the stack, multiplying them, and pushing the
result back on the stack. Here’s the code for making that happen while tracking the total number of multiplications:
method calculate-multiplications(@matrices) {
    my @stack;
    my $total-multiplications = 0;
    for @.ops {
        when "*" {
            my $a = @stack.pop;
            my $b = @stack.pop;
            my ($multiplications, $matrix) = multiply($a, $b);
            $total-multiplications += $multiplications;
            @stack.push($matrix);
        }
        when Int {
            @stack.push(@matrices[$_]);
        }
    }
    return $total-multiplications;
}
I’m an old Forth programmer from way back, and I can’t begin to say how much I love how easy p6 makes it to implement a simple stack machine!
Getting the string version of this is equally easy:
method Str() {
    my @stack;
    for @.ops {
        when "*" {
            my $a = @stack.pop;
            my $b = @stack.pop;
            @stack.push("($b * $a)");
        }
        when Int {
            @stack.push("A{$_ + 1}");
        }
    }
    return @stack.pop;
}
This time instead of a stack of Pairs (for the matrix size), we have a stack of Str representing each sub-matrix’s name. At the end we pop the last thing on the stack, and it’s the string
representing the entire multiplication. And by making this Ordering.Str, any time you print an Ordering you get this nice string form — handy both for the desired output of the program and for debugging.
I won’t comment on the guts of the generate-orderings function, which is heavily borrowed from HOP via List::Utils. Just note that given the number of matrices, it lazily generates all the possible
permutations — both the source of my code’s elegance and its extreme inefficiency.
Once you’ve got the array @matrices set up, calculating and reporting the best ordering (very slowly!) is as simple as
say generate-orderings(+@matrices).min(*.calculate-multiplications(@matrices));
(Note that I split the main body of this off into a function in the actual code, to make it easier to test internally.)
So clearly, I badly need to study more dynamic programming. But at the same time, I think there may be useful bits in my code that can be put to better use somewhere else.
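For readers wondering what the dynamic-programming alternative looks like, here is a minimal sketch of the standard matrix-chain recurrence in Python. This is textbook material, not the author's Perl 6 code, and `min_multiplications` is a name invented for the example.

```python
from functools import lru_cache

def min_multiplications(dims):
    """Minimum scalar multiplications to evaluate a chain of matrices,
    where matrix i has dimensions dims[i] x dims[i+1]."""
    @lru_cache(maxsize=None)
    def cost(i, j):
        # cost of multiplying out matrices i .. j-1
        if j - i == 1:
            return 0  # a single matrix needs no multiplications
        return min(cost(i, k) + cost(k, j) + dims[i] * dims[k] * dims[j]
                   for k in range(i + 1, j))
    return cost(0, len(dims) - 1)

# Three matrices: 10x30, 30x5, 5x60. (A1*A2)*A3 costs 1500 + 3000 = 4500,
# while A1*(A2*A3) would cost 27000.
print(min_multiplications((10, 30, 5, 60)))  # 4500
```

Unlike enumerating all orderings, this memoized recurrence runs in O(n^3) time, which is why it scales where the brute-force search does not.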
A Piece of Pi
January 19, 2011
Between masak’s Perl 6 contest and the Perl 6 Advent calendar, I haven’t done a lot of blogging here lately. Apologies!
So, a discussion on #perl6 the other day got me thinking about another possible interesting use of the Real role. Here’s the basic class:
class TimesPi does Real {
    has Real $.x;
    method new($x) { self.bless(*, :$x); }
    method Bridge() { $.x * pi; }
    method Str() { $.x ~ "π"; }
    method perl() { "({ $.x })π"; }
}

sub postfix:<π>($x) {
    TimesPi.new($x);
}
So, that’s simple enough, right? TimesPi stores a Real number $.x internally, and the value it represents is that number times pi. There’s a postfix operator π to make it really easy to construct
these numbers. Because we’ve defined a Bridge method, this class has access to all the normal methods and operators of Real. Still, as presented above it is pretty useless, but defining some
operators hints at a useful purposes for this class.
multi sub infix:<+>(TimesPi $lhs, TimesPi $rhs) {
    TimesPi.new($lhs.x + $rhs.x);
}
multi sub infix:<->(TimesPi $lhs, TimesPi $rhs) {
    TimesPi.new($lhs.x - $rhs.x);
}
multi sub prefix:<->(TimesPi $n) {
    TimesPi.new(- $n.x);
}
multi sub infix:<*>(TimesPi $lhs, Real $rhs) {
    TimesPi.new($lhs.x * $rhs);
}
multi sub infix:<*>(Real $lhs, TimesPi $rhs) {
    TimesPi.new($lhs * $rhs.x);
}
multi sub infix:<*>(TimesPi $lhs, TimesPi $rhs) {
    $lhs.Bridge * $rhs.Bridge;
}
multi sub infix:</>(TimesPi $lhs, Real $rhs) {
    TimesPi.new($lhs.x / $rhs);
}
multi sub infix:</>(TimesPi $lhs, TimesPi $rhs) {
    $lhs.x / $rhs.x;
}
With these operators in place, basic arithmetic involving TimesPi numbers will stay in the TimesPi class when appropriate. For instance, if you add two TimesPi numbers, the result will be a TimesPi.
The cool thing about this is that it is as exact as the $.x values allow, rather than forcing everything to be a floating point calculation of limited accuracy.
We can even take things a step further, using this to perform exact trig calculations:
multi method sin(TimesPi $x: $base = Radians) {
    return $x.Bridge.sin($base) unless $base == Radians;
    given $x.x {
        when Int { 0; }
        when Rat {
            given $x.x.denominator {
                when 1 { 0; }
                when 2 { ($x.x.numerator - 1) %% 4 ?? 1 !! -1 }
                # could have a case for 6 as well...
                default { $x.Bridge.sin; }
            }
        }
        default { $x.Bridge.sin; }
    }
}
This checks for cases where we know the exact value of the result, and returns that if it can, otherwise falling back to the standard Real.sin method.
Of course, just when I was feeling like I might be on to something here, I realized that $.x was just the number of degrees in the angle divided by 180. Sigh.
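The same special-casing idea can be sketched in Python with `fractions.Fraction`. `sin_times_pi` below is a hypothetical helper, not from the post; it treats its argument as the multiple of π (the `$.x` of the class above) and falls back to floating point when no exact case applies.

```python
import math
from fractions import Fraction

def sin_times_pi(x):
    """Exact sin(x*pi) for the easy rational cases, floating point otherwise,
    mirroring the special-casing in the TimesPi.sin method above."""
    x = Fraction(x)
    if x.denominator == 1:
        return 0                     # sin(k*pi) = 0 for any integer k
    if x.denominator == 2:
        # sin((2m+1)/2 * pi) alternates between 1 and -1
        return 1 if (x.numerator - 1) % 4 == 0 else -1
    return math.sin(float(x) * math.pi)

print(sin_times_pi(Fraction(1, 2)), sin_times_pi(3), sin_times_pi(Fraction(3, 2)))
# 1 0 -1
```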
Perl 6 Fibonacci versus Haskell
December 29, 2010
There’s been some discussion on reddit today about whether
my @fib := 1, 1, *+* ...^ * >= 100;
is unreadable gibberish or not, with the following Haskell suggested as an easier-to-understand version.
fib = 1 : 1 : zipWith (+) fib (tail fib)
(I’ve “corrected” both versions so they start the sequence with 1, 1.)
The first thing to observe here is that these are not the same at all! The Perl 6 version is the Fibonacci numbers less than 100, while the Haskell version lazily generates the entire infinite
sequence. If we simplify the Perl 6 to also be the (lazy) infinite Fibonacci sequence, we get the noticeably simpler
my @fib := 1, 1, *+* ... *;
To my (admittedly used to Perl 6) eye, this sequence is about as clean and straightforward as it is possible to get. We have the first two elements of the sequence:
1, 1
We have the operation to apply repeatedly to get the further elements of the sequence:
And we are told the sequence will go on forever:
... *
The *+* construct may be unfamiliar to people who aren’t Perl 6 programmers, but I hardly think it is more conceptually difficult than referring to two recursive copies of the sequence you are
building, as the Haskell version does. Instead, it directly represents the simple understanding of how to get the next element in the Fibonacci sequence in source code.
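For readers coming from other languages, the shape of the terminated sequence carries over to any language with lazy iteration. The Python rendering below is an illustrative stand-in for the `...` operator, not anything from the post; `seq` is an invented helper name.

```python
from itertools import takewhile

def seq(a, b, op):
    """Lazy analogue of Perl 6's  a, b, op ... *  : yield the seed values,
    then keep applying op to the two most recent elements."""
    while True:
        yield a
        a, b = b, op(a, b)

# The Fibonacci numbers below 100, mirroring  1, 1, *+* ...^ * >= 100
fib = seq(1, 1, lambda x, y: x + y)
print(list(takewhile(lambda n: n < 100, fib)))
# [1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89]
```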
Of course, this being Perl, there is more than one way to do it. Here’s a direct translation of the Haskell version into idiomatic Perl 6:
my @fib := 1, 1, (@fib Z+ @fib[1..*]);
Well, allowing the use of operators and metaoperators, that is, as zipWith (+) becomes Z+ and tail fib becomes @fib[1..*]. To the best of my knowledge no current Perl 6 implementation actually
supports this. I’d be surprised if any Perl 6 programmer would prefer this version, but it is out there.
If you’re insistent on writing function calls instead of operators, you could also say it
my @fib := 1, 1, zipwith(&[+], @fib, tail(@fib));
presuming you write a tail function, but that’s an easy one-liner.
Series and Sequences
December 29, 2010
rokoteko asked on #perl6 about using sequences to calculate arbitrarily accurate values. I’m not sure why he thought I was the expert on this, but I do have some ideas, and thought they should be
blogged for future reference.
Say we want to calculate pi using a sequence. The Taylor series for atan is
atan(x) = x - x^3/3 + x^5/5 - x^7/7 + x^9/9...
We can represent those terms easily as a lazy list in Perl 6:
sub atan-taylor($x) {
    (1..*).map(1 / (* * 2 - 1)) Z* ($x, * * $x * -$x ... *)
}
Note that we don’t try to do this exclusively as a sequence; getting 1, 1/3, 1/5, 1/7... is much easier using map, and then we mix that with the sequence of powers of $x using Z*.
So, one nice thing about the sequence operator is that you can easily use it to get all the Rats in the terms of a Taylor series, because once the terms get small enough, they will switch to Nums. So
we can say
> (atan-taylor(1/5) ...^ Num).perl
(1/5, -1/375, 1/15625, -1/546875, 1/17578125, -1/537109375)
This doesn’t help us get a Rat approximation to the series, however, because summing those values results in a Num:
> ([+] (atan-taylor(1/5) ...^ Num)).WHAT
However, we can use the same idea with the triangle-reduce operator to easily get a version that does work:
> (([\+] atan-taylor(1/5)) ...^ Num).perl
(1/5, 74/375, 9253/46875, 323852/1640625, 24288907/123046875)
We’re mostly interested in the last element there, which is easily split off from the rest:
> (([\+] atan-taylor(1/5)) ...^ Num)[*-1].perl
So, having laid that groundwork, how do we calculate pi?
My first answer was the classic pi = 4*atan(1). Unfortunately, as sorear++ pointed out, it is terrible for these purposes. Why?
atan(1) = 1 - 1/3 + 1/5 - 1/7 + 1/9...
A little thought there shows that if you want to get to the denominator of 537109375 that took six terms for atan(1/5), it will take 268,554,687 terms for atan(1). Yeah, that’s not very practical.
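The denominator arithmetic behind that claim is quick to confirm (a small Python check; the variable names are mine):

```python
# The 6th term of the atan(1/5) series is (1/5)^11 / 11, so its
# denominator is 11 * 5^11.  For atan(1) the nth term is 1/(2n-1),
# so reaching a term that small needs the smallest n with 2n-1 >= denom.
denom = 11 * 5 ** 11
terms_needed_for_atan1 = (denom + 1) // 2
print(denom, terms_needed_for_atan1)
```

That gives 537109375 and 268554688, agreeing with the post's figure to within one (it depends on how you count the first term).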
Luckily, the above-linked web page has a much better formula to use:
atan(1) = 4 * atan(1/5) - atan(1/239)
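This is Machin's 1706 formula, and it is easy to spot-check in floating point before doing any exact-rational work (a quick Python aside, not part of the post):

```python
import math

# Machin: pi/4 = 4*atan(1/5) - atan(1/239)
machin = 4 * (4 * math.atan(1 / 5) - math.atan(1 / 239))
print(machin)
```

The result agrees with math.pi to full double precision.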
It takes a little playing around, but it’s reasonably clean to implement this in p6:
use List::Utils;

sub atan-taylor($x) {
    (1..*).map(1 / (* * 2 - 1)) Z* ($x, * * $x * -$x ... *)
}

my @fifth-times-four := atan-taylor(1/5) Z* (4, 4 ... *);
my @neg-two-three-ninth := atan-taylor(1/239) Z* (-1, -1 ... *);
my @terms := sorted-merge(@fifth-times-four, @neg-two-three-ninth, *.abs R<=> *.abs);
my @pi-over4 = ([\+] @terms) ...^ Num;
say (@pi-over4[*-1] * 4).perl;
say (@pi-over4[*-1] * 4);
The results are
I use sorted-merge from List::Utils to merge the two sequences of Taylor series terms into one strictly decreasing (in magnitude) sequence. That, in turn, makes it easy to use the triangle-reduce
metaoperator to stop summing the terms when they’ve gotten so small they are no longer representable by a Rat.
What is this good for? Well, right now, not much. Sure, we’ve gotten a fairly accurate Rat version of pi, but we could have gotten that more quickly and accurately by just saying pi.Rat(1e-15).
But once we have working FatRats, this approach will let us get arbitrarily accurate rational approximations to pi. Indeed, it suggests we could have slow but very accurate ways of calculating all
sorts of transcendental functions…
Moving Right Along
November 2, 2010
Of the six goals in my last post, four of them now work well. Here’s the latest PDF my code has produced. I’ve added three new tunes to it: another of my tunes, “Tali Foster’s”, a quick snippet from
our ceili band’s notes from a few years back, and a double from the repertoire of Rufus Guinchard, “Sydney Pittman’s”. Together they demonstrate time signatures (6/8 and a single bar of 9/8), key
changes, broken rhythms, and of course more than one tune on a page. (Note that 1st and 2nd endings are still unimplemented, and look very wrong in “Tali Foster’s”.)
I have to say that this PDF really impressed me with Lilypond’s output. It’s hard to put my finger on it, but something about having all four tunes together like that on the page looks really good,
IMO. I’m getting excited about the prospect of being able to produce entire books of tunes with this tool.
In terms of Perl 6, these last nine days of work have been very straightforward. I did whine enough about the undeclared variable error message bug to convince pmichaud++ to fix it. I’d worried a lot
about nested levels of Contexts — the key, time signature, etc — but I realized as I looked at it there are no nested levels, all changes affect the rest of the piece (until there is another change).
I refactored to change several of the subs into methods, and made the context an attribute of their class. I’ve set things up so that Context objects are read-only, and you just create a new one when
you need to change the context of the tune. So far this seems to work well.
I guess at this point I really need to implement 1st and 2nd endings, and then push on by throwing more tunes at it to see where they don’t work.
Update: Just in case you wanted to see what “Tali Foster’s” was supposed to look like with proper repeats, here’s the latest PDF. That is to say, first and second endings now work, at least in this
one case. I’ve also realized that while key signatures work at the moment, accidentals don’t properly modify the key signature until the end of the bar — so that’s another thing that needs doing.
ABC Update
October 23, 2010
I’ve been slowly poking away at the ABC module without reporting on it here, as the changes have been pretty straightforward. All of the goals at the end of my First Sheet Music post have now been
achieved. Here’s the latest version of “The Star of Rakudo” PDF. If you compare it to the output of my old sheet music generator, you’ll see every musical element except the time signature is now
correctly rendered by the combination of the Perl 6 ABC code and Lilypond. (In addition, I’ve also added support for rests and triplets, which are now found in the module’s version of the Star of
Rakudo ABC file as an example.)
Where to go from here?
1) I guess I ought to fix the time signature thing. That should be trivial.
2) Support for ABC files whose base note duration is something other than an eighth note. (Right now we’ve just hardcoded things to assume eighth notes are the base unit of time.)
3) Broken rhythms.
4) In-line time signature and key signature changes.
5) Handling more than one ABC tune at a time in the input.
I don’t see any major challenges here, other than finding the time to work on this project!
Update: Later in the same day, I’ve already got #1 and #5 working, but I just realized I left out one important (and possibly tricky) one:
6) Handling first and second endings.
Fibonacci and Primes
October 20, 2010
The middle challenge was to find the first prime Fibonacci number greater than 227000, add one to it, and then sum the prime numbers which were its factors. Here’s my first implementation:
sub is-prime($a) {
    return Bool::True if $a == 2;
    ! [||] $a <<%%<< (2, 3, *+2 ... * > $a.sqrt);
}

my @fib := (1, 1, *+* ... *);
my $cutoff = 227000;
my $least-prime = 0;
for @fib -> $f {
    next if $f <= $cutoff;
    next unless is-prime($f);
    $least-prime = $f;
    last;
}

my $x = $least-prime + 1;
say [+] (2, 3, *+2 ... * > $x.sqrt).grep({ $x %% $^a && is-prime($a) });
Despite what seems like an obvious inefficiency (approximating the prime numbers with the odd numbers), this is pretty snappy, executing in 12.5 seconds.
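For readers without a Rakudo handy, the same pipeline translates almost line for line into Python (the function names are mine; the trial-division test mirrors the odd-number approximation above):

```python
def is_prime(n):
    """Trial division, using 2 and the odd numbers as divisor candidates."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    return all(n % d for d in range(3, int(n ** 0.5) + 1, 2))

def fibs():
    a, b = 1, 1
    while True:
        yield a
        a, b = b, a + b

cutoff = 227000
least_prime = next(f for f in fibs() if f > cutoff and is_prime(f))
x = least_prime + 1
# sum the primes up to sqrt(x) that divide x, exactly as the Perl 6 code does
answer = sum(p for p in range(2, int(x ** 0.5) + 1)
             if x % p == 0 and is_prime(p))
print(least_prime, answer)
```

One subtlety shared with the original: only prime factors up to √x are summed, which happens to be enough for this particular x.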
I was planning to go on and talk about my new Math::Prime module here, but looking at this code, I think it can be expressed rather more nicely with a tweak or two here. Let’s see.
sub is-prime($a) {
    return Bool::True if $a == 2;
    ! [||] $a <<%%<< (2, 3, *+2 ... * > $a.sqrt);
}
my @fib := (1, 1, *+* ... *);
my $cutoff = 227000;
my $least-prime = @fib.first({ $_ > $cutoff && is-prime($_) });
my $x = $least-prime + 1;
say [+] (2, 3, *+2 ... * > $x.sqrt).grep({ $x %% $^a && is-prime($a) });
So that’s what the first method is good for!
I did indeed write Math::Prime just so I could use it here. It’s not a huge change from the previous version, really:
use Math::Prime;
my @fib := (1, 1, *+* ... *);
my $cutoff = 227000;
my $least-prime = @fib.first({ $_ > $cutoff && is-prime($_) });
my $x = $least-prime + 1;
say [+] (primes() ... * > $x.sqrt).grep({ $x %% $^a });
Unfortunately, Math::Prime isn’t optimized yet, and so this version, while a bit nicer, is actually slower than the previous version.
Update: With Moritz’s help I did some optimizations to Math::Prime, and the script using it is now very significantly faster than the others.
Summing Subsets
October 9, 2010
So, the third challenge was to count the number of subsets of a set of numbers such that the largest number in the subset is the sum of the rest of the numbers in the subset. My first attempt was
very straightforward: create all the subsets and check to see if they have the desired property:
my @a = 3, 4, 9, 14, 15, 19, 28, 37, 47, 50, 54, 56, 59, 61, 70, 73, 78, 81, 92,
95, 97, 99;
my $hits = 0;
for 1 ..^ (2 ** +@a) -> $key {
    my @b = gather for ^ +@a -> $i { take @a[$i] if $key +& (2 ** $i); }
    my $test = @b.pop;
    next if $test != [+] @b;
    $hits++;
    say (@b, $test).join(' ');
}
say "$hits hits";
I think this works correctly, but it will be a long time before we know — as I type this, it’s been running for ten hours on my fastest computer, and I don’t anticipate it finishing any time soon.
My second attempt relies on recursion, the fact the list is sorted, and skipping fruitless branches to get the job done much faster — 47 seconds, to be precise.
sub CountSumSets($target, @a) {
    my $sets = 0;
    for ^ +@a -> $i {
        if @a[$i] < $target {
            $sets += CountSumSets($target - @a[$i], @a[($i + 1) .. (@a - 1)]);
        } elsif @a[$i] == $target {
            $sets += 1;
        }
    }
    $sets;
}

my @a = 3, 4, 9, 14, 15, 19, 28, 37, 47, 50, 54, 56, 59, 61, 70, 73, 78, 81, 92, 95, 97, 99;
@a .= reverse;
my $hits = 0;
for ^ +@a -> $i {
    $hits += CountSumSets(@a[$i], @a[($i + 1) .. (@a - 1)]);
}
say $hits;
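The pruned recursion is easy to transcribe into Python and sanity-check on inputs small enough to verify by hand (the function names are mine):

```python
def count_sum_sets(target, nums):
    """Count subsets of nums summing exactly to target.  nums is sorted
    descending, so elements larger than target are skipped outright,
    mirroring the post's pruning of fruitless branches."""
    sets = 0
    for i, n in enumerate(nums):
        if n < target:
            sets += count_sum_sets(target - n, nums[i + 1:])
        elif n == target:
            sets += 1
    return sets

def count_subsets(nums):
    """Subsets whose largest element equals the sum of the rest."""
    desc = sorted(nums, reverse=True)
    return sum(count_sum_sets(desc[i], desc[i + 1:])
               for i in range(len(desc)))

print(count_subsets([3, 4, 7]))      # only {3, 4, 7} qualifies
print(count_subsets([1, 2, 3, 6]))   # {1, 2, 3} and {1, 2, 3, 6}
```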
Longest palindrome
October 9, 2010
Via Hacker News, I found the Greplin Programming Challenge, and couldn’t resist trying it in Perl 6. If you’re the sort of person who enjoys that sort of thing, I highly encourage you to stop reading
here and go try it yourself!
I’m going to blog about all three challenges one-by-one. I suppose the third will be the most interesting, if for no other reason than my current implementation looks like it will take ~32 hours to
run, so I’m probably going to need to find a more clever solution.
Basically, the first challenge is to find the longest palindrome in a string without spaces. As stated, I thought the challenge implied it was case-sensitive; obviously it’s easy enough to add a
call to lc to get a case-insensitive version.
I was pretty sure a naive version would bring Rakudo to its knees, so I tried to be slightly clever. My solution is still O(N^2), but N is the number of occurrences of a given letter rather than the
full length of the string, so that’s fairly reasonable.
sub longest-palindrome($string) {
    my @c = $string.comb(/./);
    my %hc;
    for ^ +@c -> $i {
        if %hc{@c[$i]} {
            %hc{@c[$i]}.push($i);
        } else {
            %hc{@c[$i]} = [$i];
        }
    }
    my @candidates := gather for %hc.keys.sort -> $c {
        say "Checking $c";
        my $list = %hc{$c};
        say :$list.perl;
        for $list.list -> $i1 {
            for $list.reverse -> $i2 {
                last if $i2 <= $i1;
                my $j1 = $i1;
                my $j2 = $i2;
                my $candidate = Bool::True;
                while ++$j1 < --$j2 {
                    if @c[$j1] ne @c[$j2] {
                        $candidate = Bool::False;
                    }
                }
                if $candidate {
                    say @c[$i1..$i2];
                    take @c[$i1..$i2].join('');
                }
            }
        }
    }
    @candidates.sort({$^a.chars <=> $^b.chars}).perl.say;
}
Basically, my notion was to store all the occurrences of each letter in a hash of arrays, then pair up every two occurrences of the same letter and see if they are a palindrome. This probably isn’t
the most elegant solution, nor the fastest, but it was fast enough to solve the challenge problem in a minute or two, and easy enough to code that I could do it in under an hour at three in the morning.
Interestingly, I think there might be a way to solve this using regular expressions…
Update: Moritz has a solution which blows this one out of the water, both much faster and more elegant. (The key idea was using the regex engine to find the center of potential palindromes.) I’ll let
him tell you about it…
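For reference, the center-based idea is easy to sketch in Python; this is my reconstruction of the general technique, not Moritz's actual code:

```python
def longest_palindrome(s):
    """Expand around each of the 2n-1 possible centers.  Worst case is
    still O(n^2), but it is fast when long palindromes are rare."""
    best = ""
    for center in range(2 * len(s) - 1):
        lo, hi = center // 2, (center + 1) // 2   # odd and even centers
        while lo >= 0 and hi < len(s) and s[lo] == s[hi]:
            lo -= 1
            hi += 1
        if hi - lo - 1 > len(best):
            best = s[lo + 1:hi]
    return best

print(longest_palindrome("forgeeksskeegfor"))   # -> geeksskeeg
```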
Taking a Rest
October 4, 2010
Once I was reasonably happy with ABC::Duration and ABC::Note, I went ahead and added ABC::Rest, which was truly easy thanks to the magic of code reuse. It actually turned out to be quite a struggle
to implement, but that was only because I accidentally typed is $.type rather than has $.type and spent a couple hours trying to sort out what the LTA error message meant.
class ABC::Rest does ABC::Duration {
    has $.type;

    method new($type, ABC::Duration $duration) {
        self.bless(*, :$type, :ticks($duration.ticks));
    }

    method Str() {
        $.type ~ self.duration-to-str;
    }
}
There are several sorts of rests in ABC, which is why ABC::Rest has a $type attribute. In practice, I’ve only implemented “z” rests so far, as they are the basic type.
Rests were already in the grammar, and adding an action to support them was dead easy:
method rest($/) {
    make ABC::Rest.new(~$<rest_type>, $<note_length>.ast);
}
After a trivial refactor to make the Lilypond duration handling code easily available, it took just one line to add rests to the Lilypond output:
when "rest" { print " r{ DurationToLilypond($context, $element.value) } " }
That’s for the output section of the code. No changes were required to make rests work in the “figure out the overall duration” section of the code, because it looks for elements which do
ABC::Duration, and so it automatically found the rests and correctly handled them.
To me, this looks like a solid win for treating duration as a role.
s-step Orthomin and GMRES implemented on parallel computers
- Society for Industrial and Applied Mathematics , 1997
Cited by 532 (26 self)
We survey general techniques and open problems in numerical linear algebra on parallel architectures. We first discuss basic principles of parallel processing, describing the costs of basic operations on parallel machines, including general principles for constructing efficient algorithms. We illustrate these principles using current architectures and software systems, and by showing how one would implement matrix multiplication. Then, we present direct and iterative algorithms for solving linear systems of equations, linear least squares problems, the symmetric eigenvalue problem, the nonsymmetric eigenvalue problem, and the singular value decomposition. We consider dense, band and sparse matrices.
- Num. Lin. Alg. with Appl , 1991
Cited by 58 (16 self)
Recently Eirola and Nevanlinna have proposed an iterative solution method for unsymmetric linear systems, in which the preconditioner is updated from step to step. Following their ideas we suggest variants of GMRES, in which a preconditioner is constructed at each iteration step by a suitable approximation process, e.g., by GMRES itself.
Keywords: GMRES, nonsymmetric linear systems, iterative solver, EN-method. This version is dated June 23, 1992.
Introduction. The GMRES method, proposed in [13], is a popular method for the iterative solution of sparse linear systems with an unsymmetric nonsingular matrix. In its original form, so-called full GMRES, it is optimal in the sense that it minimizes the residual over the current Krylov subspace. However, it is often too expensive since the required orthogonalization per iteration step grows quadratically with the number of steps. For that reason, one often uses in practice variants of GMRES. The most well-known variant, already suggested i...
- Parallel Computing , 1999
Cited by 5 (0 self)
In this review paper, we consider some important developments and trends in algorithm design for the solution of linear systems concentrating on aspects that involve the exploitation of parallelism. We briefly discuss the solution of dense linear systems, before studying the solution of sparse equations by direct and iterative methods. We consider preconditioning techniques for iterative solvers and discuss some of the present research issues in this field.
Keywords: linear systems, dense matrices, sparse matrices, tridiagonal systems, parallelism, direct methods, iterative methods, Krylov methods, preconditioning. AMS(MOS) subject classifications: 65F05, 65F50.
1 Introduction. Solution methods for systems of linear equations Ax = b (1), where A is a coefficient matrix of order n and x and b are n-vectors, are usually grouped into two distinct classes: direct methods and iterative methods.
- Lecture Notes on Parallel Iterative Methods for discretized PDE's. AGARD Special Course on Parallel Computing in CFD, available from http://www.math.ruu.nl/people/vorst/#lec , 1995
Cited by 3 (0 self)
In these notes we will present an overview of a number of related iterative methods for the solution of linear systems of equations. These methods are so-called Krylov projection type methods and they include popular methods such as Conjugate Gradients, Bi-Conjugate Gradients, CGS, Bi-CGSTAB, QMR, LSQR and GMRES. We will show how these methods can be derived from simple basic iteration formulas. We will not give convergence proofs, but we will refer for these, as far as available, to the literature. Iterative methods are often used in combination with so-called preconditioning operators (approximations for the inverses of the operator of the system to be solved). Since these preconditioners are not essential in the derivation of the iterative methods, we will not give much attention to them in these notes. However, in most of the actual iteration schemes, we have included them in order to facilitate the use of these schemes in actual computations. For the application of the iterative schemes one usually thinks of linear sparse systems, e.g., like those arising in the finite element or finite difference approximations of (systems of) partial differential equations. However, the structure of the operators plays no explicit role in any of these schemes, and these schemes might also successfully be used to solve certain large dense linear systems. Depending on the situation that might be attractive in terms of numbers of floating point operations. It will turn out that all of the iterative methods are parallelizable in a straightforward manner. However, especially for computers with a memory hierarchy (i.e., like cache or vector registers), and for distributed memory computers, the performance can often be improved significantly through rescheduling of the operations. We will discuss parallel implementations, and occasionally we will report on experimental findings.
- Computational Economics , 2000
Cited by 2 (1 self)
This paper investigates parallel solution methods to simulate large-scale macroeconometric models with forward-looking variables. The method chosen is the Newton-Krylov algorithm. We concentrate on a parallel solution to the sparse linear system arising in the Newton algorithm, and we empirically analyze the scalability of the GMRES method, which belongs to the class of so-called Krylov subspace methods. The results obtained using an implementation of the PETSc 2.0 software library on an IBM SP2 show a near linear scalability for the problem tested.
Keywords: Parallel computing, Newton-Krylov methods, sparse matrices, forward-looking models, GMRES, scalability. JEL Classification: C63, C88, C30.
1 Introduction. There are many engineering problems for which parallel computing has proven efficient. Economic problems are, however, often quite different in both structure and quantification. This is particularly true for systems of equations representing large economic models, wh...
Cited by 2 (0 self)
In this chapter we will present an overview of a number of related iterative methods for the solution of linear systems of equations. These methods are so-called Krylov projection type methods and
they include popular methods such as Conjugate Gradients, Bi-Conjugate Gradients, LSQR and GMRES. We will sketch how these methods can be derived from simple basic iteration formulas, and how they are
interrelated. Iterative schemes are usually considered as an alternative for the solution of linear sparse systems, like those arising in, e.g., finite element or finite difference approximation of
(systems of) partial differential equations. The structure of the operators plays no explicit role in any of these schemes, and the operator may be given even as a rule or a subroutine. Although
these methods seem to be almost trivially parallelizable at first glance, this is sometimes a point of concern because of the inner products involved. We will consider this point in some detail.
Iterative methods ...
, 1994
Cited by 2 (0 self)
Introduction. In these notes we will present an overview of a number of related iterative methods for the solution of linear systems of equations. These methods are so-called Krylov projection type methods and they include popular methods such as Conjugate Gradients, Bi-Conjugate Gradients, LSQR and GMRES. We will show how these methods can be derived from simple basic iteration formulas. We will not give convergence proofs, but we will refer for these, as far as available, to the literature. Iterative methods are often used in combination with so-called preconditioning operators (approximations for the inverses of the operator of the system to be solved). Since these preconditioners are not essential in the derivation of these iterative methods, we will not discuss them explicitly in these notes. However, in most of the actual iteration schemes, we have included them in order to facilitate the use of these schemes in actual computations. For the application of the iterative
The inner products, vector updates and matrix vector product are easily parallelized and vectorized. The more successful preconditionings, i.e., based upon incomplete LU decomposition, are not easily parallelizable. For that reason one is often satisfied with the use of only diagonal scaling as a preconditioner on highly parallel computers, such as the CM2 [24]. On distributed memory computers we need large grained parallelism in order to reduce synchronization overhead. This can be achieved by combining the work required for a successive number of iteration steps. The idea is to construct first in parallel a straightforward Krylov basis for the search subspace in which an update for the current solution will be determined. Once this basis has been computed, the vectors are orthogonalized, as is done in Krylov subspace methods. The construction as well as the orthogonalization can be done with large grained parallelism, and has sufficient degree of parallelism in it. This approach has be...
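The orthogonalization these abstracts keep returning to is the Arnoldi process at the heart of GMRES. A minimal pure-Python sketch (helper names are mine, not from any of the papers) shows the inner loop whose cost grows with the step number:

```python
import math

def matvec(A, v):
    # dense matrix-vector product on plain lists
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def arnoldi(A, b, m):
    """Build an orthonormal basis Q of span{b, Ab, ..., A^m b} by
    modified Gram-Schmidt; H collects the projection coefficients."""
    beta = math.sqrt(dot(b, b))
    Q = [[x / beta for x in b]]
    H = [[0.0] * m for _ in range(m + 1)]
    for j in range(m):
        w = matvec(A, Q[j])
        for i in range(j + 1):            # this loop lengthens every step:
            H[i][j] = dot(w, Q[i])        # the quadratic-in-steps cost
            w = [wi - H[i][j] * qi for wi, qi in zip(w, Q[i])]
        H[j + 1][j] = math.sqrt(dot(w, w))
        if H[j + 1][j] < 1e-12:           # invariant subspace reached
            break
        Q.append([wi / H[j + 1][j] for wi in w])
    return Q, H

A = [[4.0, 1.0, 0.0, 0.0],
     [1.0, 3.0, 1.0, 0.0],
     [0.0, 1.0, 2.0, 1.0],
     [0.0, 0.0, 1.0, 1.0]]
b = [1.0, 0.0, 0.0, 0.0]
Q, H = arnoldi(A, b, 3)
print(len(Q))
```

The s-step variants discussed above instead build the raw basis b, Ab, A²b, ... first and orthogonalize afterwards, trading some numerical robustness for coarser-grained parallelism.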
, 2004
A conjugate gradient (CG)-type algorithm CG Plan is introduced for calculating an approximate solution of Newton’s equation within large-scale optimization frameworks. The approximate solution must
satisfy suitable properties to ensure global convergence. In practice, the CG algorithm is widely used, but it is not suitable when the Hessian matrix is indefinite, as it can stop prematurely. CG
Plan is a symmetric variant of the composite step Bi-CG method of Bank and Chan, suitably adapted for optimization problems. It is an alternative to CG that copes with the indefinite case. We show
convergence for CG Plan, then prove that the practical implementation always provides a gradient related direction within a truncated Newton method (algorithm TN Plan). Some preliminary numerical
results support the theory.
Linearization problems... very confused:(
January 6th 2011, 11:03 AM #16
Trust me, you haven't given me a hard time at all. I could point you to a thread or two where it's been pretty slow going.
You're very welcome. Have a good one!
Please in future post the entire question.
You are asked "Which of the following is the equation of the tangent ..."
But you do not give the options.
It is much easier to eliminate candidates than to find the equation from scratch.
I will do that next time, sorry. I just wanted to learn how to do it without depending on a calculator or the answer choices. These types of questions weren't even on the test today!
How do I measure or find polarity?
I know we use electronegativity, but my teacher explained something different, something about opposing bonds canceling out.
I think you are looking for the molecular dipole moment: http://hyperphysics.phy-astr.gsu.edu/hbase/electric/diph2o.html The short answer is that you don't calculate polarity directly, because you can't precisely calculate partial charge. Given the partial charge, the formula for the dipole moment is \[\mu=Qr\] where μ is the dipole moment, Q is the partial charge on the atoms, and r is the distance between them. Q is measured in coulombs (C) and r in meters (m); dipole moments are conventionally quoted in debye (D), where 1 D ≈ 3.336 × 10⁻³⁰ C·m. Because electrons spend most (or, in the ionic limit, all) of their time around the more electronegative atom, you can tell which atoms will carry positive or negative partial charges. Because bond dipoles are vectors, with both a magnitude and a direction, they can cancel one another out. Methane is an example; look at its chemical structure: http://en.wikipedia.org/wiki/Methane The bond dipoles in ammonia do not cancel each other out, so it has a net "upward" (if you look at the picture) molecular dipole moment: http://en.wikipedia.org/wiki/Ammonia
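To make μ = Qr concrete, here is a worked example for HCl. The bond length and measured dipole are common textbook values that I am supplying; they are not from the thread:

```python
# Estimate HCl's fractional ionic character from mu = Q * r.
e = 1.602176634e-19           # elementary charge, C
debye = 3.33564e-30           # 1 debye in C*m
r = 1.274e-10                 # H-Cl bond length, m (~1.27 Angstrom)
mu_measured = 1.08 * debye    # measured dipole moment, C*m
mu_if_fully_ionic = e * r     # dipole if full +e/-e charges sat on the atoms
fraction_ionic = mu_measured / mu_if_fully_ionic
print(fraction_ionic)         # ~0.18: HCl is polar covalent, not ionic
```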
For the actual calculation they use Pauling's relation \[\chi_A-\chi_B=\sqrt{\Delta/23.06}\] where χ_A is the electronegativity of the known atom, χ_B that of the unknown one, and Δ is an energy in kcal/mol (the 23.06 converts between eV and kcal/mol). Or you can simply use a table of values; fluorine, the most electronegative element, is the reference at 4.0. Usually an electronegativity difference of about 1.8 or more means the bond is ionic, and less than that means it is covalent.
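Pauling also gave an empirical curve for percent ionic character, 1 − exp(−(Δχ)²/4). That relation (a different one from the formula quoted above) makes the 1.8 rule of thumb concrete:

```python
import math

def ionic_character(delta_chi):
    """Pauling's empirical estimate of fractional ionic character
    from an electronegativity difference delta_chi."""
    return 1 - math.exp(-delta_chi ** 2 / 4)

hf = ionic_character(4.0 - 2.1)   # H-F, using the Pauling-scale values above
print(hf)
```

At Δχ = 1.8 the estimate crosses roughly 55%, which is where the "1.8 or more means ionic" rule of thumb comes from.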
- J. Stat. Phys , 2001
Cited by 18 (3 self)
We study numerically and analytically the spectrum of incidence matrices of random labeled graphs on N vertices: any pair of vertices is connected by an edge with probability p. We give two
algorithms to compute the moments of the eigenvalue distribution as explicit polynomials in N and p. For large N and fixed p the spectrum contains a large eigenvalue at Np and a semicircle of “small” eigenvalues. For large N and fixed average connectivity pN (dilute or sparse random matrices limit) we show that the spectrum always contains a discrete component. An anomaly in the spectrum near
eigenvalue 0 for connectivity close to e is observed. We develop recursion relations to compute the moments as explicit polynomials in pN. Their growth is slow enough so that they determine the
spectrum. The extension of our methods to the Laplacian matrix is given in Appendix.
- The Modulo 1 Central Limit Theorem and Benford’s Law for Products, to appear in the International Journal of Algebra.http://arxiv.org/abs/math/0607686 MN2 , 2007
Cited by 7 (0 self)
Abstract. Consider the ensemble of real symmetric Toeplitz matrices, each independent entry an i.i.d. random variable chosen from a fixed probability distribution p of mean 0, variance 1, and finite
higher moments. Previous investigations showed that the limiting spectral measure (the density of normalized eigenvalues) converges weakly and almost surely, independent of p, to a distribution which
is almost the standard Gaussian. The deviations from Gaussian behavior can be interpreted as arising from obstructions to solutions of Diophantine equations. We show that these obstructions vanish if
instead one considers real symmetric palindromic Toeplitz matrices, matrices where the first row is a palindrome. A similar result was previously proved for a related circulant ensemble through an
analysis of the explicit formulas for eigenvalues. By Cauchy’s interlacing property and the rank inequality, this ensemble has the same limiting spectral distribution as the palindromic Toeplitz
matrices; a consequence of combining the two approaches is a version of the almost sure Central Limit Theorem. Thus our analysis of these Diophantine equations provides
- Exper. Math , 2008
Cited by 6 (0 self)
Keywords: Ramanujan graphs, random graphs, largest non-trivial eigenvalues, Tracy-Widom distribution. Recently Friedman proved Alon’s conjecture for many families of d-regular graphs, namely that given any ε > 0, “most” graphs have their largest non-trivial eigenvalue at most 2√(d − 1) + ε in absolute value; if the absolute value of the largest non-trivial eigenvalue is at most 2√(d − 1), then the graph is said to be Ramanujan. These graphs have important applications in communication network theory, allowing the construction of superconcentrators and nonblocking networks, coding theory and cryptography. As many of these applications depend on the size of the largest non-trivial positive and negative eigenvalues, it is natural to investigate their distributions. We show these are well-modeled by the β = 1 Tracy-Widom distribution for several families. If the observed growth rates of the mean and standard deviation as a function of the number of vertices hold in the limit, then in the limit approximately 52% of d-regular graphs from bipartite families should be Ramanujan, and about 27% from nonbipartite families (assuming the largest positive and negative eigenvalues are independent).
Cited by 6 (0 self)
Abstract. We examine the empirical distribution of the eigenvalues and the eigenvectors of adjacency matrices of sparse regular random graphs. We find that when the degree sequence of the graph
slowly increases to infinity with the number of vertices, the empirical spectral distribution converges to the semicircle law. Moreover, we prove concentration estimates on the number of eigenvalues
over progressively smaller intervals. We also show that, with high probability, all the eigenvectors are delocalized.
Cited by 3 (0 self)
Abstract. Trace formulae for d-regular graphs are derived and used to express the spectral density in terms of the periodic walks on the graphs under consideration. The trace formulae depend on a
parameter w which can be tuned continuously to assign different weights to different periodic orbit contributions. At the special value w = 1, the only periodic orbits which contribute are the non-backscattering orbits, and the smooth part in the trace formula coincides with the Kesten-McKay expression. As w deviates from unity, non-vanishing weights are assigned to the periodic walks with
back-scatter, and the smooth part is modified in a consistent way. The trace formulae presented here are the tools to be used in the second paper in this sequence, for showing the connection between
the spectral properties of d-regular graphs and the theory of random matrices.
Cited by 1 (0 self)
According to one of the basic conjectures in Quantum Chaos, the eigenvalues of a quantized chaotic Hamiltonian behave like the spectrum of the typical member of the appropriate ensemble of random matrices. We study one of the simplest examples of this phenomenon in the context of ergodic actions of groups generated by several linear toral automorphisms (“cat maps”). Our numerical experiments indicate that for “generic” choices of cat maps, the unfolded consecutive spacings distribution in the irreducible components of the N-th quantization (given by the N-dimensional Weil representation) approaches the GOE/GSE law of Random Matrix Theory. For certain special “arithmetic” transformations, related to the Ramanujan graphs of Lubotzky, Phillips and Sarnak, the experiments indicate that the unfolded consecutive spacings distribution follows Poisson statistics; we provide a sharp estimate in that direction.
, 2005
Abstract. Consider the ensemble of real symmetric Toeplitz matrices, each independent entry an i.i.d. random variable chosen from a fixed probability distribution p of mean 0, variance 1, and finite
higher moments. Previous investigations showed that the limiting spectral measure (the density of normalized eigenvalues) converges (weakly and almost surely), independent of p, to a distribution
which is almost the Gaussian. The deviations from Gaussian behavior can be interpreted as arising from obstructions to solutions of Diophantine equations. We show that these obstructions vanish if
instead one considers real symmetric palindromic Toeplitz matrices (matrices where the first row is a palindrome), and the resulting spectral measures converge (weakly and almost surely) to the
Gaussian. 1. | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=3041280","timestamp":"2014-04-20T20:19:08Z","content_type":null,"content_length":"30519","record_id":"<urn:uuid:976ace1f-7cab-4833-ae00-af1dfbca2111>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00009-ip-10-147-4-33.ec2.internal.warc.gz"} |
Dividing Rational Expressions!
October 15th 2006, 05:46 AM #1
Oct 2006
Dividing Rational Expressions!
Please help with this problem set. It is over fifty items and I can't finish it all by myself in just 8 hours' time, so can anybody help me with these 15 expressions?
here are they:
(2a + 2b)/a^2 divided by (a^2 - b^2)/(4a)
(a^2 - 4)/(a^2 - 1) divided by (a^2 - a - 2)/(a^2 + a - 2)
(4a^2 - 25)/(a^2 - 16) divided by (12a + 30)/(2a^2 + 8a)
(a^4 - b^4)/(5a - 5b) divided by (a^2 + b^2)/5
--------- Divided by a+b
Last edited by Hardheaded; October 15th 2006 at 05:48 AM. Reason: typo
^^^ Continuation
(x^2 - 13x + 42)/(6 + x - x^2) divided by (2x^2 - 13x - 7)/(2x^2 - 5x - 3)
(x^2 + 2x^2)/... divided by (1 - 4x^2)/...
Thanks to the ones who can help ;-)
*note: the (^) symbol means that the number to the right of it is an exponent, and ^n after a parenthesis means that everything inside the parentheses is raised to that power
Once again,
I will do just the first problem because I really can't write without LaTeX; someone else on the forum will help you with the others:
[(2a+2b)/a^2] / [(a^2-b^2)/4a] = [2(a+b)/a^2] * [4a/((a+b)(a-b))] = 8/(a(a-b))
Can anybody answer the remaining items?
Please I desperately need it!
I think, rather than try to have someone do all these for you, it would be better to have someone explain what needs to be done.
So, the first thing you need to do with problems like these is factor whatever you can in the numerators and denominators. Then you cancel what you can.
The hardest part of this problem is the factoring, so let's look at the kinds of factoring problems you have here.
a^2 - 4
This is of the form of the difference between two squares:
a^2 - b^2 = (a+b)(a-b)
a^2 - 4 = (a+2)(a-2)
a^2 - 1 is similar: a^2 - 1 = (a+1)(a-1)
Now we are left with a^2 - a - 2 and a^2 + a - 2. Typically you can guess at these:
a^2 - a - 2 = (a + _)(a + _)
The missing numbers must be factors of -2 of which there are only 4. So there are only a small number of possibilities here:
(a + 2)(a - 1) = a^2 + a - 2
(a + 1)(a - 2) = a^2 - a - 2
(The other combinations just repeat these factors in a different order.)
As it happens, in the process we also managed to factor a^2 + a - 2.
So the problem, after all the factoring has been done, becomes:
[(a+2)(a-2)/((a+1)(a-1))] divided by [(a+1)(a-2)/((a+2)(a-1))] = [(a+2)(a-2)/((a+1)(a-1))] * [(a+2)(a-1)/((a+1)(a-2))]
Cancelling the common factors we get:
(a+2)^2/(a+1)^2
All of these problems are done in exactly the same way. Since the only problem I can imagine you would have with these would be factoring, what I would suggest is that you go through these and
factor what you can factor. Then make a list of what you are having problems factoring and post those. That would make it much more simple for all of us to help you where you most need it.
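One more way to check your own factoring (my addition, not part of the thread): compare the original expression and the candidate simplified form at a few sample values using exact rational arithmetic. Agreement at several points strongly suggests the cancellation was done correctly.

```python
from fractions import Fraction

def original(a):
    # (a^2 - 4)/(a^2 - 1) divided by (a^2 - a - 2)/(a^2 + a - 2)
    return (Fraction(a * a - 4, a * a - 1)
            / Fraction(a * a - a - 2, a * a + a - 2))

def simplified(a):
    # Candidate answer after factoring and cancelling: (a+2)^2/(a+1)^2
    return Fraction((a + 2) ** 2, (a + 1) ** 2)

# Avoid values that make any denominator zero (a = 1, 2, -1, -2).
for a in (3, 4, 5, 10):
    assert original(a) == simplified(a)
print("simplification checks out")
```

The same trick works for every problem in the list.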
October 16th 2006, 03:53 AM #5 | {"url":"http://mathhelpforum.com/algebra/6442-dividing-rational-expressions.html","timestamp":"2014-04-20T05:44:11Z","content_type":null,"content_length":"46123","record_id":"<urn:uuid:42e7e121-7d76-4fd4-b8c2-e81fb6b61427>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00243-ip-10-147-4-33.ec2.internal.warc.gz"} |
Turning an exponential graph into a line graph
February 8th 2008, 03:50 PM
Turning an exponential graph into a line graph
Hi, in physics we did an experiment which would determine the acceleration due to gravity. we dropped a golf ball from a table and made a d-t graph and all the good stuff. My teacher said that as
one of the methods we must manipulate the graph so that the outcome is a straight line from which we can find the slope ( not a secant or line of best fit). Can anyone please tell me the best way
to change an exponential graph into a straight line graph?
- Thanks
February 8th 2008, 04:13 PM
I will venture a guess to this: Try doing some "algebraic" operation to every data point to make them all appear in a straight line. You said it's an "exponential" graph (to be clear, this does
not necessarily mean it's actually an exponential graph, it just has that type of shape). So, think of different types of "exponential" functions, $e^t,~10^t,~t^2,~t^3$, that your graph might
represent and then "de-exponentialize" (I'm fairly certain that's not a word) your graph by using various inverse-"exponential" functions on each of your data points.
I hope that's somewhat clear. My main point is, if you think it looks like an $e^t$ graph, then use the opposite of $e^t$, the $\ln$ function, to make the points a straight line. Do you know what
the inverse functions to the ones I listed are?
February 8th 2008, 05:03 PM
mr fantastic
First of all, plot the data. What curve seems to fit it best? The simplest curve. Probably parabolic (quadratic model) ..... Does it look like it goes through the origin?
So assume a power rule, something of the form $d = k t^m$.
Now take the log of both sides (doesn't matter what base).
$\log d = \log (k t^m) = \log k + \log t^m = \log k + m \log t$.
In other words, $\log d = m \log t + \log k$.
This has the form $y = m x + c$ where y is log d, x is log t and c is log k. In other words, a line.
So here's what you do:
Take the log of all your data. Do a plot of log d versus log t. Draw the line of best fit.
The gradient gives you m. The log d intercept lets you calculate k.
Crystal ball gazing: You should find the value of m is very close to 2 and the value of k is close to 5. Depending on the accuracy of the data, you might even be able to do better than 5 .....
Note: This data alone will not let you calculate the acceleration due to gravity ......
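The log-log recipe described above can be sketched in a few lines of Python. The data here are synthetic, generated with g ≈ 9.8 m/s² purely to illustrate the fit; with real lab data the recovered slope and intercept would carry measurement noise.

```python
import numpy as np

g_true = 9.8
t = np.linspace(0.1, 1.0, 10)        # drop times (s), synthetic
d = 0.5 * g_true * t**2              # distances a d-t experiment would record

# Fit log d = m log t + log k, i.e. a straight line in log-log space.
m, log_k = np.polyfit(np.log(t), np.log(d), 1)
k = np.exp(log_k)

print(f"slope m ≈ {m:.3f}")          # should come out near 2
print(f"g = 2k ≈ {2 * k:.3f} m/s^2") # comparing d = k t^m with d = 0.5 g t^2
```

The last line is exactly the identification made in the next post: k = 0.5 g, so g = 2k.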
February 9th 2008, 04:28 PM
thanks for the help, i ended up making the graph a fairly straight line. but it also says that we have to use the equation of the line along with one of the basic d/a/t (d = v1t + 0.5at^2, ...) formulas to find the acceleration
any ideas lol?
thanks again
February 9th 2008, 04:55 PM
mr fantastic
Well, on the one hand you have your model for the data, namely $d = k t^n$. From the graph of log d versus log t you have calculated the values of k and n .....
On the other hand, there's a formula that says that for constant acceleration, $d = v_1 t + 0.5 a t^2$. In this formula $v_1$ is the initial velocity ( = 0 in your experiment, I assume?) and a
is acceleration. In your experiment, a = g .....
Compare $d = k t^n$ with $d = 0.5 g t^2$: k = 0.5 g therefore g = .....
February 9th 2008, 05:16 PM
that helped a lot man. wasn't too hard, but I'm just not thinking straight today. thanks for the descriptive information | {"url":"http://mathhelpforum.com/pre-calculus/27816-turning-exponential-graph-into-line-graph-print.html","timestamp":"2014-04-19T12:28:32Z","content_type":null,"content_length":"14344","record_id":"<urn:uuid:33eb4b32-8199-465b-8b3a-874aa6127c71>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00217-ip-10-147-4-33.ec2.internal.warc.gz"}
Burnham, IL Geometry Tutor
Find a Burnham, IL Geometry Tutor
...Math, pre-algebra, algebra 1 and 2 are classes I have taught during my teaching career. My background is working with students from urban and suburban school districts. Technology is useful to
help with the different modalities in learning, in which, children learn more effectively.
7 Subjects: including geometry, algebra 1, algebra 2, dyslexia
...When I'm not in class, I swim, play tennis and the oboe, teach First Aid/CPR certification courses for the American Red Cross, and for the past 6 years I have volunteered as an EMT.
My educational background lends itself to tutoring not only in the biological and medical sciences, but ...
13 Subjects: including geometry, chemistry, calculus, biology
...Please contact me to get started in your learning adventure. Thank you,Nicole I have been using Microsoft Office and its suite of programs for the last 10 years for my career. I also receive
annual training on new features within the various Microsoft programs, and a monthly newsletter to keep me updated on any changes.
36 Subjects: including geometry, English, reading, writing
...I am generally available afternoons/evenings and can work out a consistent schedule if that's what you are looking for. I believe that EVERY student deserves a quality education and am open to
discussing rates on a case-by-case basis. Please reach out to me so we can set something up!
27 Subjects: including geometry, chemistry, physics, calculus
...I'm not just a tutor, I'm a professional educator. I will take your child's education seriously.I have 11 years of experience teaching High School Algebra 1. Last year, 94% of my students
passed the Indiana Algebra 1 End-of-Course Assessment.
14 Subjects: including geometry, calculus, algebra 1, trigonometry
Whiting, IN geometry Tutors | {"url":"http://www.purplemath.com/burnham_il_geometry_tutors.php","timestamp":"2014-04-20T11:25:24Z","content_type":null,"content_length":"24004","record_id":"<urn:uuid:dd9b48ae-5de9-4b7b-a8dd-f1e60a07b3c3>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00554-ip-10-147-4-33.ec2.internal.warc.gz"} |
The Hazard Package
Hazard Function Technology
Some of the most relevant outcomes of medical procedures, or of the life-history of machines, are time-related events. The "raw data" for such events is the time interval between some defined "time
zero" (t=0) and the occurrence of the event. The distribution of a collection of these time intervals could be viewed as a cumulative distribution table or graph, although commonly the complement of the cumulative distribution is displayed as a so-called survivorship function. Another way to visualize the intervals would be as a histogram or probability density function; however, because the fundamental questions about these intervals relate to some biologic or natural phenomenon across time, the more natural domain for study is as the rate of occurrence.
The rate of occurrence of a time-related event is known as the hazard function. John Graunt brought this word from dicing into the arena of time-related events during the 17th century. It is
sometimes called the "force of mortality." In financial circles, it is the inverse of Mills ratio.
Actually, all one is dealing with is the distribution of a positive variable, so the methodology embodied in hazard function analysis is applicable to any positively distributed variable.
The nature of living things and real machines is such that lifetimes (or other time-related events) often lead to rather simple, low-order distributions. For this reason, we have believed that
low-order, parametric characterization of the distribution can be accomplished.
The parametric approach taken in the hazard procedures developed in the early 1980s at the University of Alabama at Birmingham was a decompositional approach. The distribution of intervals is viewed
as consisting of one or more overlapping "phases" (herein called early, constant, and late) additive in hazard (competing risks). A generic functional form is utilized for the phases that can be
simplified into a large number of hierarchically nested forms.
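As a concrete illustration of the relationship between the density, survivorship, and hazard functions (this example is mine, not part of the package itself), consider a Weibull lifetime, for which h(t) = f(t)/S(t) has a closed form and the shape parameter controls whether the hazard falls, stays flat, or rises:

```python
import math

def weibull_hazard(t, shape, scale):
    """Hazard h(t) = f(t)/S(t) for a Weibull(shape, scale) lifetime."""
    f = (shape / scale) * (t / scale) ** (shape - 1) * math.exp(-(t / scale) ** shape)
    s = math.exp(-(t / scale) ** shape)   # survivorship S(t) = 1 - F(t)
    return f / s                          # simplifies to (shape/scale)*(t/scale)**(shape-1)

# shape < 1: decreasing hazard (like the "early" phase above);
# shape = 1: constant hazard ("constant" phase);
# shape > 1: increasing hazard ("late" phase).
print(weibull_hazard(2.0, 1.0, 5.0))      # constant phase: 1/scale
```

A sum of such phase-wise hazards is the kind of additive, competing-risks decomposition the paragraph above describes.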
Each phase is scaled by a log-linear function of concomitant information. This allows the model to be non-proportional in hazards, an assumption often made, but often unrealistic.
Finally, the hazard model has been enriched in 3 ways. Because the intervals may not be known completely (incomplete, censored data), right censoring, left censoring, and interval censoring have been incorporated into the procedure. Second, the events considered may be repeating. This automatically accommodates a wide class of time-varying co-variables, namely the class that can be considered to change at specific intervals. Third, the event may be weighted on a positive scale (such as cost). Thus, the procedure, at its most complex, can accommodate time-related repeating cost data, with
time-varying co-variables, and a non-proportional hazard structure. | {"url":"http://www.lerner.ccf.org/qhs/software/hazard/","timestamp":"2014-04-17T10:18:51Z","content_type":null,"content_length":"13856","record_id":"<urn:uuid:d3aca0eb-9c9f-4ce5-80b3-c20b4259576b>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00151-ip-10-147-4-33.ec2.internal.warc.gz"} |
Drawing cubic graphs with at most five slopes
Keszegh, Balázs and Pach, János and Pálvölgyi, Dömötör and Tóth, Géza (2007) Drawing cubic graphs with at most five slopes. In: Graph Drawing 14th International Symposium, GD 2006, September 18-20,
2006, Karlsruhe, Germany, pp. 114-125 (Official URL: http://dx.doi.org/10.1007/978-3-540-70904-6_13).
Full text not available from this repository.
We show that every graph G with maximum degree three has a straight-line drawing in the plane using edges of at most five different slopes. Moreover, if G is connected and has at least one vertex of
degree less than three, then four directions suffice.
| {"url":"http://gdea.informatik.uni-koeln.de/767/","timestamp":"2014-04-19T14:32:33Z","content_type":null,"content_length":"22015","record_id":"<urn:uuid:8e080de3-7a65-431e-b4f7-8b00f09a92ec>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00625-ip-10-147-4-33.ec2.internal.warc.gz"}
Emil Briggs
(me.delete@this.nowherespam.com), October 20, 2012 10:04 am
> >
> No, you don't believe in
> perpetual motion. You're apparently doing what I suggested was possible: you
> are using a computationally clumsy algorithm to match the work done by the
> processors to the wimpy bandwidth available. I wasn't quite certain about what
> N you were referring to in the N^3 scaling. If you really are referring to the
> number of atoms (which itself goes as M^3, where M is the number of atoms in a
> line on a cubic lattice), then you are (and you do refer to it as a DFT and not
> as an FFT) forgoing the advantages of divide and conquer. Ummm...
> Robert.
Robert, I don't know exactly what sort of background you have in these types of calculations, so please forgive me if I start at too low a level here. The DFT acronym refers to Density Functional Theory and is not related to Fourier transforms of any kind.
DFT calculations are accurate because they originate from quantum mechanical first principles. QM is non local and the wavefunctions associated with the electrons extends over all space. In practice
we can truncate them at some point or use periodic boundary conditions to limit the region of interest but as we increase the number of atoms the size of the region we have to address increases in a
roughly linear manner. So we pick up one factor of N. But as the number of atoms increases the number of electrons increases as well and we pick up another factor of N. Finally the orthogonality
constraint for the wave functions gives us another factor and we're at N^3.
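The three factors of N in the previous paragraph can be made concrete with a rough operation count. This sketch is mine, not Briggs's, and the per-atom constants are arbitrary; it just tallies the dominant orthogonalization cost:

```python
def ortho_flops(n_atoms, pts_per_atom=100, orbitals_per_atom=4):
    """Rough flop count for orthogonalizing all wavefunction pairs.

    The grid (region of interest) grows linearly with the number of
    atoms (factor 1), as does the number of occupied orbitals
    (factor 2); enforcing orthogonality needs an inner product over
    the grid for every pair of orbitals (factor 3), giving O(N^3).
    """
    grid = pts_per_atom * n_atoms           # region of interest ~ N
    orbitals = orbitals_per_atom * n_atoms  # electrons ~ N
    return orbitals * orbitals * grid       # pairwise inner products over the grid

# Doubling the atom count multiplies the cost by 2^3 = 8, regardless
# of the per-atom constants.
print(ortho_flops(200) / ortho_flops(100))
```

This cubic wall is independent of whether the Kohn-Sham equations are solved with FFTs or finite differences, which is why linear-scaling algorithms are an active research area.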
There are a variety of techniques to solve the resultant equations (referred to as the Kohn-Sham equations). Some of them use FFTs. Others use finite difference methods. The FFT-based methods have scaling issues on computer systems with poor interconnect bandwidth. The finite difference methods can do better on such systems.
In either case though the N^3 scaling of the algorithm with the number of atoms is the real fundamental limit. Even a system with with enough interconnect bandwidth to provide good FFT performance on
thousands of nodes would still run into this problem. Hence the work on finding algorithms that don't have that limit. | {"url":"http://www.realworldtech.com/forum/?threadid=125646&curpostid=128456","timestamp":"2014-04-21T12:29:37Z","content_type":null,"content_length":"61363","record_id":"<urn:uuid:a679da23-393a-46ba-8620-cf9ef28b57ef>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00393-ip-10-147-4-33.ec2.internal.warc.gz"} |
Bayes Rule Applet
These applets demonstrate Bayes Rule and probability updating in two contexts: a medical diagnosis and employee incentive pay.
The Horrible Disease
How worried should you be if you test positive for some disease? What does it mean if a test for some disease is "95% accurate"? Does it mean that, if you test positive, you have a 95% chance of having the disease? While this sounds sensible, the answer is usually "no." The actual probability depends not only on the reliability of the test, but also on the number of infections in the population to begin with. This applet demonstrates this idea.
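A short calculation (independent of the applet) shows why. With a test that is 95% sensitive and 95% specific, but a disease that infects only 1% of the population, a positive result implies far less than a 95% chance of infection, because most positives come from the large healthy majority:

```python
def posterior_given_positive(prevalence, sensitivity, specificity):
    """P(disease | positive test) via Bayes' rule."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

p = posterior_given_positive(prevalence=0.01, sensitivity=0.95, specificity=0.95)
print(f"{p:.1%}")   # about 16%, not 95%
```

Raising the prevalence argument shows how quickly the posterior climbs toward the test's nominal accuracy as the disease becomes more common.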
Rewarding Employees
It is often difficult to observe effort on the part of employees, so companies are forced to reward employees based on success or failure (a measure of performance) which is only partially controlled
by effort. How likely is it that bonuses are going to the bad employees, who simply get lucky, rather than the good ones? This applet answers this question. | {"url":"http://www.gametheory.net/Mike/applets/Bayes/","timestamp":"2014-04-19T22:06:20Z","content_type":null,"content_length":"5173","record_id":"<urn:uuid:76752815-62f6-49e3-88e6-829ccf140d3b>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00133-ip-10-147-4-33.ec2.internal.warc.gz"}
Measurement Scales (1 of 6)
Measurement is the assignment of numbers to objects or events in a systematic fashion. Four levels of measurement scales are commonly distinguished: nominal, ordinal, interval, and ratio.
There is a relationship between the level of measurement and the appropriateness of various statistical procedures. For example, it would be silly to compute the mean of nominal measurements.
However, the appropriateness of statistical analyses involving means for ordinal level data has been controversial. One position is that data must be measured on an interval or a ratio scale for the computation of means and other statistics to be valid. Therefore, if data are measured on an ordinal scale, the median but not the mean can serve as a measure of central tendency.
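For instance (my example, not from the page): with ordinal ratings coded 1-5, the median is invariant under any order-preserving recoding of the scale, while the mean is not — which is exactly why only the median is uncontroversial for ordinal data.

```python
from statistics import mean, median

ratings = [1, 2, 2, 3, 5]                 # ordinal codes, e.g. "poor" .. "excellent"
recode = {1: 1, 2: 2, 3: 3, 4: 4, 5: 10}  # order-preserving relabeling of the top category
recoded = [recode[r] for r in ratings]

print(median(ratings), median(recoded))   # the median rank is unchanged
print(mean(ratings), mean(recoded))       # the mean depends on the arbitrary labels
```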
The arguments on both sides of this issue will be examined in the context of an hypothetical experiment designed to determine whether people prefer to work with color or with black and white computer
displays. Twenty subjects viewed black and white displays and 20 subjects viewed color displays. | {"url":"http://davidmlane.com/hyperstat/A30028.html","timestamp":"2014-04-18T01:03:23Z","content_type":null,"content_length":"3729","record_id":"<urn:uuid:a3f7a5b1-4b72-4f2a-9283-6f5689f215f6>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00028-ip-10-147-4-33.ec2.internal.warc.gz"} |
A001168 - OEIS
G. Barequet, M. Moffie, A. Ribo, and G. Rote, Counting polyominoes on twisted cylinders, Integers (electronic journal, 6 (2006), A22, 37 pp.
Stirling Chow and Frank Ruskey, Gray codes for column-convex polyominoes and a new class of distributive lattices, Discrete Mathematics, 309 (2009), 5284-5297. [From N. J. A. Sloane, Sep 15 2009]
A. R. Conway and A. J. Guttmann, On two-dimensional percolation, J. Phys. A: Math. Gen. 28(1995) 891-904.
S. R. Finch, Mathematical Constants, Cambridge, 2003, pp. 378-382.
J. Fortier, A. Goupil, J. Lortie and J. Tremblay, Exhaustive generation of gominoes, Theoretical Computer Science, 2012; http://dx.doi.org/10.1016/j.tcs.2012.02.032. - From N. J. A. Sloane, Sep 20
J. E. Goodman and J. O'Rourke, editors, Handbook of Discrete and Computational Geometry, CRC Press, 1997, p. 229.
A. J. Guttmann, ed., Polygons, Polyominoes and Polycubes, Springer, 2009, p. 478. (Table 16.10 has 56 terms of this sequence.) [From Robert A. Russell, Nov 05 2010]
I. Jensen and A. J. Guttmann, Statistics of lattice animals (polyominoes) and polygons. J. Phys. A 33, L257-L263 (2000).
D. A. Klarner and R. L. Rivest, A procedure for improving the upper bound for the number of n-ominoes, Canadian J. of Mathematics, 25 (1973), 585-602.
W. F. Lunnon, Counting polyominoes, pp. 347-372 of A. O. L. Atkin and B. J. Birch, editors, Computers in Number Theory. Academic Press, NY, 1971.
W. F. Lunnon, Counting hexagonal and triangular polyominoes, pp. 87-100 of R. C. Read, editor, Graph Theory and Computing. Academic Press, NY, 1972.
N. Madras, A pattern theorem for lattice clusters, Annals of Combinatorics, 3 (1999), 357-384.
D. H. Redelmeier, Counting polyominoes: yet another attack, Discrete Math., 36 (1981), 191-203.
N. J. A. Sloane, A Handbook of Integer Sequences, Academic Press, 1973 (includes this sequence).
N. J. A. Sloane and Simon Plouffe, The Encyclopedia of Integer Sequences, Academic Press, 1995 (includes this sequence). | {"url":"https://oeis.org/A001168","timestamp":"2014-04-18T22:46:33Z","content_type":null,"content_length":"22950","record_id":"<urn:uuid:d4aa2b5d-a1ef-45db-9415-2971ac1fdd01>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00018-ip-10-147-4-33.ec2.internal.warc.gz"} |
Laplace operator
From Wikipedia, the free encyclopedia
In mathematics the Laplace operator or Laplacian is a differential operator given by the divergence of the gradient of a function on Euclidean space. It is usually denoted by the symbols ∇·∇, ∇^2 or
∆. The Laplacian ∆f(p) of a function f at a point p, up to a constant depending on the dimension, is the rate at which the average value of f over spheres centered at p, deviates from f(p) as the
radius of the sphere grows. In a Cartesian coordinate system, the Laplacian is given by the sum of second partial derivatives of the function with respect to each independent variable. In other
coordinate systems such as cylindrical and spherical coordinates, the Laplacian also has a useful form.
The Laplace operator is named after the French mathematician Pierre-Simon de Laplace (1749–1827), who first applied the operator to the study of celestial mechanics, where the operator gives a
constant multiple of the mass density when it is applied to a given gravitational potential. Solutions of the equation ∆f = 0, now called Laplace's equation, are the so-called harmonic functions, and
represent the possible gravitational fields in free space.
The Laplacian occurs in differential equations that describe many physical phenomena, such as electric and gravitational potentials, the diffusion equation for heat and fluid flow, wave propagation,
and quantum mechanics. The Laplacian represents the flux density of the gradient flow of a function. For instance, the net rate at which a chemical dissolved in a fluid moves toward or away from some
point is proportional to the Laplacian of the chemical concentration at that point; expressed symbolically, the resulting equation is the diffusion equation. For these reasons, it is extensively used
in the sciences for modelling all kinds of physical phenomena. The Laplacian is the simplest elliptic operator, and is at the core of Hodge theory as well as the results of de Rham cohomology. In
image processing and computer vision, the Laplacian operator has been used for various tasks such as blob and edge detection.
The Laplace operator is a second order differential operator in the n-dimensional Euclidean space, defined as the divergence (∇·) of the gradient (∇ƒ). Thus if ƒ is a twice-differentiable real-valued
function, then the Laplacian of ƒ is defined by
$\Delta f = \nabla^2 f = \nabla \cdot \nabla f$ (1)
where the latter notations derive from formally writing $\nabla = \left ( \frac{\partial}{\partial x_1} , \dots , \frac{\partial}{\partial x_n} \right ).$ Equivalently, the Laplacian of ƒ is the sum of
all the unmixed second partial derivatives in the Cartesian coordinates $x_i$ :
$\Delta f = \sum_{i=1}^n \frac {\partial^2 f}{\partial x^2_i}$ (2)
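As a numerical illustration (not part of the article), the Cartesian form (2) can be checked with the standard five-point finite-difference stencil, which is exact up to roundoff on quadratics such as f(x, y) = x² + y², whose Laplacian is the constant 4:

```python
import numpy as np

# Five-point finite-difference approximation of the 2-D Laplacian; the
# stencil is exact (up to roundoff) on quadratic polynomials.
def laplacian_2d(f, h):
    return (f[:-2, 1:-1] + f[2:, 1:-1]        # neighbours in x
            + f[1:-1, :-2] + f[1:-1, 2:]      # neighbours in y
            - 4.0 * f[1:-1, 1:-1]) / h**2

h = 0.01
x = np.arange(0.0, 1.0 + h / 2, h)
X, Y = np.meshgrid(x, x, indexing="ij")
f = X**2 + Y**2                # exact Laplacian: 2 + 2 = 4 everywhere

L = laplacian_2d(f, h)
print(np.allclose(L, 4.0))     # True
```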
As a second-order differential operator, the Laplace operator maps C^k-functions to C^k−2-functions for k ≥ 2. The expression (1) (or equivalently (2)) defines an operator ∆ : C^k(R^n) → C^k−2(R^n),
or more generally an operator ∆ : C^k(Ω) → C^k−2(Ω) for any open set Ω.
In the physical theory of diffusion, the Laplace operator (via Laplace's equation) arises naturally in the mathematical description of equilibrium.^1 Specifically, if u is the density at equilibrium
of some quantity such as a chemical concentration, then the net flux of u through the boundary of any smooth region V is zero, provided there is no source or sink within V:
$\int_{\partial V} \nabla u \cdot \mathbf{n}\, dS = 0,$
where n is the outward unit normal to the boundary of V. By the divergence theorem,
$\int_V \operatorname{div} \nabla u\, dV = \int_{\partial V} \nabla u \cdot \mathbf{n}\, dS = 0.$
Since this holds for all smooth regions V, it can be shown that this implies
$\operatorname{div} \nabla u = \Delta u = 0.$
The left-hand side of this equation is the Laplace operator. The Laplace operator itself has a physical interpretation for non-equilibrium diffusion as the extent to which a point represents a source
or sink of chemical concentration, in a sense made precise by the diffusion equation.
Density associated to a potential
If φ denotes the electrostatic potential associated to a charge distribution q, then the charge distribution itself is given by the Laplacian of φ:
$q = \Delta\varphi.\,$ (3)
This is a consequence of Gauss's law. Indeed, if V is any smooth region, then by Gauss's law the flux of the electrostatic field E is equal to the charge enclosed (in appropriate units):
$\int_{\partial V} \mathbf{E}\cdot \mathbf{n}\, dS = \int_{\partial V} \nabla\varphi\cdot \mathbf{n}\, dS = \int_V q\,dV,$
where the first equality uses the fact that the electrostatic field is the gradient of the electrostatic potential. The divergence theorem now gives
$\int_V \Delta\varphi\,dV = \int_V q\, dV,$
and since this holds for all regions V, (3) follows.
The same approach implies that the Laplacian of the gravitational potential is the mass distribution. Often the charge (or mass) distribution are given, and the associated potential is unknown.
Finding the potential function subject to suitable boundary conditions is equivalent to solving Poisson's equation.
Energy minimization
Another motivation for the Laplacian appearing in physics is that solutions to $\Delta f = 0$ in a region U are functions that make the Dirichlet energy functional stationary:
$E(f) = \frac{1}{2} \int_U \Vert \nabla f \Vert^2 \,dx.$
To see this, suppose $f\colon U\to \mathbb{R}$ is a function, and $u\colon U\to \mathbb{R}$ is a function that vanishes on the boundary of U. Then
$\frac{d}{d\varepsilon}\Big|_{\varepsilon = 0} E(f+\varepsilon u) = \int_U \nabla f \cdot \nabla u \, dx = -\int_U u \Delta f\, dx$
where the last equality follows using Green's first identity. This calculation shows that if $\Delta f = 0$, then E is stationary around f. Conversely, if E is stationary around f, then $\Delta f=0$
by the fundamental lemma of calculus of variations.
Coordinate expressions
Two dimensions
The Laplace operator in two dimensions is given by
$\Delta f = \frac{\partial^2f}{\partial x^2} + \frac{\partial^2 f}{\partial y^2}$
where x and y are the standard Cartesian coordinates of the xy-plane.
In polar coordinates,
\begin{align} \Delta f &= {1 \over r} {\partial \over \partial r} \left( r {\partial f \over \partial r} \right) + {1 \over r^2} {\partial^2 f \over \partial \theta^2}\\ &= {1 \over r} {\partial f \over \partial r} + {\partial^2 f \over \partial r^2} + {1 \over r^2} {\partial^2 f \over \partial \theta^2} . \end{align}
Three dimensions
In three dimensions, it is common to work with the Laplacian in a variety of different coordinate systems.
In Cartesian coordinates,
$\Delta f = \frac{\partial^2 f}{\partial x^2} + \frac{\partial^2 f}{\partial y^2} + \frac{\partial^2 f}{\partial z^2}.$
In cylindrical coordinates,
$\Delta f = {1 \over \rho} {\partial \over \partial \rho} \left(\rho {\partial f \over \partial \rho} \right) + {1 \over \rho^2} {\partial^2 f \over \partial \varphi^2} + {\partial^2 f \over \partial z^2 }.$
In spherical coordinates,
$\Delta f = {1 \over r^2} {\partial \over \partial r} \left(r^2 {\partial f \over \partial r} \right) + {1 \over r^2 \sin \theta} {\partial \over \partial \theta} \left(\sin \theta {\partial f \over \partial \theta} \right) + {1 \over r^2 \sin^2 \theta} {\partial^2 f \over \partial \varphi^2}.$
(here φ represents the azimuthal angle and θ the zenith angle or co-latitude).
In general curvilinear coordinates ($\xi^1, \xi^2, \xi^3$):
$\nabla^2 = \nabla \xi^m \cdot \nabla \xi^n {\partial^2 \over \partial \xi^m \partial \xi^n} + \nabla^2 \xi^m {\partial \over \partial \xi^m },$
where summation over the repeated indices is implied.
N dimensions
In spherical coordinates in N dimensions, with the parametrization x = rθ ∈ R^N with r representing a positive real radius and θ an element of the unit sphere S^N−1,
$\Delta f = \frac{\partial^2 f}{\partial r^2} + \frac{N-1}{r} \frac{\partial f}{\partial r} + \frac{1}{r^2} \Delta_{S^{N-1}} f$
where $\Delta_{S^{N-1}}$ is the Laplace–Beltrami operator on the (N−1)-sphere, known as the spherical Laplacian. The two radial terms can be equivalently rewritten as
$\frac{1}{r^{N-1}} \frac{\partial}{\partial r} \Bigl(r^{N-1} \frac{\partial f}{\partial r} \Bigr).$
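To see that the two forms agree, expand the compact expression with the product rule:
$\frac{1}{r^{N-1}} \frac{\partial}{\partial r} \Bigl(r^{N-1} \frac{\partial f}{\partial r} \Bigr) = \frac{1}{r^{N-1}} \Bigl( (N-1) r^{N-2} \frac{\partial f}{\partial r} + r^{N-1} \frac{\partial^2 f}{\partial r^2} \Bigr) = \frac{\partial^2 f}{\partial r^2} + \frac{N-1}{r} \frac{\partial f}{\partial r}.$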
As a consequence, the spherical Laplacian of a function defined on S^N−1 ⊂ R^N can be computed as the ordinary Laplacian of the function extended to R^N\{0} so that it is constant along rays, i.e.,
homogeneous of degree zero.
Spectral theory
The spectrum of the Laplace operator consists of all eigenvalues λ for which there is a corresponding eigenfunction ƒ with
$-\Delta f = \lambda f.$
This is known as the Helmholtz equation. If Ω is a bounded domain in R^n then the eigenfunctions of the Laplacian are an orthonormal basis for the Hilbert space L^2(Ω). This result essentially
follows from the spectral theorem on compact self-adjoint operators, applied to the inverse of the Laplacian (which is compact, by the Poincaré inequality and Kondrakov embedding theorem).^2 It can
also be shown that the eigenfunctions are infinitely differentiable functions.^3 More generally, these results hold for the Laplace–Beltrami operator on any compact Riemannian manifold with boundary,
or indeed for the Dirichlet eigenvalue problem of any elliptic operator with smooth coefficients on a bounded domain. When Ω is the n-sphere, the eigenfunctions of the Laplacian are the well-known
spherical harmonics.
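As a numerical illustration (not part of the article), on the interval (0, π) with Dirichlet boundary conditions the eigenfunctions are sin(nx) with eigenvalues n²; a finite-difference discretization (grid size chosen arbitrarily) recovers the first few:

```python
import numpy as np

# Discretize -d^2/dx^2 on (0, pi) with zero (Dirichlet) boundary values
# using the standard three-point stencil; eigenvalues should approach n^2.
n = 500                                   # interior grid points (arbitrary)
h = np.pi / (n + 1)
A = (np.diag(np.full(n, 2.0))
     + np.diag(np.full(n - 1, -1.0), 1)
     + np.diag(np.full(n - 1, -1.0), -1)) / h**2

eigvals = np.sort(np.linalg.eigvalsh(A))
print(np.round(eigvals[:4]))              # approaches [1, 4, 9, 16]
```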
Laplace–Beltrami operator
The Laplacian also can be generalized to an elliptic operator called the Laplace–Beltrami operator defined on a Riemannian manifold. The d'Alembert operator generalizes to a hyperbolic operator on
pseudo-Riemannian manifolds. The Laplace–Beltrami operator, when applied to a function, is the trace of the function's Hessian:
$\Delta f = \mathrm{tr}(H(f))\,\!$
where the trace is taken with respect to the inverse of the metric tensor. The Laplace–Beltrami operator also can be generalized to an operator (also called the Laplace–Beltrami operator) which
operates on tensor fields, by a similar formula.
Another generalization of the Laplace operator that is available on pseudo-Riemannian manifolds uses the exterior derivative, in terms of which the "geometer's Laplacian" is expressed as
$\Delta f = d^* d f.\,$
Here d^∗ is the codifferential, which can also be expressed using the Hodge dual. Note that this operator differs in sign from the "analyst's Laplacian" defined above, a point which must always be
kept in mind when reading papers in global analysis. More generally, the "Hodge" Laplacian is defined on differential forms α by
$\Delta \alpha = d^* d\alpha + dd^*\alpha.\,$
This is known as the Laplace–de Rham operator, which is related to the Laplace–Beltrami operator by the Weitzenböck identity.
The Laplacian can be generalized in certain ways to non-Euclidean spaces, where it may be elliptic, hyperbolic, or ultrahyperbolic.
In the Minkowski space the Laplace–Beltrami operator becomes the d'Alembert operator or d'Alembertian:
$\square = \frac {1}{c^2}{\partial^2 \over \partial t^2 } - {\partial^2 \over \partial x^2 } - {\partial^2 \over \partial y^2 } - {\partial^2 \over \partial z^2 }.$
It is the generalisation of the Laplace operator in the sense that it is the differential operator which is invariant under the isometry group of the underlying space and it reduces to the Laplace
operator if restricted to time independent functions. Note that the overall sign of the metric here is chosen such that the spatial parts of the operator admit a negative sign, which is the usual
convention in high energy particle physics. The D'Alembert operator is also known as the wave operator, because it is the differential operator appearing in the wave equations and it is also part of
the Klein–Gordon equation, which reduces to the wave equation in the massless case. The additional factor of c in the metric is needed in physics if space and time are measured in different units; a
similar factor would be required if, for example, the x direction were measured in meters while the y direction were measured in centimeters. Indeed, theoretical physicists usually work in units such
that c=1 in order to simplify the equation.
Notes
1. ^ Evans 1998, §2.2
2. ^ Gilbarg & Trudinger 2001, Theorem 8.6
3. ^ Gilbarg & Trudinger 2001, Corollary 8.11
Math Digest
Summaries of Media Coverage of Math
Edited by Allyn Jackson, AMS
Contributors: Mike Breen (AMS), Claudia Clark (freelance science writer), Lisa DeKeukelaere (2004 AMS Media Fellow), Annette Emerson (AMS), Brie Finegold (University of California, Santa Barbara)
December 2006
"Wobblology," by Davide Castelvecchi. New Scientist, 23/30 December 2006, pages 38-39.
"Strange but True: Turning a Wobbly Table Will Make It Steady," by JR Minkel. ScientificAmerican.com, 25 January 2007.
"The Science of Steadying a Wobbly Table": Interview with Keith Devlin. Weekend Edition Saturday, National Public Radio, 17 February 2007.
If you've ever attempted to steady a wobbly table with four equally long legs, you may appreciate these articles. Ph.D. student Roger Fenn wrote a proof in the late 1960s that "for any smoothly
curving floor that bulges upward like a hill, there is at least one way to position the table so that it is balanced and horizontal," but he did not offer a way to find this spot. Now mathematician
Burkard Polster and colleagues have proven that rotating the table will solve the problem, an idea that appeared in a column by Martin Gardner in Scientific American over 30 years ago. (Their proof
has been accepted for publication by the Mathematical Intelligencer.) This proof applies to both square and rectangular tables, on a floor that has no places that slope more than 35.26 degrees.
Note that the table might not be level at the balanced location. Polster and colleagues also suggest a procedure for balancing the table: lift up the table leg diagonal to the wobbly leg so that
both legs are about the same distance from the floor, then rotate. A demonstration of the technique is online.
--- Claudia Clark
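The balancing claim rests on an intermediate-value argument that is easy to simulate. The sketch below is a simplified, hypothetical model and not Polster's actual proof: the legs are treated as points on a unit circle over an invented smooth floor, and "balanced" means the two diagonal pairs of legs rest at equal height. Rotating the table a quarter turn swaps the diagonals and flips the sign of the mismatch, so a balancing angle must exist:

```python
import numpy as np

# Hypothetical model: a gently rolling floor (slopes well under 35 degrees).
def floor_height(x, y):
    return 0.1 * np.sin(2 * x + 0.3) * np.cos(y - 0.7)

def g(theta):
    """Height mismatch between the two diagonal leg pairs at angle theta."""
    angles = theta + np.arange(4) * np.pi / 2   # the four corner legs
    z = floor_height(np.cos(angles), np.sin(angles))
    return (z[0] + z[2]) - (z[1] + z[3])

# A 90-degree turn swaps the diagonals, so g(theta + pi/2) = -g(theta);
# the sign change guarantees a balancing angle, found here by bisection.
lo, hi = 0.0, np.pi / 2
assert g(lo) * g(hi) <= 0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if g(lo) * g(mid) > 0 else (lo, mid)
print(abs(g(lo)) < 1e-9)    # True: the diagonal pairs balance at this angle
```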
"New math could improve 'invisibility cloak'." CBC News, 27 December 2006.
The article begins by announcing "Mathematicians who came up with a way to explain how a new 'invisibility cloak' hides objects have now developed a theory that could let the technology hide items
that emit light." The research described is that of mathematician Allan Greenleaf (University of Rochester, NY) and colleagues Matti Lassas (Helsinki University of Technology in Finland), Yaroslav
Kurylev (Loughborough University in Leicestershire, England), and Gunther Uhlmann (University of Washington, Seattle). After communicating with David R. Smith (Duke University), whose team recently
demonstrated independently the first working invisibility cloak, Greenleaf announced that his latest work "predicts behavior inside the cloak." Greenleaf and team are now working to confirm the
relationship between their work and experiments, which have previously included detecting tumors. Detailed information on the research described in the newspaper article is posted on the University
of Rochester website.
--- Annette Emerson
"Tsunami Data Points to Value of Reefs in Warming Era," by Christopher Joyce. All Things Considered, National Public Radio, 26 December 2006.
Joseph Fernando, a mechanical and aerospace engineer at Arizona State University, was born and raised on the coast of Sri Lanka, one of the places hit hard by the 2004 Indian Ocean tsunami. In his
interview on the radio program he explains how he and other researchers created mathematical models to simulate the tsunami, and the results confirm that shorelines behind reefs fared much better.
As summarized on the NPR program website, "That raises both the value of reefs as well as alarm over their rapid disappearance. And with climate change expected to raise sea levels, the data also
suggest that reefs might help protect low-lying areas from higher wave surges." The research paper co-authored by Fernando, "Episodes of nonlinear internal waves in the northern East China Sea,"
was published in Geophysical Research Letters.
--- Annette Emerson
"Major progress in prime number theory", by Krishnaswami Alladi. The Hindu, 25 December 2006.
This article reports on the work of Ben Green and Terence Tao concerning arithmetic progressions of prime numbers, on the occasion of Tao receiving the 2006 SASTRA Ramanujan Prize. This
US$10,000 prize is given every year on the birthday of Srinivasa Ramanujan (December 22) in his hometown of Kumbakonam, India. An arithmetic progression is a sequence of numbers that differ by a
fixed amount. For example, 10, 17, 24, 31, 38 is an arithmetic progression of length 5 where the difference between the numbers is 7. Green and Tao made a major advance by showing that there are
arithmetic progressions that consist only of prime numbers and that are as long as you choose. This work contributes to a line of research stretching back to the first part of the 20th century and
uses results of Ramanujan himself.
--- Allyn Jackson
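For small lengths, such progressions can be found by brute force; the sketch below (illustrative only, and nothing like the Green–Tao techniques) searches for a length-5 progression of primes:

```python
# Brute-force search for an arithmetic progression of primes.
def is_prime(n):
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def find_prime_ap(length, start_limit=100, diff_limit=100):
    """Return the first progression (as a list) whose terms are all prime."""
    for start in range(2, start_limit):
        for diff in range(1, diff_limit):
            terms = [start + k * diff for k in range(length)]
            if all(is_prime(t) for t in terms):
                return terms
    return None

print(find_prime_ap(5))   # [5, 11, 17, 23, 29], common difference 6
```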
"Erziehung berechnen (To compute upbringing)", by George Szpiro. Neue Zürcher Zeitung, 24 December 2006.
This article describes a paper by the econometrician Michael Beenstock in which family interactions are modeled. The results are not surprising (e.g., spend more time with kids who cry a lot), but
they show how mathematics can be useful even in situations seemingly far removed from the subject.
--- Allyn Jackson
"What a Flake: Computers get the hang of ice-crystal growth," by Peter Weiss. Science News Online, 23 December 2006.
Several mathematicians have recently made progress in accurately modeling how a snowflake develops its shape. Such models have been studied by scientists since the 1600s and can be used to better
understand how clouds affect climate and the likelihood that airplanes flying through clouds will gather dangerous ice formations on their wings. Earlier models, based on partial differential
equations or simple computer algorithms, produced results that looked like snowflakes but did not account for the physical factors that impact how a snowflake develops. The new models look at
multiple properties like temperature, humidity, and even randomness, since most real snowflakes are imperfect. They succeeded in mimicking both the final shape and the developmental stages in two
dimensions, but modeling the process in three dimensions remains a challenge.
--- Lisa DeKeukelaere
"Breakthrough of the Year: The Poincaré Conjecture--Proved," by Dana Mackenzie. Science, 22 December 2006, pages 1848-1849;
"Math takes Science's spotlight in 2006," by Alan Boyle. MSNBC, 22 December 2006;
"Maths solution tops science class," by Paul Rincon. BBC News, 22 December 2006;
"Journal Science selects top 10 scientific breakthroughs of 2006," People's Daily Online, 22 December 2006.
Each year Science looks back and chooses ten significant breakthroughs from the past year, labelling one "The Breakthrough of the Year." This year, for the first time, a mathematical breakthrough is
the Breakthrough of the Year---the proof of the Poincaré Conjecture. Mackenzie explains the conjecture's history and some of the controversy surrounding Grigory Perelman's proof. Poincaré proposed
the conjecture about the properties of three-dimensional manifolds in 1904. In 2002 Perelman posted on a preprint server the first of three papers providing the means to prove the conjecture as well
as a more general result, the Thurston Geometrization Conjecture. The usual process of verifying this work was not followed because the papers weren't submitted to a refereed journal. It was only
recently that the mathematics community has come to a consensus that the Poincaré Conjecture has been proved. Mackenzie writes that "While bringing new results to topology, Perelman's work brought
new techniques to geometry." Other breakthroughs in 2006 include the sequencing of Neandertal DNA and the documentation of the accelerated shrinking of ice sheets in Greenland and Antarctica (these
are described in an article beginning on page 1850). As of this writing, Science's breakthrough articles were available online.
--- Mike Breen
"Master Class in Evolutionary Modeling": Review of Evolutionary Dynamics: Exploring the Equations of Life, by Martin Nowak. Reviewed by Steven A. Frank. Science, 22 December 2006, page 1878.
In this article, University of California, Irvine, professor of ecology and evolutionary biology Steven Frank reviews the book Evolutionary Dynamics: Exploring the Equations of Life. Frank notes that
author Martin Nowak is not the only person to claim that "evolution is the single most significant idea in biology." But where "almost all mathematical syntheses of evolution have been confined to
population genetics," Frank writes, Nowak shows "the many ways in which the mathematics of evolution led to advances in diverse subjects, including cancer, game theory, and language."
While most of the theory presented in the book has been previously published, Frank states that "the lucid presentation, drawing frequently on the author's own research, provides a uniquely
compelling introduction to mathematical biology." Indeed, Frank suggests that the book can be used as a starting point for one's own research; he concurs with this statement of Nowak's: "I will start
with the basics and in a few steps lead you to some of the most interesting and unanswered research questions in the field. Having read the book, you will know what you need to embark on your own
journey and make your own discoveries."
--- Claudia Clark
"Measures for measures," by Sune Lehmann, Andrew D. Jackson, and Benny E. Lautrup. Nature, 21/28 December 2006.
Universities and grant foundations try to dole out promotions and funding based on the quality of an academic's work---but how robust are their measures of quality? Three Danish scientists compared
three methods for ranking academics: number of papers published, mean number of citations received per paper published, and a score called the Hirsch index that takes both factors into account. The
methodology for the comparison, involving conditional probability and Bayes' theorem, may be somewhat difficult to decipher as detailed in the article, but the results and warnings appear clearly.
Mean number of citations per paper published is the best choice, followed by the Hirsch index, while number of papers published fares little better than random score assignment. The authors caution
that, "unable to measure what they want to maximize (quality), institutions will maximize what they can measure" and conclude by noting that actually reading an applicant's papers is still the best
way to go.
--- Lisa DeKeukelaere
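The Hirsch index mentioned above combines output and impact in a single number; a minimal sketch of its computation (the citation counts below are invented for illustration):

```python
# h is the largest number such that the author has h papers cited at
# least h times each.
def hirsch_index(citations):
    cited = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(cited, start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

print(hirsch_index([10, 8, 5, 4, 3]))   # 4: four papers with >= 4 citations
print(hirsch_index([25, 8, 5, 3, 3]))   # 3
```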
"Painting by numbers," by Scott LaFee. The San Diego Union Tribune, 21 December 2006.
This article was accompanied by some stunning fractal pictures (including "Cheshire Cat" by Kerry Mitchell, at the top of the Math Digest page). LaFee gives some mathematical background on fractals and examples of occurrences in nature (such as blood vessels). He also explains how the images are generated with modern software: pixels are assigned numbers and are colored based on the number's behavior in an iterative process. Churning through numbers doesn't guarantee a beautiful picture, however; an artist must understand both form and function to create such images.

[Image: Cheshire cat, by Kerry Mitchell (copyright Kerry Mitchell)]
--- Mike Breen
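The pixel-coloring process LaFee describes can be sketched with the classic escape-time algorithm (a hypothetical illustration, not the software used for the article's images): each pixel is mapped to a complex number c, iterated under z → z² + c, and shaded by how quickly the orbit escapes.

```python
# Escape-time coloring of the Mandelbrot set, rendered here as ASCII art.
def escape_count(c, max_iter=30):
    z = 0j
    for k in range(max_iter):
        z = z * z + c
        if abs(z) > 2:          # once |z| > 2 the orbit diverges
            return k
    return max_iter

shades = " .:-=+*#%@"           # darker characters = slower escape
rows = []
for i in range(20):
    y = 1.2 - i * 0.12
    row = ""
    for j in range(60):
        x = -2.0 + j * 0.05
        k = escape_count(complex(x, y))
        row += shades[min(k * len(shades) // 30, len(shades) - 1)]
    rows.append(row)
print("\n".join(rows))          # a coarse picture of the Mandelbrot set
```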
"2006 in Review," by Nicola Jones. news@nature.com, 20 December 2006;
"Top 10 stories of 2006." news@nature.com, 28 December 2006.
In her "romp through ten of this year's big science developments", Jones includes "Russian recluse spurns prize," the story of Perelman's refusal of the Fields Medal and speculation
on whether he would also decline the Clay Math Institute's US$1 million award for his proof of the Poincaré conjecture. Making second place on the top 10 Readers' Choice list (of most clicked-on
stories) of 2006 was a mathematics-related article, "Geometric whirlpools revealed" ("recipe for making symmetrical holes in water is easy," 19 May 2006). Mathematics stories did not make the top 10
Editor's Choice or top 10 Most Talked About lists, but eighth place on the top 10 News Features (longer tales worth another read) was "Fractals in art: In the hands of a master" ("fractal analysis
has been used to assess the authenticity of paintings purporting to be the work of Jackson Pollock. Alison Abbott reports," 8 February 2006).
--- Annette Emerson
"A prime example," by Karen Gold. Guardian Unlimited, 19 December 2006;
"Making mathematics music to their ears," by Alexandra Frean. Times Online, 23 December 2006.
Mathematician Marcus du Sautoy is one of the great popularizers of mathematics and often appears in media in the U.K., Australia, and New Zealand. The article by Gold notes that "he recently landed
the landmark British TV scientist slot---the Royal Institution (London) Christmas Lectures. With the title The Num8er My5teries, and subjects ranging from codes and magic tricks to the shape of the universe, he hopes to turn a generation of young teenagers on to maths. du Sautoy recalls that other mathematicians laughed at the notion of his trying to explain the Riemann hypothesis to general
readers, and his response is that because mathematics is "a totally logical subject, and a pathway has been marked out" he can---if he himself completely understands it---explain it so other people
get it. The article by Frean focuses on du Sautoy's efforts to engage students between the ages 11 and 14, the period when young people often lose enthusiasm for math. du Sautoy, "who often plays a
trumpet during lectures to illustrate the similarities between harmonics and the sine waves used to predict prime numbers, suggests that maths teaching should be similar to music teaching," and that
teenagers struggling with the subject might benefit from learning a musical instrument.
The Guardian story was also published under the titles "Think math isn't sexy?," Taipei Times 23 December 2006; and "The magic of maths," Mail and Guardian Online, 15 January 2007.
--- Annette Emerson
"Die Berechnung der Bedeutung: Die Mathematik hinter Googles Webseiten Klassifizierung (The calculation of meaning: The mathematics behind Google's website classification)", by George Szpiro. Neue
Zürcher Zeitung, 15 December 2006.
When you use Google to search for information on the Internet, how does Google decide which pages to put at the top of the list? Szpiro describes Google's "Page Rank" system, which is a mathematical
way of classifying web pages. The article is based on the December 2006 installment of the AMS "Feature Column", by David Austin. Austin's column proved to be extremely popular, accumulating so many
hits that it slowed down the entire AMS web site for a period in December.
--- Allyn Jackson
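Austin's column describes PageRank as the stationary distribution of a random surfer who mostly follows links and occasionally jumps to a random page. A toy version of that computation, on an invented four-page web with the customary damping factor of 0.85 (a sketch, not Google's implementation):

```python
import numpy as np

links = {0: [1, 2], 1: [2], 2: [0], 3: [2]}   # a made-up four-page web
n, d = len(links), 0.85                       # d is the damping factor

# Column-stochastic link matrix: M[j, i] = 1/outdegree(i) if i links to j.
M = np.zeros((n, n))
for i, outs in links.items():
    for j in outs:
        M[j, i] = 1.0 / len(outs)

rank = np.full(n, 1.0 / n)                    # start from a uniform surfer
for _ in range(100):                          # power iteration
    rank = (1 - d) / n + d * M @ rank
print(rank.argmax())                          # page 2 collects the most rank
```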
"Mathematician numbers don't add up," from AAP newswire. Herald Sun (Australia), 14 December 2006.
The Australian Academy of Science released a review saying that "underinvestment in maths and statistics is jeopardizing the competitiveness of Australian industry." Mathematician Hyam Rubinstein
(University of Melbourne) tells the reporter that Australia's reputation as a leader in mathematics and statistics has served as a magnet for experts in these fields, but that the reputation is being
upheld by an older generation. He also noted that since 1995 math and statistics departments in Australia have lost one-third of their permanent faculty. Several of the country's newspapers picked up
the report: The Australian, The Age, The Melbourne Herald Sun and The Sydney Morning Herald.
--- Annette Emerson
"Nick Patterson; A Cold War Cryptologist Takes a Crack at Deciphering DNA's Deep Secrets" by Ingfei Chen. New York Times, 12 December 2006.
This article profiles Nick Patterson, a mathematician and cryptographer who applies his code-breaking expertise to problems in genomics. He started his career at the GCHQ, the
British government's code-breaking organization. Patterson then moved to the Center for Communications Research, which is the cryptography branch of the Institute for Defense
Analysis and which is based in Princeton, New Jersey. In the 1990s, he was persuaded by James Simons, a former mathematician and cryptographer himself, to join Simons'
investment company Renaissance Technologies, which relies heavily on mathematical techniques. "But by 2000, [Patterson] was restless," the article says. That's when he took a
job at the Whitehead/MIT Center for Genome Research, which later became the Broad Institute. Both cryptography and genome research require the development of pattern recognition to make sense of large quantities of data. This is what Patterson excels at. Eric Lander, a mathematician turned geneticist who is the director of the Broad Institute, is quoted as saying that Patterson has the statistical insight to tell whether a signal is "simply random fluctuation or whether it's a smoking gun." This same article appeared under the title "A whole new breed of biologists" in the International Herald Tribune, 14 December 2006, page 10.

[Image: An overview of the structure of DNA. Image created by Michael Ströck.]
--- Allyn Jackson
"One Last Mission for Ship Sunk in Pearl Harbor Attack," by Michael E. Ruane. Washington Post, 7 December 2006, page A3.
Ruane explains how a mathematical model is being used to simulate deterioration of sunken ships. The ship discussed most in the article is the USS Arizona, which was sunk in the attack on Pearl
Harbor (the article was published on the 65th anniversary of the attack). The day before the attack, the Arizona took on over one million gallons of thick oil. The question addressed by the model is:
When will the oil in the ship erupt to the surface? Much of the oil has slowly leaked to the surface, but about half of it still remains in the ship. The model predicts that nothing serious will
happen for at least 10 years. The Arizona Memorial National Park Superintendent thinks that any collapse of the ship and subsequent leak will continue to take place gradually.
--- Mike Breen
"Relatively Small Number of Deaths Have a Big Impact in Pfizer Drug Trial," by Carl Bialik. Wall Street Journal Online, 6 December 2006.
After Pfizer announced it was withdrawing a cholesterol drug after some patients died during its clinical trials, "Numbers Guy" Carl Bialik shed some light on the role statistics play in medical
trials. He explains that a "statistical boundary" or "p value" is set up for each clinical trial, where "p measures the probability that a particular result---in this case, the difference in the
rates of death between the two drug groups---can be chalked up to a statistical anomaly... Calculating p in this case is complex, and takes into account several factors, including the number of
people in the study." He expands on the explanation and includes quotes from a couple of doctors.
--- Annette Emerson
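Bialik's description of p can be illustrated with a small Monte Carlo sketch. All the numbers below (group sizes, death counts) are invented for illustration and are not the Pfizer data; the simulation asks how often chance alone, under one shared death rate, produces a gap in deaths at least as large as the observed one:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 7500                               # patients per group (hypothetical)
deaths_treat, deaths_ctrl = 82, 51     # hypothetical death counts
observed = deaths_treat - deaths_ctrl

# Null hypothesis: both groups share a single pooled death rate, and the
# observed gap is a statistical anomaly. Simulate many trials under it.
pooled = (deaths_treat + deaths_ctrl) / (2 * n)
trials = 100_000
a = rng.binomial(n, pooled, trials)
b = rng.binomial(n, pooled, trials)

# p = probability of a gap at least this large arising by chance alone.
p = (np.abs(a - b) >= observed).mean()
print(p < 0.05)    # True: for these made-up numbers, the gap looks real
```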
"Siemens High School Science Awards," by Robert Smith. All Things Considered, National Public Radio, 4 December 2006;
"Student Wins Top Math Award," by Anne Williams. The Register-Guard (Eugene), 5 December 2006;
"High school senior wins scholarship," by Karen Matthews. The Associated Press, 4 December 2006.
In a high-school science competition that pitted a student who studied Parkinson's disease by studying its effect on worms against a high-achiever who discovered new pulsar stars in
outer space, a young mathematician recently walked away with the top honors. Dmitry Vaintrob from Eugene, Oregon, won the US$100,000 Grand Prize scholarship in the individual category
of the 2006-2007 Siemens Competition in Math, Science, and Technology for his work in string topology. His project, which presents a formula for describing the way shapes combine in
string theory, may help physicists to understand electricity, magnetism, and gravity. Though Vaintrob is only a high school senior, Harvard mathematics professor and competition judge Michael Hopkins says that his achievement is impressive at the Ph.D. level. The Siemens Foundation press release includes more details.

[Image: Dmitry Vaintrob, winner of the Siemens Grand Prize.]
--- Lisa DeKeukelaere
"Never mind the Pollock's [sic]," by Dan Vergano. USA Today, 3 December 2006;
"Jackson Pollock Fractals," a discussion with guest Richard P. Taylor. Talk of the Nation, National Public Radio, 15 December 2006.
Both pieces explore how science, specifically the use of fractals, is a tool to authenticate (or spot faked) art works. The unique style of paintings by Jackson Pollock is the subject. The
researchers are Richard P. Taylor (University of Canterbury, New Zealand and University of Oregon, Eugene), and Katharine Jones-Smith and Harsh Mathur (Case Western Reserve University, Cleveland).
Their respective efforts seem to present different results. The key, according to Mathur, may be that "in statistical physics, a debate is going on over the proper use of the term `fractal' as a way
to designate shapes." Papers and responses of Taylor and Jones-Smith have appeared in Nature. Taylor is interviewed on the NPR program.
--- Annette Emerson
"The Monty Hall Problem," by John Allen Paulos. Who's Counting, abcnews.com, 3 December 2006.
Despite the increase in game shows on television, Paulos reports that---based on all the emails he has received over the years about the so-called Monty Hall problem---no game show has
aroused as much mathematical interest as Let's Make a Deal. In this show a contestant picks one of three doors for a major prize. Once a door is chosen, host Monty Hall opens one of the
two remaining doors to reveal what is behind it and offers the contestant the chance to switch his or her door choice. Paulos briefly explains probability and then goes on to test the reader's understanding by throwing in a variation of the problem (and he includes the answer at the end of the column). (Sidebar caption: "What's behind the ...")
--- Annette Emerson
"Let there be number": Review of How Mathematics Happened, by Peter S. Rudman. Reviewed by Matthew Killeya. New Scientist, 2 December 2006, page 50.
The reviewer writes that this book "charts the evolution of mathematics from early hunter-gatherer cultures to the civilization of ancient Egypt and Babylon." The book also advances the argument that
people might learn mathematics better if they take it up after leaving school. "It is an underdeveloped yet intriguing argument," the reviewer writes.
--- Allyn Jackson
"Teacher, students revel in joy of high-level math," by Carrie Sturrock. The San Francisco Chronicle, 2 December 2006.
The exam for the William Lowell Putnam Mathematics Competition is one of the most difficult taken by undergraduates. On a 120-point scale, the median
national score is generally less than 10. Ravi Vakil was one of the top five scorers on the exam for his four years as an undergraduate. Now on the math
faculty at Stanford University, Vakil coordinates the Putnam exam at Stanford. This article talks about his own research and the enthusiasm he shows when he works with students. Said Aman Kumar, a Stanford sophomore, "When you learn the Putnam with Ravi, math is a fun, sexy thing." (Photo caption: Ravi Vakil (near center, in white shirt and jeans) and Stanford University exam-takers, courtesy of Ravi Vakil.)
--- Mike Breen
"'Mike's Math': How One Volunteer Is Helping Kids Think Differently," by Kristin Pisarcik. ABC News, 1 December 2006.
In this article, correspondent Kristin Pisarcik interviews "human calculator" and math teacher Mike Byster. Byster has created a system, which he calls "Mike's Math," that enables students to solve
complex arithmetic problems rapidly in their head. His system is based upon patterns and memorized shortcuts. He describes his system on his website as a program that "teaches children how to master
the art of multitasking by solving problems, memorizing the information, storing the information, being able to recall the information and add to it or modify it." For example, he describes the steps
for finding the square of a number in the 50s, in this case 56: start out with 25 and add the one's digit of the number to it (25 + 6 = 31); then square the one's digit of the number (6 x 6 = 36) and
tack this onto the 31 for the answer: 3136. Byster intends that students apply his methods to all academic areas. Byster not only gives presentations in Chicago-area schools, but around the
world---all as a volunteer. To see Mike and some students in action, go to the story online.
--- Claudia Clark
"Day by day---how a cancer grows," by Frank Urquhart. Scotsman News, 1 December 2006.
Sandy Anderson, a mathematician at Dundee University, has a model for tumor growth that might change cancer treatment strategies. About his model, Anderson said, "The intention of most treatment
strategies is to make the environment the tumour is in as harsh as possible, but what you are effectively doing is killing off all the weak cells and leaving all the tough ones behind. My research
shows that we need to consider the environment in which the tumour is growing before we attack it." He said that tumors will be less aggressive if they grow in an "oxygen or nutrient-rich environment."
--- Mike Breen
"The Turing Model Comes of Molecular Age," by Philip K. Maini, Ruth E. Baker, and Ceng-Ming Chong. Science, 1 December 2006, pages 1397-1398.
According to the authors of this perspective article, a report by Stefanie Sick et al, "WNT and DKK Determine Hair Follicle Spacing Through a Reaction-Diffusion Mechanism" (beginning on page 1447 of
the same issue) provides "the first compelling biological evidence" for Alan Turing's model of how complex spatial patterns arise. In 1952, Turing proposed that "diffusion-driven instability" caused
complex biological patterns, but until now no example had been found. Sick and co-authors have identified key compounds in hair follicle growth that appear to behave according to Turing's model. Now,
these authors write, it is time to do experiments to measure key parameters in the system to determine which particular model is correct.
--- Mike Breen
"Million Dollar Math," by Stephen Ornes. Discover, December 2006, page 64.
"Mathematics gives some of the most dramatic examples of the glacial but inexorable advance of the human intellect, " begins Ornes. The recent coverage of the 2006 Fields Medal awards prompts him to
explore "What problems remain?" He notes that in 1900 mathematician David Hilbert listed 23 outstanding problems for the 20th century, and in 2000 the Clay Mathematics Institute identified seven
so-called "Millennium Prize Problems"---with a reward of US$1 million for each solution. Ornes then summarizes four great problems (two of them identified by the Clay): Riemann Hypothesis, Twin Prime
Conjecture, Navier-Stokes Equation, and Traveling Salesman.
--- Annette Emerson
Homework Help
Posted by josh on Sunday, December 4, 2011 at 8:16pm.
Which of the following may not qualify as a falsifiable claim?
• phi 103 - Ms. Sue, Sunday, December 4, 2011 at 8:20pm
What following?
• phi 103 - Sirisha, Friday, January 6, 2012 at 6:11pm
Your luck will improve.
Your house will be sold tomorrow.
Granite is more dense than sand.
Smoking may cause heart disease.
• phi 103 - Alon, Friday, February 17, 2012 at 1:38pm
Your luck will improve.
A claim is also said to be falsifiable, in that it could turn out actually to be false, and we know how that might be shown. For instance, "There are no wild kangaroos in Georgia" is falsifiable: finding a wild kangaroo in Georgia would show it to be false.
Building a Vector Space Search Engine in Python
A vector space search involves converting documents into vectors. Each dimension within the vectors represents a term. If a document contains that term then the value within the vector is greater
than zero.
Here is an implementation of vector space searching using Python (2.4+).
1 Stemming & Stop words
Fetch all terms within the documents and clean them, using a stemmer to reduce them. A stemmer takes words and tries to reduce them to their base or root. Words which have a common stem often have similar meanings.
Example: CONNECTED CONNECTING CONNECTION CONNECTIONS
all map to CONNECT
We also remove any stopwords from the documents. [a,am,an,also,any,and] are all examples of stopwords in English. Stop words have little value in search so we strip them. The stoplist used was taken
from: ftp://ftp.cs.cornell.edu/pub/smart/english.stop
self.stemmer = PorterStemmer()

def removeStopWords(self, list):
    """ Remove common words which have no search value """
    return [word for word in list if word not in self.stopwords]

def tokenise(self, string):
    """ break string up into tokens and stem words """
    string = self.clean(string)
    words = string.split(" ")
    return [self.stemmer.stem(word, 0, len(word) - 1) for word in words]
2 Map Keywords to Vector Dimensions
Map the vector dimensions to keywords using a dictionary: keyword=>position
def getVectorKeywordIndex(self, documentList):
    """ create the keyword associated to the position of the elements within the document vectors """

    # Map documents into a single word string
    vocabularyString = " ".join(documentList)
    vocabularyList = self.parser.tokenise(vocabularyString)
    # Remove common words which have no search value
    vocabularyList = self.parser.removeStopWords(vocabularyList)
    uniqueVocabularyList = util.removeDuplicates(vocabularyList)

    vectorIndex = {}
    offset = 0
    # Associate a position with each keyword; it maps to the dimension of the vector used to represent this word
    for word in uniqueVocabularyList:
        vectorIndex[word] = offset
        offset += 1
    return vectorIndex  # (keyword: position)
3 Map Document Strings to Vectors.
We use the simple Term Count Model. A more accurate model would be to use tf-idf (term frequency-inverse document frequency).
def makeVector(self, wordString):
    """ @pre: unique(vectorIndex) """

    # Initialise vector with 0's
    vector = [0] * len(self.vectorKeywordIndex)
    wordList = self.parser.tokenise(wordString)
    wordList = self.parser.removeStopWords(wordList)
    for word in wordList:
        vector[self.vectorKeywordIndex[word]] += 1  # Use simple Term Count Model
    return vector
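The tf-idf weighting mentioned above can be layered on top of the term-count vectors. The sketch below (written for Python 3; the function and its name are my own illustration, not part of the original engine) reweights vectors such as those produced by makeVector:

```python
import math

def tfidf_vectors(count_vectors):
    """Reweight a corpus of term-count vectors with tf-idf (illustrative only).

    tf  = term count / total terms in the document
    idf = log(number of documents / number of documents containing the term)
    """
    n_docs = len(count_vectors)
    n_terms = len(count_vectors[0])
    # document frequency: how many documents contain each term
    df = [sum(1 for v in count_vectors if v[t] > 0) for t in range(n_terms)]
    weighted = []
    for v in count_vectors:
        total = sum(v) or 1  # guard against an empty document
        weighted.append([
            (count / total) * math.log(n_docs / df[t]) if df[t] else 0.0
            for t, count in enumerate(v)
        ])
    return weighted
```

A term that appears in every document gets idf = log(1) = 0 and so contributes nothing to similarity, which is exactly the discrimination the plain term-count model lacks.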
4 Find Related Documents
We now have the ability to find related documents. We can test if two documents are in the concept space by looking at the cosine of the angle between the document vectors. We use the cosine of
the angle as a metric for comparison. If the cosine is 1 then the angle is 0° and hence the vectors are parallel (and the document terms are related). If the cosine is 0 then the angle is 90° and the
vectors are perpendicular (and the document terms are not related).
We calculate the cosine between the document vectors in python using scipy.
def cosine(vector1, vector2):
    """ related documents j and q are in the concept space by comparing the vectors:
        cosine = (V1 . V2) / (||V1|| * ||V2||) """
    return float(dot(vector1, vector2) / (norm(vector1) * norm(vector2)))
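As a quick numeric sanity check of that metric, here is a standalone snippet (using small pure-Python stand-ins for NumPy's dot and norm so it runs on its own; the example vectors are invented):

```python
import math

# Pure-Python stand-ins for NumPy's dot and norm
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def norm(a):
    return math.sqrt(dot(a, a))

v1 = [1, 1, 0]  # a document containing terms 0 and 1
v2 = [1, 0, 0]  # a document containing term 0 only

similarity = float(dot(v1, v2) / (norm(v1) * norm(v2)))
print(round(similarity, 4))  # 0.7071 -- the documents share one of their terms
```

Identical vectors would give 1.0 and vectors with no terms in common give 0.0, matching the geometric description above.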
5 Search the Vector Space!
In order to perform a search across keywords we need to map the keywords to the vector space. We create a temporary document which represents the search terms and then we compare it against the
document vectors using the same cosine measurement mentioned for relatedness.
def search(self, searchList):
    """ search for documents that match based on a list of terms """
    queryVector = self.buildQueryVector(searchList)

    ratings = [util.cosine(queryVector, documentVector) for documentVector in self.documentVectors]
    ratings.sort(reverse=True)
    return ratings
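One caveat worth noting: sorting the bare ratings with ratings.sort(reverse=True) discards which document each score belongs to. Sorting (document, score) pairs keeps the association, as in this standalone sketch (hypothetical vocabulary and documents, pure Python):

```python
import math

def cosine(v1, v2):
    """Cosine of the angle between two term-count vectors (pure Python)."""
    dot = sum(x * y for x, y in zip(v1, v2))
    n1 = math.sqrt(sum(x * x for x in v1))
    n2 = math.sqrt(sum(x * x for x in v2))
    return dot / (n1 * n2)

# Hand-built term-count vectors over the hypothetical vocabulary ["cat", "dog", "fish"]
documents = {
    "all about cats": [2, 0, 0],
    "cats and dogs":  [1, 1, 0],
    "fish tank care": [0, 0, 3],
}
query = [1, 0, 0]  # the query "cat" mapped into the same vector space

# Sort documents by their similarity to the query, keeping names attached
ranked = sorted(documents.items(),
                key=lambda item: cosine(query, item[1]),
                reverse=True)
for name, vector in ranked:
    print(name, round(cosine(query, vector), 3))
# all about cats 1.0
# cats and dogs 0.707
# fish tank care 0.0
```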
Further Extensions
1. Use tf-idf rather than the Term count model for term weightings
2. Instead of linearly scanning all document vectors when searching for related content, use Lanczos methods or a neural-network-like approach.
Third Party tools
The stemmer used comes from: http://tartarus.org/~martin/PorterStemmer/python.txt
And the library for performing cosine calculations comes from NumPy: http://www.scipy.org/
help me out
If I know $\sin x$ then how can we find $\sin(\frac{x}{2})$?
Maybe one way is that if you have sin x = y, where y is a number, find from the tables for which x you have y. Then if you find x, find x/2 and from the tables again find sin(x/2).
If I know sin x = y from the trig table but I don't have sin(x/2), then????
If you know $\sin{x}$ then you know $x$, if you know $x$ you know $\frac{x}{2}$ , if you know $\frac{x}{2}$ then you know $\sin{\frac{x}{2}}$
ok.... I know sin 45 = 1/sqrt 2 but I don't know sin 22.5 ?????
By using trig tables you should be able to find it.
Is there 22.5 in a trig table? Note that since 45 is in the first quadrant, 45/2 is also in the first quadrant. We know that $\sin x=2\cos\tfrac x2\sin\tfrac x2$, so $\sin^2x=4\cos^2\tfrac x2\sin^2\tfrac x2$. Let $T=\sin^2\tfrac x2$. Then $\sin^2x=4(1-T)\cdot T=4T-4T^2$. This is a quadratic equation. In your situation, we have $4T^2-4T+\tfrac 12=0$. Solve for $T$, remembering that it is positive (because it is a square), and then take its square root, remembering that it must be positive (because $x/2$ is in the first quadrant).
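Carrying the computation through for $x = 45^\circ$, as a worked check: there $\sin^2 x = \tfrac12$, so $4T-4T^2=\tfrac12$, i.e. $4T^2-4T+\tfrac12=0$, and

```latex
\begin{align*}
T &= \frac{4 \pm \sqrt{16-8}}{8} = \frac{2 \pm \sqrt{2}}{4},\\
\sin 22.5^\circ &= \sqrt{\frac{2-\sqrt{2}}{4}} = \frac{\sqrt{2-\sqrt{2}}}{2} \approx 0.3827,
\end{align*}
```

taking the smaller root because $\sin^2 22.5^\circ < \sin^2 45^\circ = \tfrac12$. This agrees with the half-angle identity $\sin^2\tfrac x2 = \tfrac{1-\cos x}{2}$.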
New techniques that improve MACE-style model finding
Results 1 - 10 of 20
- In EMNLP-05 , 2005
Cited by 58 (0 self)
We use logical inference techniques for recognising textual entailment. As the performance of theorem proving turns out to be highly dependent on not readily available background knowledge, we
incorporate model building, a technique borrowed from automated reasoning, and show that it is a useful robust method to approximate entailment. Finally, we use machine learning to combine these deep
semantic analysis techniques with simple shallow word overlap; the resulting hybrid model achieves high accuracy on the RTE testset, given the state of the art. Our results also show that the
different techniques that we employ perform very differently on some of the subsets of the RTE corpus and as a result, it is useful to use the nature of the dataset as a feature. 1
- In Proceedings of Sixth International Workshop on Computational Semantics IWCS-6 , 2005
Cited by 46 (5 self)
Wide-coverage and robust NLP techniques always seemed to go hand in hand with shallow analyses. This was certainly true a couple of years ago,
- IN TAP 2009: SHORT PAPERS, ETH , 2009
- In Joint 10th European Software Engineering Conference (ESEC) and 13th ACM SIGSOFT Symposium on the Foundations of Software Engineering (FSE , 2005
Cited by 20 (2 self)
We present a technique that enables the use of finite model finding to check the satisfiability of certain formulas whose intended models are infinite. Such formulas arise when using the language of
sets and relations to reason about structured values such as algebraic datatypes. The key idea of our technique is to identify a natural syntactic class of formulas in relational logic for which
reasoning about infinite structures can be reduced to reasoning about finite structures. As a result, when a formula belongs to this class, we can use existing finite model finding tools to check
whether the formula holds in the desired infinite model. 1
, 2004
Cited by 19 (13 self)
An important feature of object-oriented programming languages is the ability to dynamically instantiate user-defined container data structures such as lists, trees, and hash tables. Programs
implement such data structures using references to dynamically allocated objects, which allows data structures to store unbounded numbers of objects, but makes reasoning about programs more
difficult. Reasoning about object-oriented programs with complex data structures is simplified if data structure operations are specified in terms of abstract sets of objects associated with each
data structure. For example, an insertion into a data structure in this approach becomes simply an insertion into a dynamically changing set-valued field of an object, as opposed to a manipulation of
a dynamically linked structure linked to the object. In this paper we explore...
- In Proc. of the PASCAL RTE Challenge , 2005
Cited by 16 (0 self)
We combine two methods to tackle the textual entailment challenge: a shallow method based on word overlap and a deep method using theorem proving techniques. We use a machine learning technique to
combine features derived from both methods. We submitted two runs, one using all features, yielding an accuracy of 0.5625, and one using only the shallow feature, with an accuracy of 0.5550. Our
method currently suffers from a lack of background knowledge and future work will be focussed on that area. 1
- In FoIKS , 2006
Cited by 11 (0 self)
Abstract. It is claimed in [45] that first-order theorem provers are not efficient for reasoning with ontologies based on description logics compared to specialised description logic reasoners.
However, the development of more expressive ontology languages requires the use of theorem provers able to reason with full first-order logic and even its extensions. So far, theorem provers have
extensively been used for running experiments over TPTP containing mainly problems with relatively small axiomatisations. A question arises whether such theorem provers can be used to reason in real
time with large axiomatisations used in expressive ontologies such as SUMO. In this paper we answer this question affirmatively by showing that a carefully engineered theorem prover can answer
queries to ontologies having over 15,000 first-order axioms with equality. Ontologies used in our experiments are based on the language KIF, whose expressive power goes far beyond the description
logic based languages currently used in the Semantic Web.
, 2010
Cited by 9 (8 self)
Formulas are often monotonic in the sense that if the formula is satisfiable for given domains of discourse, it is also satisfiable for all larger domains. Monotonicity is undecidable in general, but
we devised two calculi that infer it in many cases for higher-order logic. The stronger calculus has been implemented in Isabelle’s model finder Nitpick, where it is used to prune the search space,
leading to dramatic speed improvements for formulas involving many atomic types.
- MLCW 2005, volume LNAI 3944 , 2006
Cited by 8 (0 self)
Abstract. We use logical inference techniques for recognising textual entailment, with theorem proving operating on deep semantic interpretations as the backbone of our system. However, the
performance of theorem proving on its own turns out to be highly dependent on a wide range of background knowledge, which is not necessarily included in publically available knowledge sources.
Therefore, we achieve robustness via two extensions. Firstly, we incorporate model building, a technique borrowed from automated reasoning, and show that it is a useful robust method to approximate
entailment. Secondly, we use machine learning to combine these deep semantic analysis techniques with simple shallow word overlap. The resulting hybrid model achieves high accuracy on the RTE
testset, given the state of the art. Our results also show that the various techniques that we employ perform very differently on some of the subsets of the RTE corpus and as a result, it is useful
to use the nature of the dataset as a feature. 1
Cited by 4 (2 self)
Abstract—Word-level bounded model checking and equivalence checking problems are naturally encoded in the theory of bit-vectors and arrays. The standard practice of deciding formulas of such theories
in the hardware industry is either SAT- (using bit-blasting) or SMT-based methods. These methods perform reasoning on a low level but perform it very efficiently. To find alternative potentially
promising model checking and equivalence checking methods, a natural idea is to lift reasoning from the bit and bit-vector levels to higher levels. In such an attempt, in [14] we proposed translating
memory designs into the Effectively PRopositional (EPR) fragment of first-order logic. The first experiments with using such a translation have been encouraging but raised some questions. Since the
high-level encoding we used was incomplete (yet avoiding bit-blasting) some equivalences could not be proved. Another problem was that there was no natural correspondence between models of EPR
formulas and bit-vector based models that would demonstrate non-equivalence and hence design errors. This paper addresses these problems by providing more refined translations of equivalence checking
problems arising from hardware verification into EPR formulas. We provide three such translations and formulate their properties. All three translations are designed in such a way that models of EPR
problems can be translated into bit-vector models demonstrating non-equivalence. We also evaluate the best EPR solvers on industrial equivalence checking problems and compare them with SMT solvers
designed and tuned for such formulas specifically. We present empirical evidence demonstrating that EPR-based methods and solvers are competitive.
Integrals involving Polar Coordinate Conversion
October 24th 2013, 02:11 AM #1
Aug 2012
The following is part of a proof that I am studying from the book Partial Differential Equations by Lawrence Evans.
Given that $u \in C^{1}(\mathbb{R}^{n})$ with $t \in (0,s)$ and $w \in \partial B(0,1)$.
Consider the integral $\int_{0}^{s}\int_{\partial B(0,1)} |Du(x+tw)|dSdt = \int_{0}^{s}\int_{\partial B(0,1)}|Du(x+tw)|\frac{t^{n-1}}{t^{n-1}}dSdt$.
How does it then follow that, letting $y=x+tw$ so that $t = |x-y|$ and converting to polar coordinates, we have the inequality:
$\int_{\partial B(0,1)}|u(x+sw) - u(x)|dS \leq \int_{B(x,s)}\frac{|Du(y)|}{|x-y|^{n-1}}dy$
How does this final inequality follow? How is it a result of polar coordinate conversion?
Let me know if anything is unclear. Thanks.
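For what it's worth, here is a sketch of how I read that step (combining the fundamental theorem of calculus along rays with the polar-coordinates volume element; please check the details against the book):

```latex
\begin{align*}
|u(x+sw)-u(x)|
  &= \Big|\int_0^s \frac{d}{dt}\,u(x+tw)\,dt\Big|
   \le \int_0^s |Du(x+tw)|\,dt \qquad (|w|=1).\\
\intertext{Integrating over $w \in \partial B(0,1)$ and inserting $t^{n-1}/t^{n-1}$:}
\int_{\partial B(0,1)} |u(x+sw)-u(x)|\,dS
  &\le \int_0^s\!\int_{\partial B(0,1)} |Du(x+tw)|\,\frac{t^{n-1}}{t^{n-1}}\,dS\,dt.\\
\intertext{With $y = x+tw$, so $t = |x-y|$ and $dy = t^{n-1}\,dS\,dt$ (the polar-coordinates volume element centered at $x$), the right-hand side becomes}
  &\phantom{\le}\ \int_{B(x,s)} \frac{|Du(y)|}{|x-y|^{n-1}}\,dy.
\end{align*}
```

So the $t^{n-1}/t^{n-1}$ insertion exists precisely to produce the polar volume element in the numerator and the $|x-y|^{n-1}$ in the denominator.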
maclaurin series with radical integral
April 8th 2009, 08:27 AM #1
Apr 2009
I am trying to get the first four terms of the Maclaurin Series for the following function:
integral from 0 to x of sqrt(1+t^3) dt
Sadly, I'm stuck trying to find the integral so that I can get the first f(0) term. Using u-substitution I get:
u = 1 + t^3
du = 3t^2 dt
dt= du / 3t^2
I think that I need to put 3t^2 in terms of u to be able to continue with the integration, but I cannot see a connection. If I make u = t^3 then would 3t^2 = 3u^2/3 ??
hello again
Remember, any integral from 0 to 0 (or in fact from a to a) is 0.
No one can do the integral
The Maclaurin series of f is $\sum_{n=0}^{+\infty} \frac{f^{(n)}(0)}{n!}\:x^n$
$f(0) = 0$
$f'(x) = \sqrt{1+x^3} \implies f'(0) = 1$
You can show that $f^{(2)}(0) = 0$ and $f^{(3)}(0) = 0$
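To get the requested terms directly, no closed-form antiderivative is needed: expand the integrand as a binomial series (valid for $|t|<1$) and integrate term by term, a standard maneuver here:

```latex
\sqrt{1+t^3} = (1+t^3)^{1/2}
  = 1 + \tfrac{1}{2}t^3 - \tfrac{1}{8}t^6 + \tfrac{1}{16}t^9 - \cdots
\quad\Longrightarrow\quad
\int_0^x \sqrt{1+t^3}\,dt
  = x + \frac{x^4}{8} - \frac{x^7}{56} + \frac{x^{10}}{160} - \cdots
```

This gives the first four nonzero terms of the Maclaurin series, and it is consistent with $f(0)=0$, $f'(0)=1$, and $f^{(2)}(0)=f^{(3)}(0)=0$ noted above.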
Kernel Conditional Graphical Model
Fernando Perez-Cruz, Zoubin Ghahramani and Massimiliano Pontil
In: Workshop “Graphical Models and Kernels”, Dec 2005, Whistler (Canada).
We address the general classification problem, in which the labels are an $L$-dimensional vector with $q$ possible values in each entry. The discriminative algorithms for solving this problem encode
the dependencies among the labels in a graph to reduce its complexity. We present a unifying framework that allows us to compare these algorithms. In the related literature, most papers are difficult
to follow, because their notation is not simple and can be misleading. Hence, our unifying framework is a main contribution of this paper. We will propose a new algorithm for this problem, which can
be trained independently per clique. Given that the cliques are responsible for the complete decision, we can train them using all the discriminative information in the training examples. As the
training is done independently per clique, we will be able to apply it to any graphical model and deal with large training datasets.
The Standard Cosmological Model
3.3 Structure Formation
The Friedmann-Lemaître model is unstable to the gravitational growth of departures from a homogeneous mass distribution. The present large-scale homogeneity could have grown out of primeval chaos,
but the initial conditions would be absurdly special. That is, the Friedmann-Lemaître model requires that the present structure - the clustering of mass in galaxies and systems of galaxies - grew out
of small primeval departures from homogeneity. The consistency test for an acceptable set of cosmological parameters is that one has to be able to assign a physically sensible initial condition that
evolves into the present structure of the universe. The constraint from this consideration in line 3c is discussed by White et al. [44], and in line 3b by Bahcall et al. ([45], [46]). Here I explain
the cautious ratings in line 3a.
As has been widely discussed, it may be possible to read the values of the cosmological parameters from measurements of the CBR anisotropy ([47] and references therein). This assumes Nature has kept the evolution of the early universe simple, however, and we have hit
on the right picture for its evolution. We may know in the next few years. If the precision measurements of the CBR anisotropy from the MAP and PLANCK satellites match in all detail the prediction of
one of the structure formation models now under discussion it will compel acceptance. But meanwhile we should bear in mind the possibility that Nature was not kind enough to have presented us with a
simple problem.
Figure 3. Angular fluctuations of the CBR in low density cosmologically flat adiabatic (dashed line) and isocurvature (solid line) CDM models for structure formation. The variance of the CBR
temperature anisotropy per logarithmic interval of angular scale l is (δT_l)^2, as in Eqs. (5) to (7). Data are from the compilation by Ratra [48].
An example of the possible ambiguity in the interpretation of the present anisotropy measurements is shown in Fig. 3. The two models assume the same dynamical actors -cold dark matter (CDM), baryons,
three families of massless neutrinos, and the CBR - but different initial conditions. In the adiabatic model the primeval entropy per conserved particle number is homogeneous, the space distribution
of the primeval mass density fluctuations is a stationary random process with the scale-invariant spectrum P(k) ∝ k, and the cosmological parameters are h = 0.625 (following [49]). The isocurvature initial
condition in the other model is that the primeval mass distribution is homogeneous - there are no curvature fluctuations - and structure formation is seeded by an inhomogeneous composition. In the
model shown here the primeval entropy per baryon is homogeneous, to agree with the standard model for light element production, and the primeval distribution of the CDM has fluctuation spectrum P(k) ∝ k^m.
The cosmological parameters are h = 0.7. The lower density parameter produces a more reasonable-looking cluster mass function for the isocurvature initial condition [50]. In both models the density
parameter in baryons is Ω_B = 0.03, the rest in CDM, with m = -3 in Eq. (16). The tilt to m = -1.8 requires only modest theoretical ingenuity [51]. That is, both models have pedigrees from commonly discussed
early universe physics.
The lesson from Fig. 3 is that at least two families of models, with different relations between l at the peak, come close to the measurements of the CBR fluctuation spectrum, within the still
substantial uncertainties. An estimate of T[l] in progress should be capable of distinguishing between the adiabatic and isocurvature models, even given the freedom to adjust the shape of P (k). The
interesting possibility is that some other model for structure formation with a very different value of
I assign a failing grade to the Einstein-de Sitter model in line 3a because the adiabatic and isocurvature models both prefer low density ([52], [53]). I add question marks to indicate this still is a
model-dependent result. | {"url":"http://ned.ipac.caltech.edu/level5/Peebles1/Peeb3_3.html","timestamp":"2014-04-18T08:10:03Z","content_type":null,"content_length":"7807","record_id":"<urn:uuid:a7b0489f-8e1b-4654-bccf-52c44205d6d5>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00306-ip-10-147-4-33.ec2.internal.warc.gz"} |
Secant to Tangent Applet
This applet is designed to build the student's visual intuition about the line tangent to a curve at a point being the limit of the secant lines through that point. The x-coordinate of the point, x_0
is controlled by a slider, as is delx, the distance to x_1. To change functions, right-click (control-click for Mac users) on the function and select "redefine" from the pop up menu. Similarly, the
viewing window can be modified by right-clicking on empty space in the drawing pad. From the pop up menu for the drawing pad, select properties.
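The limit the applet illustrates can also be checked numerically. The short Python sketch below (my own illustration, not part of the applet) computes secant slopes for f(x) = x^2 at x_0 = 1 with shrinking delx; they approach the tangent slope f'(1) = 2.

```python
def secant_slope(f, x0, dx):
    # Slope of the secant line through (x0, f(x0)) and (x0 + dx, f(x0 + dx))
    return (f(x0 + dx) - f(x0)) / dx

f = lambda x: x ** 2
slopes = [secant_slope(f, 1.0, 10.0 ** -k) for k in range(1, 6)]
# delx = 0.1, 0.01, 0.001, ... gives slopes approaching f'(1) = 2
```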
Mike May, S.J., 2/19/2006, Created with GeoGebra
GeoGebra is a GNUed software package for mathematics visualization. The home for the applications is http://www.geogebra.org.
Return to the Applets for courses below calculus page.
Return to the Calculus Applet page.
Return to the GeoGebra Applet page.
Last updated By Mike May, S.J., August 18, 2007. | {"url":"http://www.slu.edu/classes/maymk/GeoGebra/SecantToTangent.html","timestamp":"2014-04-19T22:30:45Z","content_type":null,"content_length":"5646","record_id":"<urn:uuid:bb14b7db-d1cb-4ff4-9009-1fc0e2aa5282>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00258-ip-10-147-4-33.ec2.internal.warc.gz"} |
capacitor reactance - diyAudio
Actually the formula you should use is:
f = 1/(2*PI*R*C)
or: C = 1/(2*PI*f*R)
f = -3dB point.
R= Rk'//Rk
Rk' = (Rp+ra)/(mu+1) (Common-cathode stage)
Rp = Plate resistance
ra = Ri or internal resistance of the tube.
Rk = cathode bias resistor.
Note that C should always be much larger than Rk but not much larger than necessary for the lowest -3dB point wanted: t = R*C | {"url":"http://www.diyaudio.com/forums/tubes-valves/41239-capacitor-reactance.html","timestamp":"2014-04-18T09:37:30Z","content_type":null,"content_length":"51725","record_id":"<urn:uuid:cdfb9c93-68c1-48ac-a22b-5419218188c9>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00331-ip-10-147-4-33.ec2.internal.warc.gz"} |
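As a worked example of these formulas (the component values below are my own assumptions for a generic 12AX7-style common-cathode stage, not numbers from this thread):

```python
import math

# Assumed tube/circuit values (illustrative only)
mu = 100.0      # amplification factor
ra = 62.5e3     # internal (plate) resistance of the tube, ohms
Rp = 100e3      # plate load resistor, ohms
Rk = 1.5e3      # cathode bias resistor, ohms

Rk_prime = (Rp + ra) / (mu + 1)         # Rk' = resistance looking into the cathode
R = Rk_prime * Rk / (Rk_prime + Rk)     # R = Rk' // Rk
f3 = 20.0                               # desired -3 dB point, Hz
C = 1 / (2 * math.pi * f3 * R)          # bypass capacitor, farads (about 10 uF here)
```

Picking a lower f3 drives C up quickly, which is why the thread advises against making C much larger than the lowest -3 dB point actually requires.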
Charlestown, PA
Find a Charlestown, PA ACT Tutor
...In between formal tutoring sessions, I offer my students FREE email support to keep them moving past particularly tough problems. In addition, I offer FREE ALL NIGHT email/phone support just
before the “big” exam, for students who pull “all-nighters”. One quick note about my cancellation policy...
14 Subjects: including ACT Math, physics, ASVAB, calculus
...I have successfully taught speech, phonics, reading comprehension, grammar, rhetoric, literature (poetry and fiction), and writing. I encourage students to incorporate their own personal
interests into our discussions because I believe that the study of language is not merely a memorization task...
17 Subjects: including ACT Math, reading, English, grammar
...I provide extensive work in the area of vocabulary development, including synonyms, antonyms, and word analogies. All aspects of grammar are also an integral part of the SAT test preparation
program which includes identifying parts of speech, different sentence structures, and punctuation. My p...
51 Subjects: including ACT Math, reading, English, Spanish
...My personal notes will help the student master the basics, then expand to harder problems. It's not necessary to do all the problems, but you MUST get the easy and intermediate ones right!
Most people don't write the way they speak.
35 Subjects: including ACT Math, chemistry, English, physics
...Identify and solve exponential and logarithmic functions. Learn to perform the steps to solve equations and graph solutions. Understand FOIL. Discover methods for factoring trinomials quickly and easily.
27 Subjects: including ACT Math, calculus, geometry, statistics
Related Charlestown, PA Tutors
Charlestown, PA Accounting Tutors
Charlestown, PA ACT Tutors
Charlestown, PA Algebra Tutors
Charlestown, PA Algebra 2 Tutors
Charlestown, PA Calculus Tutors
Charlestown, PA Geometry Tutors
Charlestown, PA Math Tutors
Charlestown, PA Prealgebra Tutors
Charlestown, PA Precalculus Tutors
Charlestown, PA SAT Tutors
Charlestown, PA SAT Math Tutors
Charlestown, PA Science Tutors
Charlestown, PA Statistics Tutors
Charlestown, PA Trigonometry Tutors
Nearby Cities With ACT Tutor
Chesterbrook, PA ACT Tutors
Devault ACT Tutors
Eagle, PA ACT Tutors
Frazer, PA ACT Tutors
Gulph Mills, PA ACT Tutors
Ithan, PA ACT Tutors
Kimberton ACT Tutors
Linfield, PA ACT Tutors
Rahns, PA ACT Tutors
Romansville, PA ACT Tutors
Saint Davids, PA ACT Tutors
Southeastern ACT Tutors
Strafford, PA ACT Tutors
Upton, PA ACT Tutors
Valley Forge ACT Tutors | {"url":"http://www.purplemath.com/Charlestown_PA_ACT_tutors.php","timestamp":"2014-04-17T11:13:26Z","content_type":null,"content_length":"23788","record_id":"<urn:uuid:6af69a92-1aa8-4c09-ac69-f10522e6ed81>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00145-ip-10-147-4-33.ec2.internal.warc.gz"} |
Posts by
Posts by dany
Total # Posts: 12
The probability that a positive divisor of 60 is greater than 9 can be written as a/b, where a and b are coprime positive integers. What is the value of a+b?
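A brute-force check of this problem (my own sketch, not part of the original post): 60 has 12 positive divisors, 6 of which exceed 9, so the probability is 6/12 = 1/2 and a + b = 3.

```python
from fractions import Fraction

divisors = [d for d in range(1, 61) if 60 % d == 0]
big = [d for d in divisors if d > 9]        # 10, 12, 15, 20, 30, 60
prob = Fraction(len(big), len(divisors))    # 6/12 reduces to 1/2
answer = prob.numerator + prob.denominator  # a + b
```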
Ariel makes a sandwich using four kinds of Italian lunch meat: A, B, C and D and two kinds of Italian cheese: X and Z. Ariel's sandwich has a single layer of each type of meat and a single layer of
each kind of cheese, but he also wants to make sure that the two types of c...
determine the second term of an A.P. whose sixth term is 12 and whose eighth term is 22?
chemistry 12
2Li^+ + 2I^- ==> 2Li + I2: would I^- be oxidised and Li^+ be reduced in this reaction?
chemistry 12
use the standard reduction potentials table to balance the following redox equation: H2O2 + I^- + H^+ ==> H2O + I2. Would what I did be correct? H2O2 + I^- + H^+ ==> H2O + I2; H2O2 + H^+ ==> H2O and I^- ==> I2 give us H2O2(aq) + 2H^+(aq) + 2e^- ==> 2H2O(l) and 2I^-(aq) ==> I2(s) + 2e^-...
math grade 11
How many conformations does monosodium glutamate (MSG) have and why?
electrical physics
In a hydrogen atom, if an electron jumps from level n2 to level n1, prove that the wave number (reciprocal of the wavelength) of the emitted radiation is: 1/λ = (m e^4 / 8 ε0^2 h^3 c)(1/n1^2 - 1/n2^2)
What is the electron's velocity that makes its momentum equal to the momentum of a photon whose wavelength is 5200 Å?
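For that last question, a non-relativistic sketch (my own working, not from the original post): set the electron momentum m_e·v equal to the photon momentum h/λ.

```python
h = 6.62607015e-34      # Planck constant, J*s
m_e = 9.1093837015e-31  # electron rest mass, kg
lam = 5200e-10          # 5200 angstroms in metres

v = h / (m_e * lam)     # from m_e * v = h / lambda; roughly 1.4e3 m/s
```

Since v is far below c, the non-relativistic momentum formula is justified.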
I can't understand why a ball bounces many times before coming to rest
how do you evaluate (3/4)*-3
Grammatical Corrections in my Essay...?
The mysterious statue One day one guy named Chad he buys a garden a really big garden and he built a cemetery it was the most famous cemetery in all America!! It was really clean, preatty, with so
many roses that the entire garden smells to roses they have water fountains tha... | {"url":"http://www.jiskha.com/members/profile/posts.cgi?name=dany","timestamp":"2014-04-21T00:34:30Z","content_type":null,"content_length":"8232","record_id":"<urn:uuid:b43f4918-b8f2-421e-8f47-616d27379166>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00325-ip-10-147-4-33.ec2.internal.warc.gz"} |
Rahns, PA Prealgebra Tutor
Find a Rahns, PA Prealgebra Tutor
...I have received several undergraduate poetry prizes, including First Place in Christianity & Literature's Student Writing Contest. I was also a Research Assistant for Dr. Foster, whom I
assisted in preparing manuscripts on Renaissance literature.
38 Subjects: including prealgebra, reading, Spanish, writing
...Solve problems involving decimals, percents, and ratios. 4. Solve problems involving exponents. 5. Solve problems involving radicals. 6.
27 Subjects: including prealgebra, calculus, ACT Math, economics
...I graduated high school in the top 10% of my class with a GPA over 100.00 and graduated college with honors while majoring in Chemistry. While in high school, I obtained a 4.0 in both honors chemistry and advanced chemistry.
19 Subjects: including prealgebra, chemistry, English, algebra 2
...My excellent mathematical skills and over 15 years of tutoring experience emphasize that I am qualified to tutor in various subjects, including: * Elementary Math (Grades 3 - 5) * Middle
School Math (Grades 6 - 8) * Pre-Algebra * Algebra I & II * Geometry * Trigonometry ...
13 Subjects: including prealgebra, calculus, geometry, GRE
...I run my own training and coaching business and an quite familiar with marketing functions. In addition, I have also taught Marketing principals at the Lansdale School of Business. I have
several years' experience using Outlook.
9 Subjects: including prealgebra, public speaking, Microsoft Word, Microsoft PowerPoint
Related Rahns, PA Tutors
Rahns, PA Accounting Tutors
Rahns, PA ACT Tutors
Rahns, PA Algebra Tutors
Rahns, PA Algebra 2 Tutors
Rahns, PA Calculus Tutors
Rahns, PA Geometry Tutors
Rahns, PA Math Tutors
Rahns, PA Prealgebra Tutors
Rahns, PA Precalculus Tutors
Rahns, PA SAT Tutors
Rahns, PA SAT Math Tutors
Rahns, PA Science Tutors
Rahns, PA Statistics Tutors
Rahns, PA Trigonometry Tutors
Nearby Cities With prealgebra Tutor
Charlestown, PA prealgebra Tutors
Congo, PA prealgebra Tutors
Creamery prealgebra Tutors
Delphi, PA prealgebra Tutors
Eagleville, PA prealgebra Tutors
Englesville, PA prealgebra Tutors
Fagleysville, PA prealgebra Tutors
Gabelsville, PA prealgebra Tutors
Graterford, PA prealgebra Tutors
Gulph Mills, PA prealgebra Tutors
Linfield, PA prealgebra Tutors
Morysville, PA prealgebra Tutors
Trappe, PA prealgebra Tutors
Valley Forge prealgebra Tutors
Zieglersville, PA prealgebra Tutors | {"url":"http://www.purplemath.com/Rahns_PA_prealgebra_tutors.php","timestamp":"2014-04-16T04:39:36Z","content_type":null,"content_length":"23951","record_id":"<urn:uuid:7ebefcfe-5ebd-4502-b0e8-fdecb4d87d01>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00356-ip-10-147-4-33.ec2.internal.warc.gz"} |
and SubQuantum Field Theory
ANNO May 18, 1998
1 On the SubQuantum Paradigm and SubQuantum Field Theory – SubQFT
1.1 «Is it crazy enough to be correct?» (Niels Bohr)
Judge yourself. The unified field is a Faraday–Maxwell electromagnetic field. Maxwell–Lorentz equations (ML-equations) for potentials in standard 4-dimensional form are satisfied precisely. This is
achieved by involving new subquantum fundamental field sources, strict definition of which requires higher dimensionality. Subquantum charged currents in the right side of ML-equation correspond to
virtual vacuum currents of Quantum Electrodynamics but are determined independently. Source currents do not «see» each other to such an extent that at each point of space in any direction there is a
couple of currents of unlike charge signs. Each such charged source moves in the field in accordance with subquantum law of motion, and each of them, independently of the others, generates field in
accordance with ML-equations.
The law of motion may be found from the condition of existence of a solution in the form of stable electrons with «required» properties. The law of motion sought is strictly defined precisely by the
condition of electron generation! If an electron exists it follows that protons, atoms and the Universe exist too! It exists, hence it is a solution for the joint system of the ML-equations and the
subquantum source motion equations. All particles and fields – they are just visible exhibitions of interaction between the field and its sources!
The Quantum Theory answered the question: – How atom is possible?
Subquantum Field Theory (SubQFT) has been summoned to answer the question: – How electron is possible?
According to the initial semiclassical model of Niels Bohr, – the atom of hydrogen was becoming possible in theory provided it had the quite definite (by the postulates of Bohr) set of stationary
electronic orbits for which the validity of the ML-equations was partly abolished and full absence of radiation for these stationary states of atom was postulated. By that time the ML-equations
inside electron had already been «temporarily» abolished.
The further development of quantum physics led to the development of new quantum «kinematics» (both non-relativistic, and relativistic), that abolished the detailed space-time description of
electrons motion along continuous classical trajectories. Only within the framework of such «truncated» kinematics of electrons and positrons was it possible to «consistently preserve» the validity
of the ML-equations. Developed within the frameworks of Quantum Electrodynamics the procedures of renormalization of electron charge and mass have, in a sense, «closed» the development of Quantum
Electrodynamics and led it to its utmost logical «conclusion».
The construction of SubQFT begins with the development of subquantum model of electron – the basic, maximum symmetric and easier than others arranged object of our quantum world. The subquantum model
of based single electron connects its possibility with the presence at a subquantum level of the quite definite set of stationary charged subcurrents which also do not radiate at their accelerated
(hyperbolic) motion in a field of electron. These very charged subcurrents are the true sources of the unified field which are included in the right parts of the ML-equations. The reconstruction of
both subcurrents as the sources of the unified field and their mathematical and physical properties leans, mainly, on the symmetries of the ML-equations.
The basic part in SubQFT, after Lorentz's group, is played by hyperbolic symmetry of the ML-equations, – as the stationary and conservative subquantum structure can be constructed only and solely
from the subcharges that move hyperbolically (uniformly accelerated) and are completely deprived of radiation. Only among entirely hyperbolic subquantum structures should one search for absolutely
stationary and steady structures of single and based electron and a proton.
It is postulated, that the charged continuously distributed in space subquantum sources of a unified field fill with themselves all physical space (with the exception of very small areas of
«inaccessibility» in the central parts of electron, proton…), which in classical electrodynamics did not assume any charged sources of a field outside the «self»-charge of electron, but in Quantum
Electrodynamics was filled with vacuum currents of virtual electrons and positrons with conformable kinematics.
At any distance from the center of symmetry of electron and in any spatial direction (two angular parameters) from this point there exists a pair of unlike-charged subcurrents of quite certain
density. Such a «superdense» packing by the charged subcurrents of «empty» from the charged particles space – subquantum aether, – is a characteristic feature of SubQFT and, in particular, of
subquantum model of electron. This «superdense» packing of each point of a subquantum aether by subcurrents is formed by two two-parametrical sets of quite classical continuous (hyperbolic)
trajectories formed by the subcurrents that move in electron field, go through this point and rest their ends against infinity.
At spatial infinity from the center of electron, where its field comes to zero, the set of subcurrents possesses the greatest possible symmetry. There the sets or positively and negatively charged
subcurrents are equal to each other and isotropic (do not depend on direction), and general for all the subcharges rate of movement is equal to the velocity of light. Coming from infinity at the
velocity of light, the subcharges of both signs dissipate on a field of electron on hyperbolic trajectories, coming back into infinity. Depending on the sign of a charge and on the aiming parameter
of their hyperbolic trajectory of motion in relation to the center of electron, subcharges reach their minimal apical distance to the center, where they have the minimal apical speed, and turn back
into infinity on the other side of hyperbole.
While the stationary electronic orbits in the atom of Bohr were incompatible with the ML-equations, the hyperbolic motion of subcharges in electron field takes place without radiation in full
conformity to the ML-equations.
(In both Russian and French versions the characteristic features of Subquatum Paradigm are described more fully and in more detail!)
Last modifications: January 11 2000 RU FR Back to Contens
1.2 New Perusal of the Gospel of Maxwell
Maxim Karpenko concludes his book «Universum Sapiens» with the following paragraph:
Not so long ago, one physicist told the story happened to him. In that strange state of somnolence, when the most unbelievable visions appear, the God himself has emerged in front of him. The
physicist is always the physicist, and he, with an inherent in any scientist passion for new knowledge at any time and place has got into a conversation with the God trying mostly to clear up the
attitude of the God towards certain physical concepts. Maxwell's equations have also been mentioned in this or that connection. At the end of the conversation, when the physicist tried to get the
appraisal of our attempts to render the real picture of the World from the highest authority, the God said: «You have the book written thousands years ago – the Gospel. So, both the Gospel and
Maxwell's equations equally correlate with the truth». That is why, while it is not the only reason, I wish to conclude this book with the words of my favorite Richard Bach: «Everything in this book
may be wrong».
Vladimir Vizgin wrote in his remarkable work «The dogma of belief of physicist-theorists»:
Commenting on the present situation of relationships between physics and mathematics, the well-known Russian mathematician, academician Vladimir Arnol'd wrote about its relationship to a state of
affairs in Newton's epoch: «Fundamental physical laws are simply described in purely geometrical terms. This fact (remaining enigmatic today as well) has struck Newton so, that he thought it to be
the proof of existence of the God»…
Max von Laue recalled that at the end of the XIX and the beginning of the XX centuries, such physicists as Ludwig Boltzmann, Heinrich Hertz, Max Planck and others spoke in this very vein about the
equations of Maxwell: «The understanding of how the most difficult and various phenomena are mathematically brought to such firm and harmoniously wonderful equations of Maxwell, is one of the
strongest experiences accessible to a person». Boltzmann quoted once the verses regarding these formulas: «Was it not the God who wrote these signs, that have calmed alarm of my soul and have opened
to me a secret of nature?» (from «Faust» Goethe. – V.V.).
The translation from Russian was made by Masha and Natasha Zazerska RU FR Back to Contens
Last modifications: March 24 2003
1.3 «We are extremely lucky… what we do now» (Richard Feynman)
Just one paragraph from the physical bestseller «The Character of Physical Law» by Richard Feynman:
We are extremely lucky to live in the century when it is still possible to make discoveries. It is like the discovery of America, which may be discovered only once and forever. The century we live in
is the century of discovery of the basic laws of nature, and this time will never come again. It is a wonderful time, the time of emotions and delights, but it will be over one day. Of course in the
future interests will be different. People will be interested in interrelations between phenomena of different levels – biological, etc., or, if to take discoveries, in investigation of another
planets, but in any case it will not be the same as what we do now. [18]
Last modifications: January 12 2000 RU FR Back to Contens
1.4 «I Hope and Believe» (Konrad Lorenz)
I don't imagine, that I can give the knowledge
To improve people and to put them on the right path.
Unlike Faust, I fancy, that I could give something, that would both teach people and put them on the right path. This thought doesn't seem to me too arrogant. At least it is less arrogant, then the
reverse one, – if the latter comes not from conviction, that you can't teach people, but from the assumption that «these people» are not able to understand the new study. This happens only in
extraordinary cases, when some genius surpasses his time by centuries. If somebody is listened to by his contemporaries and even his books are read by them, then it can be asserted with confidence
that this is not the genius. At best, he can please himself with the thought, that he has something «on business» to say. All that can be said works in the best way just when a speaker only slightly
surpasses listeners with his new ideas. Then they react with the thought: «That is it, I could have guessed it myself!» [23,Ch.14]
The situation with the ideas of subquantum field theory and the dynamics of their perception has regular features of instincts' collisions and drama of ideas. It can't be said that nobody reads them.
People read them, but… for the most part don't react with the thought: «That is it, I could have guessed it myself!» Can it be, that the author of the subquantum paradigm is the genius of the higher
class by Stanislav Lem's classification? Fortunately, – this is not so! Mainly because the author does not surpass his time, on the contrary, – he is behind his time. And this gap can be, by
different estimates, in the interval from 50 to 100 years. The most probable and reliable estimate of this gap is 90 years. The natural time of development of the subquantum field theory – SubQFT –
could have been the years from 1909 till 1914!
Though under another name, but SubQFT first saw the light of the day in 1908 in the works of Italian mathematician Tullio Levi-Civita [7]. The destiny of his ideas was settled (solved) far north from
sunny Italy – behind the fortress walls of Goettingen. The key figures of this drama of ideas in Goettingen were the mathematics professors: David Hilbert and Hermann Minkowski. All this, that seemed
to be the drama of ideas to the academical life romantics, in reality has all the features of instincts' collisions and launched in complete accordance with the customs of Romans, involved in the
actions around the gladiatorial fights in the Rome Coliseum.
Minkowski skillfully realized the finishing (trimming) of the results, obtained in the series of innovative works of his predecessors: Lorentz–Poincaré–Einstein, using the already ready tooling of
Italian mathematicians [10]. Minkowski built up a new dwelling (frame) for electromagnetic field and its sources, – the World of Minkowski. After that great moment of geometrization the symmetry was
naturally observed (kept) – the symmetry of electromagnetism equations, written down in 4-vector's form, relatively to transformations from Lorenz group, the godfather of which was Henri Poincaré.
Minkowski attracted attention to kinematic excellence of hyperbolic motion of field sources. He came across the possibility of using one more symmetry of Maxwell–Lorentz equations as the forming one
in electrodynamics. He had already used splendidly the first symmetry, building the World, that lately was called by his name. So the next one was on the waiting list… – symmetry of hyperbolic motion
of sources, the symmetry, keeping conservatism of the field created by these sources. During his last «mathematics walk» on Thursdays, exactly a week before his own funeral, Minkowski spoke «with
particular vivacity» [to the mathematics professors of Goettingen] about his last results in electrodynamics. [12] At noon of the next Tuesday, on 12th of January 1909 he passed away.
After the death of Hermann Minkowski at the suggestion of David Hilbert, Max Born became the person empowered to act for Mrs. Minkowski in the work of publishing the physics works of her husband.
[12] There were, amongst others, draft notes and outlines, left by Minkowski about the hyperbolic motion, as also the reliable evidences of the reaction of Minkowski's Genius about the ideas and the
program of Levi-Civita.
The work of Born on hyperbolic motion [3] is conclusive evidence of Hilbert's distinctly negative attitude to Minkowski's creative plans on this subject. Undoubtedly, this work [3] pursued several aims at once, clearly formulated and put before Born by Hilbert. It was required to set out «the right point of view» on the nature of space, linked with the dynamics of absolutely rigid bodies (rulers), at least in the small. It was necessary to interweave this theory in a very natural way with hyperbolic motion, and to prevent the use of this weapon by those who might take it into their heads to sacrifice the rigid to some fluid. But, with all that, it was necessary to keep secret the fact that a mathematics professor of Goettingen and Hilbert's colleague took part in «the plot against the mind».
And this is not at all a hyperbole. In everything, where he saw obstacles in the way of his mission accomplishment, Hilbert didn't admit any compromises and acted very cruelly. He didn't consider his
colleagues to have the right on their own choice of forms and means of (mathematical) truths' perception, the choice, which could be different from his own one – «the only possible» and «absolutely
correct». Think about his painful reaction on Brouwer intuitionism.
Minkowski, Levi-Civita and their ideas about the further geometrization of the description the nature of sources surpassed their time! Neither Minkowski nor Levi-Civita knew then in full measure,
that they made the daring attempt of the movement against the main direction of the physics thinking development in the beginning of XX century. The powerful dominating stream of efforts on the
boundless strengthening of ATOMISM was already gathering the strength that time. The XXth century in the history of naturally-scientific thought – is the century of complete domination of ATOMISTIC
INSTINCT in the depths of PERCEIVING MIND of its acknowledged leaders.
Last modifications: November 24 2002 RU FR Back to Contens
2 Just a Stroke to a Portrait…
«Dostoevsky gives me more than any scientist, more than Gauss!» These words of Einstein, as rendered by Alexander Moshkowski, are unexpected and hardly suitable for comprehension. But let us imagine ourselves in the role of Einstein reading «The Gambler» of Fyodor Mikhailovich Dostoevsky and sharing with the main character of the novel the strain and the excitement that raise a strange feeling, a challenge to the destiny
, a desire to give it a fillip or to show the tongue to it. Both Dostoevsky and Einstein achieved trustworthiness of the most paradoxical transformations. Their works are rich with intuitive and
extra-logical judgments, developments of the plot and actions of characters. Dostoevsky gave the ethic stimulation to the creator of the unified theory, strengthened his cosmic religious feeling.
Last modifications: January 12 2000 RU FR Back to Contens
4 Electromagnetic Asymmetry in Hadron-Antihadron Pairs.
Appeal to an EXPERIMENT
Qualitative analysis of subquantum level field scalar component manifestation in our corpuscular (quantified) world makes us to anticipate violation of symmetry of magnetic moment values in
proton-antiproton (hadron-antihadron) pairs, masses at rest and other low-energy parameters.
Big and laborious work is necessary to make up theoretical description of this phenomenon. An experiment can say its final word prior to any reliable theoretical results in this area.
Most accessible schemes of experimental situations should be analyzed and shown, as well as existing experimental result data bases should be examined in order to analyze anticipated asymmetry.
Now it's your turn, Your Majesty EXPERIMENT!
Last modifications: January 12 2000 RU FR Back to Contens
The Literature Quoted:
3. Born M. Ann. d. Phys., 1909, Bd 30, S. 1
7. Levi-Civita T. Sui campi elettromagnetici puri, C. Ferrari, Venezia 1908; Sulle azioni meccaniche etc., Rendiconti d. R. Accad. dei Lincei 18, 5a.
10. Ricci G., Levi-Civita T. Math. Ann. 1901, v. 54, p. 125
12. Reid C. HILBERT (With an appreciation of Hilbert's mathematical work by Hermann Weyl), Springer – Verlag, 1970
18. Feynman R. The Character of Physical Law, Cox and Wyman LTD, London 1965
23. Lorenz K. Das sogenannte Böse (Zur Naturgeschichte der Agression), Taschenbuch Verlag, München
Primary website – http://www.ltn.lv/~elefzaze/
html/php makeup by Alexander A. Zazerskiy
©1998–2005 Alexander S. Zazerskiy | {"url":"http://reocities.com/CollegePark/center/3086/","timestamp":"2014-04-19T12:01:13Z","content_type":null,"content_length":"45885","record_id":"<urn:uuid:49b22e14-42d7-4359-8ac6-599f5fd23822>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00416-ip-10-147-4-33.ec2.internal.warc.gz"} |
Riemann-Siegel Theta Function Approximation
This is my first post on the physicsforums so go easy on me :)
I am writing a simple program to generate the zero's of the Riemann zeta function accurately.
However I need the first say, ten terms of the theta function
[tex]\theta\left(x\right) = \arg\left(\Gamma\left(\frac{2ix+1}{4}\right)\right)-\frac{x\ln\pi}{2}[/tex]
to get an acceptably accurate answer.
Wikipedia gives an approximation here;
but I need a larger expansion of the series. I tried to get MATLAB to generate the terms but am having no luck.
The algorithm is basically using Siegels Z-function and detecting a change of sign.
However, as the language I am using (a very basic pseudocode) is incapable of calculating the gamma function, I need the theta function to be expanded in the way it has been on the wiki page but with
more terms so the algorithm can calculate the value approximately.
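For reference, here is a hedged Python sketch of the approach described above (not the poster's code). It uses the asymptotic expansion θ(t) ≈ (t/2)ln(t/2π) − t/2 − π/8 + 1/(48t) + 7/(5760t³), the Riemann–Siegel main sum for Z(t) with the first remainder term, and a bisection on a sign change of Z; it lands close to the first nontrivial zero at t ≈ 14.1347.

```python
import math

def theta(t):
    # Asymptotic expansion of the Riemann-Siegel theta function
    return (t / 2) * math.log(t / (2 * math.pi)) - t / 2 - math.pi / 8 \
        + 1 / (48 * t) + 7 / (5760 * t ** 3)

def Z(t):
    # Riemann-Siegel Z-function: main sum plus the first remainder term
    a = math.sqrt(t / (2 * math.pi))
    N = int(a)
    s = sum(math.cos(theta(t) - t * math.log(n)) / math.sqrt(n)
            for n in range(1, N + 1))
    p = a - N
    psi = math.cos(2 * math.pi * (p * p - p - 1 / 16)) / math.cos(2 * math.pi * p)
    return 2 * s + (-1) ** (N - 1) * psi / math.sqrt(a)

# Bisect on the sign change of Z(t) between t = 14 and t = 14.3
lo, hi = 14.0, 14.3
for _ in range(60):
    mid = (lo + hi) / 2
    if Z(lo) * Z(mid) <= 0:
        hi = mid
    else:
        lo = mid
zero = (lo + hi) / 2   # close to 14.1347...
```

With only the main sum and one remainder term the located zero is approximate; adding more remainder terms, or more terms of the theta expansion, tightens it.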
Sorry if this doesn't really make sense, but I am hoping someone here can help. | {"url":"http://www.physicsforums.com/showthread.php?p=2415835","timestamp":"2014-04-18T13:54:01Z","content_type":null,"content_length":"25938","record_id":"<urn:uuid:64a22ea9-6852-4e8e-b486-f45968cfe51e>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00588-ip-10-147-4-33.ec2.internal.warc.gz"} |
Buffalo II & transformers - Page 2 - diyAudio
AN_912.pdf, right hand paragraph, top of the page: "Equation 4 through Equation 7 can be used to predict the impedance seen by each DAC output (ZNORM and ZCOMP)".
If no R0 is used it should be comparable to a resistor with infinite resistance?
RL<<R0 -> Znorm = RL/4N^2 (Very approximate)
Given the LL1674, N=4 -> Znorm = RL/64
Vs= (sqrt(2)*Imax/2)*(N*R0*RL/(RL+2R0*N^2))
Approximate R0 with infinity and we get Vs = (sqrt(2)*Imax/2)*(RL/(2*N))
We want Vs=2 and Imax is 4mA I think?
Plug this in and Znorm=88ohm seen by the dac. Do we just add the DC resistance to this, making Znorm=121ohm?
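For what it's worth, a quick numeric check of the working above (my own sketch; the 33 Ω DC resistance is an assumption inferred from the 121 − 88 Ω difference, not a datasheet value):

```python
import math

N = 4          # LL1674 turns ratio
Vs = 2.0       # target secondary voltage
Imax = 4e-3    # assumed DAC full-scale current, amps

# R0 -> infinity limit: Vs = (sqrt(2)*Imax/2) * RL / (2*N), solved for RL
RL = Vs * 2 * N / (math.sqrt(2) * Imax / 2)
Znorm = RL / (4 * N ** 2)       # impedance seen by each DAC output (~88 ohm)
R_dc = 33.0                     # assumed winding DC resistance
Z_total = Znorm + R_dc          # ~121 ohm, matching the figure in the post
```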
I'm waiting for the guys over at TPA to start taking orders for the B-II.
Now, either I've done the equations wrong, or else it doesn't seem like such a perfect match any more?
In the thread discussing the B-II it's said the load seen by the DAC is very important. It needs to be very low if the advantage of current-out mode is to be gained. 121 ohm isn't very low in this context.
Last edited by markusA; 5th April 2010 at 08:55 AM.
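To sanity-check the arithmetic in the post above, here is a small Python sketch using the same equations. The 4 mA output current and N = 4 are the poster's assumptions from the thread, not values verified against AN_912 here.

```python
import math

# Load calculation from the thread's equations, with R0 -> infinity:
# Vs = (sqrt(2)*Imax/2) * RL/(2N), and Znorm = RL/(4N^2).
N = 4            # LL1674 turns ratio (per the thread)
Imax = 4e-3      # assumed DAC output current, A
Vs_target = 2.0  # desired secondary voltage, V

# Solve Vs = (sqrt(2)*Imax/2) * RL/(2N) for RL:
RL = Vs_target * 2 * N / (math.sqrt(2) * Imax / 2)
# Impedance reflected to each DAC output:
Znorm = RL / (4 * N ** 2)
print(round(RL), round(Znorm))  # roughly 5657 ohm and 88 ohm
```

This reproduces the 88 ohm figure in the post; whether the transformer's DC resistance simply adds to it is a separate question for the datasheet.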
Theoretical aspects of learning
Theoretical aspects of spaced repetition in learning
Piotr Wozniak
Note: This text was derived from P.A. Wozniak, Optimization of learning: Simulation of the learning process conducted along the SuperMemo schedule (1990) and has been updated with revised figures (the original text included additional figures related to the forgetting rate, which had been significantly overestimated due to an error in the implementation of the simulation model).
This article should help you plan your learning and better understand your lifetime capacity for learning new things. Most of the figures and formulas have been theoretically derived. However, over
the last ten years, these theoretical constructs have been confirmed many times by exact measurements taken during an actual learning process.
A simple simulation model makes it possible to predict the outcome of a long-term learning process based on spaced repetition. Probability of forgetting at each repetition is determined by the
forgetting index. By using a Spaced Repetition Algorithm and a real distribution of element difficulty (A-Factor Distribution), it is possible to predict the course of learning over many years by
means of computer simulation (note that you can run a similar simulation of your own learning process based on your own real learning data in SuperMemo 98 and later with Tools : Statistics : Simulation).
The simulation model takes the following assumptions:
1. Learning proceeds along a standard repetition spacing algorithm (e.g. Algorithm SM-11)
2. A bell-shaped distribution of A-factors is taken from a generic knowledge system created with SuperMemo
3. The matrix of optimal factors is taken from a generic knowledge system and does not change in the course of the learning process
4. At repetitions, a specified portion of items, determined by the forgetting index, is taken as forgotten and reenters the process without a change to their A-factors
The above assumptions eliminate the following problems that might otherwise be encountered while trying to estimate the results of a long-term learning process:
1. The variability of individual mnemonic skills can be entirely encompassed by the distribution of A-factors (Point 2). After all, the same knowledge system used by a skilled student will show a
greater proportion of higher A-factors
2. The variability of the difficulty of the studied material, which again, can entirely be reflected by the distribution of A-factors (Point 2)
3. The variability of the mnemonic capability of the brain as a result of training, which is discounted by using a constant distribution of A-factors (Point 2)
4. The variability of the mnemonic capability of the brain with aging, which can be discounted by using a constant value of the matrix of optimal factors (Point 3). A significant loss of memory with
aging can be observed only as a result of a pathological process or because of lack of training (Restak 1984). Otherwise, the mnemonic capability of the brain is likely to increase with age as a
result of training!
For simplicity of the description, in the following paragraphs I will use the term generic material, meaning a learning material with a typical distribution of A-factors. It is important to notice
that the term reflects also the mnemonic capability of the student. This comes from the fact that good students tend to exhibit a greater proportion of high A-factors in their collections.
Here is the short summary of conclusions that could be drawn from simulation experiments based on the discussed model:
Figure 1 Learning curve for a generic material, forgetting index equal to 10%, and daily working time of 1 minute
• In a long-term process, for the forgetting index equal to 10%, the average rate of learning for generic material can be approximated to 200-300 items/year/min, i.e. one minute of learning per day
results in acquisition of 200-300 items per year. Users of SuperMemo usually report the average rate of learning from 50-2000 items/year/min
• For a generic material, the number of items memorized in consecutive years when working one minute per day can be approximated with the following equation:
NewItems = aar*(3*e^(-0.3*year) + 1)
NewItems - items memorized in consecutive years when working one minute per day,
year - ordinal number of the year,
aar - asymptotic acquisition rate, i.e. the minimum learning rate reached after many years of repetitions (usually about 200 items/year/min).
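As a sketch, this acquisition estimate can be tabulated directly. The exponent was garbled in the source text, so the form aar*(3*e^(-0.3*year) + 1) is an assumption here, with aar ~ 200 items/year/min as stated above.

```python
import math

# Yearly acquisition estimate (reconstructed form; see note above).
def new_items(year, aar=200):
    return aar * (3 * math.exp(-0.3 * year) + 1)

print(round(new_items(1)))   # first year: roughly 3x the asymptotic rate
print(round(new_items(20)))  # after many years: approaches aar
```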
• Eliminating 10% of the most difficult items in a generic material may produce an increase in the speed of learning of up to 300%. The lower the forgetting index, the greater the increase.
• In a long-term process, for the forgetting index equal to 10%, and for a fixed daily working time, the average time spent on memorizing new items is only 5% of the total time spent on
repetitions. This value is almost independent of the size of the learning material
• The maximum lifetime capacity of the human brain to acquire new knowledge by means of learning procedures based on the discussed model can be estimated as no more than several million items.
• For a generic material and the forgetting index of about 10%, the function of time required daily for repetitions per item can roughly be approximated using the formula:
time = 1/500 * year^(-1.5) + 1/30000
time - average daily time spent on repetitions per item in a given year (in minutes),
year - year of the process.
• As the time necessary for repetitions of a single item is almost independent of the total size of the learned material, the above formula may be used to approximate the workload for learning
material of any size.
For example, the total workload for a 3000-element collection in the first year will be 3000/500*1+3000/30000=6.1 (min/day).
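This workload approximation can be checked directly (a minimal sketch, using the formula exactly as given in the text):

```python
# Average daily minutes per item in a given year of the process,
# for generic material with a forgetting index around 10%.
def daily_minutes_per_item(year):
    return (1 / 500) * year ** -1.5 + 1 / 30000

def collection_workload(n_items, year):
    return n_items * daily_minutes_per_item(year)

print(round(collection_workload(3000, 1), 1))  # 6.1 min/day for 3000 items in year 1
```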
Figure 2 Workload, in minutes per day, in a generic 3000-item learning material, for the forgetting index equal to 10%
• The relationship between the forgetting index and knowledge retention can accurately be expressed using the following formula:
Retention = -FI/ln(1-FI)
Retention - overall knowledge retention expressed as a fraction (0..1),
FI - forgetting index expressed as a fraction (forgetting index equals 1 minus knowledge retention at repetitions).
The above formula can be derived from the formula for the exponential decay of memory traces (R = e^(-d*t), where R is retention, d the decay constant, and t time).
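The retention formula can likewise be evaluated directly (sketch):

```python
import math

# Overall retention implied by a given forgetting index (both as fractions),
# from the exponential-decay derivation sketched above: R = -FI/ln(1-FI).
def retention(fi):
    return -fi / math.log(1 - fi)

print(round(retention(0.10), 3))  # about 0.949 for a 10% forgetting index
```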
• The greatest overall increase in the optimal interval can be observed for the forgetting index of about 20%. The overall increase takes into consideration the fact that for forgotten items,
the optimal interval decreases. Therefore, for the forgetting index greater than 20%, the positive effect of long intervals on memory resulting from the spacing effect is offset by the increasing
number of forgotten items.
• The greatest overall knowledge acquisition rate is obtained for the forgetting index of about 20-30% (see Figure 3). This results from the trade-off between reducing the repetition workload and
increasing the relearning workload as the forgetting index progresses upward. In other words, high values of the forgetting index result in longer intervals, but the gain is offset by an
additional workload coming from a greater number of forgotten items that have to be relearned.
Figure 3 Dependence of the knowledge acquisition rate on the forgetting index
• When the forgetting index drops below 5%, the repetition workload increases rapidly (see Figure 3). The recommended value of the forgetting index used in the practice of learning is 6-14%.
Figure 4 Trade-off between the knowledge retention (forgetting index) and the workload (number of repetitions of an average item in 10,000 days)
• As compared with equally spaced repetition schedules, for the forgetting index equal to 10%, over a period of 50 years, the discussed model produces about a 50-fold increase in the speed of
knowledge acquisition (i.e., the speed of learning).
• In a long-term learning process, 50% of repetitions are devoted to 2.5% of short-interval learning material (actual learning process measurements). This number can vary greatly in practice and in
ill-structured learning material, even a smaller proportion of items can take most of the learning time. A user of SuperMemo can use SuperMemo's statistical tools to verify this number on his/her
own. The actual figures will strongly depend on the intensity of memorizing new material. The following example is taken from a 10-year-long learning process:
│ Length of interval │ Percent of elements │ Percent of workload │
│ 1-60 days │ 5% │ 63% │
│ 61-300 days │ 13% │ 23% │
│ 301-1000 days │ 19% │ 7% │
│ over 1000 days │ 63% │ 7% │
• The following table illustrates the proportion of time spent on repetitions of material characterized by a different number of memory lapses (actual learning process measurements):
│ Number of lapses │ Percent of elements │ Percent of workload │
│ 0 │ 62% │ 42% │
│ 1 │ 16% │ 16% │
│ 2 │ 9% │ 15% │
│ 3 │ 5% │ 9% │
│ 4 │ 3% │ 6% │
│ 5 and more │ 5% │ 12% │
• The following figure shows an actual recovery of the measured forgetting index after a one-time use of the rescheduling algorithm (Tools : Mercy) spanning a rescheduling period of about 20 days.
The average requested forgetting index was equal to 10%. The measured forgetting index was reset at the time of rescheduling and surpassed 13% shortly after resuming repetition. The measured
forgetting index returned to the level of 11% only after 7 months of repetitions.
Quantum evolution
Quantum evolution: an introduction to time-dependent quantum mechanics
A unique introduction to the concepts of quantum mechanics, Quantum Evolution addresses the present status of time-dependent quantum mechanics for few-body systems with electromagnetic interactions.
It bridges the quantum mechanics of stationary quantum systems and a number of recent advanced theoretical treatises on various aspects of quantum mechanics. The focus is on strongly quantum
and semi-classical systems, including the quantum manifestations of orderly and chaotic nonlinear classical dynamics.
Quantum Collapse and Revivals 41
Further Topics in Classical Theory 71
Classically Integrable Quantum Systems 87
28 October 2005 08:00
Complete these addition facts by first making 10, then adding the difference. Students will need instruction to understand the game. Good for mental arithmetic.
22 October 2005 11:45
You must follow the rules to get through the maze. Choose easy, medium, or hard. Improve your mental math addition and subtraction problem-solving with this game! Demonstrate with a projector for the whole
class in lower grades.
There are a couple ways of beating him. You can figure it out on your own if you use some brute force, listing out a bunch of the possibilities at a given stage, and some logic, eliminating some of
the many possibilities. Start from the final desired state, and work your way backwards. After going down the "tree" 4 or 5 levels, you'll have a lot of different possibilities to work with, but by
then you should be able to see what works and what doesn't. That was my way; there's a nicer, neater, more general way to solve this, and here it goes:
Write the number of pearls in each line in binary, so starting with 3, 4, 5, 6, you have:

0011
0100
0101
0110

Now, add each column up, one column at a time, modulo 2. In other words, at the bottom of each column, write a 1 if there are an odd number of 1s in the column above, and a 0 if there are an even
number, like so:

0011
0100
0101
0110
----
0100

Now, whenever it's your turn, your goal is to always have that "sum" (like 0100 in this case) to be all zeroes. If you remove 4 pearls from the row of 4, 5, or 6, you'll get a position whose column sums are all zero; for example, taking 4 from the row of 6 leaves 3, 4, 5, 2:

0011
0100
0101
0010
----
0000
However! This only takes you so far. Once the number of pearls gets small, this approach breaks down. Imagine you've left it with 2 rows, each containing 1:
Looks good, right? Wrong. Obviously, that guy will just take 1, leaving you with 1 and you lose.
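The column-sum-mod-2 trick above is just the binary XOR (the nim-sum). A short Python sketch of the strategy:

```python
# Nim-sum strategy: XOR the pile sizes; a winning move is any move
# that leaves the XOR of all piles equal to zero.
def nim_sum(piles):
    s = 0
    for p in piles:
        s ^= p
    return s

def winning_moves(piles):
    s = nim_sum(piles)
    moves = []
    for i, p in enumerate(piles):
        target = p ^ s          # pile size that zeroes the nim-sum
        if target < p:          # must actually remove pearls
            moves.append((i, p - target))
    return moves

print(nim_sum([3, 4, 5, 6]))        # 4, the "0100" above
print(winning_moves([3, 4, 5, 6]))  # remove 4 from the row of 4, 5, or 6
```

Note that, as the text points out, the pure nim-sum rule applies to the normal game; misère endings (where taking the last pearl loses) need the special-case handling described above once only piles of size 1 remain.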
Gauss's Law
Gauss's Law, the Divergence Theorem, and the Electric Field
The Divergence Theorem
The Divergence Theorem relates the divergence of a function within some region to the values of that function on the boundary of that region. Let V be some volume of space, and let S be its surface. If F is the function in question, then

∮_S F · dA = ∫_V (∇ · F) dV

where we have introduced the "Del-dot" symbol ∇· for the divergence operator.
Gauss's Law
Gauss's law for the electric field says that the electric flux through any closed surface is proportional to the amount of electric charge contained within that surface. Again let V be some volume of space, and let S be its surface. Then

∮_S E · dA = Q/ε₀

where Q is the total charge contained in V. We can write this in terms of the charge density ρ as

Q = ∫_V ρ dV

so that Gauss's law becomes

∮_S E · dA = (1/ε₀) ∫_V ρ dV

where ε₀ is a fundamental constant. You can look at a not-entirely rigorous proof.
Gauss's Law in Differential Form
The formulas for the Divergence Theorem and Gauss's Law have some similarities, which suggest the following development of Gauss's law into a differential form. The Divergence Theorem tells us that

∮_S E · dA = ∫_V (∇ · E) dV

(we have replaced F with E), so that

∫_V (∇ · E) dV = (1/ε₀) ∫_V ρ dV.

Now, this is true for any region V, which is only possible if the integrands are equal:

∇ · E = ρ/ε₀

which is Gauss's Law in differential form. Notice that while the integral form was concerned with the behavior of the electric field and the charge density over some spread-out region, the
differential form is about their behavior at a point.
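The Divergence Theorem itself can also be sanity-checked numerically. The sketch below (not part of the original page) uses the simple field F(x, y, z) = (x, y, z) on the unit cube, for which ∇ · F = 3 everywhere, so both sides of the theorem equal 3:

```python
# Numerical check of the Divergence Theorem for F(x, y, z) = (x, y, z)
# over the unit cube [0,1]^3, where div F = 3.
n = 40
h = 1.0 / n

def F(x, y, z):
    return (x, y, z)

# Volume integral of div F (constant 3), midpoint rule:
vol = sum(3 * h**3 for i in range(n) for j in range(n) for k in range(n))

# Outward flux through the six faces, midpoint rule:
flux = 0.0
for i in range(n):
    for j in range(n):
        u, v = (i + 0.5) * h, (j + 0.5) * h
        flux += (F(1, u, v)[0] - F(0, u, v)[0]) * h**2  # x = 1 and x = 0 faces
        flux += (F(u, 1, v)[1] - F(u, 0, v)[1]) * h**2  # y faces
        flux += (F(u, v, 1)[2] - F(u, v, 0)[2]) * h**2  # z faces

print(round(vol, 6), round(flux, 6))  # both 3.0
```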
Exploring a Field
In this exercise you will explore the electric field of a (not necessarily uniformly) charged cylinder. The cylinder is much bigger than the applet screen:
Use the following tools to explore the field:
1. Watch the field arrow as it grows and shrinks.
2. Shift-control-alt-left click (or S-C-middle) to drop a field arrow.
3. Shift-right to change the field arrow into a divergence meter. See how its edges bow outward or inward.
4. Shift-alt-left click (or S-middle) to drop a divmeter.
5. Click the right button to draw a field line. The color along the line indicates the strength of the field; red is strong, and blue is weak.
6. Click the middle button (or alt-left) to draw an equipotential.
7. Draw a (green) surface for Gauss's law:
□ Shift-left drag to draw a rectangle.
□ Shift-control-left drag to draw a circle. The applet calculates and prints the amount of charge within the surface. Click the left button again to erase the surface.
8. Drop an array of indicators:
□ To drop field arrows, hit "A".
□ To drop divergence meters, hit "D".
9. Hit "E", backspace, or delete to erase the lines and laws.
1. Where is the center of the cylinder?
2. How does the charge density change with the distance from the center? It is a polynomial. (Use the circle-drawing tool.)
3. Is there any point where the divergence of the electric field is equal to zero?
4. Verify the Divergence Theorem.
Confusion to Avoid
Sometimes, particularly in math textbooks, you will see the Divergence Theorem referred to as "Gauss's Theorem". This is confusing but not incorrect. Be sure you do not confuse Gauss's Law with
Gauss's Theorem. The Law is an experimental law of physics, while the Theorem is a mathematical law that depends only on the definitions of field, divergence, and surface and volume integrals.
Gauss, like Euler, was a little too prolific for his own good. He discovered many more things than can be named for him without creating confusion.
Items tagged with homework
Okay, so I have to write a Maple function that decides whether a line, given by a vector that spans it, intersects a triangle with given vertices.
An example of what it should look like is:
ISC([[0,0],[10,10],[10,10]],[2,1]) returns true
ISC([[0,0],[10,10],[10,0]],[-2,1]) returns false
Thanks in advance. I'm just having severe problems with writing functions and whatnot, as you can tell. Anyway, thanks!!
Reader Comments and Retorts
Statements posted here are those of our readers and do not represent the BaseballThinkFactory. Names are provided by the poster and are not verified. We ask that posters follow our submission policy.
Steve Balboni's Personal Trainer Posted: January 26, 2014 at 08:54 PM (#4646565)
A quick look at the team projections shows no "superteam" in 2014. For example, the projections have nine American League teams winning between 83 and 91 games - and no team winning more than 91. It
also has no team (including Houston!) winning fewer than 70 games. A 21-win range between the best and worst team in the entire league would be quite a bit of parity!
Pleasant Nate (Upgraded from 'Nate') Posted: January 26, 2014 at 09:08 PM (#4646567)
Unless I'm missing something, and I may very well be so please correct me, these seem overly regressed for established players. For example, Miguel Cabrera is projected at .396/.535. His last three
years are (most recent first):
Some other projections:
Steamer: 418/594
Oliver: 413/592
ZiPS: 404/581
Maybe all of those are wrong and this is right, but that seems a significant outlier forecast and I'd be interested in hearing why. Rinse and repeat for the Vottos, Tulos, etc of the world.
Baldrick Posted: January 26, 2014 at 09:27 PM (#4646572)
If the Mariners score 707 runs I will be ecstatic. They haven't come anywhere close to that since 2007.
GregD Posted: January 26, 2014 at 09:35 PM (#4646573)
I want to see the 2 % of the simulations where the Astros make the playoffs!
ursus arctos Posted: January 26, 2014 at 09:49 PM (#4646577)
Clark the Cub is sad.
Russlan is fond of Dillon Gee Posted: January 26, 2014 at 10:32 PM (#4646582)
A 21-win range between the best and worst team in the entire league would be quite a bit of parity!
These projections are regressed to the mean. The average disparity between the best team and the worst team for each simulated season is going to be greater than the disparity between the best
average projection and the worst average projection.
I hope I explained that well.
escabeche Posted: January 26, 2014 at 10:42 PM (#4646589)
Why are the Orioles projected to get so much worse, I wonder? They didn't make much progress in the offseason, but they didn't lose much either.
Dale Sams Posted: January 26, 2014 at 11:36 PM (#4646597)
First come first serve. I'll take the Yanks over 85 wins
Best Regards, President of Comfort Posted: January 26, 2014 at 11:36 PM (#4646598)
Why are the Orioles projected to get so much worse, I wonder? They didn't make much progress in the offseason, but they didn't lose much either.
I have to imagine that Chris Davis' projection is closer to 1.5 WAR than 6.
JE (Jason Epstein) Posted: January 27, 2014 at 12:00 AM (#4646602)
Why are the Orioles projected to get so much worse, I wonder?
In fairness, Keith Law says they were never good to begin with.
kthejoker Posted: January 27, 2014 at 12:19 AM (#4646605)
I will gladly bet a $10 sponsorship on BB-REF that the Astros win 70 games this year. Either way I'll be happy, and it'll at least give me a reason to care when we're hopelessly out of it in July.
bookbook Posted: January 27, 2014 at 12:32 AM (#4646606)
I'd take the under on the M's, assuming they don't do anything else notable this offseason.
DJS and the Infinite Sadness Posted: January 27, 2014 at 12:51 AM (#4646608)
A quick look at the team projections shows no "superteam" in 2014. For example, the projections have nine American League teams winning between 83 and 91 games - and no team winning more than 91. It
also has no team (including Houston!) winning fewer than 70 games.
Projections will always have a tighter range because they are mean projections. We expect 3 teams, on average, to perform to their 90th percentile, 6 to 80th or better, etc, but we don't know which
ones they are yet.
RoyalsRetro (AG#1F) Posted: January 27, 2014 at 12:54 AM (#4646610)
I'll take the over on the Royals. Their pitching will regress, and they may not even be as good as last year, but 77 seems low. They're probably slightly over .500.
Mark Armour Posted: January 27, 2014 at 01:15 AM (#4646612)
He is projecting pretty low offensive levels it looks like. This might be true. The Red Sox are projected to score 723 runs, way down from 853, but still the second most in baseball.
madvillain Posted: January 27, 2014 at 01:25 AM (#4646613)
Sweet, the White Sox fared pretty damn well; it speaks volumes about the work Hahn has done overhauling the roster.
In fairness, Keith Law says they were never good to begin with.
Loath as I am to agree with Law, I kind of agree with your simplified take on Law's take. They were never that good, maybe like 85-wins good; you can't win 80% of your one-run games very often (ask
the 2005 White Sox). Doesn't mean we shouldn't appreciate it when it happens, but it's not repeatable.
TerpNats Posted: January 27, 2014 at 01:37 AM (#4646615)
I have a feeling that if the Nats do win the NL East, it will be with more than 87 wins.
SouthSideRyan Posted: January 27, 2014 at 01:42 AM (#4646617)
[16] Considering they won 85 last year while winning 39% of their one-run games, I'd say you're underselling them a bit.
SouthSideRyan Posted: January 27, 2014 at 01:51 AM (#4646619)
Rather disappointing and horrifying that the Cubs are projected to be the worst team in baseball while the Marlins and Astros still exist.
Rants Mulliniks Posted: January 27, 2014 at 09:31 AM (#4646640)
The 78 wins looks about right for the Jays. Man they are a frustrating team.
Pops Freshenmeyer Posted: January 27, 2014 at 09:38 AM (#4646643)
Rather disappointing and horrifying that the Cubs are projected to be the worst team in baseball while the Marlins and Astros still exist.
By a full three games.
zonk Posted: January 27, 2014 at 09:40 AM (#4646645)
Rather disappointing and horrifying that the Cubs are projected to be the worst team in baseball while the Marlins and Astros still exist.
As big a mess as the Cubs were when Thed took over, and as much as 90 losses seems near certain... It's awfully hard for me to see how this regime gets anything more than one more 90 loss season. I
think I'm being more patient than most - I like the farm system, I think we're starting to see some real depth, and if Rizzo/Castro can hopefully rebound, the MLB cupboard isn't wholly bare - but
really, you can't have more than 4 years of complete futility.
Russlan is fond of Dillon Gee Posted: January 27, 2014 at 10:01 AM (#4646655)
Rather disappointing and horrifying that the Cubs are projected to be the worst team in baseball while the Marlins and Astros still exist.
The Marlins had 4 starters last year who made 17 or more starts with ERA+ of better than 100 who are younger than 25 years old. They also have Giancarlo Stanton. You can only project so badly when
you should have a solid rotation and a young superstar hitter.
Matthew E Posted: January 27, 2014 at 10:23 AM (#4646662)
The 78 wins looks about right for the Jays. Man they are a frustrating team.
I think 78 is way high. I wouldn't be surprised if they didn't come within 15 wins of that.
jdennis Posted: January 27, 2014 at 10:47 AM (#4646672)
The team with the most wins has 91 if I read it correctly. While the 67 for the worst team could be accurate, I might have put something in the model that made sure a team won at least 95 games. I
can't think of a season in which there wasn't a team with at least 95 wins. Also, I wonder how much the remaining free agents would change things. I also wonder how much of this is based on macro
data and how much is on player projections.
DJS and the Infinite Sadness Posted: January 27, 2014 at 11:06 AM (#4646685)
The team with the most wins has 91 if I read it correctly. While the 67 for the worst team could be accurate, I might have put something in the model that made sure a team won at least 95 games. I
can't think of a season in which there wasn't a team with at least 95 wins. Also, I wonder how much the remaining free agents would change things. I also wonder how much of this is based on macro
data and how much is on player projections.
That's not how projections (or basic probability) work. They're supposed to have a tighter spread because they represent the mean projection for each team. Those projections aren't saying that 91
wins will lead MLB, only that there's no team that has an *average expectation* of more than 91 wins. Obviously, some teams will perform to levels they only have a 10% or 20% chance of reaching (or
falling to).
If teams were coin flips, the mean projection for every team would be 81 wins. But that's not the same as saying that 81 wins will lead the league because on average, you'd expect around 92 wins to
be the average league-best in a league of 162 coin flips and 30 teams.
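The coin-flip point is easy to verify with a quick Monte Carlo sketch (illustrative only; team count, schedule length, and seed are the obvious assumptions):

```python
import random

# 30 teams of true-talent .500, 162 games each: what does the best
# record in the league average over many simulated seasons?
rng = random.Random(42)

def season_best(teams=30, games=162):
    # each team's win total is the popcount of 'games' fair coin flips
    return max(bin(rng.getrandbits(games)).count("1") for _ in range(teams))

seasons = 1000
mean_best = sum(season_best() for _ in range(seasons)) / seasons
print(round(mean_best, 1))  # averages in the low-to-mid 90s, well above 81
```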
Fancy Pants Handle doesn't need no water Posted: January 27, 2014 at 11:46 AM (#4646700)
That's not how projections (or basic probability) work. They're supposed to have a tighter spread because they represent the mean projection for each team. Those projections aren't saying that 91
wins will lead MLB, only that there's no team that has an *average expectation* of more than 91 wins. Obviously, some teams will perform to levels they only have a 10% or 20% chance of reaching
(or falling to).
If teams were coin flips, the mean projection for every team would be 81 wins. But that's not the same as saying that 81 wins will lead the league because on average, you'd expect around 92 wins
to be the average league-best in a league of 162 coin flips and 30 teams.
People are statistically illiterate #######.
McCoy Posted: January 27, 2014 at 12:14 PM (#4646717)
With Tanaka off the board the Cubs are going to have to find some pitching if they want to avoid losing lots of games. After 2015 Jeff Samardzija is gone and all that will be left is the carcass of
Edwin Jackson for another year and hopefully the resurrected Travis Wood.
Jose Can Still Seabiscuit Posted: January 27, 2014 at 12:15 PM (#4646719)
The 78 wins looks about right for the Jays. Man they are a frustrating team.
I think 78 is way high. I wouldn't be surprised if they didn't come within 15 wins of that.
That seems crazy to me. I'll be stunned if they fall below 70 and really think 85 is as likely as 75.
Matthew E Posted: January 27, 2014 at 12:24 PM (#4646726)
That seems crazy to me. I'll be stunned if they fall below 70 and really think 85 is as likely as 75.
Well, I like your guess better than my own, but I have this hunch that the Jays are due for a bad year after two years of consistent performance. I think some of the stuff that's been working for
them is going to stop working.
snapper (history's 42nd greatest monster) Posted: January 27, 2014 at 12:35 PM (#4646734)
Well, I like your guess better than my own, but I have this hunch that the Jays are due for a bad year after two years of consistent performance. I think some of the stuff that's been working for
them is going to stop working.
Consistent performance? Their pitching was a train wreck last year!
RoyalsRetro (AG#1F) Posted: January 27, 2014 at 12:37 PM (#4646735)
He is projecting pretty low offensive levels it looks like. This might be true. The Red Sox are projected to score 723 runs, way down from 853, but still the second most in baseball.
Hmmm, that seems to hurt the credibility of these projections quite a bit.
Matthew E Posted: January 27, 2014 at 12:38 PM (#4646737)
Consistent performance? Their pitching was a train wreck last year!
Yes. Consistently so.
jacksone (AKA It's OK...) Posted: January 27, 2014 at 01:05 PM (#4646753)
Well, I like your guess better than my own, but I have this hunch that the Jays are due for a bad year after two years of consistent performance. I think some of the stuff that's been working for
them is going to stop working.
Didn't they significantly underperform last year?
Matthew E Posted: January 27, 2014 at 01:12 PM (#4646759)
Didn't they significantly underperform last year?
They did about what they did in 2012. In 2012 I thought they were just underperforming; their performance in 2013 changed my mind.
Jose Can Still Seabiscuit Posted: January 27, 2014 at 01:59 PM (#4646784)
He is projecting pretty low offensive levels it looks like
I didn't do it for the National league teams but if you total up the runs scored by the American League teams it comes out to exactly the same 10,525 runs that were scored by AL teams in 2013. It
doesn't look like it's a reduced environment, just a more even environment. The same reasons for the more smoothed out win/loss projections that Dan lays out in #26 probably explain that.
Someone from that top 3-4 teams is going to score over 800 runs and someone from the group of Minnesota, Houston and Kansas City is probably going to score around 625.
snapper (history's 42nd greatest monster) Posted: January 27, 2014 at 02:03 PM (#4646788)
I didn't do it for the National league teams but if you total up the runs scored by the American League teams it comes out to exactly the same 10,525 runs that were scored by AL teams in 2013. It
doesn't look like it's a reduced environment, just a more even environment. The same reasons for the more smoothed out win/loss projections that Dan lays out in #26 probably explain that.
Someone from that top 3-4 teams is going to score over 800 runs and someone from the group of Minnesota, Houston and Kansas City is probably going to score around 625.
Right, you have to remember that every year ~33% of teams will exceed or fall short of expectations by 1 SD.
RoyalsRetro (AG#1F) Posted: January 27, 2014 at 02:32 PM (#4646815)
Projections will always have a tighter range because they are mean projections. We expect 3 teams, on average, to perform to their 90th percentile, 6 to 80th or better, etc, but we don't know
which ones they are yet.
The only stats class I took was Intro to Stats, so bear with me, but if you ran enough simulations, would the results eventually be that every team finished 81-81? Or do they not regress like that?
AROM Posted: January 27, 2014 at 02:36 PM (#4646819)
Well, if you ran enough simulations you might eventually stumble across one where every team was 81-81. But it's not a process that moves in that direction. And the odds of 30 teams all hitting that
mark are probably so small that you'd never see it happen in your lifetime.
If you start with all teams being equal 81-81 teams, then just by chance some of them will win 90 games, others will win 70 games.
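AROM's point is easy to check with a toy simulation (a sketch assuming 30 independent true-.500 teams and coin-flip games, ignoring real schedules):

```python
import random

random.seed(1)  # fixed seed so the sketch is reproducible

# One simulated season: 30 equal true-.500 teams, 162 games each.
wins = [sum(random.random() < 0.5 for _ in range(162)) for _ in range(30)]
wins.sort()
print(wins[0], wins[-1])  # even with identical teams, expect a spread of roughly 20 wins
```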
Pleasant Nate (Upgraded from 'Nate') Posted: January 27, 2014 at 02:37 PM (#4646820)
I didn't do it for the National league teams but if you total up the runs scored by the American League teams it comes out to exactly the same 10,525 runs that were scored by AL teams in 2013. It
doesn't look like it's a reduced environment, just a more even environment. The same reasons for the more smoothed out win/loss projections that Dan lays out in #26 probably explain that.
Someone from that top 3-4 teams is going to score over 800 runs and someone from the group of Minnesota, Houston and Kansas City is probably going to score around 625.
While acknowledging this is right, why do ZiPS and the other projection systems in #2 spit out consistently higher results? Obviously Dan is well respected around here, and justifiably so, yet it
seems there is a materially different overriding approach in Clay's* projections.
*The systems agree on some players and have some normal variation. That said, the number of players that are projected lower in Clay's far outweighs the number projected higher in Clay's, particularly
for established players. This isn't 'Clay doesn't like Miguel Cabrera/Tulo/Votto/etc', it's 'Clay is regressing Miguel Cabrera/Tulo/Votto/etc. more than anyone else'. I find the outlier approach
interesting and worth exploring, especially for someone with Clay's track record, but barring a better explanation I'd rather just use other sources. Of course, a Dan/Clay debate would be the best
outcome. Get on it, boys.
Pleasant Nate (Upgraded from 'Nate') Posted: January 27, 2014 at 02:38 PM (#4646822)
Right on time! Sean, feel free to jump in too, to the extent that you can. I'd love to hear opinions here.
Fancy Pants Handle doesn't need no water Posted: January 27, 2014 at 02:41 PM (#4646824)
The only stats class I took was Intro to Stats, so bear with me, but if you ran enough simulations, would the results eventually be that every team finished 81-81? Or do they not regress like that?
As you run more simulations, the average of each team will get closer to their true mean (in the case of a coin, that would be 81).
That's basically the process by which these projections are arrived at. They take the average of thousands of simulations to get the most likely outcome. But you have to remember that the most
likely outcome isn't very likely at all. If you looked at each individual simulation, however, you would probably find at least one 95+ team in most. It's just evened out by the sims where that team
finishes with 85 instead.
Edited for crappy grammar.
RoyalsRetro (AG#1F) Posted: January 27, 2014 at 02:45 PM (#4646827)
Thanks Fancy Pants and AROM
Fancy Pants Handle doesn't need no water Posted: January 27, 2014 at 02:55 PM (#4646833)
Thanks Fancy Pants and AROM
I think we both read the question a bit differently by the way.
Looking at it Sean's way, there is about a 6.26% chance of any individual "team" finishing at exactly 81. Not accounting for interdependency of results, the odds of that happening 30 times in a row
would be 0.000079%, or a bit less than one in a million.
Jose Can Still Seabiscuit Posted: January 27, 2014 at 03:37 PM (#4646856)
PECOTA usually includes a category called Average Win Total or something like that that I find very interesting. Basically it would rank the AL East teams (like with Clay's list) but it would show
that while the Rays are projected to win 90 games right now the average AL East winner finishes with 95 wins in the various runs. Those numbers usually look a lot more like the real standings than
the projected versions.
AROM Posted: January 27, 2014 at 03:47 PM (#4646870)
I think we both read the question a bit differently by the way.
Yeah, I think so. Looking at it another way, the more games you play in your sim, the less spread in the results you'll have. Was that what was meant in the original question?
With 162 games, an average team will have a +1 SD result of 87 wins, or .540 winning percentage.
Play 1 million games, then +1 SD will be a percentage of .5005. With 1 million trials, a team playing .506 ball (the equivalent of 82 wins in 162) will be 12 SD from the mean, which means it pretty
much doesn't happen. (an average team playing 12 SD above the mean in 162 games would be 157-5)
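Those SD figures follow from the binomial standard deviation of a .500 team's observed winning percentage, sqrt(p(1-p)/n); a quick check:

```python
from math import sqrt

def one_sd_pct(n, p=0.5):
    # SD of observed winning percentage over n independent games
    return sqrt(p * (1 - p) / n)

print(round(162 * (0.5 + one_sd_pct(162))))          # ~87 wins in 162 games (.540)
print(round(0.5 + one_sd_pct(1_000_000), 4))         # ~0.5005 over a million games
print(round((0.506 - 0.5) / one_sd_pct(1_000_000)))  # ~12 SD for .506 ball
```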
dave h Posted: January 27, 2014 at 03:50 PM (#4646877)
I think it's worth noting (and correct me if I say something wrong here) but there's also a difference between our best estimate of team quality and the true value of team quality. Our best estimate
for the league as a whole is made by regressing every team to the mean, and while that will improve the estimate for some teams it will not do so for others. We have to do it for every team
regardless because we don't know which ones should be regressed the full amount (or greater) and which ones shouldn't.
That was probably really unclear, so here's a thought experiment (and again, those who know this better can correct me). If you played the season a thousand times (actually playing the games, not
calculations) without changing the teams at all (impossible, sure, but bear with me) then you wouldn't have to regress much at all because the observed value would be quite accurate. At that point
the best team would have a lower average win total than the best team has in a given season, but it would have a greater average win total than even a very good projection.
Fancy Pants Handle doesn't need no water Posted: January 27, 2014 at 03:55 PM (#4646882)
Yeah, I think so. Looking at it another way, the more games you play in your sim, the less spread in the results you'll have. Was that what was meant in the original question?
I assumed it meant repeatedly simulate the season, and average the results, but I am not certain.
DJS and the Infinite Sadness Posted: January 27, 2014 at 04:12 PM (#4646894)
Looking at it Sean's way, there is about a 6.26% chance of any individual "team" finishing at exactly 81. Not accounting for interdependency of results, the odds of that happening 30 times in a row
would be 0.000079%, or a bit less than one in a million.
Just to be obnoxious, 6.26% is binomial. There are a set number of wins in a 2430-game season, so if you're not flipping the coin 2430 times, you want hypergeometric. So 6.37%.
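Both figures can be computed directly. The hypergeometric number below assumes a fixed league-wide pool of 1215 wins across 2430 games; the exact value depends on how that constraint is modeled, so it may not match the 6.37% quoted above:

```python
from math import comb

# Binomial: a true-.500 team going exactly 81-81 in 162 games
p_binom = comb(162, 81) / 2**162
print(round(p_binom, 4))  # ~0.0626

# Hypergeometric: draw a team's 162 results from a pool of
# 2430 games containing exactly 1215 wins
p_hyper = comb(1215, 81) ** 2 / comb(2430, 162)
print(round(p_hyper, 4))  # slightly higher than the binomial figure
```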
Jim Wisinski Posted: January 27, 2014 at 04:16 PM (#4646899)
PECOTA usually includes a category called Average Win Total or something like that that I find very interesting. Basically it would rank the AL East teams (like with Clay's list) but it would
show that while the Rays are projected to win 90 games right now the average AL East winner finishes with 95 wins in the various runs. Those numbers usually look a lot more like the real
standings than the projected versions.
I think SG does that in the RLYW blowouts.
Edit: Yes, he does. Looking back at last year's blowout is fun. Blue Jays at 29% for the division (Red Sox at 15), Angels at 40%, Nationals at 45%, Giants at 28%. Whoops!
madvillain Posted: January 27, 2014 at 04:18 PM (#4646900)
[16] Considering they won 85 last year while winning 39% of their one-run games, I'd say you're underselling them a bit.
And their 2nd order win percentage was .503, their 3rd order was .513. It's a mediocre team that got extremely lucky to win 91 games in '12. I think it is what it is.
AROM Posted: January 27, 2014 at 04:22 PM (#4646904)
Actually, there's a bigger nit to pick. The chance of something that is 6.26% (or 6.37%) likely to happen 30 times in a row is .0626^30, which is a number so big I'm not sure what to call it,
something like 1 in a 1 followed by 36 zeros big.
I see the error, you got .00079% by taking .626 and raising to the 30th power. That's the equivalent of taking a likely event (.626 winning percentage is about 100 wins) such as the best team in
baseball winning a single game. The best team in baseball winning 30 in a row? Very unlikely - less than one in a million. Take an unlikely event and make it happen 30 times in a row and the numbers
get silly.
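The two calculations side by side (the slip was raising .626 instead of .0626 to the 30th power):

```python
p_each = 0.0626  # chance one team lands exactly on 81-81

print(0.626 ** 30)   # ~7.9e-07, the "bit less than one in a million" figure
print(p_each ** 30)  # ~7.9e-37 -- 1 in a 1 followed by 36 zeros
```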
Nasty Nate Posted: January 27, 2014 at 04:28 PM (#4646911)
The Orioles were an 85-win team last year and a 93-win team the year before. We don't need calculations to determine team wins for past seasons.
Sorry to be snarky, but it's a pet peeve of mine.
Lance Reddick! Lance him! Posted: January 27, 2014 at 04:39 PM (#4646917)
I'm not sure what to call it, something like 1 in a 1 followed by 36 zeros big.
C'mon, man, undecillion!
Gamingboy Posted: January 27, 2014 at 05:27 PM (#4646948)
It's not showing up for me, but I'm curious as to what he sees Tanaka doing
RoyalsRetro (AG#1F) Posted: January 27, 2014 at 05:32 PM (#4646952)
The site is down, but
this site
says Davenport projects 15-9, 2.92 ERA, 1.126 WHIP, 6.6 WARP, 216.2 IP, 41 BB, 202 SO
Der-K: Hipster doofus Posted: January 27, 2014 at 05:34 PM (#4646953)
He and Felix each have arguments for the #2 pitcher in baseball, behind Kershaw.
Fancy Pants Handle doesn't need no water Posted: January 27, 2014 at 07:45 PM (#4647009)
I see the error, you got .00079% by taking .626 and raising to the 30th power. That's the equivalent of taking a likely event (.626 winning percentage is about 100 wins) such as the best team in
baseball winning a single game. The best team in baseball winning 30 in a row? Very unlikely - less than one in a million. Take an unlikely event and make it happen 30 times in a row and the
numbers get silly.
Hmm, clearly I needed more coffee. It did seem too big at the time.
Percentage Question
October 2nd 2011, 02:50 PM
Percentage Question
I have a question regarding percentages and I would really appreciate an input.
Let's say that 10 balls are distributed between person A and person B. Person A has 6 balls and person B has 4 balls.
From the aforementioned information we know that.
1. A has 50% more balls than B.
2. A has 2 more balls than B
(which i assume is equivalent to)
A has 2/10 or 20% more balls than B.
What assumption am I making wrong?
Moreover, let's say that I am watching a football match where club A has ball possession of 60% and club B ball possession of 40%.
What is the right thing to say? That club A has 20% more ball possession than club B, or that club A has 50% more possession than B? Bearing in mind that
(A/B) - 1 = (3/2) - 1 = (1/2) = 50%
October 2nd 2011, 03:18 PM
Archie Meade
Re: Percentage Question
I have a question regarding percentages and I would really appreciate an input.
Let's say that 10 balls are distributed between person A and person B. Person A has 6 balls and person B has 4 balls.
From the aforementioned information we know that.
1. A has 50% more balls than B.
2. A has 2 more balls than B
(which i assume is equivalent to)
A has 2/10 or 20% more balls than B.
What assumption am I making wrong?
Moreover, let's say that I am watching a football match where club A has ball possession of 60% and club B ball possession of 40%.
What is the right thing to say? That club A has 20% more ball possession than club B, or that club A has 50% more possession than B? Bearing in mind that
(A/B) - 1 = (3/2) - 1 = (1/2) = 50%
A has "20% of the total number of balls" more than B.
A has "50% of B's total" more than B.
A has 2 balls more than B.
All the above are correct as such.
For the second example, again the figures need to be qualified
depending on whether you are referring to the total ball possession
or the ball possession of club B.
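The two readings correspond to two different denominators; with the example's values A = 6 and B = 4 (the same arithmetic applies to 60% vs 40% possession):

```python
a, b = 6, 4

relative_to_b = (a - b) / b * 100         # 50.0 -> "50% of B's total" more
share_of_total = (a - b) / (a + b) * 100  # 20.0 -> "20% of the total" more
print(relative_to_b, share_of_total)
```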
Yahoo Groups
random walks in an infinite state space
Page 126 of AIMA, 2nd ed, notes that:
"It is easy to prove that a random walk will eventually find a goal or
complete its exploration, provided that the space is finite."
Here, a footnote continues:
"The infinite case is much more tricky. Random walks are complete on
infinite one-dimensional and two dimensional grids, but not on three
dimensional grids! In the latter case, the probability that the walk
ever returns to the starting point is only about 0.3405."
I was surprised by the claim above (random walks are complete for 1 and
2 dimensions but not for 3). Can anyone explain why this is true?
It all began with Polya's 1921 proof:
Then follow the link from there "[Pages Linking Here]"
which has links to discussions of 1-, 2-, and 3-dimensional walks.
- Bob
Robert P. Futrelle | Biological Knowledge Laboratory
Associate Professor | College of Computer and Information
| Science MS WVH202
Office: (617)-373-4239 | Northeastern University
Fax: (617)-373-5121 | 360 Huntington Ave.
| Boston, MA 02115
http://www.ccs.neu.edu/home/futrelle http://www.bionlp.org http://www.diagrams.org http://biologicalknowledge.com
>Page 126 of AIMA, 2nd ed, notes that:
>"It is easy to prove that a random walk will eventually find a goal or
>complete its exploration, provided that the space is finite."
>Here, a footnote continues:
>"The infinite case is much more tricky. Random walks are complete on
>infinite one-dimensional and two dimensional grids, but not on three
>dimensional grids! In the latter case, the probability that the walk
>ever returns to the starting point is only about 0.3405."
>I was surprised by the claim above (random walks are complete for 1 and
>2 dimensions but not for 3). Can anyone explain why this is true?
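A rough Monte Carlo sketch of Polya's result (walks are truncated at a fixed step count, so the 1-D and 2-D estimates only creep toward 1 as that limit grows, while the 3-D estimate plateaus near 0.3405):

```python
import random

def returns_to_origin(dim, max_steps=2000):
    # One simple random walk on the Z^dim lattice; True if it revisits the origin.
    pos = [0] * dim
    for _ in range(max_steps):
        pos[random.randrange(dim)] += random.choice((-1, 1))
        if not any(pos):
            return True
    return False

random.seed(0)
for dim in (1, 2, 3):
    est = sum(returns_to_origin(dim) for _ in range(400)) / 400
    print(dim, est)  # 1-D and 2-D near 1; 3-D well below it
```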
Proving Bernoulli numbers eq'n please help
May 22nd 2007, 03:04 PM
Proving Bernoulli numbers eq'n please help
I've been trying to find a way to prove the formula for the bernoulli numbers. I've been trying to prove it by induction because that's pretty much the only proof I've learned so far, the only
problem is the m-1 on top in the sigma not'n. I'm not sure what to do with it, would anyone be able to give me a bit of a head start. The simplest way possible would be nice.
May 22nd 2007, 03:09 PM
I've been trying to find a way to prove the formula for the bernoulli numbers. I've been trying to prove it by induction because that's pretty much the only proof I've learned so far, the only
problem is the m-1 on top in the sigma not'n. I'm not sure what to do with it, would anyone be able to give me a bit of a head start. The simplest way possible would be nice.
see if these help:
Faulhaber's formula
Bernoulli number
Geometric progression
May 22nd 2007, 03:16 PM
I've looked at those; what I don't get from that is how they got from the first equation to the one below. And I just really want to know what to do with the m-1 on the left side of the equation.
I expanded it to be 0^n + 1^n + ... + (m-1)^n. What I need to figure out is a way to use mathematical induction, which means I need to make a set from the Bernoulli equation and then claim 1 is in the
set, prove it, then k+1 is in the set. How/where would I sub in the 1 for that? Basically all I want to do is a left-side, right-side proof.
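For concreteness, one common way to generate the Bernoulli numbers is the recurrence B_m = -1/(m+1) · Σ_{k=0}^{m-1} C(m+1, k) B_k with B_0 = 1 (a sketch of the definition being discussed, not of the induction itself):

```python
from fractions import Fraction
from math import comb

def bernoulli(n):
    # B_0..B_n from the recurrence sum_{k=0}^{m} C(m+1, k) * B_k = 0
    B = [Fraction(1)]
    for m in range(1, n + 1):
        B.append(Fraction(-1, m + 1) * sum(comb(m + 1, k) * B[k] for k in range(m)))
    return B

print(bernoulli(6))  # 1, -1/2, 1/6, 0, -1/30, 0, 1/42
```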
May 22nd 2007, 08:00 PM
I've been trying to find a way to prove the formula for the bernoulli numbers. I've been trying to prove it by induction because that's pretty much the only proof I've learned so far, the only
problem is the m-1 on top in the sigma not'n. I'm not sure what to do with it, would anyone be able to give me a bit of a head start. The simplest way possible would be nice.
This is the (well one at least) definition of the Bernoulli numbers - what are
you trying to proove.
Do you have another definition of the Bernoulli numbers that you wish to
prove this formula from? | {"url":"http://mathhelpforum.com/advanced-math-topics/15265-proving-bernoulli-numbers-eqn-please-help-print.html","timestamp":"2014-04-17T01:19:45Z","content_type":null,"content_length":"7167","record_id":"<urn:uuid:bbd97f34-8f78-48e9-bea7-0cb847953f09>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00428-ip-10-147-4-33.ec2.internal.warc.gz"} |
[Numpy-discussion] strange sin/cos performance
David Cournapeau cournape@gmail....
Mon Aug 3 08:44:28 CDT 2009
On Mon, Aug 3, 2009 at 10:32 PM, Andrew Friedley<afriedle@indiana.edu> wrote:
> While working on GSoC stuff I came across this weird performance behavior
> for sine and cosine -- using float32 is way slower than float64. On a 2ghz
> opteron:
> sin float32 1.12447786331
> sin float64 0.133481025696
> cos float32 1.14155912399
> cos float64 0.131420135498
Which OS are you on? FWIW, on Mac OS X, with a recent svn checkout, I
get expected results (float32 ~ twice faster).
> The times are in seconds, and are best of three runs of ten iterations of
> numpy.{sin,cos} over a 1000-element array (script attached). I've produced
> similar results on a PS3 system also. The opteron is running Python 2.6.1
> and NumPy 1.3.0, while the PS3 has Python 2.5.1 and NumPy 1.1.1.
> I haven't jumped into the code yet, but does anyone know why sin/cos are
> ~8.5x slower for 32-bit floats compared to 64-bit doubles?
My guess would be that you are on a platform where there is no sinf,
and our sinf replacement is bad for some reason.
> Side question: I see people in emails writing things like 'timeit foo(x)'
> and having it run some sort of standard benchmark, how exactly do I do that?
> Is that some environment other than a normal Python?
Yes, that's in ipython.
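A minimal version of the benchmark being discussed (absolute times, and whether float32 is slower, will vary with platform, libm, and NumPy build — the sinf fallback mentioned above is the usual suspect):

```python
import timeit
import numpy as np

x = np.linspace(0, 2 * np.pi, 1000)
arrays = {"float32": x.astype(np.float32), "float64": x.astype(np.float64)}

# Best of three runs of ten iterations, as in the original script
for name, arr in arrays.items():
    best = min(timeit.repeat(lambda: np.sin(arr), number=10, repeat=3))
    print(f"sin {name}: {best:.6f}s")
```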
Brain Teasers Category
How can you cut a round cake three times to make eight equal slices?
Cut #1 – Down the center of the cake (vertically) leaving two equal halves.
Cut #2 – Across the center of the cake (horizontally) leaving four equal slices.
Cut #3 – Through the middle edge of the cake slicing all four of the pieces in equal halves, leaving eight equal slices (four equal tops and four equal bottoms).
Why can’t a man living in Okeechobee, Florida be buried west of the Mississippi River?
Because the man would surely object to being buried since he’s living in Okeechobee.
You have a single match and are in a pitch black room with a candle, an oil lamp and a gas stove. Which do you light first?
The match.
Some months have 30 days and some have 31 days. How many months have 28 days?
All of them. Every month has a day 28, even though some continue on after reaching 28.
Your doctor gives you three pills and tells you to take one every half hour. How much time will have passed by the time you’ve taken all three pills?
One hour. You take the first pill, then wait a half hour and take the second pill, then at the hour mark you take the third and last pill.
If you choose an answer to this question at random, what is the chance that you will be correct?
a) 25%
b) 50%
c) 60%
d) 25%
I consider this a paradox because it’s self-referential. Both A and D would be correct if there were four unique answers, but since A and D are the same answer, the chance that you would choose a
correct answer is 50%, which makes B correct. But if there’s only one correct answer, the odds of choosing the correct one at random goes back to 25%. And around and round you go.
There’s a lot of discussion at Richard Wiseman’s blog and more at Lifehacker, where I first saw this.
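The self-reference is easy to see by tabulating how often each distinct answer value would be picked at random:

```python
from collections import Counter

options = {"a": "25%", "b": "50%", "c": "60%", "d": "25%"}
picked = Counter(options.values())

for value, n in picked.items():
    print(f"{value} is picked {n}/4 of the time")
# "25%" is picked 2/4 = 50% of the time and "50%" only 1/4 of the time,
# so no option ever matches its own probability of being chosen.
```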
This equation is incomplete: 1 2 3 4 5 6 7 8 9 = 100
One way to make it accurate is by adding seven plus and minus signs, like so.
1 + 2 + 3 – 4 + 5 + 6 + 78 + 9 = 100
How can you do it using only 3 plus or minus signs?
123 – 45 – 67 + 89 = 100
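A brute-force search over sign placements confirms the three-sign answer (and would surface any others):

```python
from itertools import product

digits = "123456789"
solutions = []
for ops in product(("", "+", "-"), repeat=8):
    if sum(op != "" for op in ops) != 3:
        continue  # keep only expressions using exactly three signs
    expr = digits[0] + "".join(op + d for op, d in zip(ops, digits[1:]))
    if eval(expr) == 100:
        solutions.append(expr)
print(solutions)  # includes '123-45-67+89'
```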
A door, a body, and a knot can be this.
Double. Double doors, a stunt double (for an actor) and a double knot. And there’s also the double rainbow
By Sef Daystrom
These series follow a pattern.
This series does not follow the pattern
What is the pattern?
The numbers must be in ascending order. This can be a fun one to have people work out in person, as they test out three-number series and you can tell them whether or not they satisfy the pattern.
Thanks to Patrick for sending this in and thanks to this video from Veritasium for inspiring Patrick.
What is significant about the following set of letters?
They are the only letters in the alphabet that are not found at the beginning of the name of a state in the United States of America. The rest of the letters in the alphabet, namely
ACDFGHIKLMNOPRSTUVW, start the name of at least one state. Incidentally, eight different states each start with the letters M and N, tying them for the most states starting with a particular letter.
The next highest is a three-way tie between A, I and W with four each.
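This is quick to verify programmatically with a state list:

```python
from collections import Counter

STATES = (
    "Alabama", "Alaska", "Arizona", "Arkansas", "California", "Colorado",
    "Connecticut", "Delaware", "Florida", "Georgia", "Hawaii", "Idaho",
    "Illinois", "Indiana", "Iowa", "Kansas", "Kentucky", "Louisiana",
    "Maine", "Maryland", "Massachusetts", "Michigan", "Minnesota",
    "Mississippi", "Missouri", "Montana", "Nebraska", "Nevada",
    "New Hampshire", "New Jersey", "New Mexico", "New York",
    "North Carolina", "North Dakota", "Ohio", "Oklahoma", "Oregon",
    "Pennsylvania", "Rhode Island", "South Carolina", "South Dakota",
    "Tennessee", "Texas", "Utah", "Vermont", "Virginia", "Washington",
    "West Virginia", "Wisconsin", "Wyoming",
)

first = Counter(s[0] for s in STATES)
unused = sorted(set("ABCDEFGHIJKLMNOPQRSTUVWXYZ") - set(first))
print("".join(unused))       # BEJQXYZ
print(first.most_common(2))  # M and N, with 8 states each
```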
Finding Roots of Polynomials
October 19th 2008, 02:08 PM #1
Junior Member
Sep 2008
Finding Roots of Polynomials
On problems like $-12x^3+12x^2+24x$
I try to solve for roots.
What I get is, (12+x)(-x^2+x+2), leaving me to find
x=-12 x=-1 x=2
Come to find out, x=-12 should be x=0. How can I fix this?
Also, for problems that do not factor so easy, $-32x^3+12x+0$, what are some factor methods? thank you
On problems like $-12x^3+12x^2+24x$
I try to solve for roots.
What I get is, (12+x)(-x^2+x+2), leaving me to find
x=-12 x=-1 x=2
Come to find out, x=-12 should be x=0. How can I fix this?
Also, for problems that do not factor so easy, $-32x^3+12x+0$, what are some factor methods? thank you
It looks like it should be what you said except for one minor mistake, but here's a tip that will help you with the first and second, making life easier.
Ok first off:
Recognize the common factor here, 12x; personally I would pull out a -12x,
thus becoming $(-12x)(x^2-x-2)$. This becomes an easily visible factorization (depending on your math level): we find ourselves with 3 factors:
12x, x+1, x-2
I don't see where you got the answer -12, I see 0 though.
Pulling out a $12+x$ gives you a completely different equation than the original, wouldn't it? I'm no professional, but I can factor, and I don't see how -12 could be an answer.
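A quick substitution check of the corrected factoring, $-12x(x^2-x-2) = -12x(x-2)(x+1)$, with roots 0, 2, and -1:

```python
def p(x):
    return -12 * x**3 + 12 * x**2 + 24 * x

for root in (0, 2, -1):
    print(root, p(root))  # each evaluates to 0

print(p(-12))  # 22176, confirming -12 is not a root
```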
$-32x^3 + 12x$
Take out the common factor of -6x
$-6x(4x^2 - 2)$
Then we get -6x = 0
Therefore x=0
Move 2 over to the other side
$4x^2 = 2$
Divide both sides by 4
$x^2 = \frac{1}{2}$
Then take the square root of both sides and you get 2 more roots
$x = +\sqrt{\frac{1}{2}}$
$x = -\sqrt{\frac{1}{2}}$
Further Examination
If you plug in -12 to the original equation it equals 22,176... which is definitely not a root haha. Did your professor/teacher tell you that was the answer, or is it from an answer key? Many of my
personal worksheets for Calculus have wrong answers on them all the time; teachers update questions but forget to update the answers.
Thanks for the thanks
$-32x^3 + 12x$
Take out the common factor of -6x
$-6x(4x^2 - 2)$
Then we get -6x = 0
Therefore x=0
Move 2 over to the other side
$4x^2 = 2$
Divide both sides by 4
$x^2 = \frac{1}{2}$
Then take the square root of both sides and you get 2 more roots
$x = +\sqrt{\frac{1}{2}}$
$x = -\sqrt{\frac{1}{2}}$
You can't factor out a 6
Here is the solution
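For the record, the common factor here is -4x, not -6x: $-32x^3 + 12x = -4x(8x^2 - 3)$, giving roots 0 and $\pm\sqrt{3/8}$. A quick numeric check:

```python
from math import sqrt

def q(x):
    return -32 * x**3 + 12 * x

for root in (0.0, sqrt(3 / 8), -sqrt(3 / 8)):
    print(root, round(q(root), 9))  # each evaluates to ~0
```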
Highway and Road Calculator
Highway and Road Calculator contains 69 Calculators and Converters, that can quickly and easily calculate and convert different Highway, Road and Civil Engineering parameters. Automatic & Accurate
Calculations and Conversions with every Unit and Value Changes. Available in Imperial and Metric Units. Most Comprehensive Highway and Road Calculator.
*** Available in Metric and Imperial Units ***
Highway and Road Calculator contains following 37 Calculators:
• Radius of Curve - Arc Definition (Circular Curves)
• Radius of Curve - Chord Definition (Circular Curves)
• Degree of Curve - Arc Definition (Circular Curves)
• Degree of Curve - Chord Definition (Circular Curves)
• Tangent Distance (Circular Curves)
• External Distance (Circular Curves)
• Midordinate (Circular Curves)
• Length of Long Chord (Circular Curves)
• Length of Curve (Circular Curves)
• Central Angle for Portion of Curve - Arc Definition (Circular Curves)
• Central Angle for Portion of Curve - Chord Definition (Circular Curves)
• Tangent Offset (Circular Curves)
• Chord Offset (Circular Curves)
• Rate of Change of Grade (Parabolic Curves)
• Elevation of Point of Vertical Curvature (Parabolic Curves)
• Elevation of Point x distant from Point of Vertical Curvature (Parabolic Curves)
• Distance from Point of Vertical Curvature to Lowest Point on a Sag Curve/Highest Point on a Summit Curve (Parabolic Curves)
• Elevation of Lowest Point on a Sag Curve/Highest Point on a Summit Curve (Parabolic Curves)
• Minimum Length of Crest Vertical Curves (Sight Distance < Length of Vertical Curves)
• Minimum Length of Crest Vertical Curves (Sight Distance > Length of Vertical Curves)
• Rate of Vertical Curvature
• Structural Number (Surface Course)
• Structural Number (Base Course)
• Structural Number (Subbase Course)
• Structural Number (Pavements)
• Minimum Length of Spiral Curve
• Thrust of Structure (Culverts)
• Flexibility Factor
• Ring-Compression Stress
• Horizontal Deflection of Pipe (Iowa Formula)
• Design Pressure - Corrugated Steel Structures (Height of Cover < Pipe Diameter)
• Design Pressure - Corrugated Steel Structures (Height of Cover >= Pipe Diameter)
• Compressive Thrust (Conduit Walls)
• Ultimate Wall Stress (294 > Ratio of Pipe Diameter and Radius of Gyration of Pipe Cross Section < 500)
• Ultimate Wall Stress (Ratio of Pipe Diameter and Radius of Gyration of Pipe Cross Section > 500)
• Design Stress of Wall
• Pipe Wall Area
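As an illustration of the first two calculators, the usual US-customary relations are R = 5729.578 / D for the arc definition (100 ft arc) and R = 50 / sin(D/2) for the chord definition (100 ft chord), with D in degrees and R in feet; a sketch:

```python
from math import sin, radians

def radius_arc(degree_of_curve):
    # Arc definition: a 100 ft arc subtends D degrees at the center
    return 5729.578 / degree_of_curve

def radius_chord(degree_of_curve):
    # Chord definition: a 100 ft chord subtends D degrees at the center
    return 50.0 / sin(radians(degree_of_curve) / 2)

print(round(radius_arc(2), 1))    # 2864.8 ft for a 2-degree curve
print(round(radius_chord(2), 1))  # 2864.9 ft -- the definitions nearly agree for flat curves
```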
Highway and Road Calculator contains following 32 Converters:
• Acceleration
• Angle
• Area
• Density
• Energy/Work
• Flow Rate (Mass)
• Flow Rate (Volume)
• Fluid
• Force
• Frequency
• Hardness
• Length
• Mass
• Metric Weight
• Metrology
• Moment of Force
• Moment of Inertia
• Prefixes
• Pressure
• Radiation
• Specific Heat Capacity
• Specific Volume
• Temperature
• Thermal Conductivity
• Thermal Expansion
• Time
• Torque
• Velocity
• Viscosity (Dynamic)
• Viscosity (Oil & Water)
• Viscosity (Kinematic)
• Volume
Key Features:
• Complete coverage of calculators and converters in Highway, Road and Civil Engineering Parameters.
• Automatic Calculation & Conversion of the Output with respect to changes in the Input/Options/Units.
• Formulas are provided for each calculator.
• Values of Higher Order can also be calculated.
• Extremely Accurate Calculations and Conversions.
• Professionally and Newly designed user-interface that speeds up Data Entry, Easy Viewing and Calculation Speed.
Most Comprehensive Highway and Road Calculator
Very limited app. Not worth the money. Practically useless for road alignments. It is not even possible to enter negative grades into the formulas.
What's New
Additional option for entering negative values for all 'Grade' related calculations.
A rebar (short for reinforcing bar), also known as reinforcing steel, reinforcement steel, re-rod, a deformed bar, reo, or reo bar, is a common steel bar used as a tension device in reinforced concrete and reinforced masonry structures to hold the concrete in compression. It is usually in the form of carbon steel bars or wires, and the surfaces may be deformed for a better bond with the concrete.
Use this simple tool for quick estimates. Our rebar calculator can help you determine how much rebar weight is in your job and calculate the total rebar for a particular slab. Once you know how much rebar you need, place an order or request an estimate.
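The weight estimate a rebar calculator performs follows directly from bar geometry: a US bar number is its diameter in eighths of an inch, and steel weighs about 490 lb/ft³. A minimal sketch of such an estimate (the function names and the simple two-way grid layout are illustrative assumptions, not this app's actual method):

```python
import math

STEEL_DENSITY_LB_FT3 = 490.0  # approximate unit weight of steel

def bar_weight_per_foot(bar_size: int) -> float:
    """Nominal weight (lb/ft) of a US rebar; bar_size is the diameter in eighths of an inch."""
    d_in = bar_size / 8.0
    area_ft2 = math.pi * (d_in / 2) ** 2 / 144.0   # cross-sectional area in ft^2
    return area_ft2 * STEEL_DENSITY_LB_FT3

def slab_rebar_weight(length_ft: float, width_ft: float,
                      spacing_in: float, bar_size: int) -> float:
    """Total rebar weight for a slab with a simple two-way grid, bars each way."""
    bars_long = math.floor(width_ft * 12 / spacing_in) + 1   # bars running the length
    bars_wide = math.floor(length_ft * 12 / spacing_in) + 1  # bars running the width
    total_ft = bars_long * length_ft + bars_wide * width_ft
    return total_ft * bar_weight_per_foot(bar_size)
```

For example, a 20 ft x 10 ft slab with #4 bars at 12 in each way needs 430 linear feet, and a #4 bar comes out at the familiar 0.668 lb/ft.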
An I-beam, also known as an H-beam, W-beam (for "wide flange"), Universal Beam, Rolled Steel Joist, or double-T, is a beam with an I- or H-shaped cross-section. The horizontal elements of the "I" are flanges, while the vertical element is termed the "web".
The web resists shear forces, while the flanges resist most of the bending moment experienced by the beam. Beam theory shows that the I-shaped section is a very efficient form for carrying both
bending and shear loads in the plane of the web. On the other hand, the cross-section has a reduced capacity in the transverse direction, and is also inefficient in carrying torsion, for which hollow
structural sections are often preferred instead.
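That division of labor can be checked numerically: for a doubly symmetric I-section, the strong-axis second moment of area is that of the full bounding rectangle minus the two notches beside the web, and the flanges typically supply the large majority of it. A rough sketch (dimensions and function names are illustrative, not tied to any particular rolled section):

```python
def i_beam_inertia(b: float, H: float, tw: float, tf: float) -> float:
    """Strong-axis second moment of area of a doubly symmetric I-section.
    b: flange width, H: overall depth, tw: web thickness, tf: flange thickness."""
    h = H - 2 * tf                               # clear depth of web between flanges
    return (b * H**3 - (b - tw) * h**3) / 12.0

def flange_share(b: float, H: float, tw: float, tf: float) -> float:
    """Fraction of the bending stiffness contributed by the flanges."""
    h = H - 2 * tf
    web_only = tw * h**3 / 12.0                  # web treated as a lone rectangle
    return 1.0 - web_only / i_beam_inertia(b, H, tw, tf)
```

For a 400 mm deep section with 200 x 15 mm flanges and a 10 mm web, the flanges account for roughly 84% of the bending stiffness, which is why beam theory calls the I-shape an efficient form in bending.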
Now anyone can make quick, easy and precise calculation in the field and at the office. With Successful Roads road formula calculator, you can eliminate the need to learn and memorize dozens of
formulas needed for road construction and repair work. Simply enter your data into the program and instantly you are given the results without error. Eliminate human error and do the job right the
first time!
ACSM 18th Health & Fitness Summit & Exposition - Conference companion app for Android includes general schedule, session schedule, exhibit hall map and exhibitor listing, directory, and more.
Updated March 31, 2014: Android 2.X series users should use the HTML5 version at http://acsm.resultsathand.com.
Bridge Automation's Vert Curve is a simple App that calculates elevations along a symmetric vertical curve. The curve is parabolic, with equal distances between the vertical point of intersection (VPI) and the beginning and end of the vertical curve. Elevations or Points of Interest (POIs) can fall on the curve or on the tangent sections on either side of the curve. One or more POIs can be input. The App allows both SI and Imperial units. This App is useful for Civil Engineers, Roadway Engineers, Highway Engineers, Students and STEM Education.
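The calculation such an app performs is a single parabola between the begin of curve (BVC) and end of curve (EVC), with straight tangents outside that interval. A hedged sketch of the standard equal-tangent formula (not the app's own code; variable names and decimal grades are assumptions):

```python
def vertical_curve_elevation(x: float, elev_bvc: float,
                             g1: float, g2: float, L: float) -> float:
    """Elevation at offset x (measured from the BVC) on an equal-tangent
    parabolic vertical curve of length L. Grades g1, g2 are decimals
    (e.g. -0.02 for a -2% approach grade)."""
    if x < 0:                        # POI on the approach tangent
        return elev_bvc + g1 * x
    if x > L:                        # POI on the departure tangent, past the EVC
        return elev_bvc + g1 * L + (g2 - g1) * L / 2 + g2 * (x - L)
    # on the curve: tangent offset grows quadratically with x
    return elev_bvc + g1 * x + (g2 - g1) / (2 * L) * x ** 2
```

For a symmetric 200 ft sag curve from -2% to +2% starting at elevation 100, the low point falls at mid-curve (x = 100) at elevation 99.0, and the EVC comes back to 100.0.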
More from developer
Ohms Law Calculator provides the best way to calculate Voltage, Current, Resistance and Power. Accurate Calculations and Conversions with every Unit and Value Change. Formulas and Definitions are
provided for all calculators.
*** Available in English, Français, Español, Italiano, Deutsch & Português ***
Ohms Law Calculator includes the following 4 modules:
• Voltage Calculator
• Current Calculator
• Resistance Calculator
• Power Calculator
Voltage Calculator:
Calculates Voltage with respect to
• Current
• Power / Resistance
Current Calculator:
Calculates Current with respect to
• Voltage
• Power / Resistance
Resistance Calculator:
Calculates Resistance with respect to
• Voltage
• Current / Power
Power Calculator:
Calculates Power with respect to
• Voltage
• Resistance / Current
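Each of the four modules above is one rearrangement of the same two relations, V = I·R and P = V·I. A minimal sketch of the Voltage Calculator's two options (function names are illustrative, not the app's API):

```python
import math

def voltage_from_current(i: float, r: float) -> float:
    """V = I * R (Ohm's law), from current and resistance."""
    return i * r

def voltage_from_power_resistance(p: float, r: float) -> float:
    """V = sqrt(P * R), rearranged from P = V**2 / R."""
    return math.sqrt(p * r)
```

Both paths agree: 2 A through 5 Ω gives 10 V, and the same circuit dissipates 20 W, so the power/resistance option returns 10 V as well.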
Key Features:
• Professionally and Newly designed user-interface that speeds up Data Entry, Easy Viewing and Calculation Speed.
• Multiple options for Calculating each value.
• Accurate Calculation of the Output with respect to changes in the Input/Options/Units.
• Values of higher order can also be calculated.
• Formulas and Definitions are provided for all calculators.
Thermodynamics Tables contains 5 Essential Thermal Engineering Tables and Laws of Thermodynamics. A Useful Thermodynamics and Thermal Engineering Utility.
*** Available in English, Français, Español, Italiano, Deutsch & Português ***
Thermodynamics Tables:
• Emissivity Coefficient Table
• Laws of Thermodynamics
• Radiation Constant Table
• Surface Absorptivities Table
• Thermal Conductivity Table (Common Liquids)
• Thermal Conductivity Table (Heat Exchanger Materials)
Key Features:
• Professionally and Newly designed user-interface.
• Accurate and Useful Information.
• Pleasant Presentation of Table.
A Useful Thermal Engineering Utility
Electrical Engineering Pack consists of 39 Electrical Calculators and 16 Electrical Converters. A complete guide for Electrical Engineers, Technicians and Students.
*** Available in English, Français, Español, Italiano, Deutsch & Português ***
Electrical Calculator contains 39 Calculators that can quickly and easily calculate different electrical parameters. Automatic Calculations and Conversions with every Unit and Value Change.
Electrical Calculator:
• Ohms Law Calculator
• Voltage Calculator
• Current Calculator
• Resistance Calculator
• Power Calculator
• Single Phase Power Calculator
• Three Phase Power Calculator
• Single Phase Current Calculator
• Three Phase Current Calculator
• DC HorsePower Calculator
• Single Phase HorsePower Calculator
• Three Phase HorsePower Calculator
• DC Current (HP) Calculator
• Single Phase Current (HP) Calculator
• Three Phase Current (HP) Calculator
• Efficiency (DC) Calculator
• Efficiency (Single Phase) Calculator
• Efficiency (Three Phase) Calculator
• Power Factor (Single Phase) Calculator
• Power Factor (Three Phase) Calculator
• Light Calculation
• Luminous Intensity Calculator
• Luminous Flux Calculator
• Solid Angle Calculator
• Energy Cost Calculator
• Energy Storage Calculator
• Resistance
• Inductance
• Capacitance
• Star to Delta Conversion
• Delta to Star Conversion
• Inductive Reactance Calculator
• Capacitive Reactance Calculator
• Resonant Frequency Calculator
• Inductor Sizing Equation
• Capacitor Sizing Equation
• Resistance (Series) Calculator
• Resistance (Parallel) Calculator
• Inductance (Series) Calculator
• Inductance (Parallel) Calculator
• Capacitance (Series) Calculator
• Capacitance (Parallel) Calculator
Electrical Converter is a conversion calculator that can quickly and easily translate different electrical units of measure. It consists of 16 Categories with 173 Units and 2162 Conversions.
Electrical Converter:
• Field Strength
• Electric Potential
• Resistance
• Resistivity
• Conductance
• Conductivity
• Capacitance
• Inductance
• Charge
• Linear Charge Density
• Surface Charge Density
• Volume Charge Density
• Current
• Linear Current Density
• Surface Current Density
• Power
Key Features:
• Professionally and Newly designed user-interface that speeds up Data Entry, Easy Viewing and Calculation Speed.
• Multiple options for calculating each value.
• Automatic calculation of the output with respect to changes in the Input, Options and Units.
• Multiple Units are provided for each parameter for conversion purposes.
• Formulas are provided for each calculator.
• Extremely Accurate Calculators.
A Complete Electrical Guide
Fluid Mechanics Converter is a conversion calculator that can quickly and easily translate different units of measure related to Fluid Mechanics. A Highly Useful Engineering Utility.
*** Available in English, Français, Español, Italiano, Deutsch & Português ***
Fluid Mechanics Converter:
• Fluid Converter
• Flow Rate (Mass) Converter
• Flow Rate (Volume) Converter
• Viscosity (Dynamic) Converter
• Viscosity (Oil & Water) Converter
• Viscosity (Kinematic) Converter
Key Features:
• Professionally and newly designed user interface.
• Automatic calculation of the output with respect to changes in input.
• Extremely accurate converters.
Highly Useful Engineering Utility
Money Counter is a simple application to count your currency notes and coins of various denominations. A handy tool to count your precious money. Useful at Billing Centers, Shopping Malls, Banking Counters etc. An Essential Everyday Utility.
*** Available in English, Français, Español, Italiano, Deutsch & Português ***
Key Features:
• Professional User Interface.
• Automatic and Accurate Calculation.
• Pleasant Presentation.
An Essential Everyday Utility
My Electrical Calculator contains 39 Calculators that can quickly and easily calculate different electrical parameters. Automatic Calculations and Conversions with every Unit and Value Change. A must-have utility.
*** Available in English, Français, Español, Italiano, Deutsch & Português ***
My Electrical Calculator contains the following 39 Calculators:
• Ohms Law Calculator
• Voltage Calculator
• Current Calculator
• Resistance Calculator
• Power Calculator
• Single Phase Power Calculator
• Three Phase Power Calculator
• Single Phase Current Calculator
• Three Phase Current Calculator
• DC HorsePower Calculator
• Single Phase HorsePower Calculator
• Three Phase HorsePower Calculator
• DC Current (HP) Calculator
• Single Phase Current (HP) Calculator
• Three Phase Current (HP) Calculator
• Efficiency (DC) Calculator
• Efficiency (Single Phase) Calculator
• Efficiency (Three Phase) Calculator
• Power Factor (Single Phase) Calculator
• Power Factor (Three Phase) Calculator
• Light Calculation
• Luminous Intensity Calculator
• Luminous Flux Calculator
• Solid Angle Calculator
• Energy Cost Calculator
• Energy Storage Calculator
• Resistance
• Inductance
• Capacitance
• Star to Delta Conversion
• Delta to Star Conversion
• Inductive Reactance Calculator
• Capacitive Reactance Calculator
• Resonant Frequency Calculator
• Inductor Sizing Equation
• Capacitor Sizing Equation
• Resistance (Series) Calculator
• Resistance (Parallel) Calculator
• Inductance (Series) Calculator
• Inductance (Parallel) Calculator
• Capacitance (Series) Calculator
• Capacitance (Parallel) Calculator
Key Features:
• Professionally and Newly designed user-interface that speeds up Data Entry, Easy Viewing and Calculation Speed.
• Multiple options for Calculating each value.
• Automatic Calculation of the Output with respect to changes in the Input, Options and Units.
• Multiple Units are provided for each parameter for conversion purposes.
• Formulas are provided for each calculator.
• Extremely Accurate Calculators.
Most Comprehensive Electrical Calculator
Physics Constant Table provides the important physics constants in a table format. Professional User Interface with Pleasant Presentation. A Highly Useful Educational & Engineering Utility.
Key Features:
• Professional user-interface.
• Pleasant Presentation.
• Accurate Information.
Highly Useful Educational and Engineering Utility
Acoustics Engineering Pack contains 94 Calculators and References that can quickly and easily calculate and help you refer to different Acoustical Engineering parameters. Automatic & Accurate Calculations and Conversions with every Unit and Value Change. A Complete Acoustical Engineering Dictionary.
*** Available in English, Français, Español, Italiano, Deutsch & Português ***
Acoustics Engineering Pack contains the following 94 Calculators and References:
• Absorption Coefficients of Building Materials and Finishes
• Acceleration Converter
• Acoustic Flow Meter
• Acoustic Impedance
• Acoustic Intensity and Sound Intensity Level Relationship
• Acoustic Power and Sound Power Level Relationship
• Area Converter
• Blade Pass Frequency
• Bragg's Law
• Capacitive Reactance
• Coincidence
• Cutoff Frequency
• Damping Factor
• Data Transfer Converter
• Day Night Sound Level
• Density Converter
• Diffraction Grating Equation
• Doppler Effect - Approaching Receiver
• Doppler Effect - Approaching Source
• Doppler Effect - Receding Receiver
• Doppler Effect - Receding Source
• Doppler Effect - Wavelength Behind
• Doppler Effect - Wavelength Front
• Electrical Harmonics
• Energy Converter
• Force Converter
• Frequency Converter
• Frequency Limits (1/3-Octave bands)
• Frequency Limits (Octave bands)
• Fresnel Number
• Inductive Reactance
• Inverse Square Law
• Length Converter
• Level Damping
• Mach Number
• Magnetic Flux Converter
• Magnetic Flux Level
• Mass Converter
• Mean Absorption Coefficient
• Metric Weight Converter
• Noise Criterion (Location)
• Noise Criterion (Sound Pressure Levels)
• Noise Exposure Level
• Noise Exposure Level - Duration
• Noise Generation - Ducts
• Noise Pollution Level
• Noise Rating Curves
• Ohm's Law of Acoustics (Acoustic Impedance)
• Ohm's Law of Acoustics (Particle Velocity)
• Ohm's Law of Acoustics (Sound Intensity)
• Ohm's Law of Acoustics (Sound Pressure)
• Outdoor Ambient Sound Levels
• Particle Velocity and Particle Velocity Level Relationship
• Power Converter
• Preferred Noise Criterion
• Prefixes Converter
• Pressure Converter
• Quality Factor
• Radar Range
• Radiation Converter
• Resonant Frequency
• Reverberation Time
• RMS Noise
• Room Constant
• Room Criteria
• Sound Absorption Coefficient
• Sound Absorption Coefficient (Common Materials)
• Sound Attenuation
• Sound Attenuation Level - Main Duct to Branches
• Sound Converter
• Sound Energy and Sound Energy Level Relationship
• Sound Energy Density and Sound Density Level Relationship
• Sound Intensity Level
• Sound Power Emitted
• Sound Power Level
• Sound Pressure - Recommended Maximum Levels in Rooms
• Sound Pressure (Receiver)
• Sound Pressure and Sound Pressure Level Relationship
• Sound Pressure Level
• Sound Pressure Level (Linear Sound Source)
• Sound Speed
• Sound Transmission Loss - Building Elements
• Sound Transmission through Duct Walls
• Sound Wavelength
• Speed of Sound (Common Liquids)
• Speed of Sound (Common Solids)
• Speed of Sound (Gases)
• Temperature Converter
• Time Converter
• Velocity Converter
• Voltage to Voltage Level Conversion
• Volume Converter
• Wave Frequency
• Wave Velocity
Key Features:
• Complete coverage of calculators and references in Acoustic Engineering field.
• Professionally and Newly designed user interface that speeds up Data Entry, Easy Viewing and Calculation Speed.
• Automatic Calculation of the output with respect to changes in the Input, Options and Units.
• Formulas are provided for each calculator.
• Extremely Accurate Calculators and Pleasant Presentation of References.
Complete Acoustical Engineering Guide
Thermodynamics Calculator contains 44 Calculators and References that can quickly and easily calculate different Thermodynamics and Thermal Engineering parameters. Automatic & Accurate Calculations and Conversions with every value change. A Complete Thermal Engineering Dictionary.
*** Available in English, Français, Español, Italiano, Deutsch & Português ***
Thermodynamics Calculator contains the following 44 Calculators & References:
• Carnot Efficiency (Carnot Cycle)
• Eckert Number
• Emissivity Coefficient
• Energy Efficiency
• Energy/Work Converter
• Enthalpy
• Entropy
• Fick's Law of Diffusion
• Fouling Factor
• Fourier Number
• Heat Flow
• Heat Storage
• Heat Transfer Rate
• Kinetic Energy
• Laws of Thermodynamics
• Lewis Number
• Liquid Phase Diffusion Coefficient
• Nusselt Number
• Peclet Number
• Potential Energy
• Prandtl Number
• Radiation Constant
• Radiation Converter
• Radiation Heat Transfer
• Rankine Efficiency
• Sherwood Number
• Specific Heat Capacity Calculator
• Specific Heat Capacity Converter
• Stefan-Boltzmann Law
• Surface Absorptivities
• Temperature Converter
• Thermal Area Expansion
• Thermal Conductivity (Common Liquids)
• Thermal Conductivity (Heat Exchanger Materials)
• Thermal Conductivity Calculator
• Thermal Conductivity Converter
• Thermal Diffusivity
• Thermal Expansion Converter
• Thermal Linear & Volumetric Expansion Relationship
• Thermal Linear Expansion
• Thermal Resistivity
• Thermal Transmittance
• Thermal Volumetric Expansion
• Work
Key Features:
• Complete coverage of calculators and references in Thermodynamics and Thermal Engineering fields.
• Professionally and Newly designed user-interface that speeds up Data Entry, Easy Viewing and Calculation Speed.
• Automatic Calculation of the Output with respect to changes in Input.
• Formulas are provided for all the calculators.
• Extremely Accurate Calculators and Pleasant Presentation of References.
Most Comprehensive Thermal Engineering Calculator and Reference
Structural Engineering Calculator contains 90 calculators and converters that can quickly and easily calculate and convert different Structural and Civil Engineering parameters. Automatic & accurate calculations and conversions with every unit and value change.
*** Available in Metric and Imperial Units ***
*** Available in English, Français, Español, Italiano, Deutsch & Português ***
Structural Engineering Calculator contains the following 58 Calculators:
• Shear Capacity of Flexural Members
• Critical Ratio
• Effective Length Factor
• Slenderness Ratio
• Allowable Compressive Stress of Building Columns (Slenderness Ratio < Critical Ratio)
• Allowable Compressive Stress of Building Columns (Slenderness Ratio > Critical Ratio)
• Safety Factor
• Maximum Load - Axially Loaded Members
• Allowable Bending Stress (Compact Members)
• Allowable Bending Stress (Noncompact Members)
• Moment Gradient Factor
• Allowable Stress - Compression Flange
• Plastic Moment
• Maximum Unbraced Length for Plastic Design (I-shaped Beams)
• Maximum Unbraced Length for Plastic Design (Solid Rectangular Bars and Symmetrical Box Beams)
• Laterally Unbraced Length - Full Plastic Bending Capacity (I Shapes and Channels)
• Laterally Unbraced Length - Full Plastic Bending Capacity (Solid Rectangular Bars and Box Beams)
• Laterally Unbraced Length - Full Plastic Bending Capacity (Solid Rectangular Bars bent about major axis)
• Limiting Buckling Moment
• Nominal Moment (Compact Beams)
• Critical Elastic Moment - Compact Beams
• Critical Elastic Moment - Solid Rectangular Bars and Symmetrical Box
• Allowable Shear Stress
• Allowable Shear Stress with Tension Field Action
• Area required by the Bearing Plate (Plate Covering the Full Area of Concrete Support)
• Area required by the Bearing Plate (Plate Covering less than the Full Area of Concrete Support)
• Minimum Plate Thickness
• Area required for a Base Plate under a Column supported by a Concrete
• Plate Length
• Thickness of Plate (Cantilever Bending)
• Flange Thickness (H-shaped Column)
• Web Thickness (H-shaped Column)
• Actual Bearing Pressure under the Plate
• Allowable Bearing Stress (Rollers/Rockers)
• Web Depth/Thickness Ratio (Unstiffened Web)
• Web Depth/Thickness Ratio (Transverse Stiffeners)
• Deflection at the Top (Wall with Solid Rectangular Cross Section)
• Deflection at the Top (Shear Wall with a Concentrated Load at the top)
• Deflection at the Top (Fixed Wall against rotation at the top)
• Combined Axial Compression (Ratio of Computed Axial Stress to Allowable Axial Stress > 0.15)
• Combined Axial Compression (Ratio of Computed Axial Stress to Allowable Axial Stress <= 0.15)
• Axial Stress for a Concentrated Load (Applied at a distance larger than depth of the beam from the end of the beam)
• Axial Stress for a Concentrated Load (Applied close to the beam end)
• Concentrated Load of Reaction (Applied at a distance from the beam end of at least half the depth of beam)
• Concentrated Load of Reaction (Applied closer than half the depth of beam)
• Relative Slenderness of Web and Flange
• Total Column Load (Relative Slenderness of Web and Flange < 2.3)
• Total Column Load (Relative Slenderness of Web and Flange < 1.7)
• Combined Cross-sectional Area of a pair of Column-Web Stiffeners
• Column-Web Depth clear of Fillets
• Thickness of Column Flange
• Allowable Bearing Stress on Projected Area of Fasteners
• Maximum Unit Stress in Steel
• Maximum Stress in the Bottom Flange
• Number of Shear Connectors
• Total Horizontal Shear (Based on Area of Concrete Flange)
• Total Horizontal Shear (Based on Area of Steel Beam)
• Total Horizontal Shear (Based on Area of Longitudinal Reinforcement)
Structural Engineering Calculator also contains 32 Unit Converters related to Structural and Civil Engineering.
Most Comprehensive Structural Engineering Calculator
Fluid Mechanics Calculator contains 97 Calculators that can quickly and easily calculate different Fluid Mechanics, Civil, Structural, Pipe Flow and Engineering parameters. Automatic & Accurate Calculations and Conversions with value changes. A Complete Engineering Dictionary.
*** Available in English, Français, Español, Italiano, Deutsch & Português ***
Fluid Mechanics Calculator contains the following 97 Calculators:
• Absolute Pressure
• Brake Horsepower
• Bernoulli Theorem for Head Loss
• Bulk Modulus
• Buoyant Force
• Chezy Coefficient
• Chezy Velocity
• Compressibility
• External Hydrostatic Pressure
• Flow Rate
• Fluid Density with Pressure
• Fluid Pressure
• Hydraulic Radius
• Kinematic Viscosity
• Liquid Phase Diffusion Coefficient
• Pump Efficiency
• Manning Flow Velocity
• Mean Depth
• Minor Losses
• Net Positive Suction Head and Cavitation
• Specific Gas Constant
• Specific Gravity with Water Weight
• Specific Gravity with Water Weight Loss
• Specific Volume
• Thrust Block
• Water Horsepower
• Acoustic Flow Meter
• Bazin's Weir Flow
• Broad Crested Weir
• Curb Capture Flow Rate
• Curb Gutter Flow Rate
• French Drain Seepage Rate
• Gutter Capture Efficiency
• Gutter Carryover
• Gutter Interception Capacity
• Rectangular Weir
• Rectangular Weir Discharge - Francis Equation
• Orifice Flow Rate
• Parshall Flume Flow Rate
• Permeameter Porous Medium Flow Rate
• Unconfined Aquifer Well Flow Rate
• V notch Weir
• Venturi Meter for Flow Rate
• Hazen Williams - Fluid Flow Rate
• Hazen Williams - Mean Fluid Velocity
• Aluminum Pipe - Pressure Rating
• Buried Corrugated Metal Pipe Thrust - Cross Sectional Area
• Buried Corrugated Metal Pipe Thrust - Pipe Wall
• Buried Corrugated Metal Pipe Thrust - Pressure
• Ductile Iron Pipe - Pressure
• Ductile Iron Pipe - Wall Thickness
• Pipe Vacuum Pressure Load
• Pipe Water Buoyancy Factor
• Plastic Pipe - AWWA C900 Pressure Class
• Plastic Pipe - Inside Diameter Controlled
• Plastic Pipe - Outside Diameter Controlled
• Plastic Pipe - Outside Diameter Controlled Short Term Strength
• Plastic Pipe - Short Term Pressure Rating
• Slotted Pipe Gutter Interception
• Smooth Wall Steel Pipe - Pressure Rating
• Soil Load Per Linear Length of Pipe
• Restrained Anchored Pipe Stress
• Pipe Soil Weight Pressure
• Unrestrained Pipe Length Change
• Poiseuille's Law
• Stokes Law
• Cauchy Number
• Cavitation Number
• Eckert Number
• Euler Number
• Fourier Number
• Froude Number
• Knudsen Number
• Lewis Number
• Mach Number
• Prandtl Number
• Reynolds Number
• Schmidt Number
• Sherwood Number
• Nusselt Number
• Peclet Number
• Strouhal Number
• Threshold Odor Number
• Weber Number
• Darcy Weisbach - Head Loss
• Darcy's Law - Flow Rate
• Darcy's Law - Flux
• Darcy's Law - Hydraulic Gradient
• Darcy's Law - Porosity
• Darcy's Law - Saturated Soil
• Darcy's Law - Seepage Velocity
• Darcy's Law - Seepage Velocity and Porosity
• Darcy's Law - Void Ratio
• Water Hammer - Maximum Surge Pressure for a Fluid
• Water Hammer - Maximum Surge Pressure for Water
• Water Hammer - Maximum Surge Pressure Head
• Water Hammer - Pressure Increase
Key Features:
• Complete coverage of calculators in Civil, Structural, Pipe Flow, Fluid Mechanics and Engineering fields.
• Professionally and Newly designed user-interface that speeds up Data Entry, Easy Viewing and Calculation Speed.
• Automatic Calculation of the Output with respect to changes in the input.
• Formulas and Definitions are provided for each calculator.
• Extremely Accurate Calculators.
Most Comprehensive Fluid Mechanics, Civil, Structural & Engineering Calculator
Pocket PC Magazine, the foremost magazine for Pocket PC and Smartphones, has rated Universal Converter as "The Behemoth of Converters" and "An Application that made traveling easier".
*** Available in English, Français, Español, Italiano, Deutsch & Português ***
Universal Converter is a conversion calculator that can quickly and easily translate different units of measure. It consists of 62 Categories with 1320 Units and 54458 Conversions.
• Acceleration
• Angle
• Area
• Astronomy
• Brix & Baume Degrees
• Charge
• Clothing (Men)
• Clothing (Women)
• Clothing (Ring Size)
• Color
• Cooking
• Data Transfer
• Density
• Electric Current
• Electric Potential
• Electrical Capacitance
• Electrical Conductance
• Electrical Conductivity
• Electrical Field Strength
• Electrical Inductance
• Electrical Resistance
• Electrical Resistivity
• Energy/Work
• Flow Rate (Mass)
• Flow Rate (Volume)
• Fluid
• Force
• Frequency
• Fuel Consumption
• Hardness
• Length
• Light (Illuminance)
• Light (Luminance)
• Magnetic Flux
• Mass
• Memory (Computer)
• Metric Weight
• Metrology
• Moment of Force
• Moment of Inertia
• Money Counter
• Number (Roman Number)
• Percentage
• Permeability
• Power
• Prefixes
• Pressure
• Radiation
• Sound
• Specific Heat Capacity
• Specific Volume
• Temperature
• Thermal Conductivity
• Thermal Expansion
• Time
• Torque
• Typography
• Velocity
• Viscosity (Dynamic)
• Viscosity (Oil & Water)
• Viscosity (Kinematic)
• Volume
Key Features:
• Automatic calculation of values based on input.
• Automatic Calculation of values based on units.
• Values of higher order can also be converted.
• Professionally and Newly designed user-interface that speeds up data entry and conversion speed.
• Easy and Very Simple to Use.
Complete Educational and Engineering Dictionary
My Physics Calculator contains 134 Calculators that can quickly and easily calculate different physics and engineering parameters. Automatic & Accurate Calculations and Conversions. A Complete Physics and Engineering Dictionary.
*** Available in English, Français, Español, Italiano, Deutsch & Português ***
My Physics Calculator contains the following 134 Calculators:
• Force
• Kinetic Friction
• Static Friction
• Centripetal Force
• Centripetal Acceleration
• Gravitational Acceleration
• Angular Acceleration
• Work
• Total Work
• Power with Work
• Power with Displacement
• Power with Velocity
• Displacement or Distance
• Differential Pressure
• Density
• Water Density
• Kinetic Energy
• Potential Energy
• Elastic Potential Energy
• Einstein Mass Energy
• Gravitational Potential
• Velocity
• Circular Velocity
• Average Velocity
• Escape Velocity
• Drift Velocity
• Newton's Law of Gravity
• Newton's Second Law of Motion
• Archimedes' Principle
• Kepler's Third Law
• Hooke's Law
• Pascal's Law
• Poiseuille's Law
• Darcy's Law
• Stokes Law
• Souders-Brown Equation
• Podmore Factor
• Coulomb's Law
• Mirror Equation
• Cavitation Number
• Euler Number
• Fourier Number
• Knudsen Number
• Mach Number
• Nusselt Number
• Reynolds Number
• Weber Number
• Froude Number
• Prandtl Number
• Schmidt Number
• Brinell Hardness Number
• Doppler Effect - Wavelength Front
• Doppler Effect - Wavelength Behind
• Doppler Effect - Approaching Source
• Doppler Effect - Receding Source
• Doppler Effect - Approaching Receiver
• Doppler Effect - Receding Receiver
• Projectile Motion for Vertical Velocity
• Projectile Motion for Vertical Displacement
• Projectile Motion for Horizontal Displacement
• Projectile Motion for Range
• Impulse with Velocity
• Impulse with Time
• Momentum with Velocity
• Momentum with Time
• Moment
• Torque
• Moment of Inertia
• Transverse Strength
• Standard Surface Factor
• Rectangular Tank Capacity
• Cylinder Tank Capacity
• Apparent Porosity
• True Porosity
• Kinematic Viscosity
• Mass Flow Rate
• Seismic Geophone
• Weight in Planets
• Wenner Spacing - Soil Resistivity
• Luminosity of Stars
• Temperature
• Thermal Conductivity
• Thermal Diffusivity
• Thermal Linear Expansion
• Thermal Volumetric Expansion
• Thermal Linear and Volumetric Expansion Relationship
• Heat Flow
• Heat Transfer Rate
• Specific Heat Capacity
• Sound Pressure Level
• Sound Intensity Level
• Sound Power Emitted
• Sound Wavelength
• Sound Speed
• RMS Noise
• Noise Pollution Level
• Simple Pendulum
• Physical Pendulum
• Leaf Springs
• Radar Range
• Coincidence
• Helical Spring Rate
• Helical Spring Axial Deflection
• Helical Spring Index
• Amount of Substance
• Metric Weight
• Millspindle
• GSM of Paper
• D Exponent
• Bend Allowance
• Physics Constant Table
• Inductive Reactance
• Capacitive Reactance
• Resonant Frequency
• Inductor Sizing Equation
• Capacitor Sizing Equation
• Resistance
• Battery Life
• Battery Charge Time
• kVA
• Potentiometer
• Voltage Divider
• Electrodialysis
• Electrical Harmonics
• Horsepower
• Voltage (Ohm's Law)
• Power (Ohm's Law)
• Resistance (Ohm's Law)
• Current (Ohm's Law)
• Shear Modulus
• Bulk Modulus
• Young's Modulus
• Stress
• Strain
Key Features:
• Professionally and Newly designed user-interface that speeds up Data Entry, Easy Viewing and Calculation Speed.
• Automatic Calculation of the Output with respect to changes in the Input.
• Formulas and Definitions are provided for each calculator.
• Extremely Accurate Calculators.
Most Comprehensive Physics & Engineering Calculator
Ohms Acoustic Law Calculator provides the best way to calculate Acoustic Impedance, Particle Velocity, Sound Intensity and Sound Pressure. Automatic and Accurate Calculation. Formulas and Definitions
are provided with all calculators.
*** Available in English, Français, Español, Italiano, Deutsch & Português ***
Ohms Acoustic Law Calculator includes the following 4 modules:
• Acoustic Impedance Calculator
• Particle Velocity Calculator
• Sound Intensity Calculator
• Sound Pressure Calculator
Acoustic Impedance Calculator:
Calculates Acoustic Impedance with respect to
• Sound Pressure / Sound Intensity
• Sound Pressure / Particle Velocity
• Sound Intensity / Particle Velocity
Particle Velocity Calculator:
Calculates Particle Velocity with respect to
• Sound Pressure / Acoustic Impedance
• Sound Intensity / Sound Pressure
• Sound Intensity / Acoustic Impedance
Sound Intensity Calculator:
Calculates Sound Intensity with respect to
• Sound Pressure / Acoustic Impedance
• Particle Velocity / Sound Pressure
• Particle Velocity / Acoustic Impedance
Sound Pressure Calculator:
Calculates Sound Pressure with respect to
• Particle Velocity / Acoustic Impedance
• Sound Intensity / Particle Velocity
• Sound Intensity / Acoustic Impedance
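All four modules above rearrange the acoustic analogue of Ohm's law, p = Z·u, together with the intensity relation I = p·u. A minimal sketch of the Sound Pressure Calculator's options (function names are illustrative assumptions, not the app's API):

```python
import math

def pressure_from_velocity_impedance(u: float, z: float) -> float:
    """p = Z * u: sound pressure from particle velocity and acoustic impedance."""
    return z * u

def pressure_from_intensity_velocity(i: float, u: float) -> float:
    """p = I / u, rearranged from the intensity relation I = p * u."""
    return i / u

def pressure_from_intensity_impedance(i: float, z: float) -> float:
    """p = sqrt(I * Z), eliminating u from the two relations above."""
    return math.sqrt(i * z)
```

Taking air's characteristic impedance of roughly 415 rayl and a particle velocity of 0.01 m/s, all three routes agree on a sound pressure of about 4.15 Pa.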
Key Features:
• Professionally and Newly designed user-interface that speeds up Data Entry, Easy Viewing and Calculation Speed.
• Multiple options for Calculating each value.
• Accurate Calculation of the Output with respect to changes in the Input/Options.
• Values of higher order can also be calculated.
• Formulas and Definitions are provided with all calculators.
574 Calculators & Converters related to Civil, Beams, Columns, Piling, Concrete, Survey, Soil & Earthwork, Structural Engineering, Bridges, Highway & Road, Hydraulics and Timber.
Civil Engineering Pack contains 574 Calculators and Converters that can quickly and easily calculate and convert different Civil Engineering parameters. Automatic & Accurate Calculations and Conversions with every Unit and Value Change. Available in Imperial and Metric Units. Most Comprehensive Civil Engineering and Construction Calculator.
Civil Engineering Pack contains the following 12 modules:
• Beam Calculator
• Column Calculator
• Piles and Piling Calculator
• Concrete Calculator
• Engineering Survey Calculator
• Soil and Earthwork Calculator
• Structural Engineering Calculator
• Bridge Calculator
• Highway and Road Calculator
• Hydraulics and Waterworks Calculator
• Timber Engineering Calculator
• Unit Converter
*** Available in Metric and Imperial Units ***
* Beam Calculator contains 34 Calculators that can quickly and easily calculate different Beam parameters.
* Column Calculator contains 35 Calculators that can quickly and easily calculate different Column parameters.
* Piles & Piling Calculator contains 22 Calculators that can quickly and easily calculate different Piles and Piling parameters.
* Concrete Calculator contains 56 Calculators that can quickly and easily calculate different Concrete parameters.
* Engineering Survey Calculator contains 33 Calculators that can quickly and easily calculate different Surveying parameters.
* Soil & Earthwork Calculator contains 60 Calculators that can quickly and easily calculate different Soil and Earthwork parameters.
* Structural Engineering Calculator contains 58 Calculators that can quickly and easily calculate different Structural Engineering parameters.
* Bridge Calculator contains 58 Calculators that can quickly and easily calculate different Bridge and Suspension Cable parameters.
* Highway & Road Calculator contains 37 Calculators that can quickly and easily calculate different Highway and Road parameters.
* Hydraulics & Waterworks Calculator contains 94 Calculators that can quickly and easily calculate different Hydraulics and Waterworks parameters.
* Timber Engineering Calculator contains 55 Calculators that can quickly and easily calculate different Timber Engineering parameters.
Unit Converter contains the following 32 Converters:
• Acceleration
• Angle
• Area
• Density
• Energy/Work
• Flow Rate (Mass)
• Flow Rate (Volume)
• Fluid
• Force
• Frequency
• Hardness
• Length
• Mass
• Metric Weight
• Metrology
• Moment of Force
• Moment of Inertia
• Prefixes
• Pressure
• Radiation
• Specific Heat Capacity
• Specific Volume
• Temperature
• Thermal Conductivity
• Thermal Expansion
• Time
• Torque
• Velocity
• Viscosity (Dynamic)
• Viscosity (Oil & Water)
• Viscosity (Kinematic)
• Volume
Key Features:
• All Calculators are available both in SI Units (Metric System) and USCS Units (Imperial System).
• Complete coverage of calculators and converters in Civil Engineering and Construction Parameters.
• Automatic Calculation & Conversion of the Output with respect to changes in the Input/Options/Units.
• Formulas are provided for each calculator.
• Values of Higher Order can also be calculated.
• Extremely Accurate Calculations and Conversions.
• Professionally and Newly designed user-interface that speeds up Data Entry, Easy Viewing and Calculation Speed.
Most Comprehensive Civil and Construction Calculator Pack
Engineering Survey Calculator contains 65 Calculators and Converters that can quickly and easily calculate and convert different Surveying and Civil Engineering parameters. Automatic & Accurate
Calculations and Conversions with every Unit and Value change. Available in Imperial and Metric Units. The most comprehensive Engineering Survey Calculator.
*** Available in Metric and Imperial Units ***
*** Available in English, Français, Español, Italiano, Deutsch & Português ***
Engineering Survey Calculator contains the following 33 Calculators:
• Standard Deviation (Series of Observations)
• Probable Error (Single Observation)
• Probable Error (Combined Effects of Accidental Errors)
• Error of the Mean (Based on Combined Effects of Accidental Errors)
• Error of the Mean (Based on Specified Error of a Single Measurement)
• Specified Error of a Single Measurement
• Probable Error of the Mean
• Temperature Correction
• Measurement Correction on a Slope
• Tension Correction to Measured Length
• Sag Correction to Measured Length
• Horizontal Distance (Slope Measurements)
• Slope Correction (Slopes of 10% or less)
• Slope Correction (Slopes greater than 10%)
• Correction due to Incorrect Tape Length
• Correction due to Nonstandard Tension
• Sag Correction (Between Points of Support)
• Departure from a Level Surface
• Displacement (Horizontal Sights)
• Combined Effect of Refraction and Curvature of Earth
• Quantities of Material to be Excavated/Filled
• Relative Accuracy required between directly connected Bench Marks (First Order - Class I)
• Relative Accuracy required between directly connected Bench Marks (First Order - Class II)
• Relative Accuracy required between directly connected Bench Marks (Second Order - Class I)
• Relative Accuracy required between directly connected Bench Marks (Second Order - Class II)
• Relative Accuracy required between directly connected Bench Marks (Third Order)
• Horizontal Distance between the Instrument and the Rod (Stadia Surveying)
• Vertical Distance between the Instrument and the Rod (Stadia Surveying)
• Stadia Distance (From Instrument Spindle to Rod - Horizontal Sights)
• Stadia Constant
• Photo Scale
• Photo Scale (Using Focal Length)
• Map Scale
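Several of the taping corrections listed above (Temperature Correction, the two Slope Corrections) have standard textbook forms; the formulas below are the usual surveying ones and are an assumption about what the app computes:

```python
def temperature_correction(length, temp, standard_temp, k=11.6e-6):
    """C_t = k * L * (T - T_s); k is the thermal-expansion coefficient,
    ~11.6e-6 per degree C for a steel tape (assumed value)."""
    return k * length * (temp - standard_temp)

def slope_correction_gentle(h, s):
    """C_h ~ h**2 / (2 * s): the usual approximation for slopes of 10%
    or less, with h the elevation difference and s the slope distance."""
    return h * h / (2 * s)
```

For example, a 100 m steel tape used 10 °C above standard reads about 1.16 cm long, and a 2 m rise over a 100 m slope distance shortens the horizontal distance by about 2 cm.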
Engineering Survey Calculator contains the following 32 Converters:
• Acceleration
• Angle
• Area
• Density
• Energy/Work
• Flow Rate (Mass)
• Flow Rate (Volume)
• Fluid
• Force
• Frequency
• Hardness
• Length
• Mass
• Metric Weight
• Metrology
• Moment of Force
• Moment of Inertia
• Prefixes
• Pressure
• Radiation
• Specific Heat Capacity
• Specific Volume
• Temperature
• Thermal Conductivity
• Thermal Expansion
• Time
• Torque
• Velocity
• Viscosity (Dynamic)
• Viscosity (Oil & Water)
• Viscosity (Kinematic)
• Volume
Key Features:
• Complete coverage of calculators and converters in Surveying and Civil Engineering Parameters.
• Automatic Calculation & Conversion of the Output with respect to changes in the Input, Options and Units.
• Formulas are provided for each calculator.
• Extremely Accurate Calculations and Conversions.
• Professionally and Newly designed user-interface that speeds up Data Entry, Easy Viewing and Calculation Speed.
Most Comprehensive Engineering Survey Calculator | {"url":"https://play.google.com/store/apps/details?id=com.sis.HighwayAndRoadCalculator&referrer=utm_source%3Dappbrain","timestamp":"2014-04-20T07:39:44Z","content_type":null,"content_length":"195825","record_id":"<urn:uuid:a53b7e30-cc8a-49f3-9416-f130ae680162>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00302-ip-10-147-4-33.ec2.internal.warc.gz"} |
Algebra (MAT-119)
Algebra (MAT-119)
A course for students who have mastered basic algebra and need a deeper understanding of algebra before progressing to other credit mathematics courses. Topics include solving linear and quadratic
equations and inequalities, absolute value equations and inequalities, graphs of linear and quadratic equations, equations of lines, systems of equations, introduction to functions, quadratic
functions, polynomials functions, rational functions, radical functions, rational exponents and applications. Prerequisites: A grade of "C" or higher in MAT 016 or MAT 022, and ENG089, or
satisfactory performance on the College Basic Skills Placement Test for Algebra. 4 lecture hours per week. 4 credit hours. | {"url":"http://catalog.ucc.edu/Lists/Courses/CustomDispForm.aspx?ID=12925","timestamp":"2014-04-17T04:17:09Z","content_type":null,"content_length":"53488","record_id":"<urn:uuid:4fdac668-03d1-4cfa-8564-5b0c8a330797>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00232-ip-10-147-4-33.ec2.internal.warc.gz"} |
Physics 351, Thermal Physics (Statistics and Thermodynamics), Spring 2014
Homework 10. Due Friday, April 18. Problems 6.33, 6.37, 6.38, 6.45, 6.47 ("freeze out" occurs when kT is comparable to the spacing between the lowest energy levels. To calculate energy levels, you
can assume a 1D box of width 1 cm), 6.52.
Homework 11. Due Friday, April 25. Problems 7.8, 7.9, 7.10, 7.12, 7.15, 7.19.
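The freeze-out hint in Homework 10 can be made concrete. For a 1D box of width L = 1 cm, the spacing between the lowest levels is E2 − E1 = 3h²/(8mL²), and freeze-out sets in when kT is comparable to that spacing. A sketch, taking a nitrogen molecule as the particle (an illustrative assumption; the problem may intend something else):

```python
h  = 6.626e-34       # Planck constant, J*s
kB = 1.381e-23       # Boltzmann constant, J/K
m  = 28 * 1.66e-27   # mass of an N2 molecule, kg (assumed particle)
L  = 0.01            # box width: 1 cm, in m

E1 = h**2 / (8 * m * L**2)   # ground-state energy of the 1D box
dE = 3 * E1                  # spacing E2 - E1 = (2**2 - 1) * E1
T_freeze = dE / kB           # temperature where kT ~ level spacing
# T_freeze comes out around 2.6e-15 K: translational motion in a
# macroscopic box effectively never freezes out.
```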
Welcome to Thermal Physics. This is a one-term course on Thermodynamics and Statistical Mechanics. Thermodynamics is the study of bulk properties of matter dealing with temperature and heat flow. It
was largely put together in the 19th century. Statistical Mechanics attempts to explain the principles of thermodynamics. It does so using concepts from 20th century physics, especially Quantum Mechanics.
Professor: Valery Kiryukhin, Serin W 118. (email: vkir -AT- physics.rutgers.edu, phone: (848) 445-8752)
Text: Introduction to Thermal Physics by Daniel V. Schroeder.
Lectures: Wed 10:20 - 11:40 am, Fri 3:20 - 4:40 pm, SEC 117
Office Hours: Wed afternoon (email to confirm).
Homework Grader: Anshuman Panda, anshuman -AT- physics.rutgers.edu
There is a syllabus for this class. There will be weekly homework, a midterm exam and a final exam. Please let me know as soon as possible if you cannot make either the midterm or final exam.
Lecture Notes
The lectures will be blackboard based, so it is essential that you attend. There are also slides originally written up by Prof. Gershenson, which are available on this page. They serve as a good
complement to the text and lectures.
Homework will be assigned every week, announced on this web page, and will be due in one week (exceptions as noted). Here is the homework list. Here is a directory with homework solutions.
Note that the homeworks are 20% of your grade. This is a higher weight than homeworks are usually accorded. Thermodynamics and Statistical Mechanics are notoriously opaque subjects and the only way
to get a handle on them is to solve problems. The homeworks will be returned one week after submission.
Here is how the grading breaks down (approximate):
Homework: 20%
Midterm Exam: 30%
Final Exam: 50%
Midterm and Final Exam
The Midterm exam will be in class on March 14, for a full lecture period. The Final Exam will be during the finals period, on May 13, from 4:00-7:00 pm, in SEC 117.
Online Gradebook
There is an online gradebook for this course. Please check this for exam grades, etc.
Students with Disabilities
Please consult me as early as possible if you have a disability that might interfere with an optimal learning experience.
Also, please consult the website on disabilities . The University has coordinators for students with disabilities. | {"url":"http://www.physics.rutgers.edu/ugrad/351/","timestamp":"2014-04-20T13:20:18Z","content_type":null,"content_length":"4251","record_id":"<urn:uuid:e792164c-ab1f-4c3c-a6db-2b97c20ec03c>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00386-ip-10-147-4-33.ec2.internal.warc.gz"} |
Solving Equations with a Variable on Both Sides - Problem 3
In any single variable equation, start by simplifying -- distribute and combine any like terms on both sides of the equation. Now you can start solving using inverse operations. The goal is to get
the variable on one side of the equal sign and all the numbers on the other side. Since there are variables on both sides, you want to eliminate the variable from one side of the equation. After you
have done this, the variable should only be on one side of the equal sign. Next, you want all the constants on the other side of the equal sign. Use inverse operations to eliminate any constants that
are on the same side of the equal sign as the variable. Remember that when solving equations, you must work in the reverse order of PEMDAS. In some problems after you bring the variable to one side
and all the constants to the other side, the result will be an equation that is not true. In other words, the result will be an equation in which one side does NOT equal the other side. In cases like
this, we say there is no solution to the equation. This means that there is no value for the variable that will make the equation true.
Great, looking at this problem, what we're going to be looking for is what value for x makes this equation true. Again that's what a solution means. So before I can start this problem I know I'm
going to have to do some simplifying on both sides before I do any solving and solving is where you like add stuff to both sides, divide both sides by whatever. Okay what I mean by simplify is the
distributing and combining like terms. So if I distribute that 3, 3x take away 6 plus 4, be really careful that the 3 gets multiplied by x and also by the -2 and then I can combine that -6 plus 4 is
just the same thing as -2. Now that side of the equation is simplified.
Let's look at the other side 2x plus 6 plus x. Let's combine those Xs, 2x plus x is 3x plus 6. I'm just going to rewrite that one more time so that I can keep my work going and I can know where I am.
Now I have both sides of the equation that are simplified. What I would want to do next is get all of my Xs together on the same side of the equation but look out my friends, something weird is about
to happen. The correct Mathy thing to do would be to subtract 3x from both sides but look I'm getting zero Xs, like it cancels out on both sides and what I have is -2 equals 6. This is really weird
and what this means is that this equation has no solution. This is like your answer. You write no solution on your paper. What this means is there's no number fraction, decimal, upside down, whatever
you wanted, like there's nothing that I could stick in there for x that would make this equation true and the way I know is because my Xs cancel out -2 is never equal to 6. That's how I know there's
no solution. That's one of the weird things about Math is that almost always you get an answer, you get a solution. This is a situation where there's like no answer. You wouldn't have known that
looking from the very beginning. This is another one of those problems you just need to work through carefully, show all your work and when you get something like this, don't freak out. Sometimes it
happens, sometimes there's no solution.
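The algebra in the example above can be double-checked by brute force: after simplifying, 3(x − 2) + 4 = 3x − 2 and 2x + 6 + x = 3x + 6, so the two sides differ by the constant −8 for every x. A quick sketch:

```python
from fractions import Fraction

def left(x):
    return 3 * (x - 2) + 4   # simplifies to 3x - 2

def right(x):
    return 2 * x + 6 + x     # simplifies to 3x + 6

# many rational candidates for x; the gap is always the same constant
candidates = [Fraction(n, d) for n in range(-30, 31) for d in range(1, 8)]
assert all(left(x) - right(x) == -8 for x in candidates)
# since -2 never equals 6, the equation has no solution
```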
| {"url":"https://www.brightstorm.com/math/algebra/solving-equations/solving-equations-with-a-variable-on-both-sides-problem-3/","timestamp":"2014-04-19T17:04:28Z","content_type":null,"content_length":"60069","record_id":"<urn:uuid:740939aa-28eb-4b82-80cf-fac5ab4f7632>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00657-ip-10-147-4-33.ec2.internal.warc.gz"} |
is this equation true? \[\huge r = 2^n\] where: r = number of rows of the truth table n = number of variables
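The identity is true: a truth table for n Boolean variables has one row per assignment, and there are exactly 2^n assignments. A quick enumeration check:

```python
from itertools import product

def truth_table_rows(n):
    """All assignments of n Boolean variables (the rows of the table)."""
    return list(product([False, True], repeat=n))

for n in range(6):
    assert len(truth_table_rows(n)) == 2 ** n   # r = 2^n
```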
• one year ago
| {"url":"http://openstudy.com/updates/507b8217e4b07c5f7c1f33e5","timestamp":"2014-04-18T18:57:19Z","content_type":null,"content_length":"80465","record_id":"<urn:uuid:e7e9b724-8623-4e41-84f8-64b4f96f2fe5>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00236-ip-10-147-4-33.ec2.internal.warc.gz"} |
User Dima Pasechnik
bio website cs.ox.ac.uk/people/…
location Oxford, United Kingdom
age 50
visits member for 3 years, 4 months
seen 18 hours ago
stats profile views 1,598
Apr 13 · comment on "Fast checking that overdetermined polynomial system does not have a solution": Note that the 1st method is not exact, and the software mentioned does its work with the usual floating point numbers. Particularly if your integer coefficients will get long, this won't be very reliable. That is to say, that you will have to look inside your systems just to make sure the coefficients don't blow up.
Apr 4 · comment on "Does a spherical building embeds in a building of type $A_n$?": well, I won't be surprised if this has been improved (perhaps even by Tits himself) - that's why I referred to the book that is more or less the state of the art.
Apr 3 · comment on "Does a spherical building embeds in a building of type $A_n$?": there has been some work done for $F_4$ and $E_k$ along the same lines as for polar spaces, but I don't know how conclusive are the results. Certainly, when there is just one building, given a field (e.g. for finite fields), the answer is yes.
Apr 3 · answered "Does a spherical building embeds in a building of type $A_n$?"
Apr 2 · comment on "Does a spherical building embeds in a building of type $A_n$?": you probably need to restrict your $B$ somehow. There are very weird generalised quadrangles (and thus buildings of type $C_2$) known which aren't embeddable in projective spaces at all.
Apr 2 · comment on "integrality of a linear program — binary equality constaints": any sufficiently generic $c$ will give optimal face consisting of just one vertex. How often your polyhedron will have this vertex being 0-1? This will not happen very often, for sure.
Apr 2 · comment on "integrality of a linear program — binary equality constaints": Please qualify what exactly you mean by a 0-1 solution. The underlying polyhedron of the LP has an optimal face, say, $F$. Are you asking for a criterion for $F$ to contain a 0-1 vector? Something else?
Mar 31 · comment on "In which fixed-point free representations is the sum of every 3 elements invertible?": OK, Geoff, I must say I was totally humbled by your reference to Clifford's theorem :-)
Mar 30 · comment on "In which fixed-point free representations is the sum of every 3 elements invertible?": Why would 3 even divide $|G|$? And why must the normal 3-complement be Abelian?
Mar 30 · comment on "In which fixed-point free representations is the sum of every 3 elements invertible?": you wrote "sum of every 3 elements", meaning "sum of every 2 elements and the identity"...
Mar 30 · revised "Which graphs generate a matroidal independence complex?": added 72 characters in body
Mar 30 · comment on "Which graphs generate a matroidal independence complex?": I stand corrected.
Mar 29 · answered "Which graphs generate a matroidal independence complex?"
Mar 29 · reviewed "Approve suggested edit on An inequality involving sums of powers"
Mar 25 · comment on "Counting extrema on a simplex": you certainly can write down the Lagrange conditions and use algebraic geometry to write down some bounds, but they won't be very useful: they would typically count complex as well as real solutions, and I bet you won't beat known results on number of maximal independent sets in graphs this way. See arxiv.org/abs/1104.1243
Mar 25 · answered "Counting extrema on a simplex"
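The "known results on number of maximal independent sets" the comment alludes to include the classical Moon-Moser bound: an n-vertex graph has at most 3^(n/3) maximal independent sets, with disjoint triangles as the extremal graphs. A brute-force sketch for the n = 6 extremal case:

```python
from itertools import combinations

def maximal_independent_sets(n, edges):
    """Brute-force all maximal independent sets of a small graph."""
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)

    def independent(s):
        return all(v not in adj[u] for u, v in combinations(s, 2))

    ind = [frozenset(s) for k in range(n + 1)
           for s in combinations(range(n), k) if independent(s)]
    # maximal = not properly contained in another independent set
    return [s for s in ind if not any(s < t for t in ind)]

# two disjoint triangles on 6 vertices: the Moon-Moser extremal graph
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5)]
assert len(maximal_independent_sets(6, edges)) == 3 ** (6 // 3)  # 9
```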
Mar 25 · comment on "Independence Number of K4-free planar graphs": OK, I was thinking about the maximum ratio, sorry.
Mar 24 · comment on "Independence Number of K4-free planar graphs": IMHO it gets nontrivial if you request the graph to be 3-connected, as well.
Mar 24 · comment on "Independence Number of K4-free planar graphs": Do you want to restrict to connected graphs? Otherwise, a bunch of non-connected vertices is as best as you can hope for.
Mar · reviewed "Approve suggested edit on Is the countable intersection of residual sets in [0,1] with Hausdorff dimension 1 of full Hausdorff dimension?" | {"url":"http://mathoverflow.net/users/11100/dima-pasechnik?tab=activity","timestamp":"2014-04-21T02:17:18Z","content_type":null,"content_length":"47705","record_id":"<urn:uuid:07c9fb93-3bc6-45a4-bdad-03d5a968f75e>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00312-ip-10-147-4-33.ec2.internal.warc.gz"} |
Nonlinear regression and goodness-of-fit (a mixed real and theoretical problem)
February 3rd 2013, 08:14 AM #1
Jul 2012
Dear Sirs/Madams:
I'm going to show you a real problem that I have at my research laboratory
and that is a theoretical problem too (statistics). A very interesting problem.
It is well known that when we fit experimental data to a linear model,
we can use the R2 (R squared) value to compare the goodness-of-fit
of this particular fitted model with that of other linear models.
However, when we fit experimental data to a NONLINEAR model, the
R2 is not well defined. In fact, the underlying assumption that
SST = SSE + SSR (total variance = explained variance + residual variance)
is not true in this case.
Some authors have proposed using an R2-like parameter
for nonlinear models: plot the experimental values against
the predicted values (from a fitted model) and take
the R2 of the linear regression of this plot.
I must highlight that this R2 (from now on, R2*) is not the conventional R2
that we usually know.
My experimental data is a physical parameter (e.g., flow)
as a function of time (400 time values).
I'm trying to fit this data to some nonlinear models (no matter which models).
The fact is that when I apply this R2* criterion, I get
very good R2* values in all cases (>90%).
However, if I look at the predicted curves (using the
fitted models), I see very clearly that some models
do not fit well (despite their R2* being very high).
That is to say, very high R2* but a very bad
graphical fit.
I have read some technical books, for example
"Nonlinear Regression" (G.A.F. Seber, Wiley Series).
But the authors of those books offer a way of comparing two models
(one model against another model each time) by means of
hypothesis testing.
The fact is that I have a lot of models and
a lot of experiments, and this kind of
test is very tedious.
I must also highlight that the methods in those books
assume that the residuals are normally distributed,
but my residual data does not follow a particular distribution.
Does anyone know a parameter that can
express the goodness-of-fit of a nonlinear model
to experimental data without the need to compare models (among them)
by hypothesis testing and without the assumption that the residuals are normally distributed?
I will be very pleased if anyone can help me.
Thanks in advance.
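The R2* criterion described in the post (the squared correlation between observed and predicted values) is easy to sketch, and the sketch also illustrates the complaint: any model whose predictions are merely a linear transform of the data scores R2* = 1, even though the predicted curve lies nowhere near the data.

```python
import math

def r2_star(observed, predicted):
    """Squared Pearson correlation of observed vs. predicted values:
    the R2-like criterion for nonlinear fits described in the post."""
    n = len(observed)
    mo = sum(observed) / n
    mp = sum(predicted) / n
    cov = sum((o - mo) * (p - mp) for o, p in zip(observed, predicted))
    so = math.sqrt(sum((o - mo) ** 2 for o in observed))
    sp = math.sqrt(sum((p - mp) ** 2 for p in predicted))
    return (cov / (so * sp)) ** 2

obs = [1.0, 2.0, 3.0, 4.0, 5.0]
bad = [2 * o + 10 for o in obs]   # systematically far from obs
assert r2_star(obs, bad) > 0.999  # "perfect" R2* despite a terrible fit
```

This is why residual-based checks (comparing the SSE against the spread of the data, or simply inspecting residual plots) are safer for nonlinear fits than R2* alone.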
| {"url":"http://mathhelpforum.com/advanced-applied-math/212486-nonlinear-regression-goodness-fit-mixed-real-theoretical-problem.html","timestamp":"2014-04-16T04:15:47Z","content_type":null,"content_length":"33015","record_id":"<urn:uuid:5e986a97-1c7e-41fc-9c7c-67b7eda41841>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00515-ip-10-147-4-33.ec2.internal.warc.gz"} |
Posts by pat
Total # Posts: 610
5th grammar rule
write all the proper nouns in each sentence. also need to capitalize the proper nouns. The town of blair, nebraska, is home to dana college. please show me some examples, thank you.
physical science
If a 50 kg rock is resting at the edge of a cliff 12 meters high, what is its potential energy?
Que ______,Ana?
How was the conflict between supporters of a strong federal government and champions of states rights characterized then, as opposed to now?
what were the reason our forefathers divided the government into the legislative, executive and judicial branches.
5th grade
how will the value of 32,184,567 change if the number 2 is replaced by 9? this will be my last question for today thank you!!
5th grade
what is the value of x in the equation 25=10+? please! help me one more time.
5th grade
i need help with this problem! what is the mode for the following numbers? 15, 31, 32, 20, 29, 32, 13 thanks!!
5th grade
i need help with my homework! 6 squared---? write the 9 digit number that has: 5 in the tenths place, 6 in the thousands place, 3 in the millions place, 4 in the hundred-thousands place, 9 in the hundredths place, 0 in all the other places. -,---,---.-- also can you show me how to wr...
He studied to show himself approved to God. What part of speech is approved?
if the speed of a moon car is .124 miles/second. How long, in minutes, would it take the moon car to go 5 kilometers.
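The moon-car post above is a single unit conversion (0.124 mi/s to km/s, then distance over speed):

```python
MILES_TO_KM = 1.609344                # exact definition of the mile in km

speed_km_per_s = 0.124 * MILES_TO_KM  # ~0.1996 km/s
minutes = 5 / speed_km_per_s / 60     # time for 5 km, in minutes
# roughly 0.42 min (about 25 seconds)
```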
anat. & phys.
draw & label all tissue/organs of endocrine sys. during stres response
6th grade
Simplify the following expression: 84/7(4-1)
a triangle with a 18"base.What is the maximum length to use 180in squared?
the engineer of a passenger train traveling at 25.0 m/s sights a freight train whose caboose is 200m ahead on the same track. The freight train is traveling at 15.0 m/s in the same direction as the
passenger train. The engineer of the passenger train immediately applies the br...
I have two different questions I can't figure out. 1) Projectile motion: At what initial speed must the basketball player throw the ball at an angle of 55 degrees above the horizontal, to make the
foul shot? The horizontal distance from the ball to the basket is 13 feet. I...
A football player punts the football so that it will have a hang time of 4.5s and land 46m away. If the ball leaves the player's foot 1.5m above the ground, what must be (a) the magnitude and (b)
angle (relative to the horizontal) of the ball's initial velocity?
how much pressure does 35 grams of carbon dioxide at 400 kelvin exert?
The Rebecca Company acquired merchandise inventory costing $10,000 on September 1. The company will not pay for the inventory until October 1. This transaction will affect the Rebecca Company by
increasing the Merchandise Inventory account by $10,000 and _____. A. increasing t...
what is the pv of an annuity due that promises to pay you $500 per yeaR FOR THE NExt 20 years if the interest rate is 7%?
what is the pv of an annuity that promises to pay you $500 per year for the nest 20 years if the interest rate is 7%?
Jewelry Markings
Can anyone tell me what the following markings mean on a pendant...appears to be brass. India B136 Thank you...
literary and non-literary prose.. i have to write a passage citing examples of the above on a source of my choice... so what do they mean.. or how should i go about it...
adult education
what would hospitals administrators use to make decisions about inpatient services
ummm i have come up with this is it alright and areas where i could improve pleas... ann always makes it difficult for people to understand her.or so they think what sometimes maybe they fail to
realize is that she is insightful.Her outlook towards life, and interpretations of...
actually i hoping that maybe u could give me some suggestions as to what i could maybe write on.
if i have to write a passage using rhetorical devices on a person or an ideology.. what would be the best thing for me to write on and how can i got about it.
i need to know how to start the passage!!!! or an idea as to how to go about it!!! then maybe i could post the end product or something!!
using rhetorical devices write a passage(300 words) on a person, idea or ideology.
i want to know how i should go about it.. how i should begin i know rhetoric is persuasive but what exactly can i write on and stuff... i need your help please!
write a passage (300 words)using rhetorical devices on a person,idea or ideology.
what did Harold Rosenberg mean by "spectator vs view"
what dose harold rosenberg mean when talk about "spectator vs viewer".
In 1953 Robert Rauschenberg erased the drawing of a very famous abstract expressionist artist? and why?
Are there situations in business where sampling would not be effective? --------------------------------------------------------------------------------
Calculate the pH of the buffer solution that results when 10 mL of 0.10 M NaH2PO4 is added to 3 mL of 0.05 M NaOH.
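The buffer question above fits the Henderson-Hasselbalch equation: the added OH- converts an equal number of moles of H2PO4- into HPO4^2-. A sketch, assuming pKa2 of phosphoric acid ~ 7.21 (a literature value, not given in the problem):

```python
import math

pKa2 = 7.21                  # assumed second pKa of phosphoric acid
mol_acid = 0.010 * 0.10      # mol H2PO4- in 10 mL of 0.10 M
mol_OH = 0.003 * 0.05        # mol OH- in 3 mL of 0.05 M

mol_conj = mol_OH                    # HPO4^2- formed by neutralization
mol_acid_left = mol_acid - mol_OH    # H2PO4- remaining

pH = pKa2 + math.log10(mol_conj / mol_acid_left)
# pH comes out near 6.46
```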
When f(x) = 2/x^4 - 7/x^2 + 5, find the derivative
Solve the problem. You traveled 189 miles on 7 gallons of gasoline. At this rate, how many gallons of gasoline would you need to travel 297 miles?
If f(x)=3e^x+6x^e find f'(x).
"if u could do this physics problem thatd be super. A coach is hitting pop flies to the outfielders. If the baseball (m = 145 g) stays in contact with the bat for 0.04 s and leaves the bat with a
speed of 50 m/s, what is the average force acting on the ball?"
4th grade math
if 14 counters are half. then what is the one ------ counters? if 12 countes are two and fifth , then what is the one -------- counters. thanks!
Solve the inequality x^2 + 7x + 12 < 0.
earth science
what glaciers no longer exist in alaska?
3rd math
write the fact family for 2, 11, and 9 ______? ______? ______? ______? I don't understand the rule -.25: in $1.25 out $1.00; in $0.30 out $0.05; in $___ out $0.75; in $2.40 out $____? please help me!!
Carlos Martin received a statement from his bank showing a balance of $56.75 as of March 15th. His checkbook shows a balance of $87.37 as of March 20. The bank returned all the cancelled checks but
two. One check was for $5.00 and the other was for $13.25. How much did Carlos ...
how do you find the valule of the ratio 7X9 : 8X7?
P divid 5=12 ________=P? I'M NOT SURE ON THIS PROBLEM? THANKS!
3rd math
an average porcupine has about 30,000 quills. about how many quills would 4 have? number model_________? answer_______________? thanks!
3rd grade math
179,323,175 round to the nearest million sue can you please check my answer . this is my answer 280,000,000. hope i got it right.
3 rd math
no i'm not try to cheat. i just want to understand how do you round the #132 to the nearest million . if 2 in million place should i round up or keep it same value. i just want to able to understand
. on my kids math practic only . it's not an actual homework .
3re grade math
i need help on this problem. 132,164,569 round to the nearest million? 179,323,175 round to the nearest million? thanks!
3rd grade math
seating capacity of 24,042 round this to the nearest 1,000? detroit shock 22,076 round this to the nearest 1,000? thanks!!
3rd math
how do you estimate 962 ? next one is 132 ? i'm kind of confuse on these problem.
3rd grade math
i need help!! if the estimate is greater than or equal to 1,500, find the exact sum. if the estimate is less than 1,500, do not solve the problem. 867+734=1601 number model:_______________? thank you!
Right triangle (tri)
A ladder is 13 feet long. If she sets the base of the ladder on level ground 5 feet from the side of a house, how many feet above the ground will the top of the ladder be when it rests against the house?
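The ladder question above is the classic 5-12-13 right triangle: by the Pythagorean theorem, the top of the ladder rests 12 feet above the ground.

```python
import math

ladder, base = 13, 5                      # hypotenuse and horizontal leg, ft
height = math.sqrt(ladder**2 - base**2)   # vertical leg
assert height == 12.0                     # sqrt(169 - 25) = sqrt(144) = 12
```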
i need help on this problem! the temperature on july 23rd was 15C warmer than it was on june 23rd in the city. if the temp was 29c on june 23rd, then what was the temp in that city on july 23rd?
AP Government and Politics
Hi, I have a debate in class about taxes. I have to prepare arguments why we do not need taxes, and create closing paragraph providing ideas why taxes are wrong, unfair, illegal, unconstitutional.
Any ideas? thank you
11th grade
A sign is being held up by two walls which are 22ft apart. You want the sign, which has a mass of 200kg, to hang in the middle of the two walls. If you have 50ft of rope, which has a breaking point
of 2200N, and you want the sign to hang in equilibrium, all the while pleasig y...
I need to rotate the line the given no. of degrees (a) about the x-intercept and (b) about the y-intercept need to write equation of each image. dont know how to start y = 2x- 3;90degree
4 grade math
the estimate the number of cans of soda thet drink each week? number of student just pretend this is a bar graph. 0,1,2,3,4,5,6,7. number of student.number of can of soda 2 students of zero and 3 can
of soda for 3 students,2 soda of 4 students, 3 can soda 1 student and 5 can o...
4grade math
the temperature on july 23rd was 15°C warmer than it was on june 23rd in a city. if the temperature was 29°C on june 23rd, then what was the temperature in that city on july 23rd? number
4 grade mat
write the missing number to make the number sentence true. ______=(25/5)+(8*4) the next one i don't get: what is (20 ÷ 4) ÷ ___ = 5
4 grade math
i have a question on this problem? write true if is true, False if it is false? (6*5)/3 and 15>(7*6)*(10-9)
1) Two boxes are placed side by side on a frictionless surface so that they touch each other. Box A has a mass of 4 kg. Box B has a mass of 12 kg. A constant force of 24 N is applied to
box A. What force does box A exert on box B?
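Treating the two boxes as one system gives the shared acceleration, and the contact force is whatever accelerates box B alone. A quick Python check:

```python
# Both boxes accelerate together: a = F / (mA + mB) = 24 / 16 = 1.5 m/s^2.
# The only horizontal force on box B is the push from box A, so
# F_on_B = mB * a.
F, mA, mB = 24.0, 4.0, 12.0
a = F / (mA + mB)
F_on_B = mB * a
print(a, F_on_B)  # 1.5 m/s^2, 18.0 N
```

By Newton's third law, box B pushes back on box A with the same 18 N.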
4 grade math
i have a question on a math problem, please#1 explain why 7*8 is not a number sentence. Next question is write two true number sentence. thank you!
Find the equation of the line perpendicular to the line passing through (2,4) and (3,7)
Find the equation of the line passing through (-5, -4) and parallel to the line passing through (-3,2) and (6,8)
The height of a rocket fired vertically into the air from the ground is given by the formula h(t) = -16t^2 + 384t + 4, where t is measured in seconds. How long will it take to reach its maximum height and what is the maximum height reached by the rocket?
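Since the coefficient of t² is negative, the maximum occurs at the parabola's vertex, t = -b/(2a). A quick Python check:

```python
# Vertex of the parabola h(t) = -16t^2 + 384t + 4:
# the maximum occurs at t = -b / (2a).
a, b, c = -16, 384, 4
t_max = -b / (2 * a)                   # 12.0 seconds
h_max = a * t_max**2 + b * t_max + c   # 2308.0 feet
print(t_max, h_max)
```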
that is what i am having trouble with =/ i got 96= 160t-16t^2 and i dont know how to factor. so i tried -16t^2+160t-96 and used the quadratic formula and i got two answer: .6 and 9.4...i do not know
what i am doing wrong, but i really need help finding t when it's 96 feet....
*Revised* If a ball is thrown vertically upward with a velocity of 160 ft/s, then its height after t seconds is s = 160t - 16t^2. What is the velocity of the ball when it is 96 ft above the ground on
its way up? (Consider up to be the positive direction. Round the answer to on...
If a ball is thrown vertically upward with a velocity of 160 ft/s, then its height after t seconds is s = 160t - 16t^2. What is the velocity of the ball when it is 96 ft above the ground on its way
up? (Consider up to be the positive direction. Round the answer to one decimal p...
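Setting s(t) = 96 gives a quadratic whose smaller root is the time on the way up; the velocity is the derivative v(t) = 160 - 32t. A quick numeric check in Python (which also confirms the roots near 0.6 and 9.4 mentioned above):

```python
import math

# s(t) = 160t - 16t^2; solve s(t) = 96 with the quadratic formula,
# then evaluate v(t) = s'(t) = 160 - 32t at the earlier root
# (the ball's way up).
a, b, c = -16, 160, -96
disc = math.sqrt(b**2 - 4*a*c)
t_up = (-b + disc) / (2*a)     # smaller root, about 0.64 s
v_up = 160 - 32 * t_up         # about 139.5 ft/s, moving upward
print(round(t_up, 2), round(v_up, 1))
```

So the two times are roughly 0.64 s and 9.36 s, and the upward velocity at 96 ft is about 139.5 ft/s.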
Evaluate. Remember the order of operations. Give your answer as a fraction in simplest terms. [(-15)(6 - 4 ÷ 2) + (-3)^3] / [6 ÷ (4 - 11)] My answer was -87/-0.85714, which does not look right. Any help is appreciated.
how did you translate 0.0019 radians/sec into minutes?
At what rate will $300 yield 67.50 interest in 4 years and 6 months?
$1000 buys how many bushels at $1.33 1/2?
2,514 barrels at $5.83 1/3 is how much?
how do you find 2/3 of a penny? the problem is 276 yards at 16 2/3 cents?
A tire manufacturer wishes to set a minimum mileage guarantee on a new set of tires. The mean is 67,900 with a standard deviation of 2,050. The manufacturer wants to set a minimum guaranteed mileage
so that no more than 4% of the tires will have to be replaced. What should the...
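This is an inverse-normal lookup: find the z-score with 4% of the area to its left, then convert back to miles. Python's standard library can do it directly:

```python
from statistics import NormalDist

# Guarantee mileage m such that P(X < m) = 0.04 for
# X ~ Normal(67900, 2050): m = mean + z * sd with z = inv_cdf(0.04).
mean, sd = 67_900, 2_050
z = NormalDist().inv_cdf(0.04)   # about -1.75
m = mean + z * sd
print(round(m))                  # about 64,311 miles
```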
A student threw a ball directly upward from the balcony of a building. The height of the ball as measured from the ground "t" seconds after it was thrown, is given by the expression h = -16t^2 + 64t + 768. When does the ball reach the ground?
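The ball reaches the ground when h = 0; dividing by -16 gives t² - 4t - 48 = 0, which does not factor over the integers, so the quadratic formula applies. A quick check in Python:

```python
import math

# Set h(t) = -16t^2 + 64t + 768 equal to 0 and apply the quadratic
# formula; only the positive root is physically meaningful.
a, b, c = -16, 64, 768
disc = math.sqrt(b**2 - 4*a*c)
t_ground = max((-b + disc) / (2*a), (-b - disc) / (2*a))
print(round(t_ground, 2))  # about 9.21 seconds
```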
Simplify the following [2x / (3x^2 - 5x - 2)] + [(x-1) / (x^2 - x - 20)]
Solve the following 2x^2 + 5x - 12 = 0
Solve the following equation for x 2y = x / (y + x)
Factor each of the following expressions x^4 - x^3 - 6x^2 and (a-b)^2 - (a^2 + b)^2
Simplify the following expression 2(3x - 2)^2 - 3x(x + 1) + 4
Simplify the following expression 2(2x - 2)^2 - 3x(x + 1) + 4
how do you find the cube root of a number?
What is the diameter of a wheel whose area is 264 square inches?
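With area = πr² and d = 2r, the diameter is 2·√(area/π). A one-line check in Python:

```python
import math

# A wheel's face is a circle: area = pi * r^2, so d = 2 * sqrt(area / pi).
area = 264.0
diameter = 2 * math.sqrt(area / math.pi)
print(round(diameter, 2))  # about 18.33 inches
```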
5th grade math
what is a letter in the alphabet with 3 acute angles and 2 obtuse angles? I think it is X.
A, B, C, and D are collinear; points C, X, Y, Z are collinear; AB = BC = CX = YZ, AD = 54, XY = 22, and XZ = 33. I need help figuring this out.
if AB = BC = CX = YZ, AD = 54, XY = 22, and XZ = 33, how do I find the indicated length?
project management
I need to create a proposal for a customer
That would come under Owners Equity
help with quiz 00779000 - is the answer to question 3 "a" in "Shall I Compare Thee to a Summer's Day?"
it says surface area and use 3.14 for pi
how do you find volume of a rectangle?
eighth grade math
omg thank you so muchhh!!!
6th grade
what are the best materials to use to build a coliseum for a school project?
West Mclean Math Tutor
Find a West Mclean Math Tutor
...I have a bachelor's degree magna cum laude from U.C. Berkeley in Zoology. I have a Ph.D. in Molecular Genetics.
25 Subjects: including ACT Math, SAT math, prealgebra, algebra 1
...I've tutored elementary math through Algebra II, GED prep, 100-level college algebra, test prep, etc. I have solid math content knowledge. I have a bachelor's degree in math and I passed the
Praxis II math exam with a 162 (VA required 147, and 162 was an above-average score). The best way to maximize success is to ensure the student completes all assigned homework correctly all the time.
10 Subjects: including prealgebra, algebra 1, algebra 2, geometry
...I have experience working with children ages 10 and up, so I can work with those who are younger or those who are older and need assistance with more advanced coursework. I do work full time,
but would be available during evening hours and sometimes on the weekends on a case by case basis. I'd ...
25 Subjects: including calculus, chemistry, elementary (k-6th), physics
...I know French, Spanish and Portuguese. I possess a BA in French and lived in Paris for a summer. I speak Spanish fluently and lived in Buenos Aires, Argentina for a summer.
19 Subjects: including prealgebra, logic, probability, Spanish
...I have been an English tutor for over six years, and I love tutoring students of all ages, backgrounds, and skill levels. Here are a few of my specialties: Writing an Essay, Grammar,
Literature, Poetry (writing and analyzing), and Communicating in English. I can also help students with math (up to Algebra 1).I am very well versed in English grammar.
22 Subjects: including prealgebra, ASVAB, ESL/ESOL, English
What's New - June
June 27, 2002: More brain candy for the younger set. The SafeCracker program represents an unusual safe. Each of the buttons must be pressed in the correct order to open it. The distance to move and
the direction are indicated on each button. The last button is marked "LAST". The safecracker's job is to locate the first button and unlock the safe by clicking the chain of buttons from first to
last. (By the way, even though the image at left looks computer playable, it's not, so save your clicking finger. You can copy it to paper and try it though. You can also follow the link above to
download the executable or Delphi source code for SafeCracker in order to generate and solve many puzzles of many sizes.)
June 20, 2002: One of the proposed changes for the Traveling Salesman Program was implemented today. It was just too good to resist, reducing exhaustive search times for the 13 city closed route by
94% and increasing the maximum practical search size by a city or two. See the Further Explorations section at the bottom of the Traveling Salesman Program page for more details.
June 18, 2002: The Traveling Salesman Problem is interesting because it is easy to state but hard to solve. Given a set of cities, plan a roundtrip for our salesman that visits all the cities and
minimizes the total distance traveled. This turns out to be a very hard problem because of the explosion of possible paths as the number of cities increases. Examining all paths is probably only
practical up to 13 or 14 cities. Above that number we need heuristic algorithms that give pretty good results. Lots of academics are spending lots of time finding better solving techniques with some
success, but you won't find that code here. You can try your hand at beating the heuristics in my Traveling Salesman Program. Or benchmark your computer by timing an exhaustive search for a 13 city route.
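The factorial blow-up the entry describes is easy to see in a toy exhaustive search. A minimal Python sketch (the five city coordinates are invented for illustration):

```python
import itertools
import math

# Exhaustive search for the shortest closed tour over a handful of
# cities. Fixing the first city leaves (n-1)! candidate tours, which
# is why brute force stops being practical around 13-14 cities.
cities = [(0, 0), (3, 0), (3, 4), (0, 4), (1, 2)]

def tour_length(order):
    return sum(math.dist(cities[order[i]], cities[order[(i + 1) % len(order)]])
               for i in range(len(order)))

best = min(((0,) + p for p in itertools.permutations(range(1, len(cities)))),
           key=tour_length)
print(best, round(tour_length(best), 2))
```

With n = 13 that minimum ranges over 12! ≈ 479 million tours, which matches the article's practical limit.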
June 13, 2002: One of the early experiments in machine learning was a "machine" which used matchboxes and colored beads to play tic-tac-toe. It was invented by Dr. Donald Michie over 40 years ago.
300 (or perhaps 304) matchboxes represent the board positions presented to the machine. For each move, a bead is selected randomly from the appropriate matchbox. At the end of the game, wins are
rewarded by more of the winning beads and losses punished by confiscating beads. You can train this computerized version of the TicTacToe machine for yourself.
June 3, 2002: Here's an Equation Search program that can be quite challenging to play. Given four sets of four numbers each, find an arrangement of the numbers combined with two operators to form an
equation of the form (N1 op1 N2) op2 N3 = N4 that is satisfied by each of the sets of numbers. Operator choices may be restricted to 1-4 operator types chosen from +, -, ×, and ÷. Problems are
randomly generated by the program. Use all four operations and allow number values up to 99 and it can take a while to "unlock the code".
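The puzzle's form — (N1 op1 N2) op2 N3 = N4, with the four numbers in some order — can be brute-forced in a few lines. A Python sketch using made-up number sets (not the program's actual generator):

```python
from itertools import permutations
from operator import add, sub, mul, truediv

# Brute-force check for an (N1 op1 N2) op2 N3 = N4 rule that fits
# every given 4-number set. The sets below are invented examples
# that satisfy (N1 + N2) * N3 = N4.
ops = {'+': add, '-': sub, '*': mul, '/': truediv}

def fits(op1, op2, nums):
    for a, b, c, d in permutations(nums):
        try:
            if abs(ops[op2](ops[op1](a, b), c) - d) < 1e-9:
                return True
        except ZeroDivisionError:
            pass
    return False

sets = [(1, 2, 3, 9), (2, 3, 4, 20)]
solutions = [(o1, o2) for o1 in ops for o2 in ops
             if all(fits(o1, o2, s) for s in sets)]
print(solutions)
```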
Happy Valentine's Day
Posted by: Dave Richeson | February 14, 2009
For your reading pleasure:
The Calculus of Saying “I Love You”: Why you should never date a man who knows more math than you.
(I wouldn’t say the mathematics is perfect, but it is fun to read.)
Also, from xkcd.com:
1. Doppleganger blog? I think you got the jump on me. But I think it’s funny we both ended up choosing ~ the same title.
I think this world is big enough for the both of us, let me know if you disagree.
“Divide By Zero” is my blog’s name too!
By: Nick Hershman on February 14, 2009
at 10:18 pm
Nick—maybe you could consider changing your blog title to "Divide by 2" or something like that… :-) Just kidding! I'm happy to share the (similar) blog names. Have fun!
By: Dave Richeson on February 15, 2009
at 3:25 pm
Skills-Based Math, Just in Time Learning, and Bad Habits of Mind
By Barry Garelick
In the never-ending dialogue about math education that has come to be known as the “math wars”, proponents of reform-based math tend to characterize math as it was taught in the 60’s (and prior) as
“skills-based”. The term connotes a teaching of math that focused almost exclusively on procedures and facts in isolation to the conceptual underpinning that holds math together. The
“skills-based” appellation also suggests that those students who may have mastered their math courses in K-12 were missing the conceptual basis of mathematics and were taught the subject as a means
to do computation, rather than explore the wonders of mathematics for its own sake.
Without delving too far into the math wars, I and others have written that while traditional math may sometimes have been taught poorly, it also was taught properly. In fact, a view of the
textbooks in use at that time reveal that they provided both procedures and concept. Missing perhaps were more challenging problems, but also missing from the reformers’ arguments is the fact that
not only are procedures and concepts taught in tandem but that computational fluency leads to conceptual understanding. (See http://www.psy.cmu.edu/~siegler/r-jhnsn-etal-01.pdf )
John Woodward, currently Dean of the School of Education at the University of Puget Sound, is one such person who refers to traditional math teaching as “skills-based” in various papers he has
written (as well as in a personal communication to me). I was therefore interested to learn that he chaired the panel that wrote “Improving Mathematical Problem Solving in Grades 4 through 8″ which
was published by the Department of Education’s “What Works Clearinghouse”. [1] (http://ies.ed.gov/ncee/wwc/pdf/practice_guides/mps_pg_052212.pdf) Upon going through the guidance, I was heartened to
see that the panel recommends whole class instruction, defining terms so that students are not thrown off by unfamiliar vocabulary, and helping students recognize and articulate mathematical concepts
and notation.
The recommendation of whole class instruction is admittedly a step in the traditional direction, as opposed to reform methods such as problem-based learning in small groups, facilitated by a teacher
who refrains from direct/explicit instruction. As if to ensure that such a step is not interpreted as advocating a purely “skills-based” approach to teaching math, the report is careful to
recommend that whole class instruction include presentation of non-routine as well as routine problems. Non-routine problems are those for which there are not predictable approaches suggested by the
problem, or worked-out examples that apply to them.
There is no argument from me or others in the traditional camp that students benefit by being given both routine and non-routine problems. It is important to recognize, however, that routine
problems are prerequisite for solving the non-routine ones. And while students certainly should be given challenging non-routine problems, they must be able to be solved using prior knowledge of
skills and procedures.
The necessity of prior knowledge is something that reformers tend to dismiss. A prevalent belief among math reformers is that just as students develop problem solving habits for routine problems, a
similar “habit of mind” development occurs for solving non-routine problems. And in fact, it appears that based on an example of a non-routine problem included in their report Woodward and the other
panel members are thinking along the “habits of mind” route. In the problem, the student is asked to find the value of an angle as shown below:
The problem is described as “likely non-routine for a student who has only studied simple geometry problems involving parallel lines and a transversal.” This is true but the authors fail to
completely characterize why students would find it non-routine. The problem is solved by drawing in a line that is not shown, called a “supplemental line”. If the students have had no prior
knowledge in supplemental lines and how they are used in proofs, the problem is non-routine not because of its newness, but because they lack the prior knowledge and skills needed to solve the problem.
The figure below shows how drawing in a supplemental line to extend an existing one creates a transversal where there wasn’t one before. At the top parallel line, the supplementary angle to 155 is
easily calculated as 25. The transversal now makes it obvious that the supplemental angle of 70 is an alternate interior angle and is the second angle in the triangle formed by the supplemental
line. Since angle x is an exterior angle to the triangle, it is the sum of the two remote angles 70 and 25, or 95.
The report does not make clear for what grade level the non-routine problem is being presented. I assume that since the report is for math taught in grades 4-8, that this problem would be for eighth
graders. While an appropriate way to introduce how to use supplementary lines in proofs and solving problems (followed by explicit and systematic instruction in the technique) the report makes no
mention of using it in this fashion. Without the knowledge of drawing supplemental lines, students are at a significant disadvantage in trying to solve the problem. Teachers guiding the student
would ultimately give hints about supplemental lines, and would provide the needed knowledge in a “just in time” basis. The new knowledge acquired in such fashion may show the student how to
proceed, but does not develop any kind of habit of mind.
In another chapter of the report (on how teachers can provide prompts to help students solve problems), they give an example of a problem in which, again, students do not have the proper tools to
solve it efficiently. In particular, they pose the following problem: Find five different numbers whose average is 15. They then give an example of the type of “prompts” teachers can give students
to help them solve it.
They describe a student who is picking numbers, adding them and dividing by 5. The teacher notices that the student has some numbers bigger than 15 and some smaller and through questioning, gets the
student to observe that they can’t all be greater than 15, nor all smaller than 15 because then the average would be greater than or smaller than 15. The student says “Some have to be bigger and some
smaller. I guess that is why I tried the five numbers I did.” The teacher responds: “That’s what I guess, too. So, the next step is to think about how much bigger some have to be, and how much
smaller the others have to be. Okay?”
In essence the teacher is helping the student develop a more efficient way to do guess and check which is an inherently inefficient process. The problem would be a good one for a pre-algebra or
algebra class in which students have had some instruction in expressing words algebraically. Rather than present this problem to students who lack algebraic knowledge or skills, it could be
presented to pre-algebra and algebra students. Then, rather than prompting the student to do an inefficient method efficiently, the teacher could prompt the student by asking what is an average, and
whether the problem tells us what the sum of the five numbers is. Since the problem does not provide the sum, the student can be prompted to express the unknown sum as “x”, thus setting up a way to
express the average using algebraic symbols. Since the sum is divided by how many numbers are summed, an equation of x/5 = 15 is obtained. Early students of algebra know how to solve the one-step
equation to obtain 75. Now it is much easier to then find five different numbers that average 15, since the student now only needs to find 5 different numbers that sum to 75.
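The algebraic route described above is easy to verify. A tiny Python check (the five numbers chosen are just one of many valid answers):

```python
# The algebra the article describes: if five numbers average 15,
# their unknown sum x satisfies x / 5 = 15, so x = 75. Any five
# different numbers summing to 75 then work.
target_sum = 15 * 5            # x = 75 from x / 5 = 15
answer = [13, 14, 15, 16, 17]  # one of many valid choices
assert len(set(answer)) == 5 and sum(answer) == target_sum
print(sum(answer) / 5)  # 15.0
```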
People may object to my criticisms here by saying that the recommendations of including non-routine problems and of guiding students via prompts are very reasonable and sound. I agree; they are.
But despite the authors’ willingness to enter into discussions of traditional modes of teaching where the edu-establishment has been reluctant to go before, the examples I have discussed here belie a
general cautiousness. It is as if they are afraid of an outcome that will be their recurring nightmare: Skills-based math. And so they fall back on their conceptions of “habits of mind”. The
authors probably believe that they have taken significant steps to meet the traditionalists half way. It is probably more accurate to say that their best intentions are driven by an agenda that will
continue to teach math in a “just in time” manner, and will foster bad habits of mind. They have obsessed over the simplest good ideas to the point that they become bad ones.
Barry Garelick has written extensively about math education in various publications including Education Next, Educational Leadership, and Education News. He recently retired and has obtained
his credential to teach math (middle school/high school) in California.
[1] The entire panel is as follows: John Woodward (Chair) University of Puget Sound, Sybilla Beckmann University of Georgia, Mark Driscoll Education Development Center, Megan Franke University of
California, Los Angeles, Patricia Herzig, Independent Math Consultant, Asha Jitendra University of Minnesota, Kenneth R. Koedinger Carnegie Mellon University, Philip Ogbuehi, Los Angeles Unified
School District.
Physics Forums - View Single Post - If no singularity, what’s inside a big black hole?
Bernie G
“How can the gravitational acceleration at an event horizon be smaller than at the surface of a neutron star ?”
Because gravitational acceleration varies as the inverse of r squared. One of us is making a mistake. I was under the impression that distant super-massive black holes (10 billion solar masses)
“disappeared” because the gravitational acceleration at the event horizon is so small (and the curvature so large) that infalling material doesn’t even radiate until it is well within the black hole.
Hence I volunteer to sit on the ring and bravely stick my toes inside the event horizon of a trillion solar mass black hole, where the gravity (gulp) should be about as strong as in California.
To challenge the status quo even further, here in a nutshell is my minority viewpoint about the size of a star composed of relativistic material inside a black hole:
The gravitational energy could be as low as (4GM^2)/(5R) for a typical density profile, or possibly as high as (GM^2)/R (unlikely) if the star has a high density core. The total energy creating
pressure would be (Mc^2)/3. Using the virial theorem (the energy creating pressure equals half the gravitational energy), a non-rotating star of relativistic material would have a radius as small as (1.2GM)/(c^2) or as large as (1.5GM)/(c^2), or between 60 - 75% of the Schwarzschild radius.
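The claim about weak horizon gravity for huge black holes follows from the Newtonian estimate g = GM/r_s² with r_s = 2GM/c², i.e. g = c⁴/(4GM), which falls as M grows. A rough numeric check in Python (this is only the naive estimate the post is arguing from; the general-relativistic surface gravity for a static observer diverges at the horizon):

```python
# Newtonian acceleration evaluated at the Schwarzschild radius:
# r_s = 2GM/c^2, so g = GM/r_s^2 = c^4 / (4GM) -- it falls as the
# black-hole mass grows.
G = 6.674e-11      # m^3 kg^-1 s^-2
c = 2.998e8        # m/s
M_sun = 1.989e30   # kg

def newtonian_g_at_horizon(M):
    return c**4 / (4 * G * M)

for solar_masses in (10, 1e10, 1e12):
    g = newtonian_g_at_horizon(solar_masses * M_sun)
    print(f"{solar_masses:g} M_sun: g ~ {g:.3g} m/s^2")
```

For a trillion solar masses the estimate comes out around 15 m/s², i.e. within a factor of two of Earth's surface gravity, which is the intuition behind the "toes at the horizon" remark.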
If this model is true, it could be verified someday by the observation of the merger of two approximately equal mass black holes: a massive ejection from the relativistic stars would occur.
I don't really get what you are saying; the event horizon is a boundary beyond which photons cannot escape the gravitational pull of the BH. Its radius is only dependent on the total mass of the BH.
As neutrons stars are stable and do not collapse gravitationally, the gravitational acceleration at the event horizon for a BH of equal mass must be much stronger than at the surface of the neutron
star?! If it was the other way around all neutron stars would immediately collapse...
Stock Analysis using R
June 26, 2010
By C
Want to do some quick, in-depth technical analysis of Apple stock price using R? There's a package for that!

The quantmod package allows you to develop, test, and deploy statistically based trading models. It provides the infrastructure for downloading/importing data from a variety of locations, analyzing that data, and producing charts that help determine statistical trends. I appreciated Digital Dude calling this package to my attention in a recent comment. I also noticed that Revolution Analytics had highlighted the package on its finance page. Actually, I had come across quantmod a few months ago - and it instantly got me excited about the power of R. To give you an idea of typical usage, the following creates a stock chart of the last three months of Apple stock data.
getSymbols("AAPL")
chartSeries(AAPL, subset='last 3 months')
addBBands()
The getSymbols function is used to retrieve stock data. Data can originate in a number of locations. In the example above, we are obtaining a single stock, Apple. If you wanted to download several
different stock quotes, you can do so in a single command.
Once you have retrieved stock data, you can focus on subsets of dates quickly.
You can also merge data to view comparisons.
The chartSeries command creates the plot pictured above. It captures a large amount of information: the date, open and close price, and volume of trading for each day. Finally, the addBBands() call adds Bollinger Bands to the chart. Informally, this amounts to a line indicating the moving average and two lines a standard deviation above and below this moving average. For the uninitiated, technical indicators (and overlays) can be broken up into four categories - Trend, Volatility, Momentum, and Volume. Those available in quantmod are listed below.
Trend
Indicator TTR Name quantmod Name
Welles Wilder's Directional Movement Indicator ADX addADX
Double Exponential Moving Average DEMA addDEMA
Exponential Moving Average EMA addEMA
Simple Moving Average SMA addSMA
Parabolic Stop and Reverse SAR addSAR
Exponential Volume Weighted Moving Average EVWMA addEVWMA
Moving Average Convergence Divergence MACD addMACD
Triple Smoothed Exponential Oscillator TRIX addTRIX
Weighted Moving Average WMA addWMA
Zero-Lag Exponential Moving Average ZLEMA addZLEMA
Volatility
Indicator TTR Name quantmod Name
Average True Range ATR addATR
Bollinger Bands BBands addBBands
Price Envelope N/A addEnvelope
Momentum
Indicator TTR Name quantmod Name
Commodity Channel Index CCI addCCI
Chande Momentum Oscillator CMO addCMO
Detrended Price Oscillator DPO addDPO
momentum addMomentum
Rate of Change ROC addROC
Relative Strength Indicator RSI addRSI
Stochastic Momentum Index SMI addSMI
Williams %R WPR addWPR
Volume
Indicator TTR Name quantmod Name
Chaiken Money Flow CMF addCMF
Volume N/A addVo
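Returning to the Bollinger Bands added earlier: informally they are an n-period moving average plus bands k standard deviations above and below it. A minimal Python sketch on made-up closing prices (implementations differ on whether the standard deviation divides by n or n-1; the population form is used here):

```python
# What addBBands() computes, sketched on invented closing prices:
# an n-period simple moving average with bands k standard deviations
# away (quantmod defaults to a 20-period average and 2 deviations).
def bollinger(prices, n=5, k=2.0):
    bands = []
    for i in range(n - 1, len(prices)):
        window = prices[i - n + 1:i + 1]
        ma = sum(window) / n
        sd = (sum((p - ma) ** 2 for p in window) / n) ** 0.5
        bands.append((ma - k * sd, ma, ma + k * sd))
    return bands

closes = [100, 101, 103, 102, 104, 106, 105, 107]
lower, mid, upper = bollinger(closes)[-1]
print(round(lower, 2), round(mid, 2), round(upper, 2))
```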
This really just scratches the surface of what is possible with quantmod. For instance, see this post on using quantmod with gold-related data. Later posts will include other applications - there is simply too much to cover at one time.
One Response to Stock Analysis using R
1. searchforarticl (Search For Articles) on June 26, 2010 at 9:02 pm
Stock Analysis using R | (Articles about R): An article about R: Want to do some quick, in depth technical analysi… [link to post]
Articles from this author
• Can I start trading otions after reading the book Options For Beginners?
Yes. This is really up to you. Some people like to dive right in...
- 05/13/2009
• Do I have to calculate historical volatility as taught in ODDS The Key To 95% Winners?
No, we teach this to help you understand the concept of volatility, as...
- 11/16/2011
• Do I really need to learn how to calculate the historical volatility in Option Wizardry?
No, we teach this to help you understand the concept of volatility and how it...
- 11/16/2011
• Do you ever use weekly options, and if so, what strategies would be best to take advantage of the rapid decay in price?
We do not analyze weeklies, nor do we analyze LEAPS -- surprisingly, for the...
- 05/09/2012
• Do you look at earnings when analyzing your trades?
No. We do not have a method of trading that makes decisions based on...
- 05/08/2009
• Does Don use other strategies besides the credit spreads taught in Option Wizardry?
Yes. Don is an expert at understanding the markets and volatility. He is...
- 05/14/2009
• Does OOL 7.0 follow weeklys ? (Yes, the CBOE has deemed that it be spelled that way)
Yes, ODDS Online does follow weeklys for those stocks, ETFs and indexes upon...
- 12/06/2011
• Does this same theory taught in Option Wizardry work on commodity options as stock options?
Since we don't trade this on commodities or futures, we cannot answer...
- 05/14/2009
• How did the 6# system do in 2008?
We got out of the leveraged index funds and into money market accounts the...
- 11/10/2009
• How Do I Calculate the Profit/Loss Of Credit Spreads?
Profit/Loss on Credit Spreads : Max Profit = Net credit Max Risk...
- 12/06/2011
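The truncated formulas above are the standard ones: max profit is the net credit received, max risk is the strike width minus the credit, and (for a bull put spread) breakeven is the short strike minus the credit. A small Python sketch with made-up strikes:

```python
# Textbook credit-spread arithmetic (the strike and credit values
# here are invented for illustration):
#   max profit = net credit; max risk = strike width - credit;
#   bull put breakeven = short strike - credit.
def credit_spread(short_strike, long_strike, net_credit, contracts=1):
    width = abs(short_strike - long_strike)
    per_share = {
        "max_profit": net_credit,
        "max_risk": width - net_credit,
        "breakeven": short_strike - net_credit,  # bull put spread
    }
    # One equity option contract covers 100 shares; breakeven is a price.
    return {k: (v if k == "breakeven" else v * 100 * contracts)
            for k, v in per_share.items()}

print(credit_spread(short_strike=95, long_strike=90, net_credit=0.50))
```

On these numbers a single contract risks $450 to make $50, which is also why a $0.50 credit cannot beat a $50 commission without trading multiple contracts.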
• How do I determine a target price in Trademaster?
The target price is usually the breakeven price, strike price, or profit...
- 05/14/2009
• How do I exit on credit spreads taught in Option Wizardry?
The goal of a credit spread is for all of the options to expire...
- 05/14/2009
• How do I find the best broker?
We do not recommend brokers, but want you to use whomever you prefer. ...
- 05/08/2009
• How do I find the probability of trades other than credit spreads while using TradeMaster?
Simply find the breakeven price of your trade and then enter that value...
- 05/14/2009
• How do I interpret the Asset Allocation number given in TradeMaster?
The asset allocation number is given as a percentage. This feature was...
- 05/14/2009
• How do I interpret the expected profit loss number of any trade?
The expected profit/loss is the average amount you would expect to make if...
- 05/08/2009
• How do I know when to exit a credit spread trade?
The objective with a credit spread is to hold the option to expiration. There...
- 12/06/2011
• How do I know when to exit a credit spread?
The objective with a credit spread is to hold the option to expiration. There...
- 05/08/2009
• How do you know if the site and the info on 3EZ Factors is current?
The 3EZ Factors site is updated weekly, and the most current date is posted...
- 10/27/2011
• How much money do I need to trade options?
This varies depending upon what type of strategy you use, and the frequency...
- 05/11/2009
• I am having trouble viewing the dvds. Can you tell me what to do that will help?
The following should fix the problem: Go to "My Computer" on...
- 08/10/2010
• I am trying to do the calculations taught in ODDS Key To 95% Winners in a spreadsheet but have no idea what to do. Can you help?
You can call to get some limited help from our tech support line at...
- 05/14/2009
• I am trying to use a spreadsheet for the calculations taught in Option Wizardry, but have no idea what to do. Can you help?
You can call to get some limited help from our tech support line at...
- 05/14/2009
• I am using TradeMaster according to the instructions but I can't seem to get filled on a trade, what am I doing wrong?
The market isn't always willing to give a high probability trade. When...
- 05/14/2009
• I can't seem to find a good trade a credit spread like in Option Wizardry, so what should I do?
The simple answer is to wait. Sometimes the market will not give up a good...
- 05/14/2009
• I don't know how to do the formulas taught in ODDS Key. Can you help with how to do these?
Yes, we can. However we also want you to know that these formulas are...
- 12/27/2011
• I understand the concepts in Option Wizardry, but how do I get started?
This is easy. You start by selecting how often you want to win. If you...
- 05/14/2009
• I understand the concepts taught in ODDS Key To 95% Winners, but where do I get started?
This is easy, you start by selecting how often you want to win. If you want...
- 05/14/2009
• If I have a covered call trade on & the company is bought, what happens to the option I sold?
The volatility goes to zero with a cash acquisition, not the option. ...
- 05/21/2010
• If the Credit Spread finder gives the breakeven credit for the trade how do I know what I need to make a profit?
You need to factor in your commissions and some profit above this breakeven...
- 05/14/2009
• In credit spreads taught in Option Wizardry, do you have a stop loss?
No. We know the max risk in the trade and we are willing to trust the...
- 08/24/2010
• In ODDS Online 7.0, can I save copies of reports and then import them into Microsoft Excel?
Yes, absolutely. It's very simple. Bring up the page you...
- 12/06/2011
• In ODDS The Key to 95% Winners, how do I calculate the formula in the bonus report "How To Spot 90% Winners...?"
Special Supplement to: How To Spot 90% Winners Instantly With Nothing More...
- 11/10/2009
• In Option Wizardry, how can I make money when my credit is $0.50, and my commissions for the trade are $50?
You can't make money on this unless you can do multiple contracts. Your...
- 05/14/2009
• In Option Wizardry, how do I calculate the formula in the bonus report titled "How To Spot 90% Winners....?"
Special Supplement to: How To Spot 90% Winners Instantly With Nothing More...
- 11/10/2009
• In Option Wizardry, you talk about why you do not do calendar spreads. Can you give more details as to why?
There are a couple of important things to know about calendar spreads before...
- 09/16/2010
• In the bonus report How To Spot 90% Winners, what is the value of 'e' in the formula?
e is a fundamental constant of nature. The natural constant e is...
- 09/09/2010
• In using Metastock with Don's Explorer for Weeklys, what are the steps to use to be sure I see the right list of trades?
Follow these steps: Open Explorer, highlight the exploration, hit...
- 06/25/2012
• Is there a big difference between ODDS Trademaster & ODDS Online?
There is a big difference between ODDS Trademaster & ODDS Online, as...
- 04/16/2010
• Is there a good time of the month to initiate a trade?
It depends upon what type of strategy you are using. Short term...
- 12/07/2011
• Is there a good time of the month to initiate an option trade?
It depends upon what type of strategy you are using. Short term...
- 05/14/2009
• My head hurts from all of the math in Option Wizardry, so is there an easier way?
There are a few things you can do... The easiest thing to do is sign up for...
- 05/14/2009
• My online broker will not let me place a trade for a net credit, like is taught in Option Wizardry. What should I do?
Call your order in. Never try to leg into these trades. If you try to enter...
- 05/14/2009
• On your website home page there is a volatility thermometer. What is that used for?
The Volatility Thermometer, on our home page at www.donfishback.com, is used...
- 05/21/2009
• Should I put stop losses on my positions?
It depends. There is nothing wrong with doing that, if you choose...
- 05/22/2009
• What can happen if you do not close out a put position you bought previously, as mentioned on pg 80 & 81?
Chapter 10, PRINCIPAL RISKS OF OPTIONS POSITIONS RISKS OF OPTION HOLDERS 4....
- 04/05/2010
• What day did the most recent 3 EZ Factors signal turn bullish? The sentiment looks like it turned bullish sometime in Sept. 2011.
You can get that date by logging in and going to the 3EZ Factors Central Hub...
- 10/27/2011
• What funds do you go into when the system says be out of the aggressive funds?
We trade through Fidelity Brokerage, so we use: (FDLXX)...
- 08/11/2011
• What happens if I am assigned on a short option position?
If you are assigned on a short put option, you will receive 100 shares of...
- 05/11/2009
• What happens to an option when there is an acquisition?
The volatility goes to zero with a cash acquisition, not the option. ...
- 05/21/2010
• What happens to my options when a stock splits?
There is no one answer to this question. Every split can be treated...
- 05/11/2009
• What is an iron condor option trade?
The iron condor or 4-way is nothing more than two credit spreads implemented...
- 03/05/2013
• What is the Excel function for Formula #5 that converts a percentage to a standard deviation?
The Excel function is normsinv. To enter in Excel, type: =normsinv(p)...
- 02/18/2010
• What size of an account do I need to trade high probability credit spreads that are taught in Options Wizardry?
You should have a minimum of $8,000 - $10,000 to trade this strategy, if you...
- 05/14/2009
• Where can I find out if a stock is in a merger or acquisition?
Here's a great source for mergers and acquisitions:...
- 01/19/2011
• Where can I get quotes to use for my calculations for Option Wizardry?
There are numerous websites where quotes are available for free. If you...
- 05/14/2009
• Where can I get stock and option pricing and volatility data?
We get our data from our own ODDS Online software (www.oddsonline.com). This...
- 05/11/2009
• Where do I get option quotes needed for ODDS The Key To 95% Winners trades?
You can get quotes from numerous websites. Our favorites are; www.cboe.com ,...
- 05/14/2009
• Where do I get the data needed to put into TradeMaster?
There are numerous places to get the data needed. On our website in the...
- 05/14/2009
• With options, how can an option price go up so much more than the stock price does?
In certain circumstances, the price of the option is independent of the price...
- 10/07/2009
About Fahrenheit and Celsius
If you look at the conversion formulas relating degrees Fahrenheit (°F) and Celsius (°C) that are implemented in our online converter, it is not immediately obvious where the numerical factors come from. The following gives a concise explanation.
To understand the relation between different temperature scales, one needs to look at the way each scale was originally set up. In the case of Celsius (also known as Centigrade), named after Anders
Celsius (1701-1744), the scale is based on two well-defined reference points. One is the freezing point of pure water (at normal pressure), taken as zero, 0°C, of the scale. The other point, taken as
100°C, corresponds to the point when pure water boils (again, under normal pressure – in general, phase transformation temperatures depend on the pressure).
The Fahrenheit scale, on the other hand, named after Gabriel Fahrenheit (1686-1736), was devised so that the zero of the scale sits at a temperature below the freezing point of water. To mark the lowest reference point of 0°F, the temperature of an equilibrated mixture of ice, water, and ammonium chloride was used. The second reference point, 32°F, corresponds to the freezing point of water. An additional point, taken as 96°F, was chosen as the body temperature of a healthy human.
The following plot graphically illustrates the relation between °F and °C scales.
Reference points for both scales are indicated on the graph. The slope and intercept of this straight line are those specific numerical factors that appear in the Fahrenheit and Celsius conversion formulas.
It is important to mention that the internationally accepted SI (Système International) system of units uses the kelvin as the base unit for temperature. The Kelvin (K) scale is based on a unique reference point – the absolute zero of temperature. Temperature conversion between °C and K is very simple:
T [K] = T [°C] + 273.15
Going from Kelvin to Fahrenheit is less straightforward. Our Kelvin converter page allows direct °F and K conversion and provides the corresponding equation.
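For readers who want the explicit formulas, here is a small sketch of the conversions described above (the function names are ours, not the converter's). The slope 9/5 and the intercept 32 follow directly from the two Celsius reference points mapping to 32°F and 212°F on the straight line:

```python
def c_to_f(c):
    """Celsius -> Fahrenheit: slope (212 - 32) / (100 - 0) = 9/5, intercept 32."""
    return c * 9 / 5 + 32

def f_to_c(f):
    """Fahrenheit -> Celsius: invert the line above."""
    return (f - 32) * 5 / 9

def c_to_k(c):
    """Celsius -> Kelvin: shift by 273.15, as given in the text."""
    return c + 273.15

def f_to_k(f):
    """Fahrenheit -> Kelvin: go through Celsius first, hence 'less straightforward'."""
    return c_to_k(f_to_c(f))
```

For instance, the boiling point of water checks out in all three scales: `c_to_f(100)` gives 212 and `c_to_k(100)` gives 373.15.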
Ratios and Proportions
7.1: Ratios and Proportions
Created by: CK-12
Learning Objectives
• Write, simplify, and solve ratios and proportions.
• Use ratios and proportions in problem solving.
Review Queue
1. Are the two triangles congruent? If so, how do you know?
2. If $AC = 5$, what is $GI$?
3. How many inches are in a foot? In a yard? In 3 yards?
4. How many cups are in a pint? In a quart? In a gallon? In 7 quarts?
Know What? You want to make a scale drawing of your room and furniture for a little redecorating. Your room measures 12 feet by 12 feet. Also in your room is a twin bed (36 in by 75 in), a desk (4
feet by 2 feet), and a chair (3 feet by 3 feet). You decide to scale down your room to 8 in by 8 in, so the drawing fits on a piece of paper. What size should the bed, desk and chair be? Draw an
appropriate layout for the furniture within the room. Do not round your answers.
Using Ratios
Ratio: A way to compare two numbers. Ratios can be written: $\frac{a}{b}$, $a:b$, or $a$ to $b$.
Example 1: The total bagel sales at a bagel shop for Monday is in the table below. What is the ratio of cinnamon raisin bagels to plain bagels?
Type of Bagel Number Sold
Plain 80
Cinnamon Raisin 30
Sesame 25
Jalapeno Cheddar 20
Everything 45
Honey Wheat 50
Solution: The ratio is $\frac{30}{80}$, which reduces to $\frac{3}{8}$.
Example 2: What is the ratio, in simplest form, of Honey Wheat bagels to total bagels sold?
Solution: Remember that order matters. Because the Honey Wheat is listed first, that is the number that comes first in the ratio (or in the numerator of the fraction). Find the total number of bagels sold:
$80 + 30 + 25 + 20 + 45 + 50 = 250$
The ratio is then $\frac{50}{250}=\frac{1}{5}$
We call the ratios 50:250 and 1:5 equivalent because one reduces to the other.
In some problems you may need to write a ratio of more than two numbers. For example, the ratio of the number of cinnamon raisin bagels to sesame bagels to jalapeno cheddar bagels is 30:25:20, or 6:5:4 in simplest form.
Measurements are used a lot with ratios and proportions. For example, how many feet are in 2 miles? How many inches are in 4 feet? You will need to know these basic measurements.
Example 3: Simplify the following ratios.
a) $\frac{7 \ ft}{14 \ in}$
b) $9m:900cm$
c) $\frac{4 \ gal}{16 \ gal}$
Solution: Change these so that they are in the same units.
a) $\frac{7 \bcancel{ft}}{14 \ \cancel{in}} \cdot \frac{12 \ \cancel{in}}{1 \ \bcancel{ft}}=\frac{84}{14}=\frac{6}{1}$
Notice that the units cancel each other out. A ratio should not have units once simplified.
b) It is easier to simplify ratios when they are written as a fraction. $\frac{9 \ m}{900 \ cm} \cdot \frac{100 \ cm}{1 \ m}=\frac{900}{900}=\frac{1}{1}$
c) $\frac{4 \ gal}{16 \ gal}=\frac{1}{4}$
Example 4: A talent show features dancers and singers. The ratio of dancers to singers is 3:2. There are 30 performers total, how many singers are there?
Solution: 3:2 is a reduced ratio, so there is a whole number $n$ that each part of the ratio can be multiplied by to give the actual numbers of dancers and singers.
$\text{dancers} = 3n, \ \text{singers} = 2n \ \longrightarrow \ 3n + 2n = 30 \ \longrightarrow \ 5n = 30 \ \longrightarrow \ n = 6$
Therefore, there are $3 \cdot 6 = 18$ dancers and $2 \cdot 6 = 12$ singers. As a check, $18 + 12 = 30$ performers in total.
Proportion: When two ratios are set equal to each other.
Example 4: Solve the proportions.
a) $\frac{4}{5}=\frac{x}{30}$
b) $\frac{y+1}{8}=\frac{5}{20}$
c) $\frac{6}{5}=\frac{2x+5}{x-2}$
Solution: To solve a proportion, you need to cross-multiply. Doing so for each part gives (a) $x = 24$, (b) $y = 1$, and (c) $x = -\frac{37}{4}$.
In a proportion written as $a:b = c:d$, the two middle terms ($b$ and $c$) are called the means and the two outer terms ($a$ and $d$) are called the extremes. For the proportion to be true, the product of the means must equal the product of the extremes. This can be generalized in the Cross-Multiplication Theorem.
Cross-Multiplication Theorem: Let $a, b, c,$ and $d$ be real numbers, with $b \neq 0$ and $d \neq 0$. If $\frac{a}{b}=\frac{c}{d}$, then $ad=bc$.
The proof of the Cross-Multiplication Theorem is an algebraic proof. Recall that multiplying by $\frac{2}{2}$, $\frac{b}{b}$, or $\frac{d}{d}$ is the same as multiplying by 1 $(b \div b=1)$.
Proof of the Cross-Multiplication Theorem
$\frac{a}{b} = \frac{c}{d} \qquad$ Multiply the left side by $\frac{d}{d}$ and the right side by $\frac{b}{b}$.
$\frac{a}{b} \cdot \frac{d}{d} = \frac{c}{d} \cdot \frac{b}{b}$
$\frac{ad}{bd} = \frac{bc}{bd} \qquad$ The denominators are the same, so the numerators are equal.
$ad = bc$
Think of the Cross-Multiplication Theorem as a shortcut. Without this theorem, you would have to go through all of these steps every time to solve a proportion.
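As an aside (not part of the original lesson), the shortcut is easy to check computationally. Here is a minimal sketch that solves $\frac{a}{b}=\frac{x}{d}$ for $x$ by cross-multiplication, using exact fractions so no rounding occurs:

```python
from fractions import Fraction

def solve_proportion(a, b, d):
    """Solve a/b = x/d for x: cross-multiplying gives b*x = a*d, so x = a*d/b."""
    return Fraction(a * d, b)
```

For Example 4(a) above, `solve_proportion(4, 5, 30)` returns 24, matching the cross-multiplication $5x = 120$.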
Example 5: Your parents have an architect’s drawing of their home. On the paper, the house’s dimensions are 36 in by 30 in. If the shorter length of your parents’ house is actually 50 feet, what is
the longer length?
Solution: Set up a proportion. If the shorter length is 50 feet, then it will line up with 30 in. It does not matter which numbers you put in the numerators of the fractions, as long as they line up appropriately.
$\frac{30}{36} = \frac{50}{x} \longrightarrow 1800 = 30x \longrightarrow x = 60$
So, the dimension of your parents’ house is 50 ft by 60 ft.
Properties of Proportions
The Cross-Multiplication Theorem has several sub-theorems that follow from its proof. The formal term is corollary.
Corollary: A theorem that follows quickly, easily, and directly from another theorem.
Below are three corollaries that are immediate results of the Cross Multiplication Theorem and the fundamental laws of algebra.
Corollary 7-1: If $a, b, c,$ and $d$ are nonzero and $\frac{a}{b}=\frac{c}{d}$, then $\frac{a}{c}=\frac{b}{d}$.
Corollary 7-2: If $a, b, c,$ and $d$ are nonzero and $\frac{a}{b}=\frac{c}{d}$, then $\frac{d}{b}=\frac{c}{a}$.
Corollary 7-3: If $a, b, c,$ and $d$ are nonzero and $\frac{a}{b}=\frac{c}{d}$, then $\frac{b}{a}=\frac{d}{c}$.
In other words, a true proportion is also true if you switch the means, switch the extremes, or flip it upside down. Notice that you will still end up with $ad=bc$
Example 6: Suppose we have the proportion $\frac{2}{5}=\frac{14}{35}$. Write down the other true proportions that follow from it.
Solution: First of all, we know this is a true proportion because you would multiply $\frac{2}{5}$ by $\frac{7}{7}$ to get $\frac{14}{35}$. Using the three corollaries, we can write:
1. $\frac{2}{14}=\frac{5}{35}$
2. $\frac{35}{5}=\frac{14}{2}$
3. $\frac{5}{2}=\frac{35}{14}$
If you cross-multiply all four of these proportions, you would get $70 = 70$
Corollary 7-4: If $a, b, c,$ and $d$ are nonzero and $\frac{a}{b}=\frac{c}{d}$, then $\frac{a+b}{b}=\frac{c+d}{d}$.
Corollary 7-5: If $a, b, c,$ and $d$ are nonzero and $\frac{a}{b}=\frac{c}{d}$, then $\frac{a-b}{b}=\frac{c-d}{d}$.
Example 7: In the picture, $\frac{AB}{XY}=\frac{BC}{YZ}=\frac{AC}{XZ}$.
Find the measures of $AC$ and $XY$.
Solution: This is an example of an extended proportion. Substituting in the numbers for the sides we know, we have $\frac{4}{XY}=\frac{3}{9}=\frac{AC}{15}$. Solve each pair separately for $XY$ and $AC$:
$\frac{4}{XY} = \frac{3}{9} \longrightarrow 36 = 3(XY) \longrightarrow XY = 12 \qquad \text{and} \qquad \frac{3}{9}=\frac{AC}{15} \longrightarrow 9(AC)=45 \longrightarrow AC=5$
Example 8: In the picture, $\frac{ED}{AD}=\frac{BC}{AC}$. Find $y$.
Solution: Substituting in the numbers for the sides we know, we have
$\frac{6}{y} = \frac{8}{12+8} \longrightarrow 8y = 6(20) \longrightarrow y = 15$
Example 9: If $\frac{AB}{BE}=\frac{AC}{CD}$ in the picture, find $BE$.
$\frac{12}{BE}=\frac{20}{25} \longrightarrow 20(BE) = 12(25) \longrightarrow BE = 15$
Know What? Revisited Everything needs to be scaled down by a factor of $\frac{1}{18} \ (144 \ in \div 8 \ in)$.
Bed: 36 in by 75 in $\longrightarrow$ 2 in by $4.1\overline{6}$ in
Desk: 48 in by 24 in $\longrightarrow$ $2.\overline{6}$ in by $1.\overline{3}$ in
Chair: 36 in by 36 in $\longrightarrow$ 2 in by 2 in
There are several layout options for these three pieces of furniture. Draw an 8 in by 8 in square and then the appropriate rectangles for the furniture. Then, cut out the rectangles and place inside
the square.
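Because the problem says not to round, it helps to keep the scale factor as an exact fraction. A quick sketch of the scaled furniture sizes (variable names are ours, not the text's):

```python
from fractions import Fraction

# 8 in drawing / 144 in of room = 1/18 scale factor
scale = Fraction(8, 144)

bed   = (36 * scale, 75 * scale)   # -> (2, 25/6) inches
desk  = (48 * scale, 24 * scale)   # -> (8/3, 4/3) inches
chair = (36 * scale, 36 * scale)   # -> (2, 2) inches
```

The fractions 25/6, 8/3, and 4/3 correspond to the repeating decimals $4.1\overline{6}$, $2.\overline{6}$, and $1.\overline{3}$ inches.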
Review Questions
1. The votes for president in a club election were: $\text{Smith}: 24 \qquad \text{Munoz}: 32 \qquad \text{Park}: 20$. Find the following ratios and write them in simplest form.
1. Votes for Munoz to Smith
2. Votes for Park to Munoz
3. Votes for Smith to total votes
4. Votes for Smith to Munoz to Park
Use the picture to write the following ratios for questions 2-6.
2. $AE:EF$
3. $EB:AB$
4. $DF:FC$
5. $EF:BC$
6. Perimeter of $ABCD$ : perimeter of $AEFD$ : perimeter of $EBCF$
7. The measures of the angles of a triangle have the ratio 3:3:4. What are the measures of the angles?
8. The lengths of the sides in a triangle are in a 3:4:5 ratio. The perimeter of the triangle is 36. What are the lengths of the sides?
9. The length and width of a rectangle are in a 3:5 ratio. The perimeter of the rectangle is 64. What are the length and width?
10. The length and width of a rectangle are in a 4:7 ratio. The perimeter of the rectangle is 352. What are the length and width?
11. The ratio of the short side to the long side in a parallelogram is 5:9. The perimeter of the parallelogram is 112. What are the lengths of the sides?
12. The length and width of a rectangle are in a 3:11 ratio. The area of the rectangle is 528. What are the length and width of the rectangle?
13. Writing Explain why $\frac{a+b}{b}=\frac{c+d}{d}$ follows from $ad=bc$.
14. Writing Explain why $\frac{a-b}{b}=\frac{c-d}{d}$ follows from $ad=bc$.
Solve each proportion.
15. $\frac{x}{10}=\frac{42}{35}$
16. $\frac{x}{x-2}=\frac{5}{7}$
17. $\frac{6}{9}=\frac{y}{24}$
18. $\frac{x}{9}=\frac{16}{x}$
19. $\frac{y-3}{8}=\frac{y+6}{5}$
20. $\frac{20}{z+5}=\frac{16}{7}$
21. Shawna drove 245 miles and used 8.2 gallons of gas. At the same rate, if she drove 416 miles, how many gallons of gas will she need? Round to the nearest tenth.
22. The president, vice-president, and financial officer of a company divide the profits in a 4:3:2 ratio. If the company made $1,800,000 last year, how much did each person receive?
23. Many recipes describe ratios between ingredients. For example, one recipe for paper mache paste suggests 3 parts flour to 5 parts water. If we have one cup of flour, how much water should we add
to make the paste?
24. A recipe for krispy rice treats calls for 6 cups of rice cereal and 40 large marshmallows. You want to make a larger batch of goodies and have 9 cups of rice cereal. How many large marshmallows
do you need? However, you only have the miniature marshmallows at your house. You find a list of substitution quantities on the internet that suggests 10 large marshmallows are equivalent to 1
cup miniatures. How many cups of miniatures do you need?
Given the true proportion $\frac{10}{6}=\frac{15}{d}=\frac{x}{y}$, where $d, x,$ and $y$ are nonzero, determine whether the following proportions are also true.
25. $\frac{10}{y}=\frac{x}{6}$
26. $\frac{15}{10}=\frac{d}{6}$
27. $\frac{6+10}{10}=\frac{y+x}{x}$
28. $\frac{15}{x}=\frac{y}{d}$
For questions 29-32, $\frac{AE}{ED} = \frac{BC}{CD}$ and $\frac{ED}{AD}=\frac{CD}{DB}=\frac{EC}{AB}$.
29. Find $DB$
30. Find $EC$
31. Find $CB$
32. Find $AD$
Review Queue Answers
1. Yes, they are congruent by SAS.
2. $GI = 5$
3. 12 in = 1 ft, 36 in = 3 ft, 108 in = 3 yards
4. 2c = 1 pt, 4c = 1 qt, 16 c = 4 qt = 1 gal, 28c = 7 qt
One-way ANOVA
Contents - Index
One-way ANOVA
When the effect that one factor has on one dependent variable is studied, one-way ANOVA is used to compare the means of several different groups. It is a generalization of Student's t-test, which compares the means of two groups. The null hypothesis tested with an ANOVA is that there is no difference between the group means, and a low p-value indicates that the null hypothesis should be rejected. If, for example, the effects of various blood pressure drugs are studied, the effects of the drugs can be compared with an ANOVA where the given drug is the studied factor and blood pressure is the dependent variable. If the ANOVA gives a low p-value, it would indicate that there is indeed a difference in effect on blood pressure between the studied drugs.
The analysis is based on the assumption that the data in each group is drawn independently from a normal distribution, and that all group distributions share a common variance. This is illustrated in the figure below, where the data is shown as histograms within each group. The curves show the typical bell-shaped form of a normal distribution, and since each curve has the same width, the groups have the same variance. If the samples are independent of each other, the result for one sample does not depend on the results of the other samples. These assumptions must be fulfilled; otherwise, the result from the ANOVA might be misleading.
The ANOVA will only tell you whether there is a significant difference of means between the groups, but not which of the groups differ from each other. If the ANOVA results in a p-value below the threshold value (e.g. <0.05), you can do a post hoc test to see if there is a significant difference between pairs of groups. GenEx offers three different post hoc tests: Tukey-Kramer's, Bonferroni's, and Dunnett's test. They should be used as follows.
• Tukey-Kramer's test is appropriate when all or many pairwise comparisons are of interest.
• Bonferroni's test is appropriate when a small selected number of pairwise comparisons are of interest.
• Dunnett's test is appropriate when all groups should be compared against one control group.
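To make the ANOVA table quantities concrete, here is a minimal pure-Python sketch of the computation performed internally (the function name and layout are ours, not GenEx's):

```python
def one_way_anova(groups):
    """Return (SS_between, SS_within, df_between, df_within, F) for a list of groups."""
    k = len(groups)                               # number of groups
    n = sum(len(g) for g in groups)               # total sample size
    grand_mean = sum(sum(g) for g in groups) / n
    means = [sum(g) / len(g) for g in groups]

    # Between-group variation: how far each group mean sits from the grand mean.
    ss_between = sum(len(g) * (m - grand_mean) ** 2 for g, m in zip(groups, means))
    # Within-group variation: spread of the observations around their own group mean.
    ss_within = sum(sum((x - m) ** 2 for x in g) for g, m in zip(groups, means))

    df_between, df_within = k - 1, n - k
    f_stat = (ss_between / df_between) / (ss_within / df_within)
    return ss_between, ss_within, df_between, df_within, f_stat
```

The p-value is then obtained from the F distribution with (df_between, df_within) degrees of freedom (e.g. via `scipy.stats.f.sf` in Python); GenEx reports all of these columns in its ANOVA table.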
How to
Enter the data in the Data editor together with the classification columns. The data can include several different classification columns, but only one will be used in the one-way ANOVA. Do not use
zero (0) in the classification columns!
To analyse your data, press the One-way ANOVA button in the Statistics tab in to top of the main window.
This will open the analysis in the Control panel where you choose the genes that you want to analyze and which one of the classification columns that should be used to divide the data into groups.
You can also choose whether to do a post hoc test or not.
The different post hoc tests all require additional information for the analysis. All tests will produce confidence intervals for each pairwise comparison, so the confidence level must be specified or left at its default value of 95%. Both Bonferroni's and Dunnett's tests are available as 1-sided or 2-sided tests, where 2-sided is the default. Bonferroni's test requires that at least one pairwise comparison is chosen from the Comparisons list, and Dunnett's test requires that one control group is chosen from the Control group list. If the number of specified pairwise comparisons is large in Bonferroni's test, it might be better to perform Tukey-Kramer's test instead.
To see the results, press the Run button down at the right. The results are presented as one ANOVA table for each gene, with sums of squares (SS), degrees of freedom (df), mean sums of squares (MS), F-statistics (F), and p-value. If several genes are tested at once, you will be warned that you are performing multiple tests, and a p-value will be suggested as a threshold to keep the overall significance at 0.05. The suggested value is the Šidák-corrected p-value.
If a post hoc test is chosen, an additional window with the pairwise comparisons will be shown. There is one result table for each gene, including a confidence interval (of the specified confidence level) for the difference between the groups (CI low-high), the estimated difference between the groups (diff), a test statistic, and a p-value. A p-value below the threshold value indicates that there is a significant difference between those groups. The family error rate is controlled for within the analysis of one gene, but if more than one gene is tested, a message box will warn that multiple tests are performed and suggest a corrected p-value in the same way as for the ANOVA table. No exact p-values are calculated in Dunnett's test, but it is stated whether the p-value is >=0.05, <0.05 (0.01 <= p-value < 0.05), or <0.01.
Warning: Do not use 0 (zero) in the classification columns that define the groups.
st: bootstrapping with senspec
st: bootstrapping with senspec
From "Bains, Lauren" <Lauren.Bains@insel.ch>
To "statalist@hsphsun2.harvard.edu" <statalist@hsphsun2.harvard.edu>
Subject st: bootstrapping with senspec
Date Tue, 4 Sep 2012 09:23:19 +0000
I am trying to use bootstrapping in STATA 12.1 to calculate 95% confidence intervals of "sensitivity", "specificity", and "accuracy" on a clustered dataset of diagnosing positive and negative lymph node metastases clustered by pelvic side (right and left pelvic sides). I am new to programming with STATA, and am having some problems with the CIs, which I assume are likely related to my initial programming attempts.
I am using the module senspec to return the true positives (TP), false negatives (FN), TN, FP, calculate accuracy, and return the sensitivity, specificity, and accuracy, which I downloaded from:
My bootstrapping program looks like this (apologies for what is likely an inelegant attempt):
capture program drop bootstrap_sens_spec_da
program define sens_spec_da, rclass
tempvar s_calc_sens s_calc_spec fp1 fn1 tp1 tn1
senspec `1' `2', sensitivity(`s_calc_sens') specificity(`s_calc_spec') nfpos(`fp1') nfneg(`fn1') ntpos(`tp1') ntneg(`tn1')
return scalar calc_da = (`tp1'+`tn1')/(`tp1'+`tn1'+`fp1'+`fn1')
return scalar calc_sens =`s_calc_sens'
return scalar calc_spec =`s_calc_spec'
Then, I am using bootstrapping to calculate the confidence intervals:
bootstrap r(calc_sens) r(calc_spec) r(calc_da), reps(1000) cluster(side): sens_spec_da histo_LN_ bin_R3_LN_
estat bootstrap, all
Some of the time this seems to work although the CIs seem large, compared with the results that one gets for sensitivity and specificity when not accounting for clustering using, for example, diagt. Sometimes it does not work at all. Using diagt to find the sensitivity and specificity for the 3rd reader works fine, but the bootstrapping fails. Here is the output of diagt:
. diagt histo_LN_ bin_R3_LN_
| bin_R3_LN_
histo_LN_ | Pos. Neg. | Total
Abnormal | 25 19 | 44
Normal | 25 171 | 196
Total | 50 190 | 240
True abnormal diagnosis defined as histo_LN_ = 1
[95% Confidence Interval]
Prevalence Pr(A) 18.3% 13.6% 23.8%
Sensitivity Pr(+|A) 56.8% 41.0% 71.7%
Specificity Pr(-|N) 87.2% 81.7% 91.6%
And here is STATA's output of bootstrapping on the readings for R3 (the third reader):
. bootstrap r(calc_sens) r(calc_spec) r(calc_da), reps(1000) cluster(side): sens_spec_da histo_LN_ bin_R3_LN_
Bootstrap results Number of obs = 240
Replications = 1000
command: sens_spec_da histo_LN_ bin_R3_LN_
_bs_1: r(calc_sens)
_bs_2: r(calc_spec)
_bs_3: r(calc_da)
(Replications based on 2 clusters in side)
| Observed Bootstrap Normal-based
| Coef. Std. Err. z P>|z| [95% Conf. Interval]
_bs_1 | 1 . . . . .
_bs_2 | 0 (omitted)
_bs_3 | .1833333 .0235188 7.80 0.000 .1372373 .2294294
(notice that the first two results, for sensitivity and specificity, fail to match with diagt)
This is my first time posting to the STATA listserv, so I give my apologies in advance if I have provided too much (or not enough) detail. I can attach the dataset if that would be helpful. Any suggestions would be much appreciated!
Lauren Bains
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
expanding steel problem
July 10th 2005, 12:44 AM #1
help please
i'm finding this question difficult to work out,any help would be great thanks!
here is the question:
A piece of steel is 11.5 meters long at 22 degrees Celsius. It is heated to 1221 degrees Celsius, close to its melting temperature. How long is it?
It should not be that difficult. It is only a substitution, or plugging in, problem. Nothing to analyze really. That is if you know the formula and constants to use.
The question is about linear thermal expansion of a length of steel.
In Physics we learned for this case that
"the change in length is proportinal to the change in temperature"
delta L ----> L*(delta T)
delta L = k*[L*(delta T)] -------(i)
>>>delta L is (change in length) = (final length minus initial length)
Or, (final length) = (initial length) +(delta L)
>>>k = constant of proportionality = average coefficient of linear expansion.
For steel, k = 11*[10^(-6)] per degree Centigrade
>>>L = initial length
>>>delta T = change in temperature = (final temp. minus initial temp.)
L = 11.5 m
delta T = (1221 - 22) = 1199 deg Celsius
So, substituting all those into (i),
delta L = [11*10^(-6)]*[11.5 * 1199] = 0.152 m
Therefore, that piece of steel is
11.5 + 0.152 = 11.652 m long
when its temperature is 1221 degrees Celsius.
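As a quick check of the arithmetic (note that delta T = 1221 − 22 = 1199 degrees, so delta L is about 0.152 m), here is the same calculation as a small script (the function name and parameter names are ours):

```python
def expanded_length(L0, t0, t1, alpha=11e-6):
    """Final length after linear thermal expansion: L = L0 * (1 + alpha * (t1 - t0)).

    alpha defaults to the average coefficient of linear expansion for steel,
    11e-6 per degree Celsius, as used in the worked solution above.
    """
    return L0 * (1 + alpha * (t1 - t0))
```

Calling `expanded_length(11.5, 22, 1221)` gives roughly 11.652 m.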
thanks that really helped
Supplemental Learning Materials - Math
Algebra and trigonometry
Basic math
Calculator help
Charts and graphs
Coordinate planes
General math sites
Order of operations
Problem solving and statistics
Squares, cubes, roots
Unit analysis
Whole numbers
Algebra and trigonometry
Work on a multitude of algebra and trigonometry problems.
Algebra and Trigonometry
Algebra and Trigonometry
Algebra Challenge Exercises
Algebra Help
Algebra Homework Help
Algebra Review in Ten Lessons
Algebra Topics
Basic Algebra
College Algebra
College Algebra
College Algebra
College Algebra
College Algebra Lecture Notes, Handouts, and TI-83 Handouts
College Trigonometry
Complex Numbers
Dave's Short Course on Complex Numbers
Easy Algebra Equations
Graphing Functions Interactively in Pre-Calculus, Calculus, and Differential Equations
Graphs of Functions and Algebra
Intro to Algebra Fast Facts
Intro to Algebra
Introduction to Algebra
Intro to Trigonometry: Tutorial
Math Drills
Math for Morons Like Us
Notes for College Algebra
Online Tutorials in Topics from Algebra, Trigonometry, and Geometry
Review of Topics from Algebra/Trigonometry, or Pre-Calculus
Shannon's Tutorial Menu
Solving Systems of Equations
Solving Three Equations with Three Unknowns
Trigonometry Help
Trigonometry Tutorial
Trigonometry Tutorials
Writing Algebraic Equations
Writing Equations
Basic math
Solve an array of basic math problems
Basic Math
General Math and Probability Tutorials
Hart's Math Rules from Chaminade College Preparatory School
Math Tutorials by Students
Online Finals: Basic Math
Topics in Basic Math and Algebra
Calculator help
Find guides on how to use your calculator
Graphing Calculator Help
Graphing Calculator Instructions
Graphing Calculator Tutorial
How to Use Your TI-83 for Statistics
Sequences with the TI-83 Tutorial
Statistics on the TI-83 Plus
TI-83/84/89/92 Procedures and Help
TI-83 Calculator Tutorials
TI-83 for Calculus
TI-83 Plus Basic Tutorial
TI-83 Plus Tutorial
TI-83 Tutorial
TI-83 Tutorials
TI-83 Tutorial - Matrices
TI-83 Tutorial - Statistics and Programming
TI-84 Plus Basic Tutorial
TI-84 Plus Tutorials
TI-89 Tutorial
TI-83 / TI-83 Plus
TI-83 Plus Tutorial
Tutorials on Using TI-83 in Statistics
Using the TI-83 Graphing Calculator
Charts and graphs
Chart your success in math through learning about charts and graphs
How to Read a Bar Graph
Using Data and Statistics
Coordinate planes
Coordinate a victorious plan for understanding coordinate planes
Basic Graphing/Coordinate Planes
Coordinate Geometry
Graphing on Coordinate Planes
Graphing Fast Facts
Graphing Points and Lines
Introduction to Plotting Points
Points on a Graph
Slope and Y-Intercept
Slope of a Line
Slope of a Line
The Coordinate Plane
Unravel the mystery of decimals
Decimals: Fast Facts
Discover the wonder of exponents
Exponents Lesson
Keeping up with Exponential Growth
Solve problems with fractions
Fractions: Fast Facts
Visual Fractions
What is a Fraction?
General math sites
Links to websites that specialize in math assistance at all levels
Fast Facts
Frank Potter’s Science Gems
Math Power
Math League
Angle for an A in geometry
Angles Discussion
Area and Perimeters
Area and Volume Formulas
Circumference of a Circle
Elementary Geometry eBook
Formula Quick Guide
Perimeters of Polygons
Plane Geometry Tutorials
Pythagorean Theorem
Similar Figures
Similar Figures
Plot a course for success in graphing
Graphing and Calculating Functions
Graphing Lines and Curves
Figure out the brain twisting problems with integers
Adding Integers
Introduction to Integers
Signed Integers
Subtracting Integers
Order of operations
Calculate a strategic plan for order of operations
Order of Operations
Order of Operations
Order of Operations with Exponents
Unveil the secrets of percents
Meaning of Percents
Percent and Probability
Problem Solving with Percents
Using Percents
What is a Percent?
Always be prepared for the twists and turns of pre-calculus
Online Tutorials in Topics from Algebra, Trigonometry, and Geometry
Pre-Calculus with Trigonometry
Review of Topics from Algebra/Trigonometry or Pre-Calculus
Problem solving and statistics
Formulate a plan for solving problems
Introduction to Probability
Mean, Median, Mode, and Range
Mean, Median, Mode, and Range
Mode of a Set of Data
Notes on Probability
Range of a Set of Data
Understanding Mean, Median, and Mode
Using Data and Statistics
Explore the world of ratios and proportions
Proportions Fast Facts
Ratio and Proportion
Working with Ratios
Squares, cubes, roots
Decode the power of squares, cubes and roots
Squares and Cubes
Squares and Square Roots
Unit analysis
Weigh your options with unit analysis
Automated Online Units/Measurements Conversion Program
Whole numbers
Work on operations involving whole numbers
Whole Numbers and their Properties
Whole Number Tips | {"url":"http://www.midwestculinary.com/real-world-academics/student-services/images-files/Supplemental%20Learning%20Materials/Math","timestamp":"2014-04-16T22:46:45Z","content_type":null,"content_length":"58835","record_id":"<urn:uuid:b2b97a2e-d010-428b-928d-a1db3b071e51>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00480-ip-10-147-4-33.ec2.internal.warc.gz"} |
Continuity Question
January 22nd 2011, 02:40 AM #1
Dec 2010
Continuity Question
Literally have no idea how to even start this question:
Prove that if f is continuous at $x_0$, then |f| is continuous at $x_0$.
Can someone give me a hint so I can start? Cheers.
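The reply itself isn't preserved here, but the standard hint for this exercise (my addition, not from the thread) is the reverse triangle inequality:

```latex
% For all x in the domain of f:
\bigl|\,|f(x)| - |f(x_0)|\,\bigr| \le |f(x) - f(x_0)|
% so for any epsilon > 0, the delta that works for f at x_0
% also works for |f| at x_0.
```

That is, continuity of $|f|$ at $x_0$ follows immediately from continuity of $f$ there, with the same $\delta$.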
January 22nd 2011, 03:04 AM #2 | {"url":"http://mathhelpforum.com/differential-geometry/169016-continuity-question.html","timestamp":"2014-04-20T23:52:18Z","content_type":null,"content_length":"32452","record_id":"<urn:uuid:d67099e8-75d4-4b07-9dfb-d05a517ffc73>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00010-ip-10-147-4-33.ec2.internal.warc.gz"} |
Elliptic logarithm: Introduction to elliptic exp and elliptic log (subsection EllipticExpLogs/04)
Connections within the group of elliptic exp and elliptic log and with other function groups
Representations through more general functions
The elliptic logarithm is the particular case of the hypergeometric function of two variables (Appell function ):
Representations through related equivalent functions
The elliptic exponent is connected with Jacobi amplitude by the following formula:
The elliptic exponent and elliptic logarithm can be expressed through direct and inverse Weierstrass functions by the following formulas:
The elliptic logarithm has the following representation through incomplete elliptic integral :
Relations to inverse functions
The elliptic logarithm is the inverse function to the elliptic exponent and its derivative . Relations between them are described by the following formulas: | {"url":"http://functions.wolfram.com/EllipticFunctions/EllipticLog/introductions/EllipticExpLogs/04/ShowAll.html","timestamp":"2014-04-17T01:22:40Z","content_type":null,"content_length":"39448","record_id":"<urn:uuid:0154b363-5144-4abd-acac-f53bdef9188f>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00022-ip-10-147-4-33.ec2.internal.warc.gz"} |
even more trig questions!
June 25th 2010, 11:39 AM #1
Junior Member
Jun 2010
United Kingdom
even more trig questions!
I have another question =)
First I'll attach the image so that it's easier to follow:
For part a) I tried getting to the required answer but haven't been able to. I'll write down what I've done. Could someone tell me where I've gone wrong?
First I isolated t
so from x=2sint ===> t = arcsin(x/2)
then to get a cartesian equation, I've put t back into the y equation:
which simplifies to y=ln(2/√(4-x^2))
differentiating i get:
(here im not sure if its right)
then when t = pi/3
so y - ln2=-√3 (x-√3)
when y=0
√3.x=3 + ln2
so x= 1/√3 . (3 + ln2)
but that isn't the answer....
You're aiming at the right thing. You have the right overall view of the problem. But your derivative is wrong. Check that out.
Another way to do this is to note that $\frac{dy}{dx}=\left(\frac{dy}{dt}\right)/\left(\frac{dx}{dt}\right)$, and just take the derivatives of the parametric representations.
doing it that way i get dy/dx = √3 which doesn't work either.
Here is what I'm doing:
dx/dt = 2cost
(e^y)(dy/dt)= (sect.tant)
dy/dt= (sect.tant)/(e^y)
==> dy/dx = ((sect.tant)/(e^y)) x (1/2cost)
then putting t=arcsin(x/2) and y=ln2
i get dy/dx= √3
But that gives me x=(1/√3)(3 + ln2)
You've made more work for yourself than necessary. But I definitely agree with the slope of your line. What's the equation of your tangent line?
the tangent has a gradient of √3 and passes through the point (√3, ln2)
thus its equation is
y-ln2 = √3(x-√3)
y=(√3)x - 3 + ln2
at point A, y = 0
thus (√3)x - 3 + ln2 = 0
which suggests that x= (1/√3)(3 - ln2)
Well, isn't your answer correct, now? You had a sign error in post #3 in conjunction with the $\ln(2)$, but that appears to be fixed now. Also, note that $\frac{1}{\sqrt{3}}=\frac{\sqrt{3}}{3}$.
So, aren't you done with this part?
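A quick numeric check of this part (my sketch, not part of the original thread): using the Cartesian form $y=\ln(2/\sqrt{4-x^2})$ and its derivative $y'=x/(4-x^2)$, the tangent at $x_0=\sqrt3$ should have slope $\sqrt3$ and cross the x-axis at $(\sqrt3/3)(3-\ln 2)$.

```python
import math

def y(x):
    return math.log(2 / math.sqrt(4 - x**2))

def dydx(x):
    return x / (4 - x**2)          # derivative of the Cartesian form

x0 = math.sqrt(3)
slope = dydx(x0)                   # expect sqrt(3)
x_intercept = x0 - y(x0) / slope   # where the tangent line meets y = 0

print(slope, x_intercept)
```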
how can i know in an exam that $\frac{1}{\sqrt{3}}=\frac{\sqrt{3}}{3}$ without a calculator? I just don't see it for some reason..
I'll have a go at part b) now =)
Multiply the top and bottom of the LHS by $\sqrt{3}$.
haha, please forget i ever asked that!
For the second part my intuition is to integrate the curve's cartesian equation y= ln(2/√(4-x^2)) between (√3/3)(3-ln2) and 0
and then integrate the curve again between √3 and y= ln(2/√(4-x^2))
add those two, and then subtract the area of the right angle triangle formed between x=A x=√3 and P.
If this is a good way of doing it, when integrating ln(2/√(4-x^2))
do you have to use integration by parts?
If so, if you let u=ln(2/√(4-x^2)) then du/dx here will be the same integral as before but this time replace the t's with t=arcsin(x/2) and the y by y=ln(sect)?
Thank you
Hey, we all have brain freezes.
Again, I think you're making things more difficult than you need. Just integrate the curve from $0$ to $\sqrt{3}$, and then subtract the area of the triangle. There's no need to break up the
integral of the upper curve into two pieces.
As for finding the antiderivative of the function, I think I would probably go with a trig substitution first. What do you think?
no integration by parts?
It'd be easy if it were just y= 2/√(4-x^2) but the ln of y= ln(2/√(4-x^2)) throws me off..
i'm not sure what trig substitution to use..
I do trig substitutions all the same way.
1. I draw a right triangle and label the right angle.
2. Look at the signs of the two squared terms. If they have the same sign, then let each side be an unsquared term, and compute the hypotenuse using Pythag. If they have the opposite sign, let
the positive unsquared term be the hypotenuse, and the negative unsquared term be one of the sides. Compute the remaining side using Pythag. Label all sides.
3. Assign the angle theta and label on drawing.
4. Transfer the integrand, the differential, and the limits (if you're doing a definite integral) into the theta domain.
5. Perform the rest of the integration in the theta domain. If you have a definite integral, you can finish here. Otherwise,
6. In the case of an indefinite integral, transfer back to the original variable's domain. You can use your triangle that you drew and labeled in order to do this easily.
So, what do you think now? Where would you start?
I don't really understand what you mean in number 4..
I've drawn the triangle and labeled all the sides and theta. I have my integrand y= ln(2/√(4-x^2)), what do you mean by differential? what I get when I differentiate y= ln(2/√(4-x^2))? and then I
have my limits 0 and √3.
what do you mean by transfer these into the theta domain?
Do you mean for me to get to:
dy/dx = x / (4-x^2)
So, number 4 works like this. You've got your integral,
Now then, suppose you draw your triangle with theta as the lower left angle. The squared terms have opposite signs, so you let 2 be the hypotenuse, and x the opposite side to theta. That means
the adjacent side has length $\sqrt{4-x^{2}}$. You need to transform the limits, the integrand, and the differential so that there are no more x's, only theta's. Here are some relevant equations
that let you do that:
So, hopefully, getting the integrand to have only theta's should be fairly straight-forward. How do you propose to get the limits and the differential to have only theta's?
June 25th 2010, 04:33 PM #15 | {"url":"http://mathhelpforum.com/trigonometry/149345-even-more-trig-questions.html","timestamp":"2014-04-23T21:24:54Z","content_type":null,"content_length":"85464","record_id":"<urn:uuid:3e70c81d-ed5b-437b-8c25-670ca4229d35>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00452-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Tutors
Mesa, AZ 85210
Former Supplemental Instructor with a Knack and a Love for Mathematics
I have my Associate of Science Degree in Mathematics, and I spent several semesters as a Mathematics Supplemental Instructor holding free sessions for students outside of the classroom (which I also attended). Though the sessions were free for the students, I was...
Offering 10+ subjects including algebra 1, algebra 2 and calculus | {"url":"http://www.wyzant.com/geo_Phoenix_Math_tutors.aspx?d=20&pagesize=5&pagenum=7","timestamp":"2014-04-17T04:28:15Z","content_type":null,"content_length":"60601","record_id":"<urn:uuid:ddd21fa3-da90-4ae5-9bac-ab108de55a58>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00626-ip-10-147-4-33.ec2.internal.warc.gz"} |
D. J. Bernstein
Authenticators and signatures
A state-of-the-art public-key signature system
Standard signatures; signing
The standard signature of a message m under a secret key (p,q,z) is the unique signature (e,f,r,s) such that
• r = H1(z,m);
• H0(r,m) = efs^2 mod pq;
• e is 1 if H0(r,m) is a square modulo q, otherwise -1;
• f is 1 if e H0(r,m) is a square modulo p, otherwise 2;
• s is in {0,1,...,(pq-1)/2}; and
• {s,pq-s} contains a square modulo pq.
Here H1 is the hash function specified below, producing outputs in {0,1,...,15}.
Signers are required to generate standard signatures.
How do I sign a message?
The signer can compute the standard signature of a message m with Algorithm 3.1 of my rwtight paper:
1. Compute r = H1(z,m).
2. Compute h = H0(r,m).
3. Compute u = h^{(q+1)/4} mod q.
4. If (u^2-h) mod q = 0, set e = 1; otherwise set e = -1.
5. Compute v = (eh)^{(p+1)/4} mod p.
6. If (v^2-eh) mod p = 0, set f = 1; otherwise set f = 2.
7. Compute w = f^{(3q-5)/4}u mod q. (This takes just one multiplication modulo q, since the signer precomputed 2^{(3q-5)/4} mod q; recall that f is 1 or 2.)
8. Compute x = f^{(3p-5)/4}v mod p. (The signer precomputed 2^{(3p-5)/4} mod p.)
9. Compute y = w + q(q^(p-2)(x-w) mod p). (The signer precomputed q^(p-2) mod p.)
10. Set s = min{y,pq-y}.
11. Print (e,f,r,s).
The correctness of this algorithm is proven in Theorem 3.2 of my rwtight paper. Of course, other algorithms that produce exactly the same output are acceptable.
Note that there are no Euclid-type Jacobi-symbol computations here.
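Steps 3-10 of Algorithm 3.1 translate almost line-for-line into code. The sketch below is my illustration with toy primes (p = 11 ≡ 3 mod 8, q = 7 ≡ 7 mod 8) and a bare integer h standing in for H0(r,m); it is nothing like the real parameter sizes, and the hashing and key format are omitted:

```python
def rw_sign(h, p, q):
    """Steps 3-10 of Algorithm 3.1: find (e, f, s) with e*f*s^2 = h (mod pq)."""
    u = pow(h, (q + 1) // 4, q)
    e = 1 if (u * u - h) % q == 0 else -1
    v = pow(e * h, (p + 1) // 4, p)          # Python's 3-arg pow handles negative bases
    f = 1 if (v * v - e * h) % p == 0 else 2
    w = pow(f, (3 * q - 5) // 4, q) * u % q
    x = pow(f, (3 * p - 5) // 4, p) * v % p
    y = w + q * (pow(q, p - 2, p) * (x - w) % p)   # CRT recombination
    s = min(y, p * q - y)
    return e, f, s

p, q = 11, 7                      # toy primes: 11 = 3 mod 8, 7 = 7 mod 8
for h in (2, 3, 10):              # sample values coprime to pq
    e, f, s = rw_sign(h, p, q)
    assert e * f * s * s % (p * q) == h % (p * q)
```

The final assertion is exactly the cheap fault check recommended below: efs^2 mod pq must recover h.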
Signers worried about faults in the computation can check at the end that efs^2 mod pq = h. This is very fast, so I recommend that all signers do it. A more thorough double-check is provided by the
following alternate algorithm:
1. Compute r = H1(z,m).
2. Compute h = H0(r,m).
3. Compute U = h^{(q+1)/8} mod q.
4. If (U^4-h) mod q = 0, set e = 1; otherwise set e = -1.
5. Compute V = (eh)^{(p-3)/8} mod p.
6. If (V^4(eh)^2-eh) mod p = 0, set f = 1; otherwise set f = 2.
7. Compute W = f^{(3q-5)/8}U mod q.
8. Compute X = f^{(9p-11)/8}V^3 eh mod p.
9. Compute Y = W + q(q^(p-2)(X-W) mod p).
10. Compute y = Y^2 mod pq.
11. Set s = min{y,pq-y}.
12. Check that efs^2 mod pq = h.
13. Print (e,f,r,s).
What is the hash function H1?
H1(z,m) is the first four bits of SHA-160L(1,z,m). | {"url":"http://cr.yp.to/sigs/sign.html","timestamp":"2014-04-18T11:04:04Z","content_type":null,"content_length":"2815","record_id":"<urn:uuid:42831983-04d1-444b-982b-57cd5636cd5e>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00114-ip-10-147-4-33.ec2.internal.warc.gz"} |
How to determine effective tire diameter
Well, as I said it was just for fun. I got interested in the math and kinematics and you can actually calculate the velocity and position of a tire element... wait - as Johngrass1 said - put down the
pencil, step away from the paper...
Billyshope - I think I do understand your point, I just didn't agree and was attempting to explain why. And believe me, I would never think of disagreeing with the industry. In fact, thought I was
just discussing it with you!
Regardless, I did what all good nerds do when faced with a problem - I Googled it. This is not a new question and people have made careers out of tire mechanics and dynamics. (Check out
) The net is that "Rolling Circumference" (ie. the distance a loaded tire rolls in one rev) is not the same as the unloaded tire circumference. The difference is due to the tire deflection under
load. There are plenty of references and papers on this. Alot of patents, papers on estimating tire pressure by monitoring changes in rolling circumference.
This one from Continental Tire. "
Deflation Detection System (DDS). DDC identifies a loss of pressure indirectly, using data from the wheel speed sensors of the electronic braking system – because when a tire loses pressure its
rolling circumference decreases."
The clearest answer was from a tire expert answering the same question.
http://experts.about.com/q/Tires-235...ence-tyres.htm Answer
There are a couple of things that complicate the "circumference" of tires.
1) There are "calculators" that will calculate the circumference (diameter) of a tire based on the tire size. Use a search engine with the key words "tire calculator"
2) These "calculators" give you an answer based on the "size", which is different than the actual physical dimensions. Said another way, a P205/65R15 does not
actually have to be 205 mm wide and have an aspect ratio of 65%. There is quite a bit of variability in the market
3) The actual circumference (diameter) of a free hanging (not touching anything) tire is different than the rolling circumference (Rolling diameter) because the tire deflects
under load. Different inflation pressures and different loads will affect the rolling circumference, but as a general rule a properly inflated and properly loaded tire will have a rolling circumference about 97% of the free hanging circumference - a 3% difference.
4) The difference in circumference between tires will have a minor effect compared to other factors. Rolling resistance greatly affects fuel economy, so acceleration is also affected and rolling
resistance varies from tire to tire. However, a change in rolling circumference of a tire acts in a similar way as a change in final drive gearing.
Hope this helps.
About Barry Smith
I have over 30 years experience in the design, manufacturing, and testing of tires. I have served as the technical advisor to the "800" number. I have authored or co-authored many publications -
usually without credit. I can answer almost any technical question, but please don't ask me to compare brands. I have prejudices because of my work experience.
Member SAE (Society of Automotive Engineers) Member Tire Society (Tire Technical Organization) SCCA Regional Competiton License holder Authored many training manuals on tires, their care and use.
So, if you really care - you need to adjust unloaded circumference by about 3%. However, now that I know - I don't care anymore!
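As a sketch of that 3% adjustment (my numbers, using the nominal size arithmetic for a P205/65R15 implied by the expert's point 2, so treat them as approximate rather than a measured figure):

```python
import math

# Nominal free diameter of a P205/65R15: rim diameter plus two sidewalls.
rim_mm = 15 * 25.4            # 15-inch rim converted to mm
sidewall_mm = 205 * 0.65      # section width (mm) times aspect ratio
free_diameter_m = (rim_mm + 2 * sidewall_mm) / 1000

free_circ = math.pi * free_diameter_m     # unloaded circumference, m
rolling_circ = 0.97 * free_circ           # ~3% smaller under load, per the thread

print(f"free: {free_circ:.3f} m, rolling: {rolling_circ:.3f} m")
```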
Thanks for the responses!
Ed aka "Bob Goodyear" | {"url":"http://www.hotrodders.com/forum/how-determine-effective-tire-diameter-101007.html","timestamp":"2014-04-16T17:49:21Z","content_type":null,"content_length":"176339","record_id":"<urn:uuid:b7071fdc-8d7b-4429-8670-f0f7c0a40eb5>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00006-ip-10-147-4-33.ec2.internal.warc.gz"} |
Fourier Expansion in d-dimensions
September 30th 2012, 02:08 PM
Fourier Expansion in d-dimensions
While reading, I came across an expansion of suitable functions $f:\mathbb{R}^d \to \mathbb{R}$ as a Fourier series $f(x)$ ~ $\displaystyle \sum_{k \in \mathbb{Z}^d} \hat{f}(k) e^{ik \cdot x}$
and questions of convergence. I'm familiar with the argument for pointwise convergence of the symmetric sums defined by $s_N(f; x) = \sum_{|k| \le N} \hat{f}(k) e^{ikx}$ when f is, say
differentiable, on $\mathbb{R}$, by convolving the function with the Dirichlet kernel. I decided to try and generalize the argument for the very special case of $f \in C^{\infty}(\mathbb{R}^d)$
and $2 \pi$ periodic in each variable. However, my question then became the appropriate way to generalize the partial summations (or summations over d-tuples in general). The most natural choice
for extending the argument seemed to be by replacing the absolute value with $|x|_{\infty} := \sup_{j=1,...,d} |x_j|$. So summing over symmetric cubes about the origin.
However, there are of course many norms to pick here. In fact, $|x|_p := (\sum_{j=1}^d |x_j|^p )^{\frac{1}{p}}$ for $1 \le p < \infty$ all provide viable summation candidates. While for my
purposes the limiting case of $p = \infty$ was sufficient, I'm curious about the differences for the intermediate cases. In particular, if $1 \le p < \infty$, f is nice enough to make the question
make sense, and $s_N^p(f; x) := \sum_{|k|_p \le N} \hat{f}(k) e^{i k \cdot x}$, what effect does p have on $\lim_{N \to \infty} s^p_N(f;x)$, either in terms of utility of the expansion or in
convergence in general? | {"url":"http://mathhelpforum.com/differential-geometry/204367-fourier-expansion-d-dimensions-print.html","timestamp":"2014-04-18T18:37:03Z","content_type":null,"content_length":"6921","record_id":"<urn:uuid:c8e6becd-b5b4-48dc-849b-b548f9cefa65>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00569-ip-10-147-4-33.ec2.internal.warc.gz"} |
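One concrete way to see how $p$ changes the partial sums is to look at the index sets themselves: for $d=2$, the $\ell^1$ diamond, the $\ell^2$ disk, and the $\ell^\infty$ square pick up different lattice points at the same cutoff $N$ (a sketch of mine, not from the post):

```python
def index_set(N, p, d=2):
    """Lattice points k in Z^d with |k|_p <= N (pass p = float('inf') for the sup norm)."""
    from itertools import product
    pts = []
    for k in product(range(-N, N + 1), repeat=d):
        norm = max(abs(c) for c in k) if p == float('inf') else sum(abs(c)**p for c in k)**(1 / p)
        if norm <= N + 1e-12:      # small tolerance for float round-off on the boundary
            pts.append(k)
    return pts

N = 3
sizes = {p: len(index_set(N, p)) for p in (1, 2, float('inf'))}
print(sizes)   # the sets are nested: diamond inside disk inside square
```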
AST 392D - Mathematical Techniques in Astronomy
Department of Astronomy
Faculty Office Hours
Weekly Seminars
Péridier Library
Public Outreach
Graduate Program
Prospective Graduate Student Information
Current Graduate Students
Graduate Awards
Undergraduate Program
Degree & Course Information
Awards, Scholarships & Financial Aid
Research & Career Opportunities
College of Natural Sciences
University Course Schedule
AST 392D · Mathematical Techniques in Astronomy
The weekly assignments will count for 50 percent of your grade, exams will count 30 percent and class participation will count for 20 percent. There will be no exams, and there will be no late
problem sets! Because the nature of your interests in astronomy may make your interest in a particular subject or subjects nearly zero, and because of the realities of observing runs and conferences which are part of the life of every active professional astrophysicist, you will have two "passes" or, as Yancy Shirley put it, two "get out of jail free" cards. You can use these to excuse yourself from any two weekly (or section) assignments and class participation.
Current Detailed Contents (Subject to Revision)
1. Vector Analysis
1. A Brief review of Vector Analysis: Gradient, Divergence Curl, and Integrations
2. Some Useful Theorems: Gauss', Stokes', and Helmholtz's
2. Vector Spaces and Matrices
1. Linear Vector Spaces
2. Linear Operators
3. Introduction to Matrices
4. Coordinate Transformations
5. Eigenvalue Problems
6. Diagonalization of Matrices
7. Spaces of Infinite Dimensionality, Hilbert Spaces
3. An Introduction to Tensor Analysis and Differential Geometry
1. Cartesian Tensors in Three-Space
2. Coordinate Transformations and General Tensor Analysis
3. The Metric Tensor
4. Geodesics
5. Christoffel Symbols
6. Covariant Derivatives
7. Parallel Transport
8. Geodesics Through Parallel Transport
9. The Riemann-Christoffel Curvature Tensor
10. Parallel Transport around a Closed Loop and Curvature
11. The Absolute Derivative, Geodesic Deviation and Curvature | {"url":"http://www.as.utexas.edu/astronomy/education/fall03/winget/winget_392d_02.html","timestamp":"2014-04-20T21:53:35Z","content_type":null,"content_length":"17267","record_id":"<urn:uuid:9a7ddade-4940-4eb8-8d2f-055758a80985>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00512-ip-10-147-4-33.ec2.internal.warc.gz"} |
Cryptology ePrint Archive: Report 2005/275
Explicit Construction of Secure Frameproof Codes
Dongvu Tonien and Reihaneh Safavi-Naini
Abstract: $\Gamma$ is a $q$-ary code of length $L$. A word $w$ is called a descendant of a coalition of
codewords $w^{(1)}, w^{(2)}, \dots, w^{(t)}$ of $\Gamma$ if at each position $i$, $1 \leq i \leq L$, $w$ inherits a symbol from one of its parents, that is $w_i \in \{ w^{(1)}_i, w^{(2)}_i, \dots, w^
{(t)}_i \}$. A $k$-secure frameproof code ($k$-SFPC) ensures that any two disjoint coalitions of size at most $k$ have no common descendant. Several probabilistic methods prove the existence of codes
but there are not many explicit constructions. Indeed, it is an open problem in [J. Staddon et al., IEEE Trans. on Information Theory, 47 (2001), pp. 1042--1049] to construct explicitly $q$-ary
2-secure frameproof code for arbitrary $q$.
In this paper, we present several explicit constructions of $q$-ary 2-SFPCs. These constructions are generalisation of the binary inner code of the secure code in [V.D. To et al., Proceeding of
IndoCrypt'02, LNCS 2551, pp. 149--162, 2002]. The length of our new code is logarithmically small compared to its size.
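The descendant definition is easy to make concrete (a toy sketch of mine, not from the paper): a word is a descendant of a coalition exactly when each of its symbols appears at the same position in some parent codeword.

```python
from itertools import product

def is_descendant(w, coalition):
    """w inherits each symbol w_i from some codeword in the coalition at position i."""
    return all(w[i] in {c[i] for c in coalition} for i in range(len(w)))

def descendants(coalition):
    """All words obtainable from the coalition (its feasible set)."""
    cols = [{c[i] for c in coalition} for i in range(len(coalition[0]))]
    return {''.join(t) for t in product(*cols)}

# Toy illustration: two single-codeword (size-1) disjoint coalitions from the
# code {000, 111} have disjoint descendant sets, as a secure frameproof code requires.
assert is_descendant('010', ['011', '110'])
assert descendants(['000']).isdisjoint(descendants(['111']))
```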
Category / Keywords: combinatorial cryptography, fingerprinting codes, secure frameproof codes, traitor tracing
Publication Info: International Journal of Pure and Applied Mathematics, Volume 6, No. 3, 2003, 343-360
Date: received 16 Aug 2005, last revised 17 Aug 2005
Contact author: dong at uow edu au
Available format(s): PDF | BibTeX Citation
Note: This is the revised version of the paper published in International Journal of Pure and Applied Mathematics, volume 6 no. 3, 2003, 343-360.
Version: 20050817:232701 (All versions of this report) Discussion forum: Show discussion | Start new discussion[ Cryptology ePrint archive ] | {"url":"http://eprint.iacr.org/2005/275","timestamp":"2014-04-17T13:13:28Z","content_type":null,"content_length":"3063","record_id":"<urn:uuid:b6bf9a1f-cee0-40f0-b20d-d646ee83f2cc>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00089-ip-10-147-4-33.ec2.internal.warc.gz"} |
Gear interested
Joined: Mar 2012
Posts: 18
Thread Starter
Mix prices?
Good day!
So, I'll try to make this short and simple....
1. Got a mix gig and they loved the mix, I only charged $50 for it and we didn't sign many papers =( ((i know..))
2. Have had many professionals tell me that my mix is worth $150 at least
3. Have the idea that giving up the session file for mixing a song should be a $250 fee or something? Discount after that if bulk 'order'?
4. How much to charge for revisions if one is included with mix? If mix is $150 should the 2nd revision (1st paid revision) be $80? More??
5. If they get multiple songs mixed, then $150 should be negotiated down a bit? (only if they ask?) or just $150 * 10, etc....
6. I love this website, thank you for you, all of you.
Any help would be greatly appreciated. Much love and success to you all. | {"url":"http://www.gearslutz.com/board/rap-hip-hop-engineering-production/712746-mix-prices.html","timestamp":"2014-04-19T06:56:10Z","content_type":null,"content_length":"152558","record_id":"<urn:uuid:391903b2-4d21-45ba-a76d-030f75691705>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00038-ip-10-147-4-33.ec2.internal.warc.gz"} |
The finite element method : basic concepts and applications / Darrell W. Pepper, Juan C. Heinrich.
Publication date:
2nd ed. - New York : Taylor & Francis, 2006.
□ Book
□ 312 p. : ill. ; 23 cm.
Includes bibliographical references and index.
□ Preface
1. INTRODUCTION 1.1 Background 1.2 History 1.3 Orientation 1.4 Closure References
2. THE METHOD OF WEIGHTED RESIDUALS AND GALERKIN APPROXIMATIONS 2.1 Background 2.2 Classical Solutions 2.3 The Weak Statement 2.4 Closure Exercises References
3. THE FINITE ELEMENT METHOD IN ONE DIMENSION 3.1 Overview 3.2 Shape Functions 3.2.1 Linear Elements 3.2.2 Quadratic Elements 3.2.3 Cubic Elements 3.3 Steady Conduction Equation 3.3.1 Galerkin Formulation 3.3.2 Variable Diffusion and Boundary Convection 3.4 Axisymmetric Heat Conduction 3.5 Natural Coordinate System 3.6 Time Dependence 3.6.1 Spatial Discretization 3.6.2 Time Discretization 3.7 Matrix Formulation 3.8 Solution Methods 3.9 Closure Exercises References
4. THE TWO-DIMENSIONAL TRIANGULAR ELEMENT 4.1 Overview 4.2 The Mesh 4.3 Shape Functions (Linear, Quadratic) 4.3.1 Linear Shape Functions 4.3.2 Quadratic Shape Functions 4.4 Area Coordinates 4.5 Numerical Integration 4.6 Diffusion in a Triangular Element 4.7 Steady-State Diffusion with Boundary Convection 4.8 The Axisymmetric Conduction Equation 4.9 The Quadratic Triangular Element 4.10 Time-Dependent Diffusion Equation 4.11 Bandwidth 4.12 Mass Lumping 4.13 Closure Exercises References
5. THE TWO-DIMENSIONAL QUADRILATERAL ELEMENT 5.1 Background 5.2 Element Mesh 5.3 Shape Functions 5.3.1 Bilinear Rectangular Element 5.3.2 Quadratic Rectangular Elements 5.4 Natural Coordinate System 5.5 Numerical Integration using Gaussian Quadratures 5.6 Steady-State Conduction with Boundary Convection 5.7 The Quadratic Quadrilateral Element 5.8 Time-Dependent Diffusion 5.9 Computer Program Exercises 5.10 Closure Exercises References
6. ISOPARAMETRIC TWO-DIMENSIONAL ELEMENTS 6.1 Background 6.2 Natural Coordinate System 6.3 Shape Functions 6.3.1 Bilinear Quadrilateral 6.3.2 Eight-Noded Quadratic Quadrilateral 6.3.3 Linear Triangle 6.3.4 Quadratic Triangle 6.3.5 Directional Cosines 6.4 The Element Matrices 6.5 Inviscid Flow Example 6.6 Closure Exercises References
7. THE THREE-DIMENSIONAL ELEMENT 7.1 Background 7.2 Element Mesh 7.3 Shape Functions 7.3.1 Tetrahedron 7.3.2 Hexahedron 7.4 Numerical Integration 7.5 One Element Heat Conduction Problem 7.5.1 Tetrahedron 7.5.2 Hexahedron 7.6 Time-Dependent Heat Conduction with Radiation and Convection 7.6.1 Radiation 7.6.2 Shape Factors 7.7 Closure Exercises References
8. FINITE ELEMENTS IN SOLID MECHANICS 8.1 Background 8.2 Two-Dimensional Elasticity - Stress-Strain 8.3 Galerkin Approximation 8.4 Potential Energy 8.5 Thermal Stresses 8.6 Three-Dimensional Solid Elements 8.7 Closure Exercises References
9. APPLICATIONS TO CONVECTIVE TRANSPORT 9.1 Background 9.2 Potential Flow 9.3 Convective Transport 9.4 Nonlinear Convective Transport 9.5 Groundwater Flow 9.6 Lubrication 9.7 Closure Exercises References
10. INTRODUCTION TO FLUID FLOW 10.1 Background 10.2 Viscous Incompressible Flow with Heat Transfer 10.3 The Penalty Function Algorithm 10.4 Application to Natural Convection 10.5 Summary Exercises References
APPENDICES A. Matrix Algebra B. Units C. Thermophysical Properties of Some Common Materials D. Notation E. Computer Programs E.1 MESH-1D, FEM-1D E.2 MESH-2D, FEM-2D E.3 FEM-3D E.4 FEMLAB E.5 MATLAB, MATHCAD, MAPLE
INDEX.
□ (source: Nielsen Book Data)
Publisher's Summary:
This much-anticipated second edition introduces the fundamentals of the finite element method featuring clear-cut examples and an applications-oriented approach. Using the transport equation for
heat transfer as the foundation for the governing equations, this new edition demonstrates the versatility of the method for a wide range of applications, including structural analysis and fluid
flow. Much attention is given to the development of the discrete set of algebraic equations, beginning with simple one-dimensional problems that can be solved by inspection, continuing to two- and
three-dimensional elements, and ending with three chapters describing applications. The increased number of example problems per chapter helps build an understanding of the method to define and
organize required initial and boundary condition data for specific problems. In addition to exercises that can be worked out manually, this new edition refers to user-friendly computer codes for
solving one-, two-, and three-dimensional problems.Among the first FEM textbooks to include finite element software, the book contains a website with access to an even more comprehensive list of
finite element software written in FEMLAB, MAPLE, MathCad, MATLAB, FORTRAN, C++, and JAVA - the most popular programming languages. This textbook is valuable for senior level undergraduates in
mechanical, aeronautical, electrical, chemical, and civil engineering. Useful for short courses and home-study learning, the book can also serve as an introduction for first-year graduate
students new to finite element coursework and as a refresher for industry professionals. The book is a perfect lead-in to Intermediate Finite Element Method: Fluid Flow and Heat and Transfer
Applications (Taylor and Francis, 1999, Hb 1560323094).
(source: Nielsen Book Data)
5 CFR 831.105 - Computation of interest.
§ 831.105 Computation of interest.
(a) The computation of interest is on the basis of 30 days to the month. Interest is computed for the actual calendar time involved in each case, but whenever applicable the rule of average applies.
(b) Interest is allowed on current deductions and deposits at the rate of 4 percent per year to December 31, 1947, and 3 percent per year thereafter, compounded annually, to December 31, 1956. After
December 31, 1956, except as provided below, interest is allowed at the rate of 3 percent per year, compounded annually, to date of final separation or transfer to a position that is not covered by
the retirement system. After December 31, 1956, interest is not allowed:
(c) Interest at the rate of 3 percent per year through December 31, 1984, and, thereafter, at the yearly rate determined by the Secretary of Treasury, compounded annually, is allowed on voluntary
contributions during periods of employment and, after the employee or Member has completed at least 5 years' civilian service, during periods of separation until the beginning date of annuity or
death, whichever is earlier. For refund purposes, however, interest on voluntary contributions terminates on the date of the employee's or Member's final separation or on the date of the employee's
or Member's last transfer to a position in which he or she is not subject to subchapter III of chapter 83 of title 5, United States Code
(d) For noncontributory service performed before October 1, 1982, and for redeposits of refunds paid on an application received by either the individual's employing agency or OPM before October 1,
1982, interest at the rate of 4 percent per year to December 31, 1947, and at the rate of 3 percent per year thereafter, compounded annually, is charged. Interest is charged on the outstanding
balance of a deposit from the midpoint of each service period for which deposit is involved; interest is charged on the outstanding balance of a refund from the date the refund was paid. Interest is
charged to the date of deposit or commencing date of annuity, whichever is earlier, except that interest is not charged for any period of separation from the service which began before October 1,
(e) For noncontributory service performed on or after October 1, 1982, and for redeposits of refunds paid on an application received by the individual's employing agency or OPM on or after October 1,
1982, interest is charged at the rate of 3 percent per year through December 31, 1984, and, thereafter, at the yearly rate determined by the Secretary of Treasury, compounded annually. Interest is
charged on the outstanding balance of a deposit from the midpoint of each service period for which deposit is involved; interest is charged on the outstanding balance of a refund from the date the
refund was paid. Interest is charged to the date of deposit.
(f) No interest is charged on a deposit for military service if that deposit is made before October 1, 1984, or within 2 years of the date that an individual first becomes an employee or Member under
the civil service retirement system, whichever is later. When interest is charged on a deposit for military service, it is charged on the outstanding balance at the rate of 3 percent per year,
compounded annually, from October 1, 1984, or 2 years from the date the individual first becomes an employee or Member, whichever is later, through December 31, 1984, and thereafter at the yearly
rate determined by the Secretary of the Treasury.
(g) For calendar year 1985 and for each subsequent calendar year, OPM will publish a notice in the Federal Register to notify the public of the interest rate that will be in effect during that
calendar year.
(1) The initial interest on each monthly difference between the reduced annuity rate and the annuity rate actually paid equals the amount of the monthly difference times the difference between (i)
1.06 raised to the power whose numerator is the number of months between the date when the monthly difference in annuity rates occurred and the date when the initial interest is computed and whose
denominator is 12; and (ii) 1.
(2) The total initial interest due is the sum of all of the initial interest on each monthly difference computed in accordance with paragraph (h)(1) of this section.
(3) Additional interest on any uncollected balance will be compounded annually and accrued monthly. The additional interest due each month equals the remaining balance due times the difference
between (i) 1.06 raised to the 1/12th power; and (ii) 1.
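Read literally, paragraphs (h)(1) and (h)(3) describe a 6 percent rate compounded annually but accrued in monthly fractions. A small illustrative sketch of those two formulas (not legal advice; the dollar figures are invented):

```python
# Illustrative only: the 6% computations described in (h)(1) and (h)(3).

def initial_interest(monthly_difference, months_elapsed):
    # (h)(1): difference times (1.06 raised to months/12, minus 1)
    return monthly_difference * (1.06 ** (months_elapsed / 12) - 1)

def monthly_additional_interest(balance):
    # (h)(3): one month's accrual on the unpaid balance, 1.06^(1/12) - 1
    return balance * (1.06 ** (1 / 12) - 1)

# A $100 monthly difference outstanding for a full year accrues $6.00:
print(round(initial_interest(100.0, 12), 2))  # 6.0
```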
(1) When an individual's civilian service involves several deposit and/or redeposit periods, OPM will normally use the following order of precedence in applying each installment payment against the
full amount due:
(i) Redeposits of refunds paid on applications received by the individual's employing agency or OPM on or after October 1, 1982;
(ii) Redeposits of refunds paid on applications received by the individual's employing agency or OPM before October 1, 1982;
(2) If an individual specifically requests a different order of precedence, that request will be honored.
(j) Interest under § 831.662 is compounded annually and accrued monthly.
(1) The initial interest on each monthly difference between the reduced annuity rate and the annuity rate actually paid equals the amount of the monthly difference times the difference between—
(i) The sum of one plus the interest rate set under § 831.105(g) raised to the power whose numerator is the number of months between the date when the monthly difference in annuity rates occurred and
the date when the initial interest is computed and whose denominator is 12; and
(ii) One.
(2) The total initial interest due is the sum of all of the initial interest on each monthly difference computed in accordance with paragraph (j)(1) of this section.
[33 FR 12498, Sept. 4, 1968, as amended at 47 FR 43637, Oct. 1, 1982; 48 FR 38783, Aug. 26, 1983; 51 FR 31931, Sept. 8, 1986; 52 FR 32287, Aug. 27, 1987; 55 FR 9099, Mar. 12, 1990; 58 FR 52880, Oct.
13, 1993]
Title 5 published on 2014-01-01
no entries appear in the Federal Register after this date.
How to get the largest and smallest numbers in a list
I've been learning python for a few months now and can create a few games. Now I need this really simple thing and I'm embarrassed to ask about it. I need to find the largest and smallest numbers
in a list.
I've googled and tried all the suggestions but still none of them work. If I put in numbers like '100000', '100', '10'. It will say 10 is the biggest.
Here's my code, any help would be great:
numbers = []
user_input = 0
while user_input != "q":
    user_input = input("Enter a number: ")
    if user_input == "q":
        break
    numbers.append(user_input)

#print out the numbers
for i in numbers:
    print(i, end=" ")

#the amount of numbers
print("The amount of numbers is: {0}".format(str(len(numbers))))

#print the sum of all the numbers
total = 0
for i in numbers:
    total += int(i)
print("The total amount is {0}".format(str(total)))

#print the lowest and highest numbers
print("The lowest number is {0}".format(numbers[-1:][0]))
print("The highest number is {0}".format(max(numbers)))

#the mean of all the numbers
mean = total / len(numbers)
print("The mean number is {0}".format(str(mean)))
I don't know what your code is trying to do for finding the lowest number. Changing it to simply calling min(numbers) caused it to work fine for me. Well, but there are other problems: namely, why
are you storing your numbers as strings? If it's because of the use of string formatting, you may be interested to know that it is quite possible to insert numbers into formatted strings.
Here is my altered version which stores the numbers as ints:
numbers = []
user_input = 0
while user_input != "q":
    user_input = input("Enter a number: ")
    if user_input != "q":
        #There's no reason to break when it equals "q" since this is the
        #end of the loop anyway.
        numbers.append(int(user_input))

#print out the numbers
for i in numbers:
    print(i, end=" ")

#the amount of numbers
print("The amount of numbers is: {0:d}".format(len(numbers)))

#print the sum of all the numbers
total = 0
for i in numbers:
    total += i
print("The total amount is {0:d}".format(total))

#print the lowest and highest numbers
print("The lowest number is {0:d}".format(min(numbers)))
print("The highest number is {0:d}".format(max(numbers)))

#the mean of all the numbers
mean = total / len(numbers)
print("The mean number is {0:G}".format(mean))
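For what it's worth, the original symptom comes from comparing digit strings character by character rather than numerically. A two-line demonstration (sample values chosen to show the difference, not the ones from the original post):

```python
nums = ["9", "100000", "42"]
print(max(nums))           # '9' -- string comparison goes character by character
print(max(nums, key=int))  # '100000' -- compare by numeric value instead
```

So even if you do keep the inputs as strings, max(numbers, key=int) and min(numbers, key=int) give the numerically largest and smallest entries.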
You realized for summing the numbers you'd need to convert from string to a numeric type. Here I've converted the strings to numbers before enlisting them.
# python3
numbers = []
while True:
    user_input = input("Enter a number: ")
    if user_input == "q":
        break
    try:
        number = float(user_input)
        numbers.append(number)
    except ValueError:
        print('bad data omitted. q terminates entry')

#print out the numbers
for i in numbers:
    print(i, end=" ")

#the tally of numbers
print("The tally of numbers is: {0}".format(str(len(numbers))))

#print the sum of all the numbers
total = sum(numbers)
print("The total amount is {0}".format(str(total)))

#print the lowest and highest numbers
print("The lowest number is {0}".format(min(numbers)))
print("The highest number is {0}".format(max(numbers)))

#the mean of all the numbers
mean = total / len(numbers)
print("The mean number is {0}".format(str(mean)))
Last edited by b49P23TIvg; January 24th, 2013 at 09:49 PM.
[code]Code tags[/code] are essential for python code and Makefiles!
Thanks guys.
I got too far ahead of myself and began to forget the basics of the language!
Reply to comment
September 2004
Now what was that number again?
In March 2004, Daniel Tammet from Kent set a new European record when he recited the mathematical constant pi from memory to 22,514 decimal places.
How do people pull off incredible (if rather pointless) memory feats like this? And is there anything we can learn from them when it comes to more practical needs for memorising numbers - like
remembering the code on your padlock, or the PIN for your cashpoint card?
Memory and numbers
Memory is fundamental to the way you think, and you use it in almost every activity. You need memory to learn facts and names, but you also need it to acquire a new physical skill or even to tell a
joke. Aptitudes vary enormously from one person to the next, but one person's ability to remember will also vary depending on the task. For example, somebody who has a good memory for numbers might
be hopeless when it comes to remembering a joke (I speak from bitter experience here).
Where does the particular aptitude for remembering numbers come from? For reasons that I will explain in a moment, mathematicians are generally better equipped to remember numbers than other people,
but it is certainly not essential to be a mathematician to have an exceptional ability in this area.
For example, Daniel Tammet puts his remarkable ability to memorise sequences of digits down to the way that he "sees" numbers as colours and images. To him, each number has its own colour and shape.
Daniel has a condition known as synaesthesia, in which the stimulation of one of the senses triggers a reaction in other senses too. Synaesthesia manifests itself in different ways, but in some people it means they get multiple sensory
reactions when exposed to numbers. A famous Russian "memory man" called Shereshevsky described how, to him, the number 2 always appeared as a dark rectangle. I came across another person who always
links the digit 4 with the taste of a tomato. To those on the outside, there appears to be no logic to these associations.
Synaesthetists have a natural advantage when it comes to memory because the brain is more likely to record something in the long term when it ties in with the senses. An event or an object is more
memorable when it has sounds, pictures, texture and particularly smell associated with it.
Like most people, you have probably had the odd experience of smelling, say, an old piece of furniture and being reminded of something that happened to you in the distant past. Smell has a
particularly strong connection with memory, perhaps because the part of the brain that deals with smell is close to the hippocampus, which is where it is believed long term memories are formed. If
you deliberately surround yourself with a particular smell when trying to memorise something, that smell is likely to help trigger the memory later when you need to recall it.
This link between memory and the senses is the basis of some of the memory techniques that are described in study-aids. One method that is often suggested for remembering numbers is to associate each
digit with a rhyming word.
One is bun,
Two is shoe,
Three is tree,
Four is door,
and so on. The idea here is that an abstract number is turned into a tangible object, with all its associated images and sounds. If I wanted to remember the number 24, I could instead remember it as
"shoe-door" and picture myself kicking down the front door (this image comes very readily to mind for some reason). The theory is that the memory of the kicking of the door will be retained for much
longer than the number 24, so when I try to remember the number in a week's time, I will immediately think of the image and simply convert it back to the number I was trying to think of.
It can be a helpful technique for remembering small numbers, but it becomes incredibly cumbersome if you need to remember a number with several digits. 1492 becomes bun-door-wine-shoe. I'm struggling
to picture the appropriate bread-throwing incident at Oddbins that would be needed to memorise this sequence. There must be a better way...
The mathematical approach to remembering numbers
Most people who are good at remembering numbers aren't so because of any sensory experience. It is much more likely to be because numbers have meaning for them. Mathematicians have a strong advantage
here, because regular exposure to numbers means that the properties of numbers become familiar.
Show a mathematician the number 4832 and the chances are that they will immediately register what sort of number it is (four digits, divisible by two). Sometimes mathematicians can't help playing
with the number, too. In this case, you may have found yourself saying 4832, four eights are thirty-two. This sort of play helps to give the number meaning, and to make it memorable.
There have been famous examples of this urge to play with numbers. Alexander Aitken was a professor of mathematics at Edinburgh University whose memory was renowned. He once commented:
If I go for a walk and a motor car passes and it has the registration number 731, I cannot but observe it is 17 times 43. ... When I see a bus conductor with a number on his lapel, I square it
... this isn't deliberate, I just can't help it. ... Sometimes a number has almost no properties at all, like 811, and sometimes a number, like 41, is deeply involved in many theorems that you
Now, which one has the most interesting number?
One of the most famous examples of remembering numbers because of their mathematical properties is the story of the mathematician GH Hardy who was visiting his friend Ramanujan in hospital. Hardy had
come by taxi, and after greeting Ramanujan, he apologised. "My taxi number was 1729," he said, "I'm afraid it was a bit dull." "On the contrary, 1729 is most interesting," said Ramanujan. "It is the
smallest number that is the sum of two cubes in two different ways." (For the record, 1729 = 12^3 + 1^3 = 10^3 + 9^3.)
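Ramanujan's observation is easy to verify by brute force; a short sketch that counts cube-sum representations for small a and b:

```python
from collections import Counter

# Count how many (a, b) pairs with 1 <= a <= b <= 12 give each n = a^3 + b^3.
reps = Counter(a**3 + b**3 for a in range(1, 13) for b in range(a, 13))

# The smallest n with two distinct representations:
taxicab = min(n for n, count in reps.items() if count >= 2)
print(taxicab)  # 1729
```

The bound of 12 is enough here because 12^3 = 1728, so every representation of a number up to 1729 falls inside the search.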
Often, the patterns and meanings behind numbers will stick in the mind without effort, but if they don't, they can be the basis of a method for deliberately memorising a number. You might use them
for remembering a PIN or a phone number, but they can apply to longer numbers too. For example, have a go at remembering this number. Give yourself about ten seconds:
15222936435057
If you try to learn it by rote, you will probably struggle. Short term retention of a number is normally limited to seven digits. Any more than that, and you are unlikely to remember more than the
first few digits. (In the above example, most people remember 15222 easily, but after that get increasingly muddled).
But now put on your mathematical hat. Can you spot a pattern within the digits that will make them much easier to remember? There's probably more than one way to simplify the task here, but there is
one particular pattern which, if you spot it, makes the task trivial.
In fact the number can be broken into pairs of digits, 15 22 29 36 43 50 57, each pair being seven larger than the previous pair. Now all you need to remember is the starting number and the rule.
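The pattern-spotting step can be mimicked in code; a small sketch that splits the digit string into pairs and checks the common difference:

```python
s = "15222936435057"  # the fourteen digits from the example above

# Break the string into two-digit chunks...
pairs = [int(s[i:i + 2]) for i in range(0, len(s), 2)]
print(pairs)  # [15, 22, 29, 36, 43, 50, 57]

# ...and confirm each pair is seven larger than the previous one.
diffs = {b - a for a, b in zip(pairs, pairs[1:])}
print(diffs)  # {7}
```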
Remembering Pi
Not all numbers have such convenient patterns behind them, but within every number there are always subgroups of digits that have mathematical meaning. That even applies to pi.
Here are the first 100 digits of pi:
3.1415926535 8979323846 2643383279 5028841971 6939937510 5820974944 5923078164 0628620899 8628034825 3421170679
Most people would not be able to remember this as a sequence of single digits, but the task becomes easier if you pick out clumps of interesting numbers.
For example, the first ten decimal places include the consecutive numbers 14-15, and then 65-35 which add to make 100. Later there is a cluster of even digits, 846-264. These are both simple series
with the second two digits transposed (864 has become 846, 246 has become 264). Gradually you can build up a mathematical story that links these patterns together.
This is the sort of approach that professional memorisers use, though they often combine it with other techniques, for example, converting digits into letters which they then turn into words. A
common digit-to-letters rule is as follows:
1 becomes the letter T (a single downstroke),
2 is n (two downstrokes),
3 is M (three downstrokes),
4 is R (r is the fourth letter of four!),
5 is L (L is the Roman fifty, which is close...),
6 is J (J is a bit like a backwards 6),
7 is K (K is like two sevens stuck together),
8 is F (a cursive f resembles an eight),
9 is P (P is a backwards 9),
0 is Z (Z is for zero).
The start of My TuRTLe oPeN JaiL. Picture your turtle opening a jail and, voila, the first nine digits of pi are memorised. Continue this for 42,187 more digits and the world record is yours.
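The digit-to-letter rule above is mechanical enough to automate; a minimal sketch (the dictionary simply transcribes the list):

```python
DIGIT_TO_LETTER = {"1": "T", "2": "N", "3": "M", "4": "R", "5": "L",
                   "6": "J", "7": "K", "8": "F", "9": "P", "0": "Z"}

def consonant_skeleton(digits):
    """Turn a digit string into the consonants of a candidate mnemonic."""
    return "".join(DIGIT_TO_LETTER[d] for d in digits)

print(consonant_skeleton("314159265"))  # MTRTLPNJL -> "My TuRTLe oPeN JaiL"
```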
Fortunately, unless you plan to become a memory performer, or decide to pursue some very specialised areas of physics, maths or astronomy, it is very unlikely you are ever going to need to remember
more than a few digits of pi. But if you do fancy practising churning out digits of pi, there is a classic sentence to help you:
"May I have a large container of coffee."
Count the number of letters in each word of that sentence, and you'll see that the first digits of pi emerge: 3.1415926.
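The letter-counting trick checks out in a couple of lines:

```python
sentence = "May I have a large container of coffee"

# The length of each word gives one digit of pi.
digits = "".join(str(len(word)) for word in sentence.split())
print(digits)  # 31415926
```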
In the end, whatever number you want to remember, whether it is a four-digit PIN or the first hundred digits of pi, the trick is the same: give the number meaning.
About the author
Rob Eastaway is an independent lecturer, and a consultant to the Millennium Maths Project. He specialises in the everyday applications of mathematics. His books include "How to Remember", which
contains a chapter on memorising numbers. It is published by Hodder & Stoughton, priced £7.99.
Among his other books are "Why do buses come in threes?" and "How long is a piece of string?", both reviewed in past issues of Plus.
Chapter 3 : Solving Linear Equations : 3.2 Solving Equations Using Multiplication and Division
3.2 Problem Solving Help
Help for Exercises 37-45 on page 142
In Exercises 37-45, the equation you are asked to solve includes a fractional coefficient. So, multiply both sides of the equation by the reciprocal of the coefficient of the variable term. Remember
that the product of a number and its reciprocal is 1. Given a fraction of the form a/b, its reciprocal is b/a.
It may also be helpful to rewrite integers as fractions with a denominator of 1. For example, 10 can be rewritten as 10/1. This will help you keep track of the factors that are in the numerator and
the factors that are in the denominator.
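As a worked illustration (the equation here is invented for the example, not taken from the exercises): to solve an equation such as (2/3)x = 10, multiply both sides by the reciprocal 3/2:

```latex
\tfrac{2}{3}x = 10
\;\Longrightarrow\;
\tfrac{3}{2}\cdot\tfrac{2}{3}x = \tfrac{3}{2}\cdot\tfrac{10}{1}
\;\Longrightarrow\;
x = \tfrac{30}{2} = 15
```

Writing 10 as 10/1 makes it easy to see that the 3 ends up in the numerator and the 2 in the denominator.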
Homework Help
4th grade math
So if 26+6+10=? And you are try to find that out you add them that will be 102 then you divided by three you get your answer by the way I in 5 grade
Thursday, January 9, 2014 at 7:38pm
8th Grade Connections Academy
Connections Academy 8th grade Pennsylvania But I think all of the conncetion academy's have the same schoolwork as long as ur in the same grade
Wednesday, January 8, 2014 at 6:06pm
Math grade 9
not sure about grade 9, but I just checked an old grade 10 text from Ontario, and I used to teach it in grade 10. The proof followed your method using a general case.
Friday, December 20, 2013 at 9:21pm
7th grade Social Studies Ms. Sue please!
This is great I got an excellent grade! Thanks:)
Thursday, December 19, 2013 at 10:46am
Business English
That would depend on what grade you are in. If you are in ninth or maybe tenth grade and under it sounds fine, but if you are above that age range I would suggest making it sound more professional
and convincing if you really want a good grade.
Thursday, December 12, 2013 at 6:41pm
October 2nd is my birthday!!!!!!! Lol
Thursday, December 12, 2013 at 12:40pm
6th grade math
1, 2, and 3 are ok. #4 has the right answer, but I don't like the way you stated your solution. First of all, don't put equal signs in front of each new line when solving. Your 2nd last line has no
relation to the problem; what happened to the 5? Here is how I would ...
Wednesday, December 11, 2013 at 9:24pm
if you have a graphing calculator, go to catalog (normally you can get there by hitting the 2nd button then 0), then hit the alpha button and hit T, then scroll up to stdDev, which is standard
deviation; hit enter, then type in the bracket (to get it, hit the 2nd button then ...
Sunday, December 8, 2013 at 5:58pm
first number ----> machine ----> 2nd number ---> machine ---> 3rd number ----> machine ----> 4th number = 271 Machine basically multiplies the number by 3 and subtracts 2. So we work backwards. 271 +
2 = 273. Divide it by 3 to get the 3rd number, 91. Do the ...
Friday, December 6, 2013 at 4:17pm
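The working-backwards argument can be checked in a few lines, assuming (as the answer states) that each pass of the machine computes 3x - 2:

```python
def machine(x):
    # one pass: multiply by 3, subtract 2
    return 3 * x - 2

def unmachine(y):
    # undo one pass: add 2, then divide by 3
    return (y + 2) // 3

third = unmachine(271)     # 91
second = unmachine(third)  # 31
first = unmachine(second)  # 11
print(first, second, third)  # 11 31 91

# forward check: 11 -> 31 -> 91 -> 271
assert machine(machine(machine(11))) == 271
```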
data management
so, the answer would just be 1/6? or should it be 1/6 x 1/6= 1/36 since 2nd throw is 1/6 and fist is 1/6 too?
Thursday, December 5, 2013 at 11:33pm
data management
the 2nd throw must be a 6, with P=1/6
Thursday, December 5, 2013 at 11:29pm
Physics I Newton's 2nd Law
excuse me? One of the most fundamental identities is F = ma
Wednesday, December 4, 2013 at 11:51pm
Can anyone help me with this?
Disagree with 1. My Canada is the 2nd largest country in terms of area, but has only 35 Million, about 1/10 of yours. I would pick C
Wednesday, December 4, 2013 at 4:59pm
at t=0, let the 1st boat be at (0,0). The 2nd boat is at (-c,c). So, at time t, the 1st boat is at (0,6t) and the 2nd boat is at (-c+5t,c). The distance is thus d^2 = (-c+5t)^2 + (c-6t)^2 = c^2 - 10ct +
25t^2 + c^2 - 12ct + 36t^2 = 2c^2 - 22ct + 61t^2 2d dd/dt = -22c + 122t dd/dt...
Tuesday, December 3, 2013 at 8:45pm
None of those b/c the 2nd b is b^4 It is 8a^4b^3 + 6a^3b^4 - 2^3b^4
Sunday, December 1, 2013 at 7:38pm
Thank you for your help. It's just that I didn't take functions in grade 11 and now I'm taking Advanced Functions in grade 12 and I'm difficulty with. Thanks for your help though.
Sunday, November 24, 2013 at 7:04pm
from the 2nd ---> y = x+4 sub into the 1st x^2 + (x+4)^2 = 58 2x^2 + 8x+16-58 = 0 x^2 + 4x - 21 = 0 (x+7)(x-3) = 0 x = -7 or x = 3 if x = -7, y = -7+4 = -3 if x = 3 , y = 3+4 = 7 x=3, y=7 or x=-7, y
= -3
Saturday, November 23, 2013 at 5:01pm
Physics, Elena Please Help If u can.
i am solving the 2nd part now
Wednesday, November 20, 2013 at 11:45pm
Physics, Elena Please Help If u can.
please help in last question 2nd part
Wednesday, November 20, 2013 at 10:11pm
Hi can you help me with first part of 1st question . answer of 2nd part is 189
Saturday, November 16, 2013 at 12:39am
just expand it, and simplify y = (x-7)^2 - 5 y = x^2 - 14x +49 - 5 y = x^2 - 14x + 44 you do the 2nd one, let me know what you got.
Wednesday, November 13, 2013 at 10:02pm
Algebra 1 (Reiny or Kuai)
#1. p(1+4) - 2(p+6) = 6 5p - 2p - 12 = 6 3p = 18 p = 6
Wednesday, November 13, 2013 at 9:54pm
Janice and Patricia each want to buy a new DVD player. They go to the Hot Electronics and find a DVD player for $75. Hot Electronics offers a different payment plan. Janice is going to pay $15 now,
and then $7.50 per month. Patricia is going to pay $12.50 per month. How much ...
Monday, November 11, 2013 at 6:07pm
You are welcome, did you get the 2nd question?
Thursday, November 7, 2013 at 9:14pm
If I'm trying to find the average weight of a 2nd grade girl, which would be the most effective and why? Mean, Median, or Mode? If I'm trying to find the average cost of a home in a city, which would
be the most effective and why? Mean, Median, or Mode? If I'm trying...
Tuesday, November 5, 2013 at 12:32pm
I didn't get the 2nd part, If I plug in 50*10*cos(30) , i get the answer wrong.
Sunday, November 3, 2013 at 4:53am
And also for sliding down the dome's 2nd part
Sunday, November 3, 2013 at 4:40am
Ihnen or euch (recall that Sie and Ihnen are both capitalized when used as 2nd person)
Friday, November 1, 2013 at 6:29pm
Substution Soluton MATH
3. Assuming it was a typo and the 2nd equation was 4x+2y = 8, notice that both terms contain 2y as the y-term. The method I used is called "elimination". if we subtract the two equations, of course
we can only add/subtract like terms, I get: 6x-4x = 2x 2y - 2y = 0...
Thursday, October 31, 2013 at 9:15am
if 2nd side is x, then we have x+2 + x + 2x = 34 4x + 2 = 34 x = 8 so, the sides are 10,8,16
Sunday, October 27, 2013 at 5:16pm
Algebra Help
let rate in 1st leg be x mph; time taken for 1st leg = 24/x hrs; rate for 2nd leg = x-10; time for 2nd leg = 6/(x-10); remaining distance = 80-24-6 = 50 miles; time for last leg = 50/(1.5x); time for
whole trip at x mph = 80/x; difference in times = 22 minutes = 22/60 hrs; 80/x - 24/x...
Friday, October 25, 2013 at 9:50pm
algebra 2
total cost is (p+4.50) so, the 2nd equation is correct (except for the typo :-))
Thursday, October 24, 2013 at 5:24pm
Do you know the 2st and 2nd one,,help plzz.??
Thursday, October 24, 2013 at 8:24am
If my mid-term is worth 15%, and my grade is 100% right now. If I get 0 on the exam, my grade would be 85% right?
Wednesday, October 23, 2013 at 1:13am
Physics Classical Mechanics
please could you help me with the 3rd(2nd part) one and the 4th one..plzzz help
Tuesday, October 22, 2013 at 10:21am
It's actually 7.643 N for (a) and 18.5 m/s^2 for centripetal acceleration in (b). Please help with the 2nd part!
Monday, October 21, 2013 at 6:45pm
If I remember correctly from 5th grade (I'm in eighth grade now), my textbook said that the first college in the US was Harvard.
Monday, October 21, 2013 at 12:40pm
Algebra 2
a scalar multiple of a matrix multiplies every element inside the matrix. So, the 2nd row becomes [3 4 3/2 -1]
Monday, October 21, 2013 at 3:34am
well, the x^2 term will not go away, so it will be a 2nd-degree expression. Simplify it and note that there are 3 terms, so it's a trinomial.
Saturday, October 19, 2013 at 3:36pm
My 2nd answer for 6th is C, is that correct?
Tuesday, October 15, 2013 at 12:43am
Simplify (6t to the fifth power) with whole expression in parenthesis to the 2nd pwr.
Thursday, October 10, 2013 at 1:54am
physics (it's rather urgent help is appreciated
i didn' t got that, the 2nd last question right?
Sunday, October 6, 2013 at 10:09am
algebra can someone help my buddy ray out
for y = 3(x+2)^2 the vertex is (-2,0) and the axis of symmetry is x = -2 According to the choices, you probably forgot to add/subtract at the end. if it is y = 3(x+2)^2 + c then the vertex is (-2,c)
for the 2nd, y = 2(x-3)^2 - 4 the vertex is (3,-4) and it opens upwards, So ...
Saturday, October 5, 2013 at 7:47am
logical reasoning
how to write 2nd oct 2013 in 6 letters without using numbers
Thursday, October 3, 2013 at 1:31pm
Sorry for the 2nd post...please omit...
Wednesday, October 2, 2013 at 7:25pm
If x is the 2nd side, then (x+6)+x+(2x-5) = 25 x = 6 and the sides are 12,6,7
Monday, September 30, 2013 at 11:41pm
I'm not sure, but I'd probably say it like this: Chee - ah - rah ( with the stress on the 2nd syllable.
Monday, September 30, 2013 at 6:12am
If I were to draw a skydiver how would I make that an example of the 2nd law???
Sunday, September 29, 2013 at 3:11pm
Calculus Please Help??
For your 2nd part, did you get the partial fraction breakdown of -2x/(x^2+4) - 3/(x-5) from (-5x^2 + 10x - 12)/((x-5)(x^2+4)) ? then your integral would be -ln(x^2+4) - 3ln(x-5) + a constant let me
know what you get for your last question.
Friday, September 27, 2013 at 9:09pm
Grades are assigned on the standard 90, 80, 70, 60 scale. You are to write an algorithm which will receive a student name and a percentage grade and display the letter grade to be awarded
Thursday, September 26, 2013 at 2:46am
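One way to sketch the requested algorithm in Python (the names `letter_grade` and `report` are invented for this sketch; the cutoffs follow the stated 90/80/70/60 scale):

```python
def letter_grade(percentage):
    """Map a percentage to a letter grade on the standard 90/80/70/60 scale."""
    if percentage >= 90:
        return "A"
    elif percentage >= 80:
        return "B"
    elif percentage >= 70:
        return "C"
    elif percentage >= 60:
        return "D"
    else:
        return "F"

def report(name, percentage):
    # Display the student's name alongside the awarded letter grade.
    return f"{name}: {letter_grade(percentage)}"

print(report("Alice", 87))  # Alice: B
```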
Grammar Advisor
Are these correct? I mark my answer with an X. 88. The reason to list all of the preliminary (non-procedural) information in a lesson plan is to be sure you have considered each aspect. keep good records
for your future needs. be able to communicate clearly with administrators, ...
Wednesday, September 25, 2013 at 10:17pm
thank you Ms.Sue.. that helped a lot.. I have a 2nd question.. What is a duel? ie. Alexander Hamilton vs Aaron Burr. was it a rivalry/opposition?
Sunday, September 22, 2013 at 7:21pm
first can flip in 2 ways, the 2nd in 2 ways ..... number of outcomes = (2)(2)(2)(2) = 16
Sunday, September 22, 2013 at 10:57am
-(5) to the 2nd power
Thursday, September 19, 2013 at 6:50pm
If both cans are thrown with the same velocity, then the first can must be on its way down when they collide, and the 2nd can will be going up. For the first can, vt - 4.9t^2 = 5, so
t = 5/49 (v+√(v^2-98)). The 2nd can has been going up for only t-4 seconds, so v(5/49 (v+...
Wednesday, September 18, 2013 at 5:51pm
7th Grade
Obviously, most readers of this board have been in 7th grade. I've even taught 7th grade. However, different states, even different schools, teach different things in 7th grade science. Some
emphasize biological science, others emphasize physical science. Why not check ...
Monday, September 16, 2013 at 9:56pm
7th Grade
I just wanna know anyone who's been in 7th grade what do you learn in science mostly throughout the year??
Monday, September 16, 2013 at 9:51pm
g = 3/4 b g+b=1400 7/4 b = 1400 b = 800 g = 600 1st grade boys: 1/5 * 800 = 160 1st grade girls: 1/6 * 600 = 100 total 1st grade: 260 others: 1140
Friday, September 13, 2013 at 12:19am
here is the 2nd part of the question B. Ben runs for 25 hours. For how many hours does he walk?
Sunday, September 8, 2013 at 4:42pm
Intermediate Algebra
name the type of polynomial and give its degree for 7x to the 3rd power + 6x to the 2nd power - 2
Saturday, September 7, 2013 at 1:56pm
Isn't it 2 moles of KCl, from the balanced equation? For the 2nd step (mole ratio)
Thursday, September 5, 2013 at 8:05am
URGENT 8th Grade Math
I think so, but I don't know about the last. I am in the *th grade, but honestly I am not 100% sure. Sorry, just trying to be helpful. kathi anderson forever never lasts
Wednesday, September 4, 2013 at 12:34pm
Approximately 15% of the students in ST130 obtain an A grade. If 3 students are selected at random, find the probability that a) all of them obtain an A grade in ST130; b) exactly 2 obtain an A grade in ST130;
c) at least one obtains an A grade in ST130.
Tuesday, August 27, 2013 at 4:26am
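The three parts above are binomial probabilities with n = 3 trials and success probability p = 0.15; a quick numerical check (the helper name `binom_pmf` is just for the example):

```python
from math import comb

n, p = 3, 0.15  # 3 students, each an A with probability 0.15

def binom_pmf(k):
    # P(exactly k of the n students get an A)
    return comb(n, k) * p**k * (1 - p)**(n - k)

all_three = binom_pmf(3)          # (a) 0.15^3
exactly_two = binom_pmf(2)        # (b) 3 * 0.15^2 * 0.85
at_least_one = 1 - binom_pmf(0)   # (c) complement of "no A grades"

print(all_three, exactly_two, at_least_one)
```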
No. These two are both separate problems. The book asks to simplify each problem. And for the 2nd problem the index is a four.
Monday, August 26, 2013 at 11:57pm
that's the same as asking what's the chance of the 2nd child being a boy.
Friday, August 23, 2013 at 4:36am
1st leg - 4 h 2nd leg - 6 h So 480 km in 10 h 48km/h
Tuesday, August 13, 2013 at 7:46am
Represent the unknown with variables: Let x = amount of premium grade gasoline sold in gallons Let x+420 = amount of regular grade gasoline sold in gallons (according to the third statement) Then we
set up the equation. We know that the total worth of gasoline sold is 10,957. ...
Monday, August 12, 2013 at 2:57am
Strange question. Are you saying his annual payment is $2500 + the interest? If so, then ... Balance now = $50,000; interest at end of 1st year = $7000; payment = 2500 + 7000 = $9500; Balance at end of
1st year = $47,500; interest at end of 2nd year = $6,650; payment at end of 2nd ...
Thursday, August 8, 2013 at 10:19am
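Reading the numbers quoted in that reply, the implied annual rate is 7000/50000 = 14%, and each payment is $2,500 of principal plus that year's interest. A short sketch of the first two years (the 14% rate is inferred from the figures, not stated in the question):

```python
balance = 50_000.0
rate = 0.14  # inferred: $7,000 interest on a $50,000 balance

for year in (1, 2):
    interest = balance * rate
    payment = 2_500 + interest   # fixed $2,500 of principal plus the year's interest
    balance -= 2_500
    print(year, round(interest), round(payment), round(balance))
```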
Calculus-HELP PLZ !!!!
Find the complete general solution to the 2nd ODE: 9y'' + 9y' - 4y = 0
Monday, July 22, 2013 at 4:37pm
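For the record, the characteristic-equation route gives the general solution directly:

```latex
9r^2 + 9r - 4 = 0 \;\Rightarrow\; (3r-1)(3r+4) = 0 \;\Rightarrow\; r = \tfrac{1}{3},\; -\tfrac{4}{3},
\qquad y(x) = C_1 e^{x/3} + C_2 e^{-4x/3}.
```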
Find the complete solution to the 2nd ODE: 2y''+3y'+y=t^(2)+3sin(t)
Thursday, July 18, 2013 at 6:15pm
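A worked check of this one: the homogeneous roots come from the characteristic equation, and undetermined coefficients (trying a quadratic plus a sine-cosine pair) handle the right-hand side:

```latex
2r^2 + 3r + 1 = (2r+1)(r+1) = 0 \;\Rightarrow\; r = -\tfrac{1}{2},\,-1,
\qquad y_h = C_1 e^{-t/2} + C_2 e^{-t};
\qquad y_p = t^2 - 6t + 14 - \tfrac{3}{10}\sin t - \tfrac{9}{10}\cos t,
\qquad y = y_h + y_p.
```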
The bar means the 2nd 89 REPEATS continuously.
Saturday, July 6, 2013 at 4:14pm
The 2nd 89 REPEATS continuously.
Saturday, July 6, 2013 at 4:10pm
My 2nd thought was B because I really haven't heard of the others
Thursday, July 4, 2013 at 7:31pm
how to write 2nd may 2011 in 5 letters without using numbers
Saturday, June 22, 2013 at 7:05am
how to write 2nd may 2011 in 5 letters without using numbers
Saturday, June 22, 2013 at 7:02am
if the length y is a function of the mass, x, then y = ax+b 2a+b = 12 5a+b = 18 subtract 1st from 2nd to get 3a = 6 a = 2 2*2+b=12, so b=8 y = 2x+8
Sunday, June 16, 2013 at 3:10pm
Apparently there are no questions on the 2nd day, so I'll not be back. Sra (aka Mme)
Saturday, June 15, 2013 at 12:57pm
anyone got the 2nd answer???
Tuesday, June 11, 2013 at 12:21am
The subject (muffins) is certainly plural, but it's not first person. 1st: I, we; 2nd: you; 3rd: he, she, it, they. Given that, what does it have to do with tenses?
Monday, June 10, 2013 at 12:19pm
For the 2nd one I'm getting 299... however my answer choices are 500, 450, 280, or the ball travels an infinite distance
Saturday, June 8, 2013 at 4:55pm
if tan theta = -13/12, where theta is in the 2nd quadrant, find the remaining trigonometric ratios
Thursday, June 6, 2013 at 9:42am
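With tan θ = −13/12 in the 2nd quadrant (sine positive, cosine negative), the reference triangle has legs 12 and 13 and hypotenuse √313, so the remaining ratios are:

```latex
r = \sqrt{12^2 + 13^2} = \sqrt{313},\qquad
\sin\theta = \frac{13}{\sqrt{313}},\quad \cos\theta = -\frac{12}{\sqrt{313}},\quad
\csc\theta = \frac{\sqrt{313}}{13},\quad \sec\theta = -\frac{\sqrt{313}}{12},\quad
\cot\theta = -\frac{12}{13}.
```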
Here, CSA of first rod = C, CSA of 2nd rod = D. Thermal conductivity of first wire = a, thermal conductivity of 2nd wire = b. Let the length of the two wires be L and the temperature difference at the ends be
∆T. Now by the law of thermal conduction for the first conductor, ∆Q/∆t...
Monday, June 3, 2013 at 8:58am
how to write 2nd may 2011 in 5 letters without using numbers
Saturday, June 1, 2013 at 4:08pm
how to write 2nd may 2011 in 5 letters without using numbers
Friday, May 31, 2013 at 11:33am
The 2nd equation is the 1st equation multiplied by -3. So, they are the same line, and there are infinitely many solutions.
Thursday, May 30, 2013 at 11:03am
SAT math
I actually think the 2nd one has no real solutions...
Wednesday, May 29, 2013 at 5:00pm
Are these the formulas for the 2nd problem?
Saturday, May 25, 2013 at 4:20am
MATH HELP!!
How many ways can 12 horses in a race come in 1st, 2nd, and 3rd place
Friday, May 17, 2013 at 8:54pm
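Since order matters here, this is a permutation count, 12 × 11 × 10; a one-liner confirms it:

```python
from math import perm

# Ordered picks of 3 finishers from 12 horses: 12 * 11 * 10
ways = perm(12, 3)
print(ways)  # 1320
```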
Algebra 2
No, they don't. They fit the first equation, but not the 2nd. Only (0,0) works in both.
Friday, May 17, 2013 at 10:53am
Probability Math
A committee of four students will be selected from a list that contains six Grade 9 students and eight Grade 10 students. What is the expected number of Grade 10 students on the committee?
Tuesday, May 7, 2013 at 9:29pm
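The committee is a hypergeometric draw (4 chosen from 14, with 8 "successes"), so the expected count is 4·8/14 = 16/7 ≈ 2.29; a brute-force check summing over the whole distribution:

```python
from math import comb
from fractions import Fraction

N, K, n = 14, 8, 4  # 14 students in total, 8 of them in Grade 10, committee of 4

# E[X] summed over the hypergeometric distribution of Grade 10 members
expected = sum(Fraction(k * comb(K, k) * comb(N - K, n - k), comb(N, n))
               for k in range(n + 1))

print(expected)  # 16/7, matching the shortcut E[X] = n*K/N
```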
Any one for 2nd and 4th question all parts??
Saturday, May 4, 2013 at 11:54pm
Given data: 1st rod: L₁, T₁, x₁, a => A₁=a²; 2nd rod: L₂, T₂, x₂, d => A₂= πd²/4; Hooke's Law for rods: σ=Eε => T/A = E x/L, 1st rod: T₁/a²= E x₁/L₁, 2nd rod: 4T...
Sunday, April 28, 2013 at 2:01pm
School (Plz read, really important)
Am I eligible to get a President's Award for Educational Excellence or the President's Award for Educational Achievement and Daughters of the American Revolution??? Here are my grades from last year
and this year so far and everything else I accomplish: 7th Grade - 93....
Sunday, April 28, 2013 at 12:40pm
rate of 1st person = 25/4 ft^2/h, rate of 2nd person = 25/5 ft^2/h, combined rate = (25/4+25/5) or 45/4 ft^2/h, so area in 8 hrs = 8(45/4) ft^2 = 90 ft^2. Or: 1st person can do 25 ft^2 in 4 hrs, so he can
do 50 ft^2 in 8 hrs; 2nd person does 25/5 or 5 ft^2 in 1 hour, so in 8 hours ...
Thursday, April 25, 2013 at 7:51am
Let me see if I understand how to find your grade now. So this time if there are 60 questions and you miss 15 your grade would be a 72?
Tuesday, April 23, 2013 at 8:21pm
If there's 60 questions on a homework paper and you miss 10 what would your grade be? I forgot how to find out your grade.
Tuesday, April 23, 2013 at 7:17pm
Here's a little perl program that handles the job: sub Ceil { my $x = shift; int($x+.9999); } print "Numeric grades for midterm and final: "; my ($m,$f) = split /[,\s]/,<STDIN>;
$avg = Ceil(($m+2*$f)/3); $grade = qw(F F F F F F D C B A A)[int($avg/10)]; ...
Wednesday, April 17, 2013 at 11:41am
mmmh, been messing around this for a while. From 1st: x(x + y + 1) = 81; from 2nd: y(y + x + 1) = 51. Divide them: x/y = 81/51, so 81y = 51x -----> y = 51x/81. Sub that into x^2 + xy + x = 81:
x^2 + x(51x/81) + x = 81, so 81x^2 + 51x^2 + 81x = 6561, 132x^2 + 81x - 6561 = 0, x = (-81 &...
Tuesday, April 16, 2013 at 8:59am
Literature Help?
The 2nd sentence is correct - No one likes to be uncomfortable while sleeping.
Tuesday, April 16, 2013 at 12:15am
Physics Forums - View Single Post - Riemann tensor, Ricci tensor of a 3 sphere
What you have at the top is not the metric of a 3-sphere, but simply the metric of R^3 in spherical polar coordinates. A 3-sphere metric will have r fixed and 3 angles to describe where you are (a
sphere is hollow remember), just as a 2-sphere has r fixed and 2 angles to describe where you are.
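In coordinates, the contrast the post is drawing is between flat R^3 in spherical polars (r varies) and the round 3-sphere of fixed radius r (three angles):

```latex
ds^2_{\mathbb{R}^3} = dr^2 + r^2\left(d\theta^2 + \sin^2\theta\, d\phi^2\right)
\qquad\text{vs.}\qquad
ds^2_{S^3} = r^2\left[d\psi^2 + \sin^2\psi\left(d\theta^2 + \sin^2\theta\, d\phi^2\right)\right],
\quad r \text{ fixed}.
```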
Method and algorithm for spatially identifying sources of cardiac fibrillation - Patent # 7117030 - PatentGenius
Method and algorithm for spatially identifying sources of cardiac fibrillation
7117030 Method and algorithm for spatially identifying sources of cardiac fibrillation
(7 images)
Inventor: Berenfeld, et al.
Date Issued: October 3, 2006
Application: 11/002,947
Filed: December 2, 2004
Inventors: Berenfeld; Omer (Dewitt, NY)
Jalife; Jose (Manlius, NY)
Vaidyanathan; Ravi (Syracuse, NY)
Assignee: The Research Foundation of State University of New York (Albany, NY)
Primary Examiner: Pezzuto; Robert E
Assistant Examiner: Malamud; Deborah
Attorney Or Agent: Rabin; Sander
U.S. Class: 600/515; 128/920; 600/512; 600/518; 600/523; 607/4; 607/5
Field Of Search: 607/4; 607/5; 600/512; 600/515; 600/518; 600/523; 128/920
International Class: A61B 5/0402
U.S. Patent Documents: 5109862; 5549109; 5578007; 5609158; 5676153; 5782899; 5868680; 6622042; 2003/0069511; 2004/0176696; 2004/0176697; 2004/0220489
Abstract: A method and computer program product comprising an algorithm adapted to execute a method of identifying the spatial coordinates of a sustaining source of fibrillatory activity in a
heart by computing a set of point-dependent dominant frequencies and a set of point-dependent regularity indices for a set of products of point-dependent unipolar discrete power spectra
and point-dependent bipolar discrete power spectra, derived by spectral analyses of corresponding unipolar and bipolar cardiac depolarization signals simultaneously acquired from a set
of points of the heart. A maximum dominant frequency is selected whose associated coordinates identify the point of the sustaining source of fibrillatory activity. The magnitude of the
regularity index is interpreted to verify the identification of the spatial coordinates of the sustaining source of fibrillatory activity. When indicated, surgical intervention is
directed to the spatial coordinates of the sustaining source of fibrillatory activity.
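As a rough illustration only — not the patented method, and every name, signature, and parameter below is invented for the sketch — the abstract's pipeline of "power spectra → unipolar × bipolar product → maximum dominant frequency → source coordinates" might look like:

```python
import numpy as np

def dominant_frequency(unipolar, bipolar, fs):
    """Dominant frequency of the product of the two power spectra.

    unipolar, bipolar: equal-length 1-D depolarization signals sampled at fs Hz.
    """
    freqs = np.fft.rfftfreq(len(unipolar), d=1.0 / fs)
    p_uni = np.abs(np.fft.rfft(unipolar)) ** 2  # unipolar power spectrum
    p_bi = np.abs(np.fft.rfft(bipolar)) ** 2    # bipolar power spectrum
    product = p_uni * p_bi                      # point-wise spectrum product
    product[0] = 0.0                            # ignore the DC component
    return freqs[np.argmax(product)]

def locate_source(signals, fs):
    """signals: list of ((x, y, z), unipolar, bipolar) triples.

    Returns the coordinates whose spectrum product has the highest
    dominant frequency (the candidate sustaining source)."""
    best = max(signals, key=lambda s: dominant_frequency(s[1], s[2], fs))
    return best[0]
```

The real method adds segmentation, detrending, band-pass filtering, windowing, and a regularity-index verification step, all omitted here.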
Claim: We claim:
1. A method for identifying the spatial coordinates of at least one sustaining source of fibrillatory activity ("SSFA") in a heart, said method comprising the steps of: a. simultaneously
acquiring a unipolar time-dependent depolarization signal S.sub.UP(t) and a corresponding bipolar time-dependent depolarization signal S.sub.BP(t) from each acquisition point P.sub.i
(x.sub.i, y.sub.i, z.sub.i) of an acquisition set of points {P.sub.i(x.sub.i, y.sub.i, z.sub.i)} on or within said heart, each said acquisition point P.sub.i(x.sub.i, y.sub.i, z.sub.i)
having unique spatial coordinates (x.sub.i, y.sub.i, z.sub.i) identified from a set of cardiac points {cP.sub.i(x.sub.i, y.sub.i, z.sub.i)}; b. forming a set of unipolar
time-and-point-dependent depolarization signals {S.sub.UPi(t, x.sub.i, y.sub.i, z.sub.i)} by assigning to each said unipolar time-dependent depolarization signal S.sub.UP(t) the spatial
coordinates (x.sub.i, y.sub.i, z.sub.i) of the acquisition point P.sub.i(x.sub.i, y.sub.i, z.sub.i) from which it was acquired; and, forming a set of corresponding bipolar
time-and-point-dependent depolarization signals {S.sub.BPi(t, x.sub.i, y.sub.i, z.sub.i)} by assigning to each said corresponding bipolar time-dependent depolarization signal S.sub.BP(t)
the spatial coordinates (x.sub.i, y.sub.i, z.sub.i) of the acquisition point P.sub.i(x.sub.i, y.sub.i, z.sub.i) from which it was simultaneously acquired; c. forming a set of unipolar
point-dependent discrete power spectra {DPS.sub.UPi(f, x.sub.i, y.sub.i, z.sub.i)} by computing a unipolar point-dependent discrete power spectrum DPS.sub.UPi(f, x.sub.i, y.sub.i,
z.sub.i) for each said unipolar time-and-point-dependent depolarization signal S.sub.UPi(t, x.sub.i, y.sub.i, z.sub.i); and, forming a set of corresponding bipolar point-dependent
discrete power spectra {DPS.sub.BPi(f, x.sub.i, y.sub.i, z.sub.i)} by computing a corresponding bipolar point-dependent discrete power spectrum DPS.sub.BPi(f, x.sub.i, y.sub.i, z.sub.i)
for each said corresponding bipolar time-and-point-dependent depolarization signal S.sub.BPi(t, x.sub.i, y.sub.i, z.sub.i); d. forming a set of point-dependent discrete power spectrum
products {DPS.sub.PRODi(f, x.sub.i, y.sub.i, z.sub.i)} by multiplying each said unipolar point-dependent discrete power spectrum DPS.sub.UPi(f, x.sub.i, y.sub.i, z.sub.i) of said set of
unipolar point-dependent discrete power spectra {DPS.sub.UPi(f, x.sub.i, y.sub.i, z.sub.i)} by each said corresponding bipolar point-dependent discrete power spectrum DPS.sub.BPi(f,
x.sub.i, y.sub.i, z.sub.i) of said set of corresponding bipolar point-dependent discrete power spectra {DPS.sub.BPi(f, x.sub.i, y.sub.i, z.sub.i)}; e. computing a point-dependent product
dominant frequency DF.sub.PRODi(x.sub.i, y.sub.i, z.sub.i) for each point-dependent discrete power spectrum product DPS.sub.PRODi(f, x.sub.i, y.sub.i, z.sub.i), thereby forming a set of
point-dependent product dominant frequencies {DF.sub.PRODi(x.sub.i, y.sub.i, z.sub.i)}; f. selecting a maximum point-dependent product dominant frequency DF.sub.MAXPRODi(x.sub.i,
y.sub.i, z.sub.i) from said set of point-dependent product dominant frequencies {DF.sub.PRODi(x.sub.i, y.sub.i, z.sub.i)}; g. assigning the coordinates of said maximum point-dependent
product dominant frequency DF.sub.MAXPRODi(x.sub.i, y.sub.i, z.sub.i) to said at least one SSFA.
2. The method of claim 1, wherein said unique spatial coordinates (x.sub.i, y.sub.i, z.sub.i) of each said acquisition point P.sub.i(x.sub.i, y.sub.i, z.sub.i), are determined by: a.
defining a spatial coordinate system (x, y, z) for theidentification of cardiac points cP.sub.i(x.sub.i, y.sub.i, z.sub.i) having spatial coordinates (x.sub.i, y.sub.i, z.sub.i) on or
within said heart; b. forming said cardiac points cP.sub.i(x.sub.i, y.sub.i, z.sub.i) into a set cardiac points{cP.sub.i(x.sub.i, y.sub.i, z.sub.i)}; c. assigning to each acquisition
point P.sub.i(x.sub.i, y.sub.i, z.sub.i) the coordinates of the cardiac point with which it is spatially coincident.
3. The method of claim 1, wherein said step of computing a unipolar point-dependent discrete power spectrum DPS.sub.UPi(f, x.sub.i, y.sub.i, z.sub.i) for each said unipolar
time-and-point-dependent depolarization signal S.sub.UPi (t, x.sub.i,y.sub.i, z.sub.i) and computing a corresponding bipolar point-dependent discrete power spectrum DPS.sub.BPi(f,
x.sub.i, y.sub.i, z.sub.i) for each said corresponding bipolar time-and-point-dependent depolarization signal S.sub.BPi (t, x.sub.i, y.sub.i,z.sub.i), further comprises the steps of: a.
selecting a predefined segment of each said unipolar time-and-point-dependent depolarization signal S.sub.UPi(t, x.sub.i, y.sub.i, z.sub.i), thereby forming a set of segmented unipolar
time and-point-dependentdepolarization signals {sS.sub.UPi(t, x.sub.i, y.sub.i, z.sub.i)}; and, selecting a predefined segment of each of said corresponding bipolar
time-and-point-dependent depolarization signal S.sub.BPi(t, x.sub.i, y.sub.i, z.sub.i), thereby forming a set ofcorresponding segmented bipolar time-and-point-dependent depolarization
signals {sS.sub.BPi(t, x.sub.i, y.sub.i, z.sub.i)}; b. detrending each said segmented unipolar time-and-point-dependent depolarization signal sS.sub.UPi(t, x.sub.i, y.sub.i,z.sub.i),
thereby forming a set of detrended and segmented unipolar time-and-point-dependent depolarization signals {dsS.sub.UPi(t, x.sub.i, y.sub.i, z.sub.i)}; and, detrending each said
corresponding segmented bipolar time-and-point-dependentdepolarization signal sS.sub.BPi(t, x.sub.i, y.sub.i, z.sub.i), thereby forming a set of corresponding detrended and segmented
bipolar time and-point-dependent depolarization signals {dsS.sub.BPi(t, x.sub.i, y.sub.i, z.sub.i)}; c. band pass filteringeach said detrended and segmented unipolar
time-and-point-dependent depolarization signal dsS.sub.UPi(t, x.sub.i, y.sub.i, z.sub.i) between a first frequency limit F.sub.lim1 and a second frequency limit F.sub.lim2, thereby
forming a set of filtered,detrended and segmented unipolar time and-point-dependent depolarization signals {fdsS.sub.UPi(t, x.sub.i, y.sub.i, z.sub.i)}; and, band pass filtering each
said corresponding detrended and segmented bipolar time-and-point-dependent depolarizationsignal dsS.sub.BPi t, x.sub.i, y.sub.i, z.sub.i) between said first frequency limit F.sub.lim1
and said second frequency limit F.sub.lim2, thereby forming a set of corresponding filtered, detrended and segmented bipolar time-and-point-dependentdepolarization signals {fdsS.sub.BPi
(t, x.sub.i, y.sub.i, z.sub.i)}; d. convolving each said filtered, segmented and detrended unipolar time-and-point-dependent depolarization signal fdsS.sub.UPi(t, x.sub.i, y.sub.i,
z.sub.i) with a shaping signal (t),thereby forming a set of shaped, filtered, detrended and segmented unipolar time-and-point-dependent depolarization signals {{circle around (.times.)}
fdsS.sub.UPi (t, x.sub.i, y.sub.i, z.sub.i)}; and, convolving each said corresponding filtered,detrended and segmented bipolar time-and-point-dependent depolarization signal
fdsS.sub.BPi(t, x.sub.i, y.sub.i, z.sub.i) with said shaping signal (t), thereby forming a set of corresponding shaped, filtered, detrended and segmented bipolartime-and-point-dependent
depolarization signals {{circle around (.times.)}fdsS.sub.BPi (t, x.sub.i, y.sub.j, z.sub.i)}; e. band pass filtering each said shaped, filtered, segmented and detrended unipolar
time-and-point-dependent depolarization signal{circle around (.times.)}fdsS.sub.UPi (t, x.sub.i, y.sub.i, z.sub.i) between a third frequency limit F.sub.lim3 and a fourth frequency
limit F.sub.lim4, thereby forming a set of refiltered, shaped, filtered, detrended and segmented unipolartime-and-point-dependent depolarization signals {r{circle around (.times.)}
fdsS.sub.UPi (t, x.sub.i, y.sub.i, z.sub.i)}; and, band pass filtering each said corresponding shaped, filtered, detrended and segmented bipolar time-and-point-dependentdepolarization
signal {circle around (.times.)}fdsS.sub.BPi(t, x.sub.i, y.sub.i, z.sub.i) between said third frequency limit F.sub.lim3 and said fourth frequency limit F.sub.lim4, thereby forming a
set of corresponding refiltered, shaped, filtered,detrended and segmented bipolar time-and-point-dependent depolarization signals {r{circle around (.times.)}fdsS.sub.BPi (t, x.sub.i,
y.sub.i, z.sub.i)}; f. windowing each said refiltered, shaped, filtered, detrended and segmented unipolartime-and-point-dependent depolarization signal r{circle around (.times.)}
fdsS.sub.UPi (t, x.sub.i, y.sub.i, z.sub.i), thereby forming a set of windowed, refiltered, shaped, filtered, detrended and segmented unipolar time-and-point-dependentdepolarization
signals {wr{circle around (.times.)}fdsS.sub.UPi (t, x.sub.i, y.sub.i, z.sub.i)} and, windowing each said corresponding refiltered, shaped, filtered, detrended and segmented bipolar
time-and-point-dependent depolarization signal r{circlearound (.times.)}fdsS.sub.BPi(t, x.sub.i, y.sub.i, z.sub.i), thereby forming a set of corresponding windowed, refiltered, shaped,
filtered, detrended and segmented bipolar time-and-point-dependent depolarization signals {wr{circle around(.times.)}fdsS.sub.BPi(t, x.sub.i, y.sub.i, z.sub.i)}; g. edge-smoothing each
said windowed, refiltered, shaped, filtered, segmented and detrended unipolar time-and-point-dependent depolarization signal wr{circle around (.times.)}fdsS.sub.UPi(t,x.sub.i, y.sub.i,
z.sub.i), thereby forming a set of edge-smoothed, windowed, refiltered, shaped, filtered, detrended and segmented unipolar time-and-point-dependent depolarization signals {ewr{circle
around (.times.)}fdsS.sub.UPi (t, x.sub.i, y.sub.i,z.sub.i)} and, edge-smoothing each said corresponding windowed, refiltered, shaped, filtered, detrended and segmented bipolar
time-and-point-dependent depolarization signal wr{circle around (.times.)}fdsS.sub.BPi(t, x.sub.i, y.sub.i, z.sub.i), therebyforming a set of corresponding edge-smoothed, windowed,
refiltered, shaped, filtered, detrended and segmented bipolar time-and-point-dependent depolarization signals {ewr{circle around (.times.)}fdsS.sub.BPi(t, x.sub.i, y.sub.i, z.sub.i)};
h. computinga unipolar point-dependent discrete frequency spectrum DFS.sub.UPi(f, x.sub.i, y.sub.i, z.sub.i) for each said edge-smoothed, windowed, refiltered, shaped, filtered,
segmented and detrended unipolar time-and-point-dependent depolarization signalewr{circle around (.times.)}fdsS.sub.UPi(t, x.sub.i, y.sub.i, z.sub.i), thereby forming a set of unipolar
point-dependent discrete frequency spectra {DFS.sub.UPi (f, x.sub.i, y.sub.i, z.sub.i)}; and, computing a corresponding bipolar point-dependentdiscrete frequency spectrum DFS.sub.BPi(f,
x.sub.i, y.sub.i, z.sub.i) for each said corresponding edge-smoothed, windowed, refiltered, shaped, filtered, segmented and detrended bipolar time-and-point-dependent depolarization
signal ewr{circle around(.times.)}fdsS.sub.BPi(t, x.sub.i, y.sub.i, z.sub.i), thereby forming a set of corresponding bipolar point-dependent discrete frequency spectra {DFS.sub.BPi(f,
x.sub.i, y.sub.i, z.sub.i)}; i. computing said unipolar point-dependent discrete powerspectrum DPS.sub.UPi (f, x.sub.i, y.sub.i, z.sub.i) for each said edge-smoothed, windowed,
refiltered, shaped, filtered, segmented and detrended unipolar time-and-point-dependent depolarization signal ewr{circle around (.times.)}fdsS.sub.UPi(t, x.sub.i,y.sub.i, z.sub.i),
thereby forming said set of unipolar point-dependent discrete power spectra {DPS.sub.UPi(f, x.sub.i, y.sub.i, z.sub.i)}; j. computing said corresponding bipolar point-dependent discrete
power spectrum DPS.sub.BPi(f, x.sub.i, y.sub.i,z.sub.i) for each said edge-smoothed, windowed, refiltered, shaped, filtered, segmented and detrended bipolar time-and-point-dependent
depolarization signal ewr{circle around (.times.)}fdsS.sub.BPi(t, x.sub.i, y.sub.i, z.sub.i), thereby forming said setof corresponding bipolar point-dependent discrete power spectra
{DPS.sub.BPi(f, x.sub.i, y.sub.i, z.sub.i)}.
4. The method of claim 3, wherein said step of computing a unipolar point-dependent discrete frequency spectrum comprises computing a Fast Fourier Transform for said edge-smoothed,
windowed, refiltered, shaped, filtered, segmented and detrendedunipolar time-and-point-dependent depolarization signal ewr{circle around (.times.)}fdsS.sub.UPi(t, x.sub.i, y.sub.i,
z.sub.i); and said step of computing a bipolar point-dependent discrete frequency spectrum comprises computing a Fast Fourier Transformfor said edge-smoothed, windowed, refiltered,
shaped, filtered, segmented and detrended bipolar time-and-point-dependent depolarization signal ewr{circle around (.times.)}fdsS.sub.BPi (t, x.sub.i, y.sub.i, z.sub.i).
5. The method of claim 3, wherein said first frequency limit F.sub.lim1 is about 1 Hz and said second frequency limit F.sub.lim2 is about 30 Hz.
6. The method of claim 3, wherein said third frequency limit F.sub.lim3 is about 3 Hz and said fourth frequency limit F.sub.lim4 is about 30 Hz.
7. The method of claim 1, wherein each said point-dependent product dominant frequency DF.sub.PRODi (x.sub.i, y.sub.i, z.sub.i) comprises a frequency in each said point-dependent
discrete power spectrum product DPS.sub.PRODi (f, x.sub.i,y.sub.i, z.sub.i) that is associated with an absolute maximum power density in each said point-dependent discrete power
spectrum product DPS.sub.PRODi (f x.sub.i, y.sub.i, z.sub.i).
8. The method of claim 1, wherein said step of computing a point-dependent product dominant frequency DF.sub.PRODi (x.sub.i, y.sub.i, z.sub.i) further comprises the step of mapping each
point-dependent product dominant frequency DF.sub.PRODi(x.sub.i, y.sub.i, z.sub.i) of said set of point-dependent product dominant frequencies {DF.sub.PRODi (x.sub.i, y.sub.i, z.sub.i)}
to said point P.sub.i(x.sub.i, y.sub.i, z.sub.i) of said acquisition set of points {P.sub.i(x.sub.i, y.sub.i, z.sub.i)} onor within said heart with which said point-dependent product
dominant frequency DF.sub.PRODi (x.sub.i, y.sub.i, z.sub.i) is associated.
9. The method of claim 1, wherein said step of assigning the coordinates of said maximum point-dependent product dominant frequency DF.sub.MAXPRODi (x.sub.i, y.sub.i, z.sub.i) to said
SSFA., further comprises the steps of: a. computing apoint-dependent product regularity index RI.sub.PRODi(x.sub.i, y.sub.i, z.sub.i) for each point-dependent discrete product power
spectrum DPS.sub.PRODi (f, x.sub.i, y.sub.i, z.sub.i) of said set of point-dependent discrete power spectrum products{DPS.sub.PRODi (f, x.sub.i, y.sub.i, z.sub.i)}, thereby forming a
set of point-dependent product regularity indices {RI.sub.PRODi(x.sub.i, y.sub.i, z.sub.i)}; b. verifying said assignment of the coordinates of said maximum point-dependent
productdominant frequency DF.sub.MAXPRODi (x.sub.i, y.sub.i, z.sub.i) to said SSFA by interpreting the value of its corresponding point-dependent product regularity index.
10. The method of claim 9, wherein said point-dependent product regularity index RI.sub.PRODi(x.sub.i, y.sub.i, z.sub.i) comprises a ratio of a power contained in a point-dependent
product dominant frequency band .DELTA..sub.PRODiDF to a totalpower computed at all frequencies of said point-dependent discrete power spectrum product DPS.sub.PRODi(f, x.sub.i,
y.sub.i, z.sub.i), said point-dependent product dominant frequency band .DELTA..sub.PRODiDF being a frequency band centered about apoint-dependent product dominant frequency
DF.sub.PRODi(x.sub.i, y.sub.i, z.sub.i), having a width of about three times a frequency resolution .DELTA.f.sub.i.
11. The method of claim 9, wherein said step of computing a point-dependent product regularity index RI.sub.PRODi(x.sub.i, y.sub.i, z.sub.i) further comprises the step of mapping each
point-dependent product regularity indexRI.sub.PRODi(x.sub.i, y.sub.i, z.sub.i) of said set of point-dependent product regularity indices {RI.sub.PRODi(x.sub.i, y.sub.i, z.sub.i)} to
said point P.sub.i(x.sub.i, y.sub.i, z.sub.i) of said acquisition set of points {P.sub.i(x.sub.i, y.sub.i,z.sub.i)} on or within said heart with which said product regularity index
RI.sub.PRODi(x.sub.i, y.sub.i, z.sub.i) is associated.
12. A computer program product, comprising a computer usable medium having a computer readable program code embodied therein, wherein the computer readable program code comprises an
algorithm adapted to execute a method of identifying thespatial coordinates of at least one sustaining source of fibrillatory activity ("SSFA") in a heart, said method comprising the
steps of: a. simultaneously acquiring a unipolar time-dependent depolarization signal S.sub.UP(t) and a corresponding bipolartime-dependent depolarization signal S.sub.BP(t) from each
acquisition point P.sub.i(x.sub.i, y.sub.i, z.sub.i) of an acquisition set of points {P.sub.i(x.sub.i, y.sub.i, z.sub.i)} on or within said heart, each said acquisition point P.sub.i
(x.sub.i,y.sub.i, z.sub.i) having unique spatial coordinates (x.sub.i, y.sub.i, z.sub.i) identified from a pre-stored set of cardiac points {cP.sub.i (x.sub.i, y.sub.i, z.sub.i)}; b.
forming a set of unipolar time-and-point-dependent depolarization signals{S.sub.UPi (t, x.sub.i, y.sub.i, z.sub.i)} by assigning to each said unipolar time-dependent depolarization
signal S.sub.UP(t) the spatial coordinates (x.sub.i, y.sub.i, z.sub.i) of the acquisition point P.sub.i(x.sub.i, y.sub.i, z.sub.i) from which itwas acquired; and, forming a set of
corresponding bipolar time-and-point-dependent depolarization signals {S.sub.BPi (t, x.sub.i, y.sub.i, z.sub.i)} by assigning to each said corresponding bipolar time-dependent
depolarization signal S.sub.BP(t) thespatial coordinates (x.sub.i, y.sub.i, z.sub.i) of the acquisition point P.sub.i(x.sub.i, y.sub.i, z.sub.i) from which it was simultaneously
acquired; c. forming a set of unipolar point-dependent discrete power spectra {DPS.sub.UPi(f, x.sub.i, y.sub.i,z.sub.i)} by computing a unipolar point-dependent discrete power spectrum
DPS.sub.UPi(f, x.sub.i, y.sub.i, z.sub.i) for each said unipolar time-and-point-dependent depolarization signal {S.sub.UPi (t, x.sub.i, y.sub.i, z.sub.i)}; and, forming a set ofbipolar
point-dependent discrete power spectra {DPS.sub.BPi (f, x.sub.i, y.sub.i, z.sub.i)} by computing a bipolar point-dependent discrete power spectrum DPS.sub.BPi(f, x.sub.i, y.sub.i,
z.sub.i) for each said corresponding bipolartime-and-point-dependent depolarization signal {S.sub.BPi (t, x.sub.i, y.sub.i, z.sub.i)}; d. forming a set of point-dependent discrete power
spectrum products {DPS.sub.PRODi (f, x.sub.i, y.sub.i, z.sub.i)} by multiplying each said unipolarpoint-dependent discrete power spectrum DPS.sub.UPi (f, x.sub.i, y.sub.i, z.sub.i) of
said set of unipolar point-dependent discrete power spectra {DPS.sub.UPi(f, x.sub.i, y.sub.i, z.sub.i)} by each said corresponding bipolar point-dependent discretepower spectrum
DPS.sub.BPi(f, x.sub.i, y.sub.i, z.sub.i) of said set of corresponding bipolar point-dependent discrete power spectra {DPS.sub.BPi(f, x.sub.i, y.sub.i, z.sub.i)}; e. computing a
point-dependent product dominant frequencyDF.sub.PRODi(x.sub.i, y.sub.i, z.sub.i) for each point-dependent discrete product power spectrum DPS.sub.PRODi (f, x.sub.i, y.sub.i, z.sub.i)
of said set of point-dependent discrete power spectrum products {DPS.sub.PRODi (f, x.sub.i, y.sub.i, z.sub.i)},thereby forming a set of point-dependent product dominant frequencies
{DF.sub.PRODi (x.sub.i, y.sub.i, z.sub.i)}; f. selecting a maximum point-dependent product dominant frequency DF.sub.MAXPRODi(x.sub.i, y.sub.i, z.sub.i) from said set ofpoint-dependent
product dominant frequencies {DF.sub.PRODi (x.sub.i, y.sub.i, z.sub.i)}; g. assigning the coordinates of said maximum point-dependent product dominant frequency DF.sub.MAXPRODi
(x.sub.i, y.sub.i, z.sub.i) to said at least one SSFA.
13. The computer program product of claim 12, wherein said unique spatial coordinates (x.sub.i, y.sub.i, z.sub.i) of each said acquisition point P.sub.i(x.sub.i, y.sub.i, z.sub.i) are determined by: a. defining a spatial coordinate system (x, y, z) for the identification of cardiac points cP.sub.i(x.sub.i, y.sub.i, z.sub.i) having spatial coordinates (x.sub.i, y.sub.i, z.sub.i) on or within said heart; b. storing said cardiac points cP.sub.i(x.sub.i, y.sub.i, z.sub.i) on a computer recordable medium as a set of cardiac points {cP.sub.i(x.sub.i, y.sub.i, z.sub.i)}; c. assigning to each acquisition point P.sub.i(x.sub.i, y.sub.i, z.sub.i) the coordinates of the cardiac point with which it is spatially coincident.
14. The computer program product of claim 12, further comprising during execution of the step of forming a set of unipolar time-and-point-dependent depolarization signals {S.sub.UPi (t, x.sub.i, y.sub.i, z.sub.i)} and forming a set of corresponding bipolar time-and-point-dependent depolarization signals {S.sub.BPi (t, x.sub.i, y.sub.i, z.sub.i)}: a. storing said set of unipolar time-and-point-dependent depolarization signals {S.sub.UPi (t, x.sub.i, y.sub.i, z.sub.i)} on a computer recordable medium; and, b. storing said set of bipolar time-and-point-dependent depolarization signals {S.sub.BPi (t, x.sub.i, y.sub.i, z.sub.i)} on a computer recordable medium.
15. The computer program product of claim 14, wherein said first frequency limit F.sub.lim1 is about 1 Hz and said second frequency limit F.sub.lim2 is about 30 Hz.
16. The computer program product of claim 14, wherein said third frequency limit F.sub.lim3 is about 3 Hz and said fourth frequency limit F.sub.lim4 is about 30 Hz.
17. The computer program product of claim 12, further comprising during execution of the step of computing a unipolar point-dependent discrete power spectrum DPS.sub.UPi (f, x.sub.i, y.sub.i, z.sub.i) for each said unipolar time-and-point-dependent depolarization signal {S.sub.UPi (t, x.sub.i, y.sub.i, z.sub.i)} and computing a bipolar point-dependent discrete power spectrum DPS.sub.BPi (f, x.sub.i, y.sub.i, z.sub.i) for each said corresponding bipolar time-and-point-dependent depolarization signal {S.sub.BPi (t, x.sub.i, y.sub.i, z.sub.i)}: a. selecting a predefined segment of each said unipolar time-and-point-dependent depolarization signal S.sub.UPi (t, x.sub.i, y.sub.i, z.sub.i), thereby forming a set of segmented unipolar time-and-point-dependent depolarization signals {sS.sub.UPi (t, x.sub.i, y.sub.i, z.sub.i)}; and, selecting a predefined segment of each said corresponding bipolar time-and-point-dependent depolarization signal S.sub.BPi (t, x.sub.i, y.sub.i, z.sub.i), thereby forming a set of corresponding segmented bipolar time-and-point-dependent depolarization signals {sS.sub.BPi (t, x.sub.i, y.sub.i, z.sub.i)}; b. detrending each said segmented unipolar time-and-point-dependent depolarization signal sS.sub.UPi (t, x.sub.i, y.sub.i, z.sub.i), thereby forming a set of detrended and segmented unipolar time-and-point-dependent depolarization signals {dsS.sub.UPi (t, x.sub.i, y.sub.i, z.sub.i)}; and, detrending each said corresponding segmented bipolar time-and-point-dependent depolarization signal sS.sub.BPi (t, x.sub.i, y.sub.i, z.sub.i), thereby forming a set of corresponding detrended and segmented bipolar time-and-point-dependent depolarization signals {dsS.sub.BPi(t, x.sub.i, y.sub.i, z.sub.i)}; c. band pass filtering each said detrended and segmented unipolar time-and-point-dependent depolarization signal dsS.sub.UPi (t, x.sub.i, y.sub.i, z.sub.i) between a first frequency limit F.sub.lim1 and a second frequency limit F.sub.lim2, thereby forming a set of filtered, detrended and segmented unipolar time-and-point-dependent depolarization signals {fdsS.sub.UPi (t, x.sub.i, y.sub.i, z.sub.i)}; and, band pass filtering each said corresponding detrended and segmented bipolar time-and-point-dependent depolarization signal dsS.sub.BPi (t, x.sub.i, y.sub.i, z.sub.i) between said first frequency limit F.sub.lim1 and said second frequency limit F.sub.lim2, thereby forming a set of corresponding filtered, detrended and segmented bipolar time-and-point-dependent depolarization signals {fdsS.sub.BPi (t, x.sub.i, y.sub.i, z.sub.i)}; d. convolving each said filtered, detrended and segmented unipolar time-and-point-dependent depolarization signal fdsS.sub.UPi(t, x.sub.i, y.sub.i, z.sub.i) with a shaping signal (t), thereby forming a set of shaped, filtered, detrended and segmented unipolar time-and-point-dependent depolarization signals {{circle around (.times.)}fdsS.sub.UPi (t, x.sub.i, y.sub.i, z.sub.i)}; and, convolving each said corresponding filtered, detrended and segmented bipolar time-and-point-dependent depolarization signal fdsS.sub.BPi (t, x.sub.i, y.sub.i, z.sub.i) with said shaping signal (t), thereby forming a set of corresponding shaped, filtered, detrended and segmented bipolar time-and-point-dependent depolarization signals {{circle around (.times.)}fdsS.sub.BPi (t, x.sub.i, y.sub.i, z.sub.i)}; e. band pass filtering each said shaped, filtered, detrended and segmented unipolar time-and-point-dependent depolarization signal {circle around (.times.)}fdsS.sub.UPi (t, x.sub.i, y.sub.i, z.sub.i) between a third frequency limit F.sub.lim3 and a fourth frequency limit F.sub.lim4, thereby forming a set of refiltered, shaped, filtered, detrended and segmented unipolar time-and-point-dependent depolarization signals {r{circle around (.times.)}fdsS.sub.UPi (t, x.sub.i, y.sub.i, z.sub.i)}; and, band pass filtering each said shaped, filtered, detrended and segmented bipolar time-and-point-dependent depolarization signal {circle around (.times.)}fdsS.sub.BPi (t, x.sub.i, y.sub.i, z.sub.i) between said third frequency limit F.sub.lim3 and said fourth frequency limit F.sub.lim4, thereby forming a set of corresponding refiltered, shaped, filtered, detrended and segmented bipolar time-and-point-dependent depolarization signals {r{circle around (.times.)}fdsS.sub.BPi (t, x.sub.i, y.sub.i, z.sub.i)}; f. windowing each said refiltered, shaped, filtered, detrended and segmented unipolar time-and-point-dependent depolarization signal r{circle around (.times.)}fdsS.sub.UPi (t, x.sub.i, y.sub.i, z.sub.i), thereby forming a set of windowed, refiltered, shaped, filtered, detrended and segmented unipolar time-and-point-dependent depolarization signals {wr{circle around (.times.)}fdsS.sub.UPi (t, x.sub.i, y.sub.i, z.sub.i)}; and, windowing each said corresponding refiltered, shaped, filtered, detrended and segmented bipolar time-and-point-dependent depolarization signal r{circle around (.times.)}fdsS.sub.BPi (t, x.sub.i, y.sub.i, z.sub.i), thereby forming a set of corresponding windowed, refiltered, shaped, filtered, detrended and segmented bipolar time-and-point-dependent depolarization signals {wr{circle around (.times.)}fdsS.sub.BPi (t, x.sub.i, y.sub.i, z.sub.i)}; g. edge-smoothing each said windowed, refiltered, shaped, filtered, detrended and segmented unipolar time-and-point-dependent depolarization signal wr{circle around (.times.)}fdsS.sub.UPi (t, x.sub.i, y.sub.i, z.sub.i), thereby forming a set of edge-smoothed, windowed, refiltered, shaped, filtered, detrended and segmented unipolar time-and-point-dependent depolarization signals {ewr{circle around (.times.)}fdsS.sub.UPi (t, x.sub.i, y.sub.i, z.sub.i)}; and, edge-smoothing each said corresponding windowed, refiltered, shaped, filtered, detrended and segmented bipolar time-and-point-dependent depolarization signal wr{circle around (.times.)}fdsS.sub.BPi (t, x.sub.i, y.sub.i, z.sub.i), thereby forming a set of corresponding edge-smoothed, windowed, refiltered, shaped, filtered, detrended and segmented bipolar time-and-point-dependent depolarization signals {ewr{circle around (.times.)}fdsS.sub.BPi (t, x.sub.i, y.sub.i, z.sub.i)}; h. computing a unipolar point-dependent discrete frequency spectrum for each said edge-smoothed, windowed, refiltered, shaped, filtered, detrended and segmented unipolar time-and-point-dependent depolarization signal ewr{circle around (.times.)}fdsS.sub.UPi (t, x.sub.i, y.sub.i, z.sub.i), thereby forming a set of unipolar point-dependent discrete frequency spectra {DFS.sub.UPi (f, x.sub.i, y.sub.i, z.sub.i)}; and, computing a bipolar point-dependent discrete frequency spectrum for each said edge-smoothed, windowed, refiltered, shaped, filtered, detrended and segmented bipolar time-and-point-dependent depolarization signal ewr{circle around (.times.)}fdsS.sub.BPi (t, x.sub.i, y.sub.i, z.sub.i), thereby forming said set of bipolar point-dependent discrete frequency spectra {DFS.sub.BPi (f, x.sub.i, y.sub.i, z.sub.i)}; i. computing said unipolar point-dependent discrete power spectrum for each said edge-smoothed, windowed, refiltered, shaped, filtered, detrended and segmented unipolar time-and-point-dependent depolarization signal ewr{circle around (.times.)}fdsS.sub.UPi (t, x.sub.i, y.sub.i, z.sub.i), thereby forming said set of unipolar point-dependent discrete power spectra {DPS.sub.UPi (f, x.sub.i, y.sub.i, z.sub.i)}; j. computing a bipolar point-dependent discrete power spectrum for each said edge-smoothed, windowed, refiltered, shaped, filtered, detrended and segmented bipolar time-and-point-dependent depolarization signal ewr{circle around (.times.)}fdsS.sub.BPi (t, x.sub.i, y.sub.i, z.sub.i), thereby forming said set of bipolar point-dependent discrete power spectra {DPS.sub.BPi (f, x.sub.i, y.sub.i, z.sub.i)}.
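The conditioning chain of claim 17 (segment, detrend, band-pass, shape, refilter, window, edge-smooth, then transform to a power spectrum) can be sketched with deliberately naive stand-ins: a least-squares linear detrend, an ideal band-pass implemented as a DFT bin mask, and a Hann window. The convolution with the shaping signal and the edge-smoothing step are omitted, and none of the function names or parameter choices below come from the patent; a real implementation would use proper filter design rather than these toy versions.

```python
import cmath
import math

def detrend(x):
    """Step b: subtract the least-squares linear trend from the segment."""
    n = len(x)
    mt = (n - 1) / 2.0
    mx = sum(x) / n
    denom = sum((i - mt) ** 2 for i in range(n))
    slope = sum((i - mt) * (v - mx) for i, v in enumerate(x)) / denom
    return [v - (mx + slope * (i - mt)) for i, v in enumerate(x)]

def dft(x):
    """Naive O(n^2) discrete Fourier transform (illustration only)."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n)) for k in range(n)]

def band_pass(x, fs, f_lo, f_hi):
    """Steps c and e: zero DFT bins outside [f_lo, f_hi] (ideal filter)."""
    n = len(x)
    X = dft(x)
    for k in range(n):
        f = min(k, n - k) * fs / n   # frequency of bin k (two-sided)
        if not (f_lo <= f <= f_hi):
            X[k] = 0
    # Inverse DFT; the input was real, so keep only the real part.
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * t / n)
                for k in range(n)).real / n for t in range(n)]

def hann_window(x):
    """Step f: taper the segment with a Hann window."""
    n = len(x)
    return [v * 0.5 * (1 - math.cos(2 * math.pi * i / (n - 1)))
            for i, v in enumerate(x)]

def power_spectrum(x, fs):
    """Steps h-j: one-sided squared-magnitude spectrum and its frequencies."""
    n = len(x)
    X = dft(x)
    half = n // 2 + 1
    return ([k * fs / n for k in range(half)],
            [abs(X[k]) ** 2 for k in range(half)])
```

As a sanity check, feeding an 8 Hz test tone sampled at 64 Hz through detrend, a 1-30 Hz band-pass, and the window should leave the spectral peak at the 8 Hz bin.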
18. The computer program product of claim 17, further comprising during execution of the step of forming said set of segmented unipolar time-and-point-dependent depolarization signals {sS.sub.UPi (t, x.sub.i, y.sub.i, z.sub.i)} and forming said set of corresponding segmented bipolar time-and-point-dependent depolarization signals {sS.sub.BPi (t, x.sub.i, y.sub.i, z.sub.i)}: a. storing said set of segmented unipolar time-and-point-dependent depolarization signals {sS.sub.UPi (t, x.sub.i, y.sub.i, z.sub.i)} on a computer recordable medium; and, b. storing said set of corresponding segmented bipolar time-and-point-dependent depolarization signals {sS.sub.BPi (t, x.sub.i, y.sub.i, z.sub.i)} on a computer recordable medium.
19. The computer program product of claim 17, further comprising during execution of the step of forming said set of detrended and segmented unipolar time-and-point-dependent depolarization signals {dsS.sub.UPi (t, x.sub.i, y.sub.i, z.sub.i)} and forming said set of corresponding detrended and segmented bipolar time-and-point-dependent depolarization signals {dsS.sub.BPi (t, x.sub.i, y.sub.i, z.sub.i)}: a. storing said set of detrended and segmented unipolar time-and-point-dependent depolarization signals {dsS.sub.UPi (t, x.sub.i, y.sub.i, z.sub.i)} on a computer recordable medium; and, b. storing said set of corresponding detrended and segmented bipolar time-and-point-dependent depolarization signals {dsS.sub.BPi (t, x.sub.i, y.sub.i, z.sub.i)} on a computer recordable medium.
20. The computer program product of claim 17, further comprising during execution of the step of forming said set of filtered, detrended and segmented unipolar time-and-point-dependent depolarization signals {fdsS.sub.UPi (t, x.sub.i, y.sub.i, z.sub.i)} and forming said set of corresponding filtered, detrended and segmented bipolar time-and-point-dependent depolarization signals {fdsS.sub.BPi (t, x.sub.i, y.sub.i, z.sub.i)}: a. storing said set of filtered, detrended and segmented unipolar time-and-point-dependent depolarization signals {fdsS.sub.UPi (t, x.sub.i, y.sub.i, z.sub.i)} on a computer recordable medium; and, b. storing said set of corresponding filtered, detrended and segmented bipolar time-and-point-dependent depolarization signals {fdsS.sub.BPi (t, x.sub.i, y.sub.i, z.sub.i)} on a computer recordable medium.
21. The computer program product of claim 17, further comprising during execution of the step of forming said set of shaped, filtered, detrended and segmented unipolar time-and-point-dependent depolarization signals {{circle around (.times.)}fdsS.sub.UPi (t, x.sub.i, y.sub.i, z.sub.i)} and forming said set of corresponding shaped, filtered, detrended and segmented bipolar time-and-point-dependent depolarization signals {{circle around (.times.)}fdsS.sub.BPi (t, x.sub.i, y.sub.i, z.sub.i)}: a. storing said set of shaped, filtered, detrended and segmented unipolar time-and-point-dependent depolarization signals {{circle around (.times.)}fdsS.sub.UPi (t, x.sub.i, y.sub.i, z.sub.i)} on a computer recordable medium; and, b. storing said set of corresponding shaped, filtered, detrended and segmented bipolar time-and-point-dependent depolarization signals {{circle around (.times.)}fdsS.sub.BPi (t, x.sub.i, y.sub.i, z.sub.i)} on a computer recordable medium.
22. The computer program product of claim 17, further comprising during execution of the step
23. The computer program product of claim 17, further comprising during execution of the step of forming said set of windowed, refiltered, shaped, filtered, detrended and segmented unipolar time-and-point-dependent depolarization signals {wr{circle around (.times.)}fdsS.sub.UPi (t, x.sub.i, y.sub.i, z.sub.i)} and forming said set of corresponding windowed, refiltered, shaped, filtered, detrended and segmented bipolar time-and-point-dependent depolarization signals {wr{circle around (.times.)}fdsS.sub.BPi (t, x.sub.i, y.sub.i, z.sub.i)}: a. storing said set of windowed, refiltered, shaped, filtered, detrended and segmented unipolar time-and-point-dependent depolarization signals {wr{circle around (.times.)}fdsS.sub.UPi (t, x.sub.i, y.sub.i, z.sub.i)} on a computer recordable medium; and, b. storing said set of corresponding windowed, refiltered, shaped, filtered, detrended and segmented bipolar time-and-point-dependent depolarization signals {wr{circle around (.times.)}fdsS.sub.BPi (t, x.sub.i, y.sub.i, z.sub.i)} on a computer recordable medium.
24. The computer program product of claim 17, further comprising during execution of the step of forming said set of edge-smoothed, windowed, refiltered, shaped, filtered, detrended and segmented unipolar time-and-point-dependent depolarization signals {ewr{circle around (.times.)}fdsS.sub.UPi (t, x.sub.i, y.sub.i, z.sub.i)} and forming said set of corresponding edge-smoothed, windowed, refiltered, shaped, filtered, detrended and segmented bipolar time-and-point-dependent depolarization signals {ewr{circle around (.times.)}fdsS.sub.BPi (t, x.sub.i, y.sub.i, z.sub.i)}: a. storing said set of edge-smoothed, windowed, refiltered, shaped, filtered, detrended and segmented unipolar time-and-point-dependent depolarization signals {ewr{circle around (.times.)}fdsS.sub.UPi (t, x.sub.i, y.sub.i, z.sub.i)} on a computer recordable medium; and, b. storing said set of corresponding edge-smoothed, windowed, refiltered, shaped, filtered, detrended and segmented bipolar time-and-point-dependent depolarization signals {ewr{circle around (.times.)}fdsS.sub.BPi (t, x.sub.i, y.sub.i, z.sub.i)} on a computer recordable medium.
25. The computer program product of claim 17, wherein said step of computing a unipolar point-dependent discrete frequency spectrum comprises computing a Fast Fourier Transform for said edge-smoothed, windowed, refiltered, shaped, filtered, detrended and segmented unipolar time-and-point-dependent depolarization signal ewr{circle around (.times.)}fdsS.sub.UPi (t, x.sub.i, y.sub.i, z.sub.i); and said step of computing a bipolar point-dependent discrete frequency spectrum comprises computing a Fast Fourier Transform for said edge-smoothed, windowed, refiltered, shaped, filtered, detrended and segmented bipolar time-and-point-dependent depolarization signal ewr{circle around (.times.)}fdsS.sub.BPi (t, x.sub.i, y.sub.i, z.sub.i).
26. The computer program product of claim 17, further comprising during execution of the step of forming said set of unipolar point-dependent discrete frequency spectra {DFS.sub.UPi (f, x.sub.i, y.sub.i, z.sub.i)} and forming said set of bipolar point-dependent discrete frequency spectra {DFS.sub.BPi (f, x.sub.i, y.sub.i, z.sub.i)}: a. storing said set of unipolar point-dependent discrete frequency spectra {DFS.sub.UPi (f, x.sub.i, y.sub.i, z.sub.i)} on a computer recordable medium; and, b. storing said set of bipolar point-dependent discrete frequency spectra {DFS.sub.BPi (f, x.sub.i, y.sub.i, z.sub.i)} on a computer recordable medium.
27. The computer program product of claim 17, further comprising during execution of the step of forming said set of unipolar point-dependent discrete power spectra {DPS.sub.UPi (f, x.sub.i, y.sub.i, z.sub.i)} and forming said set of bipolar point-dependent discrete power spectra {DPS.sub.BPi (f, x.sub.i, y.sub.i, z.sub.i)}: a. storing said set of unipolar point-dependent discrete power spectra {DPS.sub.UPi (f, x.sub.i, y.sub.i, z.sub.i)} on a computer recordable medium; and, b. storing said set of bipolar point-dependent discrete power spectra {DPS.sub.BPi (f, x.sub.i, y.sub.i, z.sub.i)} on a computer recordable medium.
28. The computer program product of claim 17, further comprising during execution of the step of forming said set of point-dependent discrete power spectrum products {DPS.sub.PRODi (f, x.sub.i, y.sub.i, z.sub.i)}, storing said set of point-dependent discrete power spectrum products {DPS.sub.PRODi (f, x.sub.i, y.sub.i, z.sub.i)} on a computer recordable medium.
29. The computer program product of claim 17, further comprising during execution of the step of forming said set of point-dependent product dominant frequencies {DF.sub.PRODi (x.sub.i, y.sub.i, z.sub.i)}, storing said set of point-dependent product dominant frequencies {DF.sub.PRODi (x.sub.i, y.sub.i, z.sub.i)} on a computer recordable medium.
30. The computer program product of claim 12, wherein each said point-dependent product dominant frequency DF.sub.PRODi (x.sub.i, y.sub.i, z.sub.i) comprises a frequency in each said
point-dependent discrete power spectrum product DPS.sub.PRODi(f, x.sub.i, y.sub.i, z.sub.i) that is associated with an absolute maximum power density in each said point-dependent
discrete power spectrum product DPS.sub.PRODi (f, x.sub.i, y.sub.i, z.sub.i).
31. The computer program product of claim 12, further comprising during execution of the step of computing a point-dependent product dominant frequency DF.sub.PRODi (x.sub.i, y.sub.i, z.sub.i), mapping each point-dependent product dominant frequency DF.sub.PRODi (x.sub.i, y.sub.i, z.sub.i) of said set of point-dependent product dominant frequencies {DF.sub.PRODi (x.sub.i, y.sub.i, z.sub.i)} to said acquisition point P.sub.i(x.sub.i, y.sub.i, z.sub.i) of said acquisition set of points {P.sub.i(x.sub.i, y.sub.i, z.sub.i)} with which said point-dependent product dominant frequency DF.sub.PRODi (x.sub.i, y.sub.i, z.sub.i) is associated.
32. The computer program product of claim 12, further comprising during execution of the step of assigning the coordinates of said maximum point-dependent product dominant frequency DF.sub.MAXPRODi (x.sub.i, y.sub.i, z.sub.i) to said SSFA: a. computing a point-dependent product regularity index RI.sub.PRODi(x.sub.i, y.sub.i, z.sub.i) for each point-dependent discrete product power spectrum DPS.sub.PRODi(f, x.sub.i, y.sub.i, z.sub.i) of said set of point-dependent discrete power spectrum products {DPS.sub.PRODi(f, x.sub.i, y.sub.i, z.sub.i)}, thereby forming a set of point-dependent product regularity indices {RI.sub.PRODi(x.sub.i, y.sub.i, z.sub.i)}; b. verifying said assignment of the coordinates of said maximum point-dependent product dominant frequency DF.sub.MAXPRODi(x.sub.i, y.sub.i, z.sub.i) to said SSFA by interpreting the value of its corresponding point-dependent product regularity index RI.sub.PRODi(x.sub.i, y.sub.i, z.sub.i).
33. The computer program product of claim 32, wherein said point-dependent product regularity index RI.sub.PRODi(x.sub.i, y.sub.i, z.sub.i) comprises a ratio of a power contained in a point-dependent product dominant frequency band .DELTA..sub.PRODiDF to a total power computed at all frequencies of said point-dependent discrete power spectrum product DPS.sub.PRODi(f, x.sub.i, y.sub.i, z.sub.i), said point-dependent product dominant frequency band .DELTA..sub.PRODiDF being a frequency band centered about a point-dependent product dominant frequency DF.sub.PRODi(x.sub.i, y.sub.i, z.sub.i) and having a width of about three times a frequency resolution .DELTA.f.sub.i.
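Under one plausible reading of the regularity index defined in claim 33, it is the fraction of total spectral power lying within a band of width three frequency-resolution steps centered on the dominant frequency. A minimal sketch of that reading, with all names (`regularity_index`, `freqs`, `dps_prod`) chosen for illustration and the band interpreted as dominant frequency plus or minus 1.5 resolution steps:

```python
def regularity_index(freqs, dps_prod, df_prod, delta_f):
    """Ratio of the power near the dominant frequency to the total power.

    The band is centered on df_prod with total width 3 * delta_f, i.e.
    it spans df_prod +/- 1.5 * delta_f (an assumed reading of claim 33)."""
    half_band = 1.5 * delta_f
    band_power = sum(p for f, p in zip(freqs, dps_prod)
                     if abs(f - df_prod) <= half_band)
    return band_power / sum(dps_prod)
```

A value near 1 indicates a product spectrum whose power is concentrated at the dominant frequency, so the peak is likely trustworthy when verifying an SSFA assignment; lower values flag more diffuse spectra.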
34. The computer program product of claim 32, further comprising during execution of the step of computing a point-dependent product regularity index RI.sub.PRODi(x.sub.i, y.sub.i, z.sub.i), mapping each point-dependent product regularity index RI.sub.PRODi(x.sub.i, y.sub.i, z.sub.i) of said set of point-dependent product regularity indices {RI.sub.PRODi(x.sub.i, y.sub.i, z.sub.i)} to said acquisition point P.sub.i(x.sub.i, y.sub.i, z.sub.i) of said acquisition set of points {P.sub.i(x.sub.i, y.sub.i, z.sub.i)} with which said product regularity index RI.sub.PRODi(x.sub.i, y.sub.i, z.sub.i) is associated.
Description: BACKGROUND OF THE INVENTION
1. Technical Field
The present invention generally relates to a method and algorithm for spatially identifying sources generative of cardiac fibrillation, and in particular, atrial fibrillation.
2. Related Art
2a. Atrial Fibrillation: Epidemiology, Incidence and Prevalence
Atrial fibrillation (AF) is the most frequently occurring sustained cardiac rhythm disturbance ("arrhythmia") in humans. AF may be intermittent or paroxysmal, or it may be a stable arrhythmia that may last for many years. One to two million Americans have chronic AF. Epidemiologic studies have shown that the prevalence and incidence of AF doubles with each advancing decade beyond 50 years of age. Although not usually considered a life-threatening arrhythmia, AF has been associated with a two-fold increase in total and cardiovascular mortality. Factors that may increase mortality in AF include age, mitral stenosis, aortic valve disease, coronary artery disease, hypertension, and congestive heart failure.
Clinically, AF is often categorized as:
[i] paroxysmal--generally characterized by predominant sinus rhythm with intermittent episodes of AF;
[ii] chronic--persistent or permanent AF;
[iii] acute--an episode of AF with an onset within 24 to 48 hours of diagnosis; and,
[iv] lone--variably defined, but generally considered to occur in the absence of cardiac disease.
The most clinically important consequences of AF are thromboembolic events and stroke. A four-fold to six-fold increased risk of stroke (15-fold in patients with a history of rheumatic heart disease) makes this arrhythmia one of the most potent risk factors for stroke in the elderly and the most common cause of cardiogenic stroke. The risk of stroke in nonvalvular AF varies with age and with the presence of concomitant cardiovascular disease and other risk factors for stroke. Most strokes associated with AF appear to be caused by cardiac emboli, presumably formed in fibrillating atria.
The presence of persistent rapid ventricular rates in association with AF may lead to impairment of ventricular function by a mechanism similar to that of tachycardia-mediated cardiomyopathy. This condition may be reversible. Improved ventricular function has been reported after complete atrioventricular (AV) node ablation, medical control of ventricular rate, or achievement of sinus rhythm. Evidence for development of atrial myopathy has also been reported in patients with AF in the absence of valvular disease. Mechanical and electrical cardiac remodeling could also promote further propensities toward AF and thromboembolism.
The most common underlying cardiovascular diseases associated with AF are hypertension and ischemic heart disease. Valvular heart disease, congestive heart failure, hypertension, and diabetes have been shown to be independent risk factors for AF. Other associated conditions include pulmonary embolism, thyrotoxicosis, chronic obstructive pulmonary disease, the Wolff-Parkinson-White syndrome, pericarditis, neoplastic disease, and the postoperative state. The cardiac rhythm of a normal heart may be precipitated into AF by excessive alcohol, stress, drugs, excessive caffeine, hypoxia, hypokalemia, hypoglycemia, and systemic infection.
Morbidity attributable to AF also includes limitation in functional capacity from symptoms of palpitations, fatigue, dyspnea, angina, or congestive heart failure.
2b. Normal Cardiac Electrophysiology
The heart is a blood pumping organ consisting of four chambers--two atria and two ventricles. The normal function of the heart depends on the periodic and synchronized contraction of the walls of its four chambers. The walls of the heart are comprised of millions of cells called cardiomyocytes, whose interiors are maintained at a transmembrane potential difference of about 70 millivolts relative to the external environment; i.e., the cardiomyocytes are in a state of relative voltage polarization. The synchronized mechanical contraction of the walls of the heart's chambers is triggered by the sequential and coordinated depolarization of their cardiomyocytes. The measured aggregate manifestation of this depolarization of the resting transmembrane potential difference in cardiomyocytes is called an action potential or depolarization impulse.
The normal propagation of every cardiac action potential starts spontaneously at a region of the heart's right atrium ("RA") known as the sino-atrial ("SA") node, from which the action potential spreads throughout both atrial walls, causing their synchronous contraction, and toward a region known as the atrio-ventricular ("AV") node. From the AV node, the action potential propagates as a depolarization wave front into a specialized conduction system known as the His-Purkinje system, whose terminal branches conduct the action potential into the walls of the right and left ventricles.
The normal propagation of the action potential's wave front of depolarization in the walls of the atria and the ventricles is relatively continuous and uninterrupted. The normal contraction of the heart accompanying the propagation of the depolarization wave front is called normal sinus rhythm ("NSR"). NSR depends on normal propagation of the action potential, which must always originate at the SA node, as opposed to some other ectopic focus of origin, and must always spread from the SA node precisely in the foregoing sequence of transmission to the AV node, and thence to and through the His-Purkinje conduction system.
2c. Electrophysiology of Atrial Fibrillation
Certain self-sustaining, irregular and non-physiologically sequential depolarization impulses ("arrhythmias") may arise from one or more ectopic (non-SA node--either pacemaker or reentrant) foci and either impair or eliminate the normal contracting rhythm of the heart, thereby impairing or destroying the heart's capacity to pump blood. Atrial fibrillation and ventricular fibrillation are two such arrhythmias.
During atrial fibrillation ("AF"), multiple depolarization wave fronts are generated in the atria, giving rise to vermiform atrial contractions responding to depolarization wave fronts that often have frequencies in excess of 400 cycles per minute. This rapid, disordered atrial activation results in loss of coordinated atrial contraction, with irregular electrical conduction to the AV node and His-Purkinje system, leading to sporadic ventricular contractions.
On the surface electrocardiogram ("ECG"), AF is characterized by the absence of visible discrete P waves or the presence of irregular fibrillatory waves, or both, and an irregular
ventricular response.
Sustained AF depends on the uninterrupted aberrant periodic electrical activity of at least one discrete primary ectopic focus, hereinafter called a sustaining source of fibrillatory activity ("SSFA"), that may behave as a reentrant circuit. The reentrant circuit is established by the interaction of propagating wave fronts of cardiac depolarization with either anatomical or functional obstacles, i.e., tissue regions of variable refractoriness or excitability acting as intermittent conduction blocks, in a region of the heart, such as, for example, the right atrium ("RA") or the right ventricle ("RV"), in a process called "vortex shedding." These reentrant circuits act as sources ("mother rotors") that generate high-frequency depolarization wave fronts ("mother waves") emanating in rapid succession that propagate through both atria and interact with anatomic or functional obstacles acting as intermittent conduction blocks, thereby maintaining the overall fibrillatory activity. Some of these anatomic or functional obstacles become secondary ectopic foci, themselves generative of aberrant depolarization daughter wavelets having lower frequencies.
Some of these daughter wavelets may attenuate in amplitude and undergo decremental conduction. Others may be annihilated by collision with another daughter wavelet or a boundary; and, still others conduct circuitously to create new vortices of depolarization. The end result is the fragmentation or fractionation of the secondary depolarizing wave fronts emanating from these reentrant circuits into multiple independent daughter wavelets, giving rise to new wavelets, and so on--in a perpetual, globally aperiodic pattern that characterizes fibrillatory activity.
Sustained AF is a function of several factors, including a nonuniform distribution of reentrant circuits having relatively brief refractory periods over a sufficiently large area of cardiac tissue, with the concomitant fractionation of a mother wave into a large number of independent daughter wavelets, possibly also having low conduction velocities.
2d. Atrial Fibrillation: Therapeutic Approaches
Radiofrequency ("RF") ablation of atrial tissue by application of energy through cardiac catheters has become a major therapeutic method for atrial fibrillation in patients. The RF ablation procedure consists of beneficially altering the electrical properties of cardiac tissue in the vicinity of the ablating catheter tip. The extent to which tissue is altered depends on the power and duration of the application, as well as on the characteristics of the tissue itself. For a typical RF ablation, a power of 20 to 40 Watts is delivered for 6 to 10 minutes to create an altered substrate in a cardiac volume with a radius of about 5 mm around the catheter tip.
The efficacy of RF ablation is suboptimal because of imprecise localization of the tissue hosting the AF sources that are targeted. This situation prevails because methods for mapping sources of fibrillation rely on educated guesswork based upon subjective inferences from clinical electrophysiological data and vague identification criteria. Extensive ablation sufficient to modify cardiac tissues can cure many types of AF, but it exposes the patient to a higher risk of complications and to unacceptable fluoroscopy exposure times; on the other hand, more selective ablation that targets localized ectopic foci is safer, but may be less likely to effect a permanent cure of the AF, which may become prone to recurrences. Accordingly, there is a need for improved targeting of RF ablation and other surgical interventions that seek to neutralize AF.
The present invention comprises an automated method for the detection and spatial identification of sources of fibrillation that is far more rapid and reliable than prevailing methods. Accordingly, the present invention may be expected to substantially reduce the duration of RF ablation and improve the success rate of the procedure by providing real-time spectrally guided RF ablation in patients.
SUMMARY OF THE INVENTION
The present invention provides a method of identifying the spatial coordinates of at least one sustaining source of fibrillatory activity ("SSFA") in a heart, and a computer program product, comprising a computer usable medium having a computer readable program code embodied therein, wherein the computer readable program code comprises an algorithm adapted to execute the method of identifying the spatial coordinates of at least one SSFA, the method comprising the steps of: simultaneously acquiring a unipolar time-dependent depolarization signal and a corresponding bipolar time-dependent depolarization signal from each acquisition point of a set of acquisition points of the heart, each acquisition point having unique spatial coordinates; forming a set of unipolar time-and-point-dependent depolarization signals by assigning to each unipolar time-dependent depolarization signal the spatial coordinates of the acquisition point from which it was acquired; and, forming a set of corresponding bipolar time-and-point-dependent depolarization signals by assigning to each corresponding bipolar time-dependent depolarization signal the spatial coordinates (x.sub.i, y.sub.i, z.sub.i) of the acquisition point from which it was simultaneously acquired; forming a set of unipolar point-dependent discrete power spectra by computing a unipolar point-dependent discrete power spectrum for each unipolar time-and-point-dependent depolarization signal; and, forming a set of bipolar point-dependent discrete power spectra by computing a bipolar point-dependent discrete power spectrum for each corresponding bipolar time-and-point-dependent depolarization signal; forming a set of point-dependent discrete power spectrum products by multiplying each unipolar point-dependent discrete power spectrum by each corresponding bipolar point-dependent discrete power spectrum; computing a point-dependent product dominant frequency for each point-dependent discrete product power spectrum, thereby forming a set of point-dependent product dominant frequencies; selecting a maximum point-dependent product dominant frequency from the set of point-dependent product dominant frequencies; assigning the spatial coordinates of the maximum point-dependent product dominant frequency to the SSFA.
The present invention advantageously provides a rapid, efficient, sensitive and specific computer implemented method for detecting and identifying sources of cardiac fibrillation, thereby providing precision targeting for surgical intervention and termination of cardiac fibrillation.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1A shows an exemplary unipolar electrogram.
FIG. 1B shows a unipolar power spectrum corresponding to the exemplary unipolar electrogram of FIG. 1A.
FIG. 2A shows an exemplary bipolar electrogram.
FIG. 2B shows a bipolar power spectrum corresponding to the exemplary bipolar electrogram of FIG. 2A.
FIG. 3 shows the power spectrum product obtained from the multiplication of the unipolar power spectrum shown in FIG. 1B with the bipolar power spectrum shown in FIG. 2B.
FIGS. 4A-4C show graphs of three exemplary point-and-time-dependent depolarization signals and corresponding graphs of point-dependent discrete power spectra.
FIG. 5 shows a flowchart that outlines the SSFA Identification Method and Algorithm.
FIG. 6 shows a flowchart that outlines the FFT and Power Spectrum Module of the SSFA Identification Method and Algorithm.
FIG. 7 schematically illustrates a computer system for implementing the SSFA Identification Algorithm, in accordance with embodiments of the present invention.
DETAILED DESCRIPTION OF THE INVENTION
5a. The Heart and Fibrillation
As used herein, the term heart refers to a mammalian or human heart and includes but is not limited to its epicardial surfaces, endocardial surfaces, chambers, vessels, valves, nodes, conduction pathways, conduction bundles, muscles and allied structures.
As used herein, the term acquisition point refers to a point on or within the heart from which a unipolar and bipolar depolarization signal have been simultaneously acquired.
As used herein, the term fibrillation refers to all irregular rhythms of the heart having rates that are faster than the normal sinus rhythm rate of the heart, i.e., greater than about 40 beats per minute, including without limitation, atrial flutter, atrial fibrillation, ventricular flutter, ventricular fibrillation, monomorphic and polymorphic tachycardia, and torsade de pointes.
5b. Electrocardiogram
The electrical activity of the heart can be monitored because the action potential generated by a myocyte can be detected by devices that sense the electrical field changes it produces.
The electrical activity of the heart is most commonly recorded and measured by use of a surface electrocardiogram ("ECG"), whose twelve electrodes ("leads") are applied to locations on the body's surface determined by long-established convention. The ECG leads independently measure and record twelve time-dependent macroscopic voltage changes at twelve orientations about the heart.
5c. Unipolar and Bipolar Time-Dependent Depolarization Signals
When more detailed information about the heart's electrical activity is necessary, a cardiac signal acquisition device may be disposed within the heart to acquire, i.e., to detect, measure, record and output as a signal, the heart's electrical activity from its endocardial surfaces. The electrical activity of the heart may also be acquired by a cardiac signal acquisition device from its epicardial surfaces or from within any of its tissues, such as, for example, from within its muscle tissue.
The cardiac signal acquisition device may function on the basis of electrical, optical, acoustic, or other signal acquisition and transduction methods, well known in the cardiac electrophysiological arts, whose time-dependent output is correlated with the electric depolarization of a cardiac myocyte; and, as used herein, is referred to as a time-dependent depolarization signal S.sub.i(t). A recorded time-dependent depolarization signal S.sub.i(t) is called an electrogram.
The cardiac signal acquisition devices used herein simultaneously acquire the heart's electrical activity in both unipolar and bipolar modes. For example, a cardiac signal acquisition device may comprise two electrodes, spaced about 1 mm apart, that simultaneously record the heart's electrical activity as a unipolar time-dependent depolarization signal S.sub.UPi(t) and a corresponding bipolar time-dependent depolarization signal S.sub.BPi(t), each describing the electrical activity at the contact points of the electrodes with an endocardial surface of the heart.
As described more fully infra., a time-dependent depolarization signal S.sub.i(t) derived from a point P.sub.i(x.sub.i, y.sub.i, z.sub.i) on or within the heart may be associated with a point-dependent dominant frequency DF.sub.i(x.sub.i, y.sub.i, z.sub.i) that may be identified from a point-dependent discrete power spectrum DPS.sub.i(f, x.sub.i, y.sub.i, z.sub.i) derived from the time-dependent depolarization signal S.sub.i(t). A unipolar time-dependent depolarization signal S.sub.UPi(t) and a bipolar time-dependent depolarization signal S.sub.BPi(t) are combined in the present invention to improve the identification of the point-dependent dominant frequency DF.sub.i(x.sub.i, y.sub.i, z.sub.i).
A bipolar time-dependent depolarization signal S.sub.BPi(t) removes far field electrical activity, but power contained in the high frequency range of its point-dependent discrete power spectrum DPS.sub.i(f, x.sub.i, y.sub.i, z.sub.i) may exceed the power contained in the lower frequency range of the point-dependent discrete power spectrum DPS.sub.i(f, x.sub.i, y.sub.i, z.sub.i) at which the heart is beating (i.e., the beating frequency). The power contained in the beating frequency range may be further degraded by the low signal-to-noise ratio that is typical of bipolar signals. While the power spectra of unipolar time-dependent depolarization signals contain less power in their high frequency ranges, unipolar signals may be significantly distorted by far-field electrical activity.
In view of the different spectral properties of unipolar and bipolar signals, the present invention advantageously multiplies their respective power spectra to enhance the power contained in a common band of local electrical excitation. Mathematically, the multiplication of a unipolar power spectrum by a bipolar power spectrum is equivalent to convolution of the unipolar and bipolar signals, which convolution results in the screening out of the uncommon distorting elements, such as harmonics and far-field effects.
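The effect of multiplying the two power spectra can be sketched numerically. The following is a minimal illustration, not the patented implementation: the signal frequencies, amplitudes, and sampling parameters are hypothetical, and a direct DFT is used for clarity where an FFT would be used in practice.

```python
import cmath
import math

def power_spectrum(x):
    """Discrete power spectrum P(n) = |X(n)|^2 via a direct DFT.
    (O(N^2) for clarity; an FFT would be used in practice.)"""
    N = len(x)
    return [abs(sum(x[k] * cmath.exp(-2j * cmath.pi * n * k / N)
                    for k in range(N))) ** 2
            for n in range(N // 2)]          # non-negative frequencies only

def product_spectrum(unipolar, bipolar):
    """Bin-wise product of corresponding unipolar and bipolar power spectra."""
    return [a * b for a, b in zip(power_spectrum(unipolar),
                                  power_spectrum(bipolar))]

# Hypothetical signals sharing an 8 Hz local-activation component; the unipolar
# channel adds a 3 Hz far-field term, the bipolar channel a 20 Hz term.
fs, N = 100.0, 200    # sampling rate (Hz) and sample count -> df = 0.5 Hz
t = [k / fs for k in range(N)]
unipolar = [math.sin(2 * math.pi * 8 * tk) + 0.8 * math.sin(2 * math.pi * 3 * tk) for tk in t]
bipolar  = [math.sin(2 * math.pi * 8 * tk) + 0.8 * math.sin(2 * math.pi * 20 * tk) for tk in t]

prod = product_spectrum(unipolar, bipolar)
dominant_frequency = max(range(len(prod)), key=prod.__getitem__) * fs / N
```

In this sketch the 8 Hz component common to both channels survives the multiplication as the dominant frequency, while the 3 Hz and 20 Hz peaks, each present in only one spectrum, are strongly suppressed in the product.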
FIG. 1A shows an exemplary unipolar time-dependent depolarization signal acquired from an endocardial point; and, FIG. 1B shows an exemplary unipolar power spectrum corresponding to the exemplary unipolar time-dependent depolarization signal of FIG. 1A. The ordinate in FIG. 1A shows the relative amplitude of the exemplary unipolar time-dependent depolarization signal. The abscissa in FIG. 1A shows a scale marking 500 ms. The ordinate in FIG. 1B is labeled "P1," and indicates the power per unit frequency ("power density"). The abscissa in FIG. 1B is labeled "Frequency (Hz)."
FIG. 2A shows an exemplary bipolar time-dependent depolarization signal acquired from the endocardial point of FIG. 1A; and, FIG. 2B shows an exemplary bipolar power spectrum corresponding to the time-dependent depolarization signal of FIG. 2A. The ordinate in FIG. 2A shows the relative amplitude of the exemplary bipolar time-dependent depolarization signal. The abscissa in FIG. 2A shows a scale marking 500 ms. The ordinate in FIG. 2B is labeled "P2," and indicates the power per unit frequency ("power density"). The abscissa in FIG. 2B is labeled "Frequency (Hz)."
In FIG. 1B and FIG. 2B, the frequency associated with the highest power density in both the unipolar power spectrum and the bipolar power spectrum--the "dominant frequency" DF--is identified at about 7.57 Hz. However, the degree to which the dominant frequency is the exclusive contributor to its associated time-dependent depolarization signal is influenced by the presence of other secondary frequencies at which significant power density peaks in both power spectra arise.
As more fully described infra., the level of certainty in the identification of the dominant frequency as the exclusive contributor to its associated time-dependent depolarization signal may be quantitated by computing a regularity index RI. The closer the regularity index RI is to 1, the greater the extent to which the dominant frequency is the exclusive contributor to its associated time-dependent depolarization signal. The closer the regularity index RI is to 0, the smaller the extent to which the dominant frequency is the exclusive contributor to its associated time-dependent depolarization signal. As shown in FIG. 1B, the RI of the unipolar power spectrum is about 0.11. As shown in FIG. 2B, the RI of the bipolar power spectrum is about 0.22.
FIG. 3 shows the power spectrum product obtained from the multiplication of the exemplary unipolar power spectrum shown in FIG. 1B by the exemplary bipolar power spectrum shown in FIG. 2B. The ordinate in FIG. 3 is labeled "P1.times.P2," and indicates the power per unit frequency ("power density"). The abscissa in FIG. 3 is labeled "Frequency (Hz)."
FIG. 3 shows that the power spectrum product obtained by multiplying the unipolar power spectrum by the bipolar power spectrum preserves the dominant frequency of about 7.57 Hz. However, relative to both the unipolar power spectrum and the bipolar power spectrum, the number of secondary power peaks in the power spectrum product is reduced together with the amplitudes of the secondary power peaks.
The reduction in the number and amplitude of secondary peaks obtained by multiplying a unipolar power spectrum from an endocardial point by its corresponding bipolar power spectrum from the same endocardial point has the advantageous effect of making the identification of the dominant frequency easier, and of increasing the level of certainty in the identification of the dominant frequency as the exclusive contributor to its associated time-dependent depolarization signal. This is indicated in FIG. 3 by the increase in the regularity index RI of over 50%, to a value of about 0.34.
5d. "Roving" Signal Acquisition Mode
The method for identifying the spatial coordinates of sustaining sources of fibrillatory activity and the algorithm adapted to execute the method (hereinafter "SSFA Identification Method and Algorithm") described herein assigns to a time-dependent depolarization signal S.sub.i(t) the coordinates of a point P.sub.i(x.sub.i, y.sub.i, z.sub.i) on or within the heart from which the time-dependent depolarization signal S.sub.i(t) is acquired by a cardiac signal acquisition device, thereby forming a point-and-time-dependent depolarization signal S.sub.i(t, x.sub.i, y.sub.i, z.sub.i).
In the present invention, a "roving" cardiac signal acquisition device is used to sequentially probe a relatively inaccessible cardiac region, such as, for example, the atria, acquiring a time-dependent depolarization signal S.sub.i(t) from one location before being directed to another location. In a patient with fibrillation, the roving cardiac signal acquisition device may, for example, be used to record real-time episodes of atrial fibrillation over an acquisition time T of, for example, 5 seconds.
6. Spectral Analysis
Abnormalities in the form and propagation of a time-dependent depolarization signal S.sub.i(t) may be correlated with changes in its corresponding mathematical representation x(t). However, more useful information about an abnormal time-dependent depolarization signal S.sub.i(t) may be obtained from a study--a spectral analysis--of the mathematical properties of its frequency spectrum X(f). A spectral analysis is used in the present invention to compute a spatial identification within a coordinate system of the location of the electrophysiological source of a fibrillating time-dependent depolarization signal S.sub.i(t). The identification of such a source, as described infra., provides a target for intervention and termination of the fibrillation.
6a. Fourier Series
Generally, an integrable function of time having a period T, with a finite number of maxima and minima within T, and a finite number of discontinuities in T, can be represented as a Fourier series comprising a fundamental periodic function (sine or cosine) having a fundamental frequency and an infinite superposition of sine and cosine functions whose arguments are integer multiples of that fundamental frequency. These sine and cosine functions are called harmonics. A plot of the magnitudes of the amplitudes of these sines and cosines against their corresponding frequencies forms the frequency spectrum of the function of time.
6b. The Fourier Transform and the Frequency Spectrum
The Fourier transform is a generalization of the Fourier series applicable to aperiodic functions of time. The Fourier transform X(f) is a frequency domain representation of a function
x(t) defined as:
X(f) = ∫_{-∞}^{+∞} x(t) e^{-i2πft} dt (1)

The inverse Fourier transform is defined as:

x(t) = ∫_{-∞}^{+∞} X(f) e^{i2πft} df (2)

X(f) is called the frequency spectrum of x(t).

6c. The Power Spectrum

The power spectrum P(f) of x(t) is proportional to the energy per unit frequency interval of the frequency spectrum X(f) and is given by the product of X(f) with its complex conjugate X*(f):

P(f) = |X(f)|^2 = X(f)X*(f) (3)

6d. The Discrete Fourier Transform and the Fast Fourier Transform
Because a digital computer works only with discrete data, numerical computation of the Fourier transform of x(t) requires transformations of discretely sampled values of x(t) to yield a series of recorded values x(n). The equations which provide the digital analogues of the Fourier transform for discretely sampled data, such as, for example, a time-dependent depolarization signal S.sub.i(t), are called the discrete Fourier transform ("DFT"). A fast Fourier transform ("FFT") is a DFT algorithm.

A DFT is applied to a discretely sampled time-dependent depolarization signal S.sub.i(t) that is represented as a real-valued series having N samples x(k) of the form x.sub.0, x.sub.1, x.sub.2, . . . , x.sub.k, . . . , x.sub.N-1, where the time at the kth sampling of S.sub.i(t) is kΔt, Δt being the sampling interval in seconds. The DFT from the time domain t into the frequency domain f is then given by:

X(n) = Σ_{k=0}^{N-1} x(k) e^{-i2π(nΔf)(kΔt)}, n = 0, 1, . . . , N-1 (4)

where nΔf is the frequency and Δf is a fixed frequency interval, also known as the basic harmonic, or the frequency resolution. The frequency interval Δf is related to the sampling interval Δt and the number of samples N that are taken by

Δf = 1/(NΔt) (5)

6e. The Discrete Frequency Spectrum
X(n) is the discrete frequency spectrum of x(k). X(n) is complex, containing a real and an imaginary component; i.e.,

X(n) = X.sub.re(n) + iX.sub.im(n) (6)

The discretely sampled S.sub.i(t) is acquired with a sampling rate f.sub.s over an acquisition time having a duration

T = NΔt (7)

The sampling rate f.sub.s is related to the acquisition time T by

f.sub.s = N/T = 1/Δt (8)

The frequency resolution Δf is related to the sampling rate f.sub.s by

Δf = 1/(NΔt) = 1/T = f.sub.s/N (9)

6f. The Discrete Power Spectrum

X(n) is commonly expressed as a discrete power spectrum P(n) that is proportional to the energy per unit frequency interval of the discrete frequency spectrum X(n), and is given by the product of X(n) with its complex conjugate X*(n):

P(n) = X(n)X*(n) (10)

7. Spatial Identification of Sustaining Sources of Fibrillatory Activity

7a. Notation
The description of the SSFA Identification Method and Algorithm utilizes the notation scheme appearing in TABLE 1.
TABLE-US-00001 TABLE 1 NOMENCLATURE OF TERMS, ELEMENTS AND SETS

Element or Term Symbol -- Interpretation -- Set Symbol

cP.sub.i(x.sub.i, y.sub.i, z.sub.i) or cP.sub.i -- Cardiac points -- {cP.sub.i(x.sub.i, y.sub.i, z.sub.i)} or {cP.sub.i}
P.sub.i(x.sub.i, y.sub.i, z.sub.i) or P.sub.i -- Acquisition points -- {P.sub.i(x.sub.i, y.sub.i, z.sub.i)} or {P.sub.i}
S.sub.UP(t) -- Time-dependent unipolar depolarization signal
S.sub.BP(t) -- Time-dependent bipolar depolarization signal
S.sub.UPi(t, x.sub.i, y.sub.i, z.sub.i) -- Time-and-point-dependent unipolar depolarization signal -- {S.sub.UPi(t, x.sub.i, y.sub.i, z.sub.i)} or {S.sub.UPi}
S.sub.BPi(t, x.sub.i, y.sub.i, z.sub.i) -- Time-and-point-dependent bipolar depolarization signal -- {S.sub.BPi(t, x.sub.i, y.sub.i, z.sub.i)} or {S.sub.BPi}
DFS.sub.UPi(f, x.sub.i, y.sub.i, z.sub.i) or DFS.sub.UPi -- Point-dependent discrete unipolar frequency spectrum
DFS.sub.BPi(f, x.sub.i, y.sub.i, z.sub.i) or DFS.sub.BPi -- Point-dependent discrete bipolar frequency spectrum
DPS.sub.i(f, x.sub.i, y.sub.i, z.sub.i) or DPS.sub.i -- Point-dependent discrete power spectrum -- {DPS.sub.i(f, x.sub.i, y.sub.i, z.sub.i)} or {DPS.sub.i}
DPS.sub.UPi(f, x.sub.i, y.sub.i, z.sub.i) or DPS.sub.UPi -- Point-dependent discrete unipolar power spectrum -- {DPS.sub.UPi(f, x.sub.i, y.sub.i, z.sub.i)} or {DPS.sub.UPi}
DPS.sub.BPi(f, x.sub.i, y.sub.i, z.sub.i) or DPS.sub.BPi -- Point-dependent discrete bipolar power spectrum -- {DPS.sub.BPi(f, x.sub.i, y.sub.i, z.sub.i)} or {DPS.sub.BPi}
DPS.sub.PRODi(f, x.sub.i, y.sub.i, z.sub.i) or DPS.sub.PRODi -- Point-dependent discrete power spectrum product -- {DPS.sub.PRODi(f, x.sub.i, y.sub.i, z.sub.i)} or {DPS.sub.PRODi}
DF.sub.UPi(x.sub.i, y.sub.i, z.sub.i) or DF.sub.UPi -- Point-dependent unipolar dominant frequency
DF.sub.BPi(x.sub.i, y.sub.i, z.sub.i) or DF.sub.BPi -- Point-dependent bipolar dominant frequency
DF.sub.i(x.sub.i, y.sub.i, z.sub.i) or DF.sub.i -- Point-dependent dominant frequency -- {DF.sub.i(x.sub.i, y.sub.i, z.sub.i)} or {DF.sub.i}
DF.sub.PRODi(x.sub.i, y.sub.i, z.sub.i) or DF.sub.PRODi -- Point-dependent product dominant frequency -- {DF.sub.PRODi(x.sub.i, y.sub.i, z.sub.i)} or {DF.sub.PRODi}
RI.sub.i(x.sub.i, y.sub.i, z.sub.i) or RI.sub.i -- Point-dependent regularity index -- {RI.sub.i(x.sub.i, y.sub.i, z.sub.i)} or {RI.sub.i}
RI.sub.UPi(x.sub.i, y.sub.i, z.sub.i) or RI.sub.UPi -- Unipolar point-dependent regularity index -- {RI.sub.UPi(x.sub.i, y.sub.i, z.sub.i)} or {RI.sub.UPi}
RI.sub.BPi(x.sub.i, y.sub.i, z.sub.i) or RI.sub.BPi -- Bipolar point-dependent regularity index -- {RI.sub.BPi(x.sub.i, y.sub.i, z.sub.i)} or {RI.sub.BPi}
RI.sub.PRODi(x.sub.i, y.sub.i, z.sub.i) or RI.sub.PRODi -- Point-dependent product regularity index -- {RI.sub.PRODi(x.sub.i, y.sub.i, z.sub.i)} or {RI.sub.PRODi}
DF.sub.MAXi(x.sub.i, y.sub.i, z.sub.i) -- Maximum point-dependent dominant frequency
DF.sub.MAXPRODi(x.sub.i, y.sub.i, z.sub.i) -- Maximum point-dependent product dominant frequency
Δf.sub.i -- Frequency resolution
Δ.sub.iDF -- Dominant frequency band
F.sub.lim1 -- First frequency limit
F.sub.lim2 -- Second frequency limit
F.sub.lim3 -- Third frequency limit
F.sub.lim4 -- Fourth frequency limit
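The discrete spectral relationships of Sections 6d-6f (Equations (4), (5), (7)-(10)) can be sketched numerically as follows. The sampling rate and acquisition time are illustrative assumptions (the 5-second value echoes the roving acquisition time mentioned supra.), and a direct DFT stands in for the FFT.

```python
import cmath

# Illustrative sampling parameters (assumptions, not values fixed by the method).
fs = 1000.0            # sampling rate f_s in Hz
T = 5.0                # acquisition time in seconds
N = int(fs * T)        # number of samples: T = N * dt          (Eq. 7)
dt = 1.0 / fs          # sampling interval: f_s = 1/dt = N/T    (Eq. 8)
df = fs / N            # frequency resolution: df = f_s/N = 1/T (Eq. 9)

def discrete_power_spectrum(x):
    """P(n) = X(n) * conj(X(n)) for a real-valued series x(k) (Eq. 10),
    with X(n) computed by a direct DFT (Eq. 4)."""
    N = len(x)
    P = []
    for n in range(N):
        X_n = sum(x[k] * cmath.exp(-2j * cmath.pi * n * k / N) for k in range(N))
        P.append((X_n * X_n.conjugate()).real)
    return P

# Sanity check of Eq. 10: a unit impulse has a flat power spectrum.
flat = discrete_power_spectrum([1.0, 0.0, 0.0, 0.0])
```

With these assumed parameters, N = 5000 samples and the frequency resolution Δf = 0.2 Hz, i.e., each DFT bin n corresponds to a frequency nΔf.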
7b. Dominant Frequency
FIG. 4 shows graphs of three exemplary point-and-time-dependent depolarization signals A, B and C acquired by a cardiac signal acquisition device from three different locations ("acquisition points") on the posterior endocardial wall of the left atrium of a patient with paroxysmal atrial fibrillation. For purposes of illustration, the graphs shown in FIG. 4 are intended to generically represent either unipolar or bipolar depolarization signals.
The ordinate of each graph shows the relative amplitude of an exemplary point-and-time-dependent depolarization signal S.sub.i(t, x.sub.i, y.sub.i, z.sub.i) in volts and the abscissa of each graph shows time (relative to a scale bar of 500 ms). To the left of each graph of each exemplary point-and-time-dependent depolarization signal there is shown a graph of its corresponding point-dependent discrete power spectrum DPS.sub.i(f, x.sub.i, y.sub.i, z.sub.i) or DPS.sub.i.
The ordinate of each graph of a discrete power spectrum shows the power per unit frequency ("power density"). The abscissa of each power spectrum graph shows frequency in Hz.
As shown in FIG. 4, during an episode of fibrillation, a discrete point-dependent power spectrum DPS.sub.i(f, x.sub.i, y.sub.i, z.sub.i) computed from a time-and-point-dependent depolarization signal S.sub.i(t, x.sub.i, y.sub.i, z.sub.i), acquired from an acquisition point P.sub.i(x.sub.i, y.sub.i, z.sub.i) on or within the heart, is characterized by a set of discrete peaks having bandwidths that are distributed across a frequency range of about 3 Hz to about 15 Hz.
The dominant frequency (designated in FIG. 4 by the letters "DF") is the frequency in the point-dependent discrete power spectrum DPS.sub.i(f, x.sub.i, y.sub.i, z.sub.i), derived from a time-and-point-dependent depolarization signal S.sub.i(t, x.sub.i, y.sub.i, z.sub.i) acquired from that acquisition point P.sub.i(x.sub.i, y.sub.i, z.sub.i) on or within the heart, that is associated with an absolute maximum power density (i.e., maximum amplitude) in the point-dependent discrete power spectrum DPS.sub.i(f, x.sub.i, y.sub.i, z.sub.i).

The SSFA Identification Method and Algorithm assigns to the dominant frequency the coordinates assigned to the time-and-point-dependent depolarization signal S.sub.i(t, x.sub.i, y.sub.i, z.sub.i) from which it is derived, thereby forming a point-dependent dominant frequency DF.sub.i(x.sub.i, y.sub.i, z.sub.i) or DF.sub.i. The point-dependent dominant frequency DF.sub.i(x.sub.i, y.sub.i, z.sub.i) is considered the activation frequency of its associated time-and-point-dependent depolarization signal S.sub.i(t, x.sub.i, y.sub.i, z.sub.i).
The unipolar point-dependent dominant frequency DF.sub.UPi(x.sub.i, y.sub.i, z.sub.i) of an acquisition point P.sub.i(x.sub.i, y.sub.i, z.sub.i) on or within the heart is the frequency in the unipolar point-dependent discrete power spectrum DPS.sub.UPi(f, x.sub.i, y.sub.i, z.sub.i), derived from a unipolar time-and-point-dependent depolarization signal S.sub.UPi(t, x.sub.i, y.sub.i, z.sub.i) acquired from that acquisition point P.sub.i(x.sub.i, y.sub.i, z.sub.i) on or within the heart, that is associated with an absolute maximum power density in the unipolar point-dependent discrete power spectrum DPS.sub.UPi(f, x.sub.i, y.sub.i, z.sub.i).
The bipolar point-dependent dominant frequency DF.sub.BPi(x.sub.i, y.sub.i, z.sub.i) of an acquisition point P.sub.i(x.sub.i, y.sub.i, z.sub.i) on or within the heart is the frequency in the bipolar point-dependent discrete power spectrum DPS.sub.BPi(f, x.sub.i, y.sub.i, z.sub.i), derived from a bipolar time-and-point-dependent depolarization signal S.sub.BPi(t, x.sub.i, y.sub.i, z.sub.i) acquired from that acquisition point P.sub.i(x.sub.i, y.sub.i, z.sub.i) on or within the heart, that is associated with an absolute maximum power density in the bipolar point-dependent discrete power spectrum DPS.sub.BPi(f, x.sub.i, y.sub.i, z.sub.i).
The point-dependent product dominant frequency DF.sub.PRODi(x.sub.i, y.sub.i, z.sub.i) of an acquisition point P.sub.i(x.sub.i, y.sub.i, z.sub.i) on or within the heart is the frequency in the point-dependent discrete power spectrum product DPS.sub.PRODi(f, x.sub.i, y.sub.i, z.sub.i) obtained by the multiplication of a unipolar point-dependent discrete power spectrum DPS.sub.UPi(f, x.sub.i, y.sub.i, z.sub.i) by a corresponding bipolar point-dependent discrete power spectrum DPS.sub.BPi(f, x.sub.i, y.sub.i, z.sub.i), each respectively derived from a unipolar time-and-point-dependent depolarization signal S.sub.UPi(t, x.sub.i, y.sub.i, z.sub.i) acquired from that acquisition point P.sub.i(x.sub.i, y.sub.i, z.sub.i) on or within the heart and a corresponding bipolar time-and-point-dependent depolarization signal S.sub.BPi(t, x.sub.i, y.sub.i, z.sub.i) acquired from the same acquisition point P.sub.i(x.sub.i, y.sub.i, z.sub.i) on or within the heart, that is associated with an absolute maximum power density in the point-dependent discrete power spectrum product DPS.sub.PRODi(f, x.sub.i, y.sub.i, z.sub.i).
In any given acquisition of unipolar and bipolar time-and-point-dependent depolarization signals, S.sub.UPi(t, x.sub.i, y.sub.i, z.sub.i), S.sub.BPi(t, x.sub.i, y.sub.i, z.sub.i), from each acquisition point P.sub.i(x.sub.i, y.sub.i, z.sub.i) of a set of acquisition points {P.sub.i(x.sub.i, y.sub.i, z.sub.i)}, there will be at least one acquisition point P.sub.i(x.sub.i, y.sub.i, z.sub.i) whose point-dependent product dominant frequency DF.sub.PRODi(x.sub.i, y.sub.i, z.sub.i) is associated with a maximum point-dependent product dominant frequency DF.sub.MAXPRODi(x.sub.i, y.sub.i, z.sub.i).
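The per-point computation described above can be sketched as follows. The three acquisition points, their spectra, and the frequency resolution are hypothetical; spectra are represented simply as lists of power values indexed by frequency bin.

```python
def product_dominant_frequency(p_up, p_bp, df):
    """DF_PRODi: the frequency of the absolute maximum of the bin-wise product
    of a unipolar power spectrum and its corresponding bipolar power spectrum."""
    prod = [a * b for a, b in zip(p_up, p_bp)]
    return max(range(len(prod)), key=prod.__getitem__) * df

df = 1.0  # assumed frequency resolution in Hz
# Hypothetical (unipolar, bipolar) power spectra for three acquisition points.
spectra_by_point = {
    (1.0, 2.0, 3.0): ([0, 1, 5, 1], [0, 1, 4, 1]),   # product peaks in bin 2 -> 2.0 Hz
    (4.0, 5.0, 6.0): ([0, 1, 1, 7], [0, 1, 1, 6]),   # product peaks in bin 3 -> 3.0 Hz
    (7.0, 8.0, 9.0): ([0, 6, 1, 1], [0, 5, 1, 1]),   # product peaks in bin 1 -> 1.0 Hz
}
df_prod_by_point = {xyz: product_dominant_frequency(up, bp, df)
                    for xyz, (up, bp) in spectra_by_point.items()}
```

Each acquisition point's coordinates thus carry its own product dominant frequency, forming the set from which the maximum is later selected.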
7c. Dominant Frequency Band
In the SSFA Identification Method and Algorithm, the term "point-dependent dominant frequency band" ("Δ.sub.iDF") comprises a frequency band of about three times the frequency resolution ("Δf.sub.i"), e.g., about 0.75 Hz, centered about a point-dependent dominant frequency DF.sub.i.
7d. Regularity Index
In the SSFA Identification Method and Algorithm, the degree to which the point-dependent dominant frequency DF.sub.i of a time-and-point-dependent depolarization signal S.sub.i(t, x.sub.i, y.sub.i, z.sub.i) acquired from an acquisition point P.sub.i(x.sub.i, y.sub.i, z.sub.i) on or within the heart during an episode of fibrillation is an exclusive contributor to the time-and-point-dependent depolarization signal S.sub.i(t, x.sub.i, y.sub.i, z.sub.i) is gauged by an associated point-dependent regularity index RI.sub.i(x.sub.i, y.sub.i, z.sub.i) or RI.sub.i.
The closer the value of the point-dependent regularity index RI.sub.i is to 1, the fewer the frequencies other than the dominant frequency DF.sub.i that contribute to a time-and-point-dependent depolarization signal S.sub.i(t, x.sub.i, y.sub.i, z.sub.i). Accordingly, if the coordinates of an acquisition point having a particular dominant frequency are assigned to the SSFA, the validity of the assignment may be assessed by interpreting the value of the point-dependent regularity index associated with the dominant frequency. The closer the value of the associated point-dependent regularity index RI.sub.i is to 1, the greater the likelihood that the assignment of coordinates accurately identifies the SSFA.
The value of the point-dependent regularity index RI.sub.i also serves to characterize the behavior of a time-and-point-dependent depolarization signal S.sub.i (t, x.sub.i, y.sub.i,
z.sub.i) in the time domain.
The closer the value of the point-dependent regularity index RI.sub.i is to 1, the more regularly periodic the time-and-point-dependent depolarization signal S.sub.i(t, x.sub.i, y.sub.i, z.sub.i). Conversely, the closer the value of the point-dependent regularity index RI.sub.i is to zero, the more irregularly periodic the time-and-point-dependent depolarization signal S.sub.i(t, x.sub.i, y.sub.i, z.sub.i).
Consequently, points near a very stable high-frequency SSFA, or points far from such an SSFA but having very low frequencies, will be associated with point-dependent regularity indices RI.sub.i having values close to 1; and, conversely, points near wave front fragmentation or an unstable, meandering high-frequency SSFA, or sites of intermittent conduction delays or blocks, are likely to be associated with point-dependent regularity index RI.sub.i values closer to zero.
The point-dependent regularity index RI.sub.i is defined as the ratio of the power contained in the point-dependent dominant frequency band Δ.sub.iDF to the total power computed at all frequencies of the point-dependent discrete power spectrum DPS.sub.i(f, x.sub.i, y.sub.i, z.sub.i), the dominant frequency band Δ.sub.iDF being a frequency band centered about a point-dependent dominant frequency DF having a width of about three times the frequency resolution Δf.sub.i.
For example, in FIG. 4, regularity indices of 0.33, 0.28 and 0.25 have been computed for the respective dominant frequency peaks found in each of the power spectra of the time-dependent depolarization signals A, B, C.
By analogy with the foregoing definitions of a unipolar, bipolar and product dominant frequency, a unipolar point-dependent regularity index RI.sub.UPi may be computed from a unipolar point-dependent discrete power spectrum DPS.sub.UPi(f, x.sub.i, y.sub.i, z.sub.i), a bipolar point-dependent regularity index RI.sub.BPi may be computed from a bipolar point-dependent discrete power spectrum DPS.sub.BPi(f, x.sub.i, y.sub.i, z.sub.i), and a product point-dependent regularity index RI.sub.PRODi may be computed from a point-dependent discrete power spectrum product DPS.sub.PRODi(f, x.sub.i, y.sub.i, z.sub.i).
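The ratio defining the regularity index can be sketched as follows. This is a minimal illustration, assuming a dominant frequency band about three frequency-resolution bins wide centered on the dominant-frequency bin; the example spectrum is hypothetical.

```python
def regularity_index(power, dominant_bin, band_bins=3):
    """RI_i: the power contained in the dominant-frequency band (about three
    frequency-resolution bins wide, centered on the dominant frequency) divided
    by the total power at all frequencies of the discrete power spectrum."""
    half = band_bins // 2
    lo = max(0, dominant_bin - half)
    hi = min(len(power), dominant_bin + half + 1)
    return sum(power[lo:hi]) / sum(power)

# Hypothetical discrete power spectrum with a dominant peak in bin 3.
power = [0.0, 1.0, 2.0, 10.0, 2.0, 1.0, 0.0, 4.0]
dominant_bin = max(range(len(power)), key=power.__getitem__)
ri = regularity_index(power, dominant_bin)   # (2 + 10 + 2) / 20 = 0.7
```

A sharply concentrated peak yields an RI near 1; power spread over many secondary peaks (as in the fragmented-conduction cases described above) drives the RI toward zero.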
7e. Defining Criterion for Identifying a Point of SSFA: Maximum Dominant Frequency
In the SSFA Identification Method and Algorithm, the point of SSFA is assigned the coordinates of that acquisition point P.sub.i(x.sub.i, y.sub.i, z.sub.i) whose point-dependent discrete power spectrum product DPS.sub.PRODi(f, x.sub.i, y.sub.i, z.sub.i) has a point-dependent product dominant frequency DF.sub.PRODi(x.sub.i, y.sub.i, z.sub.i) that is higher than the point-dependent product dominant frequency DF.sub.PRODi(x.sub.i, y.sub.i, z.sub.i) of any other point-dependent discrete power spectrum product DPS.sub.PRODi(f, x.sub.i, y.sub.i, z.sub.i) computed for any other acquisition point P.sub.i(x.sub.i, y.sub.i, z.sub.i).
The point-dependent product dominant frequency DF.sub.PRODi(x.sub.i, y.sub.i, z.sub.i) satisfying this criterion is called the maximum point-dependent product dominant frequency DF.sub.MAXPRODi. The spatial coordinates of the maximum point-dependent product dominant frequency DF.sub.MAXPRODi identify the point of SSFA.
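Computationally, this defining criterion reduces to selecting the acquisition point whose product dominant frequency is maximal. A sketch, with hypothetical coordinates and frequencies:

```python
def identify_ssfa(df_prod_by_point):
    """Return the spatial coordinates and value of the maximum point-dependent
    product dominant frequency DF_MAXPRODi, identifying the point of SSFA."""
    xyz, df_max = max(df_prod_by_point.items(), key=lambda item: item[1])
    return xyz, df_max

# Hypothetical point-dependent product dominant frequencies (Hz) keyed by
# acquisition-point coordinates (x, y, z).
df_prod_by_point = {
    (0.0, 0.0, 0.0): 6.2,
    (1.0, 0.0, 2.0): 8.1,   # fastest source: its coordinates are assigned to the SSFA
    (0.0, 3.0, 1.0): 7.4,
}
ssfa_xyz, df_maxprod = identify_ssfa(df_prod_by_point)
```

The associated product regularity index RI.sub.PRODi, as described supra., would then indicate how much confidence to place in the assignment.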
8. SSFA Identification Method and Algorithm
FIG. 5 shows a flowchart that outlines the SSFA Identification Method and Algorithm.
FIG. 6 shows a flowchart that outlines the FFT and Power Spectrum Module of the SSFA Identification Method and Algorithm.
8a. Establish a Cardiac Spatial Coordinate System
Referring initially to Flowchart Step No. 1 in FIG. 5, spatial coordinates (x.sub.i, y.sub.i, z.sub.i) for cardiac points are determined by: pre-defining a spatial coordinate system (x, y, z) for the identification of cardiac points cP.sub.i(x.sub.i, y.sub.i, z.sub.i) having spatial coordinates (x.sub.i, y.sub.i, z.sub.i) on or within the heart; storing the cardiac points cP.sub.i(x.sub.i, y.sub.i, z.sub.i) on a computer recordable medium as a set of cardiac points {cP.sub.i(x.sub.i, y.sub.i, z.sub.i)}; and assigning to each acquisition point P.sub.i(x.sub.i, y.sub.i, z.sub.i) the coordinates of the cardiac point with which it is spatially coincident.
The spatial coordinate system and the spatial coordinates may, for example, be maintained in a Cartesian, spherical, cylindrical, conical, or other spatial coordinate system that are
transformable inter se. The spatial coordinate system may, forexample, be defined by adaptation of the multi-electrode basket method, the CARTO system, or the Ensite non-contact mapping
system, all known in the cardiac electrophysiological arts.
8b. Simultaneously Acquire a Unipolar Signal S.sub.UPi(t) and a Bipolar Signal S.sub.BPi(t) from Points
During an episode of fibrillation, a unipolar time-dependent depolarization signal S.sub.UP(t) and a corresponding bipolar time-dependent depolarization signal S.sub.BP(t) are simultaneously acquired by a cardiac acquisition device from each acquisition point P.sub.i (x.sub.i, y.sub.i, z.sub.i) of an acquisition set of points {P.sub.i (x.sub.i, y.sub.i, z.sub.i)} of the heart, each acquisition point P.sub.i (x.sub.i, y.sub.i, z.sub.i) having unique spatial coordinates (x.sub.i, y.sub.i, z.sub.i) identified from the pre-stored set of cardiac points {cP.sub.i (x.sub.i, y.sub.i, z.sub.i)} (Flowchart Step No. 2 in FIG. 5).
The simultaneously acquired unipolar and bipolar time-dependent depolarization signals S.sub.UPi(t), S.sub.BPi(t) may be acquired in the aforementioned roving mode, which comprises the repetitive sequential use, through a plurality of iterations, of a roving cardiac signal acquisition device that detects, records and outputs the simultaneously acquired unipolar and bipolar time-dependent depolarization signals S.sub.UPi(t), S.sub.BPi(t) to a computer recordable medium from each acquisition point P.sub.i (x.sub.i, y.sub.i, z.sub.i). Alternatively, a plurality of paired unipolar and bipolar time-dependent depolarization signals S.sub.UPi(t), S.sub.BPi(t) may be simultaneously acquired in a concurrent mode, using a concurrent cardiac signal acquisition device that detects, records and outputs a plurality of simultaneously acquired paired unipolar and bipolar time-dependent depolarization signals S.sub.UPi(t), S.sub.BPi(t) to a computer recordable medium from a plurality of acquisition points P.sub.i (x.sub.i, y.sub.i, z.sub.i).
The simultaneously acquired unipolar and bipolar time-dependent depolarization signals S.sub.UPi(t), S.sub.BPi(t) may be recorded over an acquisition time of, for example, about 5 seconds. The unipolar and bipolar time-dependent depolarization signals S.sub.UPi(t), S.sub.BPi(t) may be acquired as discretely sampled signals by the roving cardiac signal acquisition device, in which case the acquisition time comprises a sampling time, or they may be acquired as continuous signals that are discretely sampled after their acquisition by means, for example, of a computing device.
8c. Assign Coordinates of Each Point to Each Unipolar and Corresponding Bipolar Signal Forming S.sub.UPi(t, x.sub.i, y.sub.i, z.sub.i) and S.sub.BPi(t, x.sub.i, y.sub.i, z.sub.i)
A set of unipolar time-and-point-dependent depolarization signals {S.sub.UPi(t, x.sub.i, y.sub.i, z.sub.i)} is formed by assigning to each unipolar time-dependent depolarization signal S.sub.UP(t) the spatial coordinates (x.sub.i, y.sub.i, z.sub.i) of the acquisition point P.sub.i(x.sub.i, y.sub.i, z.sub.i) from which it was acquired; and a set of corresponding bipolar time-and-point-dependent depolarization signals {S.sub.BPi (t, x.sub.i, y.sub.i, z.sub.i)} is formed by assigning to each corresponding bipolar time-dependent depolarization signal S.sub.BP(t) the spatial coordinates (x.sub.i, y.sub.i, z.sub.i) of the acquisition point P.sub.i(x.sub.i, y.sub.i, z.sub.i) from which it was simultaneously acquired (Flowchart Step No. 3 in FIG. 5).
8d. Store {S.sub.UPi(t, x.sub.i, y.sub.i, z.sub.i)} and {S.sub.BPi(t, x.sub.i, y.sub.i, z.sub.i)}
The set of unipolar time-and-point-dependent depolarization signals {S.sub.UPi(t, x.sub.i, y.sub.i, z.sub.i)} and the set of corresponding bipolar time-and-point-dependent depolarization signals {S.sub.BPi (t, x.sub.i, y.sub.i, z.sub.i)} are respectively stored on a computer recordable medium (Flowchart Step No. 4 in FIG. 5).
8e. Compute Power Spectra DPS.sub.UPi (f, x.sub.i, y.sub.i, z.sub.i) and DPS.sub.BPi (f, x.sub.i, y.sub.i, z.sub.i)
A set of unipolar point-dependent discrete power spectra {DPS.sub.UPi(f, x.sub.i, y.sub.i, z.sub.i)} is formed by computing a unipolar point-dependent discrete power spectrum DPS.sub.UPi(f, x.sub.i, y.sub.i, z.sub.i) for each unipolar time-and-point-dependent depolarization signal S.sub.UPi (t, x.sub.i, y.sub.i, z.sub.i); and a set of bipolar point-dependent discrete power spectra {DPS.sub.BPi(f, x.sub.i, y.sub.i, z.sub.i)} is formed by computing a bipolar point-dependent discrete power spectrum DPS.sub.BPi(f, x.sub.i, y.sub.i, z.sub.i) for each corresponding bipolar time-and-point-dependent depolarization signal S.sub.BPi(t, x.sub.i, y.sub.i, z.sub.i) (Flowchart Step No. 5 in FIG. 5).
Referring now to Flowchart Step No. 1 in FIG. 6, each unipolar point-dependent discrete power spectrum DPS.sub.UPi(f, x.sub.i, y.sub.i, z.sub.i) of the set of unipolar point-dependent discrete power spectra {DPS.sub.UPi(f, x.sub.i, y.sub.i, z.sub.i)} and each bipolar point-dependent discrete power spectrum DPS.sub.BPi(f, x.sub.i, y.sub.i, z.sub.i) of the set of bipolar point-dependent discrete power spectra {DPS.sub.BPi(f, x.sub.i, y.sub.i, z.sub.i)} is computed as follows:
8e(i). Segment Each Signal
A predefined segment of each unipolar time-and-point-dependent depolarization signal S.sub.UPi (t, x.sub.i, y.sub.i, z.sub.i), such as, for example, 5 ms, is selected, thereby forming a set of segmented unipolar time-and-point-dependent depolarization signals {sS.sub.UPi(t, x.sub.i, y.sub.i, z.sub.i)}; and a predefined segment of each corresponding bipolar time-and-point-dependent depolarization signal S.sub.BPi (t, x.sub.i, y.sub.i, z.sub.i), such as, for example, 5 ms, is selected, thereby forming a set of corresponding segmented bipolar time-and-point-dependent depolarization signals {sS.sub.BPi (t, x.sub.i, y.sub.i, z.sub.i)} (Flowchart Step No. 2 in FIG. 6).
8e(ii). Store {sS.sub.UPi(t, x.sub.i, y.sub.i, z.sub.i)} and {sS.sub.BPi(t, x.sub.i, y.sub.i, z.sub.i)}
The set of segmented unipolar time-and-point-dependent depolarization signals {sS.sub.UPi (t, x.sub.i, y.sub.i, z.sub.i)} is stored on a computer recordable medium, and the set of corresponding segmented bipolar time-and-point-dependent depolarization signals {sS.sub.BPi (t, x.sub.i, y.sub.i, z.sub.i)} is also stored on a computer recordable medium (Flowchart Step No. 3 in FIG. 6).
8e(iii). Detrend Each Signal
Each segmented unipolar time-and-point-dependent depolarization signal sS.sub.UPi (t, x.sub.i, y.sub.i, z.sub.i) is detrended; that is, a linear best fit vector of sS.sub.UPi(t, x.sub.i, y.sub.i, z.sub.i) is computed and its magnitude is subtracted from the values of sS.sub.UPi(t, x.sub.i, y.sub.i, z.sub.i) at each point in time, thereby forming a set of detrended and segmented unipolar time-and-point-dependent depolarization signals {dsS.sub.UPi(t, x.sub.i, y.sub.i, z.sub.i)}; and each corresponding segmented bipolar time-and-point-dependent depolarization signal sS.sub.BPi(t, x.sub.i, y.sub.i, z.sub.i) is also detrended, thereby forming a set of corresponding detrended and segmented bipolar time-and-point-dependent depolarization signals {dsS.sub.BPi (t, x.sub.i, y.sub.i, z.sub.i)} (Flowchart Step No. 4 in FIG. 6).
8e(iv). Store {dsS.sub.UPi(t, x.sub.i, y.sub.i, z.sub.i)} and {dsS.sub.BPi (t, x.sub.i, y.sub.i, z.sub.i)}
The set of detrended and segmented unipolar time-and-point-dependent depolarization signals {dsS.sub.UPi (t, x.sub.i, y.sub.i, z.sub.i)} is stored on a computer recordable medium; and the set of corresponding detrended and segmented bipolar time-and-point-dependent depolarization signals {dsS.sub.BPi (t, x.sub.i, y.sub.i, z.sub.i)} is also stored on a computer recordable medium (Flowchart Step No. 5 in FIG. 6).
8e(v). Band Pass Filtering Each Signal
Each detrended and segmented unipolar time-and-point-dependent depolarization signal dsS.sub.UPi (t, x.sub.i, y.sub.i, z.sub.i) is band pass-filtered between a first frequency limit F.sub.lim1 and a second frequency limit F.sub.lim2. The first frequency limit may be about 1 Hz and the second frequency limit may be about 30 Hz. This band pass-filtering forms a set of filtered, detrended and segmented unipolar time-and-point-dependent depolarization signals {fdsS.sub.UPi (t, x.sub.i, y.sub.i, z.sub.i)}. Each corresponding detrended and segmented bipolar time-and-point-dependent depolarization signal dsS.sub.BPi (t, x.sub.i, y.sub.i, z.sub.i) is also band pass-filtered between the first frequency limit F.sub.lim1 and the second frequency limit F.sub.lim2, thereby forming a set of corresponding filtered, detrended and segmented bipolar time-and-point-dependent depolarization signals {fdsS.sub.BPi (t, x.sub.i, y.sub.i, z.sub.i)} (Flowchart Step No. 6 in FIG. 6).
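The patent does not specify a filter design. One simple way to band pass-filter a discrete signal between F.sub.lim1 ≈ 1 Hz and F.sub.lim2 ≈ 30 Hz is to zero out FFT bins outside the band — an idealized "brick-wall" filter, shown here purely as a sketch under that assumption:

```python
import numpy as np

def bandpass_fft(signal, fs, f_lo=1.0, f_hi=30.0):
    """Idealized band-pass: zero the FFT bins outside [f_lo, f_hi] Hz."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spectrum[(freqs < f_lo) | (freqs > f_hi)] = 0.0
    return np.fft.irfft(spectrum, n=len(signal))

# A 10 Hz tone passes; a 60 Hz tone is removed.
fs = 1000.0
t = np.arange(1000) / fs
x = np.sin(2 * np.pi * 10 * t) + np.sin(2 * np.pi * 60 * t)
y = bandpass_fft(x, fs)
```

A practical implementation would more likely use an IIR or FIR design (e.g. a Butterworth filter), but the bin-masking sketch makes the pass-band explicit.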
8e(vi). Store {fdsS.sub.UPi (t, x.sub.i, y.sub.i, z.sub.i)} and {fdsS.sub.BPi (t, x.sub.i, y.sub.i, z.sub.i)}
The set of filtered, detrended and segmented unipolar time-and-point-dependent depolarization signals {fdsS.sub.UPi (t, x.sub.i, y.sub.i, z.sub.i)} is stored on a computer recordable medium; and the set of corresponding filtered, detrended and segmented bipolar time-and-point-dependent depolarization signals {fdsS.sub.BPi (t, x.sub.i, y.sub.i, z.sub.i)} is also stored on a computer recordable medium (Flowchart Step No. 7 in FIG. 6).
8e(vii). Convolve Each Signal with a Shaping Signal
Each filtered, segmented and detrended unipolar time-and-point-dependent depolarization signal fdsS.sub.UPi (t, x.sub.i, y.sub.i, z.sub.i) is convolved with a shaping signal, thereby forming a set of shaped, filtered, detrended and segmented unipolar time-and-point-dependent depolarization signals {{circle around (.times.)}fdsS.sub.UPi (t, x.sub.i, y.sub.i, z.sub.i)}; and each corresponding filtered, detrended and segmented bipolar time-and-point-dependent depolarization signal fdsS.sub.BPi(t, x.sub.i, y.sub.i, z.sub.i) is also convolved with the shaping signal, thereby forming a set of corresponding shaped, filtered, detrended and segmented bipolar time-and-point-dependent depolarization signals {{circle around (.times.)}fdsS.sub.BPi (t, x.sub.i, y.sub.i, z.sub.i)} (Flowchart Step No. 8 in FIG. 6).
The shaping signal may, for example, comprise a time-dependent periodic triangle having a base of 100 ms and unit amplitude. The effect of each convolution is to clarify each filtered, segmented and detrended unipolar time-and-point-dependent depolarization signal fdsS.sub.UPi (t, x.sub.i, y.sub.i, z.sub.i) and to clarify each filtered, segmented and detrended bipolar time-and-point-dependent depolarization signal fdsS.sub.BPi (t, x.sub.i, y.sub.i, z.sub.i).
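Convolution with a triangular kernel of 100 ms base can be sketched as follows (the sampling rate is an assumption — the patent gives the kernel's base but not the rate; `np.bartlett` produces a triangular window with peak amplitude close to 1):

```python
import numpy as np

def shape_signal(signal, fs, base_ms=100.0):
    """Convolve a signal with a triangular kernel of the given base width."""
    n = max(3, int(round(fs * base_ms / 1000.0)))  # kernel length in samples
    kernel = np.bartlett(n)                        # triangular shaping kernel
    return np.convolve(signal, kernel, mode="same")

# With a 200 Hz sampling rate (assumed), 100 ms is a 20-sample triangle:
shaped = shape_signal(np.ones(50), fs=200.0)
```

The effect is a smoothing of each depolarization complex into a single broad hump, which sharpens the spectral peak at the activation rate.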
8e(viii). Store {{circle around (.times.)}fdsS.sub.UPi (t, x.sub.i, y.sub.i, z.sub.i)} and {{circle around (.times.)}fdsS.sub.BPi (t, x.sub.i, y.sub.i, z.sub.i)}
The set of shaped, filtered, detrended and segmented unipolar time-and-point-dependent depolarization signals {{circle around (.times.)}fdsS.sub.UPi (t, x.sub.i, y.sub.i, z.sub.i)} is stored on a computer recordable medium, and the set of corresponding shaped, filtered, detrended and segmented bipolar time-and-point-dependent depolarization signals {{circle around (.times.)}fdsS.sub.BPi (t, x.sub.i, y.sub.i, z.sub.i)} is also stored on a computer recordable medium (Flowchart Step No. 9 in FIG. 6).
8e(ix). Refilter Each Signal
Each shaped, filtered, segmented and detrended unipolar time-and-point-dependent depolarization signal {circle around (.times.)}fdsS.sub.UPi (t, x.sub.i, y.sub.i, z.sub.i) is again band pass-filtered between a third frequency limit F.sub.lim3 and a fourth frequency limit F.sub.lim4. The third frequency limit may be about 1 Hz and the fourth frequency limit may be about 30 Hz. This band pass-filtering forms a set of refiltered, shaped, filtered, detrended and segmented unipolar time-and-point-dependent depolarization signals {r{circle around (.times.)}fdsS.sub.UPi (t, x.sub.i, y.sub.i, z.sub.i)}. Each shaped, filtered, detrended and segmented bipolar time-and-point-dependent depolarization signal {circle around (.times.)}fdsS.sub.BPi (t, x.sub.i, y.sub.i, z.sub.i) is also band pass-filtered between the third frequency limit F.sub.lim3 and the fourth frequency limit F.sub.lim4, thereby forming a set of corresponding refiltered, shaped, filtered, detrended and segmented bipolar time-and-point-dependent depolarization signals {r{circle around (.times.)}fdsS.sub.BPi (t, x.sub.i, y.sub.i, z.sub.i)} (Flowchart Step No. 10 in FIG. 6).
8e(x). Store {r{circle around (.times.)}fdsS.sub.UPi (t, x.sub.i, y.sub.i, z.sub.i)} and {r{circle around (.times.)}fdsS.sub.BPi (t, x.sub.i, y.sub.i, z.sub.i)}
The set of refiltered, shaped, filtered, detrended and segmented unipolar time-and-point-dependent depolarization signals {r{circle around (.times.)}fdsS.sub.UPi (t, x.sub.i, y.sub.i, z.sub.i)} is stored on a computer recordable medium, and the set of corresponding refiltered, shaped, filtered, detrended and segmented bipolar time-and-point-dependent depolarization signals {r{circle around (.times.)}fdsS.sub.BPi (t, x.sub.i, y.sub.i, z.sub.i)} is also stored on a computer recordable medium (Flowchart Step No. 11 in FIG. 6).
8e(xi). Window Each Signal
Each refiltered, shaped, filtered, detrended and segmented unipolar time-and-point-dependent depolarization signal r{circle around (.times.)}fdsS.sub.UPi (t, x.sub.i, y.sub.i, z.sub.i) is windowed. A window may, for example, be selected having a power-of-2 length in the center of the refiltered, shaped, filtered, detrended and segmented unipolar time-and-point-dependent depolarization signal r{circle around (.times.)}fdsS.sub.UPi (t, x.sub.i, y.sub.i, z.sub.i), and may correspond to a default of 4096 discretely sampled points. Windowing of each refiltered, shaped, filtered, detrended and segmented unipolar time-and-point-dependent depolarization signal r{circle around (.times.)}fdsS.sub.UPi(t, x.sub.i, y.sub.i, z.sub.i) forms a set of windowed, refiltered, shaped, filtered, detrended and segmented unipolar time-and-point-dependent depolarization signals {wr{circle around (.times.)}fdsS.sub.UPi (t, x.sub.i, y.sub.i, z.sub.i)}. Each corresponding refiltered, shaped, filtered, detrended and segmented bipolar time-and-point-dependent depolarization signal r{circle around (.times.)}fdsS.sub.BPi(t, x.sub.i, y.sub.i, z.sub.i) is also windowed, thereby forming a set of corresponding windowed, refiltered, shaped, filtered, detrended and segmented bipolar time-and-point-dependent depolarization signals {wr{circle around (.times.)}fdsS.sub.BPi (t, x.sub.i, y.sub.i, z.sub.i)} (Flowchart Step No. 12 in FIG. 6).
8e(xii). Store {wr{circle around (.times.)}fdsS.sub.UPi(t, x.sub.i, y.sub.i, z.sub.i)} and {wr{circle around (.times.)}fdsS.sub.BPi (t, x.sub.i, y.sub.i, z.sub.i)}
The set of windowed, refiltered, shaped, filtered, detrended and segmented unipolar time-and-point-dependent depolarization signals {wr{circle around (.times.)}fdsS.sub.UPi (t, x.sub.i, y.sub.i, z.sub.i)} is stored on a computer recordable medium, and the set of corresponding windowed, refiltered, shaped, filtered, detrended and segmented bipolar time-and-point-dependent depolarization signals {wr{circle around (.times.)}fdsS.sub.BPi (t, x.sub.i, y.sub.i, z.sub.i)} is also stored on a computer recordable medium (Flowchart Step No. 13 in FIG. 6).
8e(xiii). Edge-Smooth Each Signal
Each windowed, refiltered, shaped, filtered, segmented and detrended unipolar time-and-point-dependent depolarization signal wr{circle around (.times.)}fdsS.sub.UPi (t, x.sub.i, y.sub.i, z.sub.i) is edge-smoothed, so that its beginning and end gradually converge to a value of zero. This can be achieved by multiplying it with a pre-selectable window, such as, for example, a Hanning window. Edge-smoothing each windowed, refiltered, shaped, filtered, segmented and detrended unipolar time-and-point-dependent depolarization signal wr{circle around (.times.)}fdsS.sub.UPi (t, x.sub.i, y.sub.i, z.sub.i) forms a set of edge-smoothed, windowed, refiltered, shaped, filtered, detrended and segmented unipolar time-and-point-dependent depolarization signals {ewr{circle around (.times.)}fdsS.sub.UPi (t, x.sub.i, y.sub.i, z.sub.i)}. Each corresponding windowed, refiltered, shaped, filtered, detrended and segmented bipolar time-and-point-dependent depolarization signal wr{circle around (.times.)}fdsS.sub.BPi (t, x.sub.i, y.sub.i, z.sub.i) is also edge-smoothed, thereby forming a set of corresponding edge-smoothed, windowed, refiltered, shaped, filtered, detrended and segmented bipolar time-and-point-dependent depolarization signals {ewr{circle around (.times.)}fdsS.sub.BPi (t, x.sub.i, y.sub.i, z.sub.i)} (Flowchart Step No. 14 in FIG. 6).
8e(xiv). Store {ewr{circle around (.times.)}fdsS.sub.UPi (t, x.sub.i, y.sub.i, z.sub.i)} and {ewr{circle around (.times.)}fdsS.sub.BPi (t, x.sub.i, y.sub.i, z.sub.i)}
The set of edge-smoothed, windowed, refiltered, shaped, filtered, detrended and segmented unipolar time-and-point-dependent depolarization signals {ewr{circle around (.times.)}fdsS.sub.UPi (t, x.sub.i, y.sub.i, z.sub.i)} is stored on a computer recordable medium, and the set of corresponding edge-smoothed, windowed, refiltered, shaped, filtered, detrended and segmented bipolar time-and-point-dependent depolarization signals {ewr{circle around (.times.)}fdsS.sub.BPi(t, x.sub.i, y.sub.i, z.sub.i)} is also stored on a computer recordable medium (Flowchart Step No. 15 in FIG. 6).
8e(xv). Compute Frequency Spectra Using an FFT
A unipolar point-dependent discrete frequency spectrum is computed for each edge-smoothed, windowed, refiltered, shaped, filtered, segmented and detrended unipolar time-and-point-dependent depolarization signal ewr{circle around (.times.)}fdsS.sub.UPi (t, x.sub.i, y.sub.i, z.sub.i) by means of a Fast Fourier Transform, thereby forming a set of unipolar point-dependent discrete frequency spectra {DFS.sub.UPi(f, x.sub.i, y.sub.i, z.sub.i)}; and a bipolar point-dependent discrete frequency spectrum is computed for each edge-smoothed, windowed, refiltered, shaped, filtered, segmented and detrended bipolar time-and-point-dependent depolarization signal ewr{circle around (.times.)}fdsS.sub.BPi (t, x.sub.i, y.sub.i, z.sub.i) by means of a Fast Fourier Transform, thereby forming the set of bipolar point-dependent discrete frequency spectra {DFS.sub.BPi(f, x.sub.i, y.sub.i, z.sub.i)} (Flowchart Step No. 16 in FIG. 6).
8e(xvi). Store {DFS.sub.UPi (f, x.sub.i, y.sub.i, z.sub.i)} and {DFS.sub.BPi (f, x.sub.i, y.sub.i, z.sub.i)}
The set of unipolar point-dependent discrete frequency spectra {DFS.sub.UPi (f, x.sub.i, y.sub.i, z.sub.i)} is stored on a computer recordable medium, and the set of bipolar point-dependent discrete frequency spectra {DFS.sub.BPi (f, x.sub.i, y.sub.i, z.sub.i)} is also stored on a computer recordable medium (Flowchart Step No. 17 in FIG. 6).
8e(xvii). Compute Power Spectra
A unipolar point-dependent discrete power spectrum is computed for each edge-smoothed, windowed, refiltered, shaped, filtered, segmented and detrended unipolar time-and-point-dependent depolarization signal ewr{circle around (.times.)}fdsS.sub.UPi (t, x.sub.i, y.sub.i, z.sub.i), thereby forming a set of unipolar point-dependent discrete power spectra {DPS.sub.UPi(f, x.sub.i, y.sub.i, z.sub.i)}; and a bipolar point-dependent discrete power spectrum is computed for each edge-smoothed, windowed, refiltered, shaped, filtered, segmented and detrended bipolar time-and-point-dependent depolarization signal ewr{circle around (.times.)}fdsS.sub.BPi (t, x.sub.i, y.sub.i, z.sub.i), thereby forming a set of bipolar point-dependent discrete power spectra {DPS.sub.BPi(f, x.sub.i, y.sub.i, z.sub.i)} (Flowchart Step No. 18 in FIG. 6).
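Steps 8e(xv)–8e(xvii) reduce to taking an FFT and forming its squared magnitude. A minimal sketch (the normalization is an editorial assumption; the patent does not specify one, and it does not affect where the dominant frequency lies):

```python
import numpy as np

def discrete_power_spectrum(signal, fs):
    """Return (freqs, power) with power = |FFT|^2 on the positive bins."""
    spectrum = np.fft.rfft(signal)
    power = np.abs(spectrum) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return freqs, power

# A pure 8 Hz tone concentrates its power in the 8 Hz bin:
fs = 256.0
t = np.arange(256) / fs
freqs, power = discrete_power_spectrum(np.sin(2 * np.pi * 8 * t), fs)
```

The same routine would be applied to each unipolar and each bipolar signal to obtain DPS.sub.UPi and DPS.sub.BPi.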
8e(xviii). Store {DPS.sub.UPi (f, x.sub.i, y.sub.i, z.sub.i)} and {DPS.sub.BPi (f, x.sub.i, y.sub.i, z.sub.i)}
The set of unipolar point-dependent discrete power spectra {DPS.sub.UPi (f, x.sub.i, y.sub.i, z.sub.i)} is stored on a computer recordable medium, and the set of bipolar point-dependent discrete power spectra {DPS.sub.BPi (f, x.sub.i, y.sub.i, z.sub.i)} is also stored on a computer recordable medium (Flowchart Step No. 19 in FIG. 6).
8f. Multiply Unipolar Power Spectrum by Bipolar Power Spectrum
Returning now to Flowchart Step 6 of FIG. 5, a set of point-dependent discrete power spectrum products {DPS.sub.PRODi (f, x.sub.i, y.sub.i, z.sub.i)} is formed by multiplying each unipolar point-dependent discrete power spectrum DPS.sub.UPi(f, x.sub.i, y.sub.i, z.sub.i) of the set of unipolar point-dependent discrete power spectra {DPS.sub.UPi (f, x.sub.i, y.sub.i, z.sub.i)} by the corresponding bipolar point-dependent discrete power spectrum DPS.sub.BPi(f, x.sub.i, y.sub.i, z.sub.i) of the set of corresponding bipolar point-dependent discrete power spectra {DPS.sub.BPi (f, x.sub.i, y.sub.i, z.sub.i)}.
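The power-spectrum product is an element-wise, bin-by-bin multiplication of the unipolar and bipolar spectra acquired at the same point. A sketch (assuming, as the text implies, that both spectra share the same frequency bins):

```python
import numpy as np

def power_spectrum_product(dps_up, dps_bp):
    """Bin-wise product of a unipolar and its paired bipolar power spectrum.

    Frequencies prominent in BOTH spectra are reinforced, while bins where
    either spectrum is small are suppressed in the product.
    """
    dps_up = np.asarray(dps_up, dtype=float)
    dps_bp = np.asarray(dps_bp, dtype=float)
    assert dps_up.shape == dps_bp.shape, "spectra must share frequency bins"
    return dps_up * dps_bp

prod = power_spectrum_product([1.0, 4.0, 0.5], [2.0, 3.0, 0.0])
```

This is the motivation for acquiring both signal types: a spurious peak present in only one of the two recordings is attenuated in the product.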
8g. Store {DPS.sub.PRODi (f, x.sub.i, y.sub.i, z.sub.i)}
The set of point-dependent discrete power spectrum products {DPS.sub.PRODi (f, x.sub.i, y.sub.i, z.sub.i)} is stored on a computer recordable medium (Flowchart Step 7 of FIG. 5).
8h. Compute Product Dominant Frequencies
A point-dependent product dominant frequency DF.sub.PRODi(x.sub.i, y.sub.i, z.sub.i) is computed for each point-dependent discrete power spectrum product DPS.sub.PRODi(f, x.sub.i, y.sub.i, z.sub.i) of the set of point-dependent discrete power spectrum products {DPS.sub.PRODi (f, x.sub.i, y.sub.i, z.sub.i)}, thereby forming a set of point-dependent product dominant frequencies {DF.sub.PRODi (x.sub.i, y.sub.i, z.sub.i)} (Flowchart Step 8 of FIG. 5).
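A dominant frequency is simply the frequency of the largest bin in the product spectrum. The patent does not state a search band; restricting the search to a physiological band is common practice in frequency mapping and is shown here only as an assumption:

```python
import numpy as np

def dominant_frequency(freqs, power, f_lo=3.0, f_hi=15.0):
    """Frequency of the largest bin, restricted to an (assumed) search band."""
    freqs = np.asarray(freqs, dtype=float)
    power = np.asarray(power, dtype=float)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return freqs[band][np.argmax(power[band])]

df = dominant_frequency([0, 2, 4, 6, 8, 10], [9, 5, 1, 7, 3, 2])
# 6 Hz wins inside the 3-15 Hz band even though the 0 Hz bin is larger.
```

Without the band restriction, a large DC or low-frequency residue could masquerade as the dominant frequency.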
8i. Store {DF.sub.PRODi (x.sub.i, y.sub.i, z.sub.i)}
The set of point-dependent product dominant frequencies {DF.sub.PRODi (x.sub.i, y.sub.i, z.sub.i)} is stored on a computer recordable medium (Flowchart Step 9 of FIG. 5).
8j. Map DF.sub.PRODi(x.sub.i, y.sub.i, z.sub.i) to the point (x.sub.i, y.sub.i, z.sub.i) on which it is Dependent.
Each point-dependent product dominant frequency DF.sub.PRODi (x.sub.i, y.sub.i, z.sub.i) of the set of point-dependent product dominant frequencies {DF.sub.PRODi(x.sub.i, y.sub.i, z.sub.i)} is mapped to the point (x.sub.i, y.sub.i, z.sub.i) with which it is associated (Flowchart Step 10 of FIG. 5).
8k. Select the Maximum Dominant Frequency
A maximum point-dependent product dominant frequency DF.sub.MAXPRODi (x.sub.i, y.sub.i, z.sub.i) is selected from the set of point-dependent product dominant frequencies {DF.sub.PRODi
(x.sub.i, y.sub.i, z.sub.i)} (Flowchart Step 11 of FIG. 5).
8l. Identify the Point of SSFA
The coordinates of the maximum point-dependent product dominant frequency DF.sub.MAXPRODi (x.sub.i, y.sub.i, z.sub.i) are assigned to the point of SSFA (Flowchart Step 12 of FIG. 5).
8m. Compute Point-Dependent Product Regularity Index RI.sub.PRODi (x.sub.i, y.sub.i, z.sub.i)
A point-dependent product regularity index RI.sub.PRODi(x.sub.i, y.sub.i, z.sub.i) is computed for each point-dependent discrete power spectrum product DPS.sub.PRODi(f, x.sub.i, y.sub.i, z.sub.i) of the set of point-dependent discrete power spectrum products {DPS.sub.PRODi(f, x.sub.i, y.sub.i, z.sub.i)}, thereby forming a set of point-dependent product regularity indices {RI.sub.PRODi(x.sub.i, y.sub.i, z.sub.i)} (Flowchart Step 13 of FIG. 5).
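The passage above does not give a formula for RI.sub.PRODi. In the atrial-fibrillation mapping literature, a regularity index is often defined as the fraction of total spectral power lying in a narrow band around the dominant frequency; the sketch below uses that definition purely as an illustrative assumption, not as the patented computation:

```python
import numpy as np

def regularity_index(freqs, power, df, half_band=0.75):
    """Assumed definition: power within df +/- half_band Hz over total power."""
    freqs = np.asarray(freqs, dtype=float)
    power = np.asarray(power, dtype=float)
    near = np.abs(freqs - df) <= half_band
    return power[near].sum() / power.sum()

# A spectrum with most power concentrated at the 5 Hz dominant bin:
ri = regularity_index([4.0, 5.0, 6.0, 7.0], [1.0, 6.0, 2.0, 1.0], df=5.0)
```

Under this definition a value near 1 indicates a sharply peaked, regular spectrum, which is what makes the index useful for verifying the SSFA assignment.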
8n. Store {RI.sub.PRODi (x.sub.i, y.sub.i, z.sub.i)}
The set of point-dependent product regularity indices {RI.sub.PRODi(x.sub.i, y.sub.i, z.sub.i)} is stored on a computer recordable medium (Flowchart Step 14 of FIG. 5).
8o. Verify the Assignment of the Coordinates of DF.sub.MAXPRODi(x.sub.i, y.sub.i, z.sub.i) to the Point of SSFA
The assignment of the coordinates of the maximum point-dependent product dominant frequency DF.sub.MAXPRODi (x.sub.i, y.sub.i, z.sub.i) to the point of SSFA is verified by interpreting the value of its corresponding point-dependent product regularity index RI.sub.PRODi (x.sub.i, y.sub.i, z.sub.i) (Flowchart Step 15 of FIG. 5).
8p. Map RI.sub.PRODi(x.sub.i, y.sub.i, z.sub.i) to the Point (x.sub.i, y.sub.i, z.sub.i) on which it is Dependent.
Each point-dependent product regularity index RI.sub.PRODi (x.sub.i, y.sub.i, z.sub.i) of the set of point-dependent product regularity indices {RI.sub.PRODi (x.sub.i, y.sub.i, z.sub.i)} is mapped to the point (x.sub.i, y.sub.i, z.sub.i) with which it is associated (Flowchart Step 16 of FIG. 5).
9. Computer System
FIG. 7 illustrates a computer system 90 for implementing the SSFA Identification Algorithm, in accordance with embodiments of the present invention. The computer system 90 comprises a processor 91, an input device 92 coupled to the processor 91, an output device 93 coupled to the processor 91, and memory devices 94 and 95, each coupled to the processor 91. The input device 92 may be, inter alia, a keyboard, a mouse, etc. The output device 93 may be, inter alia, a printer, a plotter, a computer screen, a magnetic tape, a removable hard disk, a floppy disk, an optical storage such as a compact disc (CD), etc. The memory devices 94 and 95 may be, inter alia, a hard disk, a floppy disk, a magnetic tape, an optical storage such as a compact disc (CD) or a digital video disc (DVD), a dynamic random access memory (DRAM), a read-only memory (ROM), etc. The memory device 95 includes a computer code 97. The computer code 97 includes the SSFA Identification Algorithm. The processor 91 executes the computer code 97. The memory device 94 includes input data 96. The input data 96 includes input required by the computer code 97. The output device 93 displays output from the computer code 97. Either or both memory devices 94 and 95 (or one or more additional memory devices not shown in FIG. 7) may be used as a computer usable medium (or a computer readable medium or a program storage device) having a computer readable program code embodied therein and/or having other data stored therein, wherein the computer readable program code comprises the computer code 97. Generally, a computer program product (or, alternatively, an article of manufacture) of the computer system 90 may comprise the computer usable medium (or the program storage device).
While FIG. 7 shows the computer system 90 as a particular configuration of hardware and software, any configuration of hardware and software, as would be known to a person of ordinary skill in the art, may be utilized for the purposes stated supra in conjunction with the particular computer system 90 of FIG. 7. For example, the memory devices 94 and 95 may be portions of a single memory device rather than separate memory devices.
* * * * *
Peachtree City Math Tutor
Find a Peachtree City Math Tutor
...This usually helps students form a framework, which is all they need to understand trig. SAT math can be broken down into only a few types of questions in a few subjects. Specifically there are
abstract, disguised, or multi-step questions in algebra, geometry, or data interpretation.
17 Subjects: including trigonometry, algebra 1, algebra 2, ACT Math
...I use the My Computer and Control Panel to tailor the environment as desired by the user. I can set up networks and Work Groups to allow network file sharing. File organization is an area I
recommend to increase efficiency in locating and using files.
27 Subjects: including prealgebra, logic, ACT Math, algebra 1
...This is a holistic (visual) approach to the effective mastery of traditional chemistry. It reinforced learning of the course concepts by illustrating how they relate to the students' own life
experiences. Specialty areas include general inorganic and introductory organic chemical concepts; atom...
2 Subjects: including prealgebra, chemistry
...ACT Math is a collection of pre-algebra, elementary algebra, intermediate algebra, geometry, and trigonometry; basically all the courses that should have been taken by the end of the eleventh grade year. Elementary math is basic computation skills such as adding, subtracting, multiplying and di...
9 Subjects: including algebra 1, algebra 2, grammar, geometry
...I am very friendly, easy to get along with, cheerful, energetic, responsible, respectful, and professional. If you are interested in my service, please contact me.As a major in drawing,
painting and printmaking, I have an array of practice in drawing with various materials such as charcoal, vine...
16 Subjects: including algebra 2, biology, Spanish, French
An n×n matrix A has all diagonal elements equal to 0 and all off-diagonal elements equal to 1. Find the eigenvalues of A.
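Here A = J − I, where J is the all-ones n×n matrix. J has rank 1 with eigenvalues n (for the all-ones eigenvector) and 0 (multiplicity n − 1), so A has eigenvalues n − 1 (once) and −1 (multiplicity n − 1). A quick numerical check (a NumPy sketch added for illustration, not part of the original thread):

```python
import numpy as np

n = 5
A = np.ones((n, n)) - np.eye(n)           # zeros on the diagonal, ones elsewhere
eigvals = np.sort(np.linalg.eigvalsh(A))  # A is symmetric, so eigvalsh applies

# Expect -1 repeated n-1 times, then n-1 once:
expected = np.array([-1.0] * (n - 1) + [n - 1.0])
```

The same pattern holds for any n, since the argument above used only the structure A = J − I.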
Running time analysis
10-17-2011 #1
for (i = 1; i <= n; i *= 2)
    for (j = 1; j <= i; j++)
        /* O(1) work (body elided in the original post) */ ;

for (i = 1; i <= n; i++)
    for (j = 1; j <= n; j *= 2)
        if (n % 2 == 0)   /* n even */
            for (k = 1; k <= n; k++)
                /* O(1) work */ ;
        else              /* n odd */
            /* O(1) work */ ;
How do I calculate the worst-case running time of the two algorithms above?
Please change your [\code] tag - replace the \ with /, and it will show your code correctly.
good point
Ok... you found the code tags... now learn how to indent your code...
Indent style - Wikipedia, the free encyclopedia
This should do the trick:
#include <stdio.h>
#include <time.h>

#define MAX_LOOPS 1000  /* repeat the code so the measurement is meaningful */

clock_t t0 = clock();
for (int i = 0; i < MAX_LOOPS; i++) {
    /* Code to clock goes here */
}
clock_t t1 = clock();

long elapsed = 1000L * (t1 - t0) / CLOCKS_PER_SEC;
printf("Avg elapsed time: %ld ms\n\n", elapsed / MAX_LOOPS);
How can I analyze the time the program needs to execute? Is it all about maths?
How can I analyze the time the program needs to execute? Is it all about maths?
Are you talking about the run time complexity of those snippets of code? Then yes, it is more or less about counting and maths. Go through the algorithm with specific input to understand what is
happening. Count to give yourself a ballpark estimate of how the number of operations might vary with the size of the input, if only for the average case at first.
Since this is not about C, I am moving this thread.
In the first example you have a loop of order n, and within this loop is another loop of order n (it loops to i, which itself is order n), making the total O(n^2).
In the second example you have a loop of order n, and within this loop another loop of order n, and within THAT loop, you get ANOTHER loop of order n, every other time, which makes the whole
thing O(n^3).
if (a) do { f( b); } while(1);
else do { f(!b); } while(1);
for (i=1; i<=n; i*=2)
Is this really a loop of order n, and not perhaps of log(n)?
I might be wrong.
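anon's suspicion is easy to check by counting. A small Java sketch (names are mine) that tallies the outer iterations and the total inner work of the first snippet:

```java
public class DoublingLoop {
    // Number of times the outer loop body runs: i = 1, 2, 4, ..., up to n.
    static long outerCount(int n) {
        long c = 0;
        for (long i = 1; i <= n; i *= 2) c++;
        return c;
    }

    // Total number of inner-loop iterations: 1 + 2 + 4 + ... = 2n - 1 for n a power of two.
    static long innerTotal(int n) {
        long total = 0;
        for (long i = 1; i <= n; i *= 2)
            for (long j = 1; j <= i; j++)
                total++;
        return total;
    }

    public static void main(String[] args) {
        int n = 1024;
        System.out.println("outer = " + outerCount(n));
        System.out.println("inner = " + innerTotal(n));
    }
}
```

For n = 1024 this reports 11 outer iterations (log2(n) + 1) and 2047 inner iterations in total (2n - 1), which suggests the outer loop is indeed logarithmic and the total work linear.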
Thank you, anon. You sure know how to recognize different types of trees from quite a long way away.
Quoted more than 1000 times (I hope).
Need help on making assumptions in maths
August 14th 2009, 01:29 AM
Need help on making assumptions in maths
In my maths problem there is a 10 m turtle race with 4 turtles.
One of the questions is "In order to consider the race fair, several assumptions need to be made. List these assumptions and the effects of these assumptions."
So far I have:
1) We assume all the turtles are the same weight; if one turtle were heavier, that turtle would be at a disadvantage
2) We assume the turtles do not become tired, since a tiring turtle's speed would change
3) We assume the terrain is smooth, because if one turtle hit a rock it would slow it down and make the race unfair
What are some other assumptions that I could make? Could I use air resistance and friction as assumptions?
August 14th 2009, 01:57 AM
What if a turtle weighed a little more but had significantly more lean body mass than other turtles?
What if one turtle was fitter than the others, so the decrease in velocity with time was slower for some than for others?
conversion from long to float
Joined: Oct 19, 2003
Posts: 12

A long is 64 bits, and both int and float are 32 bits. Why can a long be assigned to a float without an explicit cast, but not to an int? If a long is declared as follows:

long l = 123456L;

this doesn't implicitly convert:

int i = l; // gives a compile error, as expected

but this compiles and runs without a problem:

float f = l;

Can someone please explain?
Joined: Oct 19, 2003
Posts: 13

The way floats are stored internally (in the form of exponents) gives them a larger range, and hence makes it safe to convert a long to a float. Long to int, of course, might lead to a loss of information.
Joined: Dec 20, 2002
Posts: 1

Good question. A floating point number uses essentially the binary version of scientific notation... so it can express the very large possible values of long variables, but with a loss of *precision*. That is, I might convert (this example, by analogy, actually uses decimal precision) 23472166003027 into (2.34722 * 10^13) as a floating point number.

Whether this should be considered a widening conversion (and thus not need a cast) is a debatable point, but the Java design team decided that it should. On the pro side, you do get a reasonable value, even if precision is lost (compare with, for example, converting long to int, where a value out of range creates a practically random numeric result). On the con side, it *is* a loss of precision, so perhaps the user should be warned about it.

I'm always in favor of compiler checks, so I disagree with Java's choice on this one.
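The precision loss is easy to demonstrate; a short sketch (the value is just an arbitrary example of mine):

```java
public class LongToFloat {
    public static void main(String[] args) {
        long big = 123456789L;
        float f = big;           // widening conversion: no cast needed
        long back = (long) f;    // narrowing back does need a cast

        System.out.println("original:  " + big);
        System.out.println("via float: " + back);  // a nearby value, not the same one

        // int i = big;          // does not compile: possible loss of magnitude
    }
}
```

Near 10^8, adjacent float values are 8 apart (the 24-bit significand runs out of digits), so the round trip lands on a neighbouring multiple of 8 rather than on the original value.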
If this would help, this is the data from IEEE 754 floating point, which Java complies with:

A float is 4 bytes (32 bits), with 6 to 7 significant decimal digits of accuracy. It covers a range from ±1.40129846432481707e-45 to ±3.40282346638528860e+38 and is formed of 3 fields:

1-bit sign
8-bit base-2 exponent, biased +127
23-bit fraction, lead 1 implied

e.g. 3. = 0x40400000
-3. = 0xC0400000
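Those three fields can be pulled apart in Java with Float.floatToIntBits (a standard java.lang method); a sketch:

```java
public class FloatBits {
    public static void main(String[] args) {
        int bits = Float.floatToIntBits(3.0f);   // 0x40400000

        int sign     = (bits >>> 31) & 0x1;      // 1-bit sign
        int exponent = (bits >>> 23) & 0xFF;     // 8-bit exponent, biased +127
        int fraction = bits & 0x7FFFFF;          // 23-bit fraction, lead 1 implied

        System.out.printf("bits=0x%08X sign=%d exponent=%d (unbiased %d) fraction=0x%06X%n",
                bits, sign, exponent, exponent - 127, fraction);
        // 3.0 = (1 + fraction/2^23) * 2^(exponent-127) = 1.5 * 2
    }
}
```

For 3.0f this decodes to sign 0, biased exponent 128 (unbiased 1) and fraction 0x400000, i.e. 1.5 * 2^1.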