Spline Models for Observational Data: Results 1-10 of 1,056

(1995). Cited by 8950 (28 self).
Statistical learning theory was introduced in the late 1960s. Until the 1990s it was a purely theoretical analysis of the problem of function estimation from a given collection of data. In the middle of the 1990s, new types of learning algorithms (called support vector machines) based on the developed theory were proposed. This made statistical learning theory not only a tool for theoretical analysis but also a tool for creating practical algorithms for estimating multidimensional functions. This article presents a very general overview of statistical learning theory, including both theoretical and algorithmic aspects of the theory. The goal of this overview is to demonstrate how the abstract learning theory established conditions for generalization which are more general than those discussed in classical statistical paradigms, and how the understanding of these conditions inspired new algorithmic approaches to function estimation problems.

(1990). Cited by 1314 (33 self).
Likelihood-based regression models, such as the normal linear regression model and the linear logistic model, assume a linear (or some other parametric) form for the covariate effects. We introduce the Local Scoring procedure, which replaces the linear form ΣXjβj by a sum of smooth functions Σsj(Xj). The sj(·)'s are unspecified functions that are estimated using scatterplot smoothers. The technique is applicable to any likelihood-based regression model: the class of Generalized Linear Models contains many of these. In this class, the Local Scoring procedure replaces the linear predictor η = ΣXjβj by the additive predictor Σsj(Xj); hence the name Generalized Additive Models. Local Scoring can also be applied to non-standard models like Cox's proportional hazards model for survival data. In a number of real data examples, the Local Scoring procedure proves to be useful in uncovering non-linear covariate effects. It has the advantage of being completely automatic, i.e. no "detective work" is needed on the part of the statistician. In a further generalization, the technique is modified to estimate the form of the link function for generalized linear models. The Local Scoring procedure is shown to be asymptotically equivalent to Local Likelihood estimation, another technique for estimating smooth covariate functions. They are seen to produce very similar results with real data, with Local Scoring being considerably faster. As a theoretical underpinning, we view Local Scoring and Local Likelihood as empirical maximizers of the expected log-likelihood, and this makes clear their connection to standard maximum likelihood estimation. A method for estimating the "degrees of freedom" of the procedures is also given.

IEEE Transactions on Pattern Analysis and Machine Intelligence (2001). Cited by 1246 (19 self).
We present a novel approach to measuring similarity between shapes and exploit it for object recognition. In our framework, the measurement of similarity is preceded by (1) solving for correspondences between points on the two shapes, and (2) using the correspondences to estimate an aligning transform. In order to solve the correspondence problem, we attach a descriptor, the shape context, to each point. The shape context at a reference point captures the distribution of the remaining points relative to it, thus offering a globally discriminative characterization. Corresponding points on two similar shapes will have similar shape contexts, enabling us to solve for correspondences as an optimal assignment problem. Given the point correspondences, we estimate the transformation that best aligns the two shapes; regularized thin plate splines provide a flexible class of transformation maps for this purpose. The dissimilarity between the two shapes is computed as a sum of matching errors between corresponding points, together with a term measuring the magnitude of the aligning transform. We treat recognition in a nearest-neighbor classification framework as the problem of finding the stored prototype shape that is maximally similar to that in the image. Results are presented for silhouettes, trademarks, handwritten digits and the COIL data set.

Image and Vision Computing (2003). Cited by 400 (5 self).
This paper aims to present a review of recent as well as classic image registration methods. Image registration is the process of overlaying images (two or more) of the same scene taken at different times, from different viewpoints, and/or by different sensors. Registration geometrically aligns two images (the reference and sensed images). The reviewed approaches are classified according to their nature (area-based and feature-based) and according to the four basic steps of the image registration procedure: feature detection, feature matching, mapping function design, and image transformation and resampling. Main contributions, advantages, and drawbacks of the methods are mentioned in the paper. Problematic issues of image registration and an outlook for future research are discussed as well. The major goal of the paper is to provide a comprehensive reference source for researchers involved in image registration, regardless of particular application areas.

IEEE Transactions on Medical Imaging (1999). Cited by 375 (23 self).
In this paper we present a new approach for the nonrigid registration of contrast-enhanced breast MRI. A hierarchical transformation model of the motion of the breast has been developed. The global motion of the breast is modeled by an affine transformation, while the local breast motion is described by a free-form deformation (FFD) based on B-splines. Normalized mutual information is used as a voxel-based similarity measure which is insensitive to intensity changes resulting from the contrast enhancement. Registration is achieved by minimizing a cost function which represents a combination of the cost associated with the smoothness of the transformation and the cost associated with the image similarity. The algorithm has been applied to the fully automated registration of three-dimensional (3-D) breast MRI in volunteers and patients. In particular, we have compared the results of the proposed nonrigid registration algorithm to those obtained using rigid and affine registration techniques. The results clearly indicate that the nonrigid registration algorithm is much better able to recover the motion and deformation of the breast than rigid or affine registration algorithms.

(1999). Cited by 368 (0 self).
We introduce a new method of constructing kernels on sets whose elements are discrete structures like strings, trees and graphs. The method can be applied iteratively to build a kernel on an infinite set from kernels involving generators of the set. The family of kernels generated generalizes the family of radial basis kernels. It can also be used to define kernels in the form of joint Gibbs probability distributions. Kernels can be built from hidden Markov random fields, generalized regular expressions, pair-HMMs, or ANOVA decompositions. Uses of the method lead to open problems involving the theory of infinitely divisible positive definite functions. Fundamentals of this theory and the theory of reproducing kernel Hilbert spaces are reviewed and applied in establishing the validity of the method.

Journal of Machine Learning Research (2006). Cited by 332 (13 self).
We propose a family of learning algorithms based on a new form of regularization that allows us to exploit the geometry of the marginal distribution. We focus on a semi-supervised framework that incorporates labeled and unlabeled data in a general-purpose learner. Some transductive graph learning algorithms and standard methods, including Support Vector Machines and Regularized Least Squares, can be obtained as special cases. We utilize properties of Reproducing Kernel Hilbert spaces to prove new Representer theorems that provide a theoretical basis for the algorithms. As a result (in contrast to purely graph-based approaches) we obtain a natural out-of-sample extension to novel examples, and so are able to handle both transductive and truly semi-supervised settings. We present experimental evidence suggesting that our semi-supervised algorithms are able to use unlabeled data effectively. Finally, we give a brief discussion of unsupervised and fully supervised learning within our general framework.

Neural Computation (1995). Cited by 309 (31 self).
We had previously shown that regularization principles lead to approximation schemes which are equivalent to networks with one layer of hidden units, called Regularization Networks. In particular, standard smoothness functionals lead to a subclass of regularization networks, the well-known Radial Basis Functions approximation schemes. This paper shows that regularization networks encompass a much broader range of approximation schemes, including many of the popular general additive models and some of the neural networks. In particular, we introduce new classes of smoothness functionals that lead to different classes of basis functions. Additive splines as well as some tensor product splines can be obtained from appropriate classes of smoothness functionals. Furthermore, the same generalization that extends Radial Basis Functions (RBF) to Hyper Basis Functions (HBF) also leads from additive models to ridge approximation models, containing as special cases Breiman's hinge functions, som...

(1993). Cited by 290 (2 self).
This paper presents a general theoretical framework for ensemble methods of constructing significantly improved regression estimates. Given a population of regression estimators, we construct a hybrid estimator which is as good or better in the MSE sense than any estimator in the population. We argue that the ensemble method presented has several properties: 1) It efficiently uses all the networks of a population; none of the networks need be discarded. 2) It efficiently uses all the available data for training without over-fitting. 3) It inherently performs regularization by smoothing in functional space, which helps to avoid over-fitting. 4) It utilizes local minima to construct improved estimates, whereas other neural network algorithms are hindered by local minima. 5) It is ideally suited for parallel computation. 6) It leads to a very useful and natural measure of the number of distinct estimators in a population. 7) The optimal parameters of the ensemble estimator are given in clo...

Advances in Neural Information Processing Systems 13 (2001). Cited by 286 (6 self).
A major problem for kernel-based predictors (such as Support Vector Machines and Gaussian processes) is that the amount of computation required to find the solution scales as O(n^3), where n is the number of training examples. We show that an approximation to the eigendecomposition of the Gram matrix can be computed by the Nyström method (which is used for the numerical solution of eigenproblems). This is achieved by carrying out an eigendecomposition on a smaller system of size m < n, and then expanding the results back up to n dimensions. The computational complexity of a predictor using this approximation is O(m^2 n). We report experiments on the USPS and abalone data sets and show that we can set m << n without any significant decrease in the accuracy of the solution.
Quadratic Inequalities Examples

Quadratic Inequalities: The graph of the equation y = bx + c is a straight line, and such an equation is termed linear. If we add a term ax^2, the graph becomes a parabola, and the resulting equation ax^2 + bx + c = 0 is known as a quadratic equation: an equation of degree 2. Its solutions are given by $x$ = $\frac{-b\pm \sqrt{b^{2}-4ac}}{2a}$, where a, b and c are real numbers with a $\neq$ 0. 3x^2 + 8x - 9 = 0, 4x^2 - 5x + 16 = 0 and x^2 - 45 = 0 are all examples of quadratic equations. The values a, b and c are termed the coefficients: a is the coefficient of x^2, b is the coefficient of x, and c is usually called the constant. When we want to compare one quantity with another, we use inequality symbols; for example, 2 < 10. The symbols used to express inequalities are: > (greater than), < (less than), $\leq$ (less than or equal to), and $\geq$ (greater than or equal to).
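As a quick sketch, the quadratic formula above can be applied directly in code. The function name is illustrative, and only real roots are returned:

```python
import math

def solve_quadratic(a, b, c):
    """Real roots of ax^2 + bx + c = 0 via the quadratic formula
    x = (-b +/- sqrt(b^2 - 4ac)) / (2a)."""
    if a == 0:
        raise ValueError("a must be non-zero for a quadratic equation")
    disc = b * b - 4 * a * c        # discriminant b^2 - 4ac
    if disc < 0:
        return ()                   # no real roots
    root = math.sqrt(disc)
    return ((-b + root) / (2 * a), (-b - root) / (2 * a))

# x^2 - 45 = 0, one of the examples above: roots are +/- sqrt(45)
roots = solve_quadratic(1, 0, -45)
```

A quadratic inequality such as 3x^2 + 8x - 9 < 0 is then solved by finding these roots and testing the sign of the expression between and outside them.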
From New World Encyclopedia

Euclid (also referred to as Euclid of Alexandria) (Greek: Εὐκλείδης) (c. 325 B.C.E. – c. 265 B.C.E.), a Greek mathematician who lived in Alexandria, Hellenistic Egypt, almost certainly during the reign of Ptolemy I (323 B.C.E.–283 B.C.E.), is often referred to as the "father of geometry." His most popular work, Elements, is thought to be one of the most successful textbooks in the history of mathematics. Within it, the properties of geometrical objects are deduced from a small set of axioms, establishing the axiomatic method of mathematics. Euclid thus imposed a logical organization on known mathematical truths by the disciplined use of logic. Later philosophers adapted this methodology to their own fields. Although best known for its exposition of geometry, the Elements also includes various results in number theory, such as the connection between perfect numbers and Mersenne primes, the proof of the infinitude of prime numbers, Euclid's lemma on factorization (which leads to the fundamental theorem of arithmetic, on the uniqueness of prime factorizations), and the Euclidean algorithm for finding the greatest common divisor of two numbers. Elements was published in approximately one thousand editions, and was used as the basic text for geometry by the Western world for two thousand years. Euclid also wrote works on perspective, conic sections, spherical geometry, and possibly quadric surfaces. Neither the year nor place of his birth has been established, nor the circumstances of his death. Little is known about Euclid outside of what is presented in Elements and his other surviving books. What little biographical information we do have comes largely from commentaries by Proclus and Pappus of Alexandria: Euclid was active at the great Library of Alexandria and may have studied at Plato's Academy in Greece.
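The Euclidean algorithm mentioned above survives essentially unchanged in modern programming. The following sketch is a modern, remainder-based rendering (Euclid's own formulation, in Book 7 of the Elements, is in terms of repeated subtraction of magnitudes), and the function name is ours:

```python
def euclidean_gcd(a, b):
    """Greatest common divisor of two non-negative integers,
    found by repeatedly replacing the pair with (divisor, remainder)."""
    while b != 0:
        a, b = b, a % b
    return a

# Example: gcd(1071, 462).
# 1071 = 2*462 + 147, 462 = 3*147 + 21, 147 = 7*21 + 0, so the gcd is 21.
```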
Some writers in the Middle Ages erroneously confused him with Euclid of Megara, a Greek Socratic philosopher who lived approximately one century earlier.

In addition to the Elements, five works of Euclid have survived to the present day.
• Data deals with the nature and implications of "given" information in geometrical problems; the subject matter is closely related to the first four books of the Elements.
• On Divisions of Figures, which survives only partially in Arabic translation, concerns the division of geometrical figures into two or more equal parts or into parts in given ratios. It is similar to a third-century C.E. work by Heron of Alexandria, except that Euclid's work characteristically lacks any numerical calculations.
• Phaenomena concerns the application of spherical geometry to problems of astronomy.
• Optics, the earliest surviving Greek treatise on perspective, contains propositions on the apparent sizes and shapes of objects viewed from different distances and angles.
• Catoptrics concerns the mathematical theory of mirrors, particularly the images formed in plane and spherical concave mirrors.

All of these works follow the basic logical structure of the Elements, containing definitions and proved propositions. There are four works credibly attributed to Euclid which have been lost.
• Conics was a work on conic sections that was later extended by Apollonius of Perga into his famous work on the subject.
• Porisms might have been an outgrowth of Euclid's work with conic sections, but the exact meaning of the title is controversial.
• Pseudaria, or Book of Fallacies, was an elementary text about errors in reasoning.
• Surface Loci concerned either loci (sets of points) on surfaces or loci which were themselves surfaces; under the latter interpretation, it has been hypothesized that the work might have dealt with quadric surfaces.

Euclid's Elements (Greek: Στοιχεῖα) is a mathematical and geometric treatise consisting of thirteen books, written around 300 B.C.E. It comprises a collection of definitions, postulates (axioms), propositions (theorems and constructions), and proofs of the theorems. The thirteen books cover Euclidean geometry and the ancient Greek version of elementary number theory. The Elements is the oldest extant axiomatic deductive treatment of mathematics, and has proven instrumental in the development of logic and modern science.

Euclid's Elements is the most successful textbook ever written. It was one of the very first works to be printed after the printing press was invented, and is second only to the Bible in the number of editions published (well over one thousand). It was used as the basic text on geometry throughout the Western world for about two thousand years. For centuries, when the quadrivium was included in the curriculum of all university students, knowledge of at least part of Euclid's Elements was required of all students. Not until the twentieth century did it cease to be considered something all educated people had read.

The geometrical system described in Elements was long known simply as "the" geometry. Today, however, it is often referred to as Euclidean geometry to distinguish it from other so-called non-Euclidean geometries discovered during the nineteenth century. These new geometries grew out of more than two millennia of investigation into Euclid's fifth postulate (the parallel postulate), one of the most-studied axioms in all of mathematics.
Most of these investigations involved attempts to prove the relatively complex and presumably non-intuitive fifth postulate using the other four (a feat which, if successful, would have shown the postulate to be in fact a theorem).

Scholars believe that Elements is largely a collection of theorems proved by earlier mathematicians, in addition to some original work by Euclid. Euclid's text provides some missing proofs, and includes sections on number theory and three-dimensional geometry. Euclid's famous proof of the infinitude of prime numbers is in Book IX, Proposition 20. Proclus, a Greek mathematician who lived several centuries after Euclid, writes in his commentary on the Elements: "Euclid, who put together the Elements, collecting many of Eudoxus's theorems, perfecting many of Theaetetus's, and also bringing to irrefragable demonstration the things which were only somewhat loosely proved by his predecessors."

A version by a pupil of Euclid called Proclo was later translated into Arabic after being obtained by the Arabs from Byzantium, and from those secondary translations into Latin. The first printed edition appeared in 1482 (based on Giovanni Campano's 1260 edition), and since then it has been translated into many languages and published in approximately one thousand different editions. In 1570, John Dee provided a widely respected "Mathematical Preface," along with copious notes and supplementary material, to the first English edition by Henry Billingsley. Copies of the Greek text also exist in the Vatican Library and the Bodleian Library in Oxford. However, the manuscripts available are of very variable quality and invariably incomplete. By careful analysis of the translations and originals, hypotheses have been drawn about the contents of the original text (copies of which are no longer available).
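Euclid's proof of the infinitude of primes (Book IX, Proposition 20, mentioned above) can be demonstrated numerically: the product of any finite list of primes, plus one, has a prime factor outside that list. The helper names below are illustrative:

```python
def smallest_prime_factor(n):
    """Smallest prime dividing n (for n >= 2), by trial division."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n  # n itself is prime

def new_prime_from(primes):
    """Euclid's construction: produce a prime not in the given list."""
    product_plus_one = 1
    for p in primes:
        product_plus_one *= p
    product_plus_one += 1
    # Any prime factor of (p1*...*pk + 1) leaves remainder 1 when
    # divided by each pi, so it cannot be in the original list.
    return smallest_prime_factor(product_plus_one)

p = new_prime_from([2, 3, 5, 7, 11, 13])  # 30031 = 59 * 509, so p == 59
```

Note that the construction need not yield a prime number directly (30031 is composite), only a number whose prime factors are all new.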
Ancient texts which refer to the Elements itself, and to other mathematical theories that were current at the time it was written, are also important in this process. Such analyses were conducted by J. L. Heiberg and Sir Thomas Little Heath in their editions of the Elements. Also of importance are the scholia, or annotations to the text. These additions, which often distinguished themselves from the main text (depending on the manuscript), gradually accumulated over time as opinions varied upon what was worthy of explanation or elucidation.

Outline of the Elements

The Elements is still considered a masterpiece in the application of logic to mathematics, and, historically, its influence in many areas of science cannot be overstated. Scientists Nicolaus Copernicus, Johannes Kepler, Galileo Galilei, and especially Sir Isaac Newton all applied knowledge of the Elements to their work. Mathematicians (Bertrand Russell, Alfred North Whitehead) and philosophers such as Baruch Spinoza have also attempted to use Euclid's method of axiomatized deductive structures to create foundations for their own respective disciplines. Even today, introductory mathematics textbooks often have the word elements in their titles.

The success of the Elements is due primarily to its logical presentation of most of the mathematical knowledge available to Euclid. Much of the material is not original to him, although many of the proofs are his. However, Euclid's systematic development of his subject, from a small set of axioms to deep results, and the consistency of his approach throughout the Elements, encouraged its use as a textbook for about two thousand years. The Elements still influences modern geometry books. Further, its logical axiomatic approach and rigorous proofs remain the cornerstone of mathematics. Although Elements is primarily a geometric work, it also includes results that today would be classified as number theory.
Euclid probably chose to describe results in number theory in terms of geometry because he could not develop a constructible approach to arithmetic. A construction used in any of Euclid's proofs required a proof that it is actually possible. This avoids the problems the Pythagoreans encountered with irrationals, since their fallacious proofs usually required a statement such as "Find the greatest common measure of ..."^[1]

First principles

Euclid's Book 1 begins with 23 definitions (such as point, line, and surface), followed by five postulates and five "common notions" (both of which are today called axioms). These are the foundation of all that follows.

Postulates:
1. A straight line segment can be drawn by joining any two points.
2. A straight line segment can be extended indefinitely in a straight line.
3. Given a straight line segment, a circle can be drawn using the segment as radius and one endpoint as center.
4. All right angles are congruent.
5. If two lines are drawn which intersect a third in such a way that the sum of the inner angles on one side is less than two right angles, then the two lines, if extended far enough, must inevitably intersect each other on that side.

Common notions:
1. Things which equal the same thing are equal to one another (transitive property of equality).
2. If equals are added to equals, then the sums are equal.
3. If equals are subtracted from equals, then the remainders are equal.
4. Things which coincide with one another are equal to one another (reflexive property of equality).
5. The whole is greater than the part.

These basic principles reflect the interest of Euclid, along with his contemporary Greek and Hellenistic mathematicians, in constructive geometry. The first three postulates basically describe the constructions that one can carry out with a compass and an unmarked straightedge. A marked ruler, used in neusis construction, is forbidden in Euclidean construction, probably because Euclid could not prove that verging lines meet.
Parallel Postulate

The last of Euclid's five postulates warrants special mention. The so-called parallel postulate always seemed less obvious than the others. Euclid himself used it only sparingly throughout the rest of the Elements. Many geometers suspected that it might be provable from the other postulates, but all attempts to do this failed. By the mid-nineteenth century, it was shown that no such proof exists, because one can construct non-Euclidean geometries where the parallel postulate is false, while the other postulates remain true. For this reason, mathematicians say that the parallel postulate is independent of the other postulates.

Two alternatives to the parallel postulate are possible in non-Euclidean geometries: either an infinite number of parallel lines can be drawn through a point not on a straight line in a hyperbolic geometry (also called Lobachevskian geometry), or none can in an elliptic geometry (also called Riemannian geometry). That other geometries could be logically consistent was one of the most important discoveries in mathematics, with vast implications for science and philosophy. Indeed, Albert Einstein's theory of general relativity shows that the "real" space in which we live can be non-Euclidean (for example, around black holes and neutron stars).

Contents of the thirteen books

Books 1 through 4 deal with plane geometry:
• Book 1 contains the basic properties of geometry: the Pythagorean theorem, equality of angles and areas, parallelism, the sum of the angles in a triangle, and the three cases in which triangles are "equal" (have the same area).
• Book 2 is commonly called the "book of geometrical algebra," because the material it contains may easily be interpreted in terms of algebra.
• Book 3 deals with circles and their properties: inscribed angles, tangents, the power of a point.
• Book 4 is concerned with inscribing and circumscribing triangles and regular polygons.
Books 5 through 10 introduce ratios and proportions:
• Book 5 is a treatise on proportions of magnitudes.
• Book 6 applies proportions to geometry: Thales' theorem, similar figures.
• Book 7 deals strictly with elementary number theory: divisibility, prime numbers, greatest common divisor, least common multiple.
• Book 8 deals with proportions in number theory and geometric sequences.
• Book 9 applies the results of the preceding two books: the infinitude of prime numbers, the sum of a geometric series, perfect numbers.
• Book 10 attempts to classify incommensurable (in modern language, irrational) magnitudes by using the method of exhaustion, a precursor to integration.

Books 11 through 13 deal with spatial geometry:
• Book 11 generalizes the results of Books 1–6 to space: perpendicularity, parallelism, volumes of parallelepipeds.
• Book 12 calculates areas and volumes by using the method of exhaustion: cones, pyramids, cylinders, and the sphere.
• Book 13 generalizes Book 4 to space: the golden section, the five regular (or Platonic) solids inscribed in a sphere.

Despite its universal acceptance and success, the Elements has been the subject of substantial criticism, much of it justified. Euclid's parallel postulate, treated above, has been a primary target of critics. Another criticism is that the definitions are not sufficient to fully describe the terms being defined. In the first construction of Book 1, Euclid used a premise that was neither postulated nor proved: that two circles with centers at the distance of their radius will intersect in two points (see illustration above). Later, in the fourth construction, he used the movement of triangles to prove that if two sides and their angles are equal, then they are congruent; however, he did not postulate or even define movement. In the nineteenth century, the Elements came under more criticism when the postulates were found to be both incomplete and superabundant.
At the same time, non-Euclidean geometries attracted the attention of contemporary mathematicians. Leading mathematicians, including Richard Dedekind and David Hilbert, attempted to add axioms to the Elements, such as an axiom of continuity and an axiom of congruence, to make Euclidean geometry more complete. Mathematician and historian W. W. Rouse Ball put the criticisms in perspective, remarking that "the fact that for two thousand years [the Elements] was the usual text-book on the subject raises a strong presumption that it is not unsuitable for that purpose."^[2]

1. ↑ Daniel Shanks (2002). Solved and Unsolved Problems in Number Theory. American Mathematical Society.
2. ↑ W. W. Rouse Ball (1960). A Short Account of the History of Mathematics, 4th ed. (original publication: London: Macmillan & Co., 1908). Mineola, N.Y.: Dover Publications, 55. ISBN 0486206300.

See also
• Artmann, Benno (1999). Euclid: The Creation of Mathematics. New York: Springer. ISBN 0387984232.
• Ball, W. W. Rouse (1908). A Short Account of the History of Mathematics, 4th ed. New York: Dover Publications, 1960, pp. 50–62. ISBN 0486206300.
• Bulmer-Thomas, Ivor (1971). "Euclid." Dictionary of Scientific Biography.
• Heath, Thomas L. (1956). The Thirteen Books of Euclid's Elements, 3 vols. New York: Dover Publications. ISBN 0486600882 (vol. 1), ISBN 0486600890 (vol. 2), ISBN 0486600904 (vol. 3).
• Heath, Thomas L. (1981). A History of Greek Mathematics, 2 vols. New York: Dover Publications. ISBN 0486240738, ISBN 0486240746.
• Kline, Morris (1980). Mathematics: The Loss of Certainty. Oxford: Oxford University Press. ISBN 019502754X.

External links
All links retrieved October 8, 2013.
• Complete and fragmentary manuscripts of versions of Euclid's Elements.

New World Encyclopedia writers and editors rewrote and completed the Wikipedia article in accordance with New World Encyclopedia standards.
This article abides by terms of the Creative Commons CC-by-sa 3.0 License (CC-by-sa), which may be used and disseminated with proper attribution. Credit is due under the terms of this license to both the New World Encyclopedia contributors and the selfless volunteer contributors of the Wikimedia Foundation. Note: Some restrictions may apply to use of individual images, which are separately licensed.
April 3rd 2007, 03:30 PM #1

Hey, I have a couple of questions from my book that I either didn't understand or couldn't get the right answer for.. it would be amazing if you could help!! Thanks so much.

(t^2 - 9)/(t^2 + 7t + 12) = (t^2 - 9)/((t + 3)(t + 4))

(x - 1)/4 - (3x - 2)/2

and two fractions with denominators 2m^2 - m - 1 and m^2 + 2m - 3 (the numerators were lost in the forum's formatting).

thanks again!
Last edited by ccynd; April 3rd 2007 at 03:41 PM.

Is this question 1?
Last edited by Jhevon; April 3rd 2007 at 04:01 PM.

I think this is question 2

hah yeah I know I had it all perfect with exponents & everything! I was like "Gah, this will look good". but then it changed it.. and I was like.. oh god. whatever haha but thanks so much

ahah yeah, I see what you're doing.. Is this question 3?

yeah it is.

yeah, the formatting here can get messed up. it doesn't register spaces, so when you press space over and over to create a gap or something, it doesn't actually do anything. there are tricks to getting the formatting to work seamlessly if you don't use LaTeX. However, i discovered this nice program just yesterday, and now i use it whenever something will be a pain to type out perfectly

yeah what is?

please click "quote" when you're replying, so i know which of my messages you are responding to. or tell me, "yeah, you are right about question so and so"
Last edited by ThePerfectHacker; April 7th 2007 at 07:57 PM.

haha alright.. Uh that is question 3.. & I did the work but I came out with a different answer than my textbook, so I FOILed the bottom one, t^2 + 7t + 12, & I got t^2 - 9 but.. yeah I don't even know what's up with my textbook. Here's question 4

Hold on, let me get this straight. for question 3 you put the question and your answer? You should tell me when you are doing that. did you do that for any others?
for t squared, type t^2 — we use the "^" symbol to show that the number immediately following the symbol is a power of the number immediately before the symbol
Last edited by ThePerfectHacker; April 4th 2007 at 04:31 PM.

No.. I just glanced at it before I wrote it down. I didn't even notice it before I typed it in. thanks so much for helping me, haha, I'm kind of a mess :S Here's number 5

ok, can you just clarify number three now. did i get all the other questions correct? once i have all the solutions up, look over them; if there is anything you don't get, ask away. you have a test tomorrow, you need to know how to do these yourself

I get how you did it all now.. & I can immediately see what I did or didn't do. For number 4, I believe my teacher told us to multiply the entire equation by (in this case) 10. Would that result in the same answer in the end? & with number 3: t^2 + 7t + 12 (that looks a little better, doesn't it)

Quote: "I get how you did it all now.. & with number 3: t^2 + 7t + 12 (that looks a little better, doesn't it)"

yes that does look better, but it looks like you had to do a lot of work to get it so nicely. here's the conventional way to do it without LaTeX: (t^2 - 9)/(t^2 + 7t + 12). now that i have the question, i can post the solution
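The simplification the thread circles around, (t^2 - 9)/(t^2 + 7t + 12), reduces by cancelling the common (t + 3) factor. A quick numerical spot-check of that cancellation (this sketch is mine, not from the thread):

```python
# Factor both polynomials: t^2 - 9 = (t - 3)(t + 3) and
# t^2 + 7t + 12 = (t + 3)(t + 4), so the fraction cancels to (t - 3)/(t + 4).
def original(t):
    return (t**2 - 9) / (t**2 + 7*t + 12)

def simplified(t):
    return (t - 3) / (t + 4)

# The two agree at every point away from the excluded values t = -3, -4.
for t in [-10, -1, 0, 2, 5, 17]:
    assert abs(original(t) - simplified(t)) < 1e-12
print("simplification checks out")
```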
[SI-LIST] Re: UltraCAD ESR and Bypass Capacitor Caculator
• From: Ray Anderson <Raymond.Anderson@xxxxxxx>
• To: si-list@xxxxxxxxxxxxx
• Date: Wed, 13 Aug 2003 09:24:26 -0700 (PDT)

Vishram Pandit wrote:
>I have seen some comments on the SI list that the capacitors are ineffective
>say above 500MHz. Is this really true?? I have seen improvement in EMI at
>600MHz-800MHz if I tune the
>* value of the capacitors
>* ESR / ESL of the capacitors
>* location of the capacitors
>Any comments/advice will be appreciated.

I think you will find that discrete decoupling capacitors placed on a PCB are largely ineffective at 500 MHz (actually above about 100 MHz or so) for SI purposes. However, as you have discovered, they can be useful for dealing with EMI issues up to 500 MHz (and sometimes beyond).

The problem with using decaps for SI purposes above about 100 MHz is that there is usually enough parasitic inductance in the chip package (and the package mounting) that any attempt at bypassing on the PCB never shows much if any effect at the silicon. Depending on the exact design of the package and the mounting arrangement (pad design, vias, board thickness, etc.) the useful frequency for the SI decaps may even be considerably less than 100 MHz. To provide effective bypassing at the higher frequencies you need to consider decaps in the package or on the silicon.

For EMI purposes, you can select decaps that will resonate with their mounting inductance to provide low impedances at specific problematic frequencies. You can select physical locations on the board that are "hot-spots" to place your decaps for maximum effect. The distributed capacitance provided by the power planes is also very effective in dealing with EMI issues at high frequencies, but as mentioned above, being present at the PCB it doesn't do much for SI PDS problems at the silicon.

There are lots of other things to consider, but these are some of the main ones.
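The "resonate with their mounting inductance" point above can be sketched numerically: a capacitor looks capacitive only below its series-resonant frequency f = 1/(2π√(LC)), where L is the total loop inductance (ESL plus mounting). The component values below are illustrative assumptions, not figures from the thread:

```python
import math

def series_resonant_freq_hz(l_henries: float, c_farads: float) -> float:
    """Series-resonant frequency of a decap: f = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(l_henries * c_farads))

# Illustrative (assumed) values: a 10 nF ceramic with about 1 nH of
# ESL-plus-mounting loop inductance.
f = series_resonant_freq_hz(1e-9, 10e-9)
print(f"{f / 1e6:.1f} MHz")  # resonates around ~50 MHz
```

Doubling the mounting inductance (longer traces, taller vias) drops the resonant frequency by √2, which is why the thread stresses placement as much as capacitor value.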
-Ray Anderson
Sun Microsystems
Some remarks on drop rates

I'm going to keep this relatively short, because a full discussion of probability could fill several college semesters. However, there is one misconception that some players have that has been bugging me lately.

Let's say you read that Shattered Sun Supplies have a 10% chance to contain a Badge of Justice, and, excited, you go out and do enough dailies to get 10 Shattered Sun Supplies. You open them all and find not a single Badge, or you find five Badges. Do either of these outcomes mean the 10% drop rate is wrong? No! They do not! All a 10% drop rate means is that for each Supplies, there is a 10% chance that it contains a Badge. Random events have no memory, so no matter how many Badges you get in the first nine Supplies, your chance to get a Badge in the tenth Supplies is still 10%. The traditional analogy is that if you flip a coin nine times and get heads each time, the chance of getting heads on the next flip is still 50%.

Now, it is likely that you will get a Badge in ten Supplies if the drop rate is 10%. If you're interested in how likely, here's the calculation to do. The chance of not getting a Badge in one Supplies is (100% - 10%) = 90%, or 0.9. Raise that to the tenth power, for your ten independent Supplies-opening events, and you get the chance of, ten times out of ten, not getting a Badge: 0.9^10 = 0.349, about 35%. So in fact, out of ten Supplies, you will get a Badge (100% - 35%) = 65% of the time, about two thirds.

TL;DR version: A drop rate is a probability, not a guarantee.

Filed under: Analysis / Opinion

Reader Comments

Tekkub Apr 29th 2008 6:54PM
I love the people in my guild that cry about a "bad batch" of prospecting... I just burn thru whatever I'm crushing and feel happy in the knowledge that, over the long run, I get raw blue gems. People spend too much time looking at the small picture when it comes to droprates.
Nizari Apr 29th 2008 6:55PM
Now can you explain to them that when you sell in bulk you discount the price per item versus when you sell individually?

schwonga Apr 29th 2008 8:38PM
That is only the case b/c of how RL handles advertising and incentives to the public. Really there is no real economic reason to put bulk items cheaper than the individual items besides beating out that buyer nervousness and getting them to buy your bulk. All about incentives, and many times you can corner the market (or take advantage of a strangely empty one) and place prices as you see fit.

Dave Apr 30th 2008 10:38AM
Discount? What's the point of that? I don't know about you, but I personally jack the price of things in bulk rather than individually. People will pay it! I can sell 1 Large Prismatic Shard for 25.99. That's not bad, but someone's going to undercut me and it's going to waste my time and I might not even sell that single shard. However, I can take a stack of 6 Large Prismatic Shards, perfect for someone who wants to buy a ready-to-go stack to enchant their fancy new weapon with +dmg or whatever, and list that stack of 6 for 169.99. I jacked the price of each shard by 2g or so. I guarantee that stack will sell QUICKER than the individual shards! My only assumption at this point is that people in WoW are terrible at math and just don't care how much things cost if they can get it with one click rather than 12. The AH is not Costco. You shouldn't expect to get a discount on anything at all, especially when people aren't given a "price per unit" sort of measurement on their screen without some help.

jdkenada Apr 29th 2008 6:55PM
Right now my Grade 10 teacher is screaming out "I told you that you would use this someday!!!"

Simon Jia Apr 29th 2008 7:24PM
this is basic statistics. But statistically speaking, if you open enough bags, the overall drop rate will be close to 10% with a smaller and smaller error margin.
Of course the sample size will be in the 100,000 range in order to even see it get close to 10%.

JRM Apr 30th 2008 4:03PM
you could, assuming the drop rate is purely random, get an accurate approximation of the drop rate with a sample size of a few hundred (probably less). It is a common misconception that large sample sizes are necessary to obtain accurate results.

Biff Apr 29th 2008 7:24PM
I can attest to this. I've done the fishing daily quest every day since it became available (spent hours getting my Fishing to 375 just for the cool rewards available from the quest). I've gotten countless fishhooks, about 10 water walk potions, and two Monocles. The only reason I do the fishing daily now is to reassure myself that God hates me.

Verit Apr 29th 2008 7:24PM
Anytime people complain about drop rates on bosses in WoW - I seriously encourage them to try out another MMO like Lineage 2 or EQ. In L2 there's a % chance that nothing will drop off a raid boss at all... In EQ you might get to the boss only to find out he's already engaged or camped - and even then he/she might not drop what you want.

xnyhps Apr 29th 2008 8:03PM
Interesting post, but what I wonder is: are droprates 'real' mathematical probabilities? Isn't there for example some sort of fairness-system that makes sure quest items drop before, say, 1,000 kills? I do think this is the case, because otherwise people might get fed up with the game quickly and quit - not to Blizzard's advantage at all. So, did Blizzard ever confirm anything about this, I wonder?

Malreth Apr 29th 2008 9:45PM
#25 and #26: No. To do so, Blizzard would have to keep a record (a state) of the number of Supplies bags opened since your last badge in the character database, since if it were kept client-side, you could hack it and give yourself guaranteed drops. Furthermore, they'd have to also keep a state for any given drop on any given mob on any given quest. Now multiply that by all your quests. And again by all your alts.
And again by, what... 10 million players? That's a lot of state to hold at any given time and it's not worth it. If Blizzard really did want to make a certain "collect N drops" quest faster (or slower) to make it seem "more fair", it's much easier to just tweak the drop-rate by a percent or two.

Malreth Apr 29th 2008 9:46PM
hmm... adding a reply changes the message numbers... oh well. :p

Dave Apr 30th 2008 11:18AM
Blizzard has said that random is random, except in certain cases where it isn't. ie: mob drop tables are generated on spawn, not on kill. Therefore if the last person to have killed a mob was NOT on the quest that you're killing them for, they will not respawn with a chance to drop the item you need. Despite what WoWhead shows you as a flat list of drops with percentages, that's not how it works necessarily. Mobs have multiple loot tables that are independent rolls. For instance, a typical random mob's loot tables: money roll, cloth roll, loot roll, quest item roll. Each of these separate rolls on creation can have different probabilities, but in most cases they are not tied to each other in any way. Some of them have built-in defaults to prevent a zero result for whatever reason, some of them don't (some mobs will come with a piece of something every time... some are frustrating and have a 10% drop rate, etc). They don't even have to be consistent in any way with other mobs, such as the elves that drop rep turnins. You can totally get a loot drop from a dead elf AND a scryer signet/arcane tome from the same mob at the same time. You probably won't see it very often because of the probability involved, but you know they're on separate loot rolls. Boss loot however isn't necessarily the same but follows the same logic of unpredictability.
A boss may end up with a money roll (usually a near-fixed value), two completely separate loot rolls (ie: token drops are independent from the other drops, and further, on Illidan the warglaives are a totally separate roll and drop, etc) and maybe any other rolls that can happen. Long story short, random loot is random. But it's not necessarily as simple as a % chance based on overall kills, even though it can be represented that way.

TonyMcS Apr 29th 2008 8:45PM
I think that you need a cap to the number of attempts, otherwise you need a very large number of attempts for the 10% or whatever to appear. If you do it for infinity then you should be sure of 10%, but for this to occur in 40 or 50 tries seems a bit non-random. I think Blizzard is checking to see how many are dropped over a number of attempts and smoothing it out by providing extra drops if you don't get it in a reasonable number of attempts.

Brad Apr 30th 2008 12:07AM
I would hope Blizzard doesn't add drops to atone for low drop rates, no matter how unusually low it may be. First off, the Law of Large Numbers (http://en.wikipedia.org/wiki/Law_of_large_numbers) guarantees that, over time, random variables (such as the 10% drop rate of the Badge) will maintain a stable percentage at or very, very near the expected probability. Secondly, you know they wouldn't "remove" any Badges if the drop rate suddenly spikes to an unusually high level, so why should they arbitrarily add Badges if the drop rate goes low for a period of time? This is, of course, assuming a valid random number generator. Seeing as how we pretty much trust the /roll function, there's no reason to think that Blizzard adjusts things when we don't see the random number displayed in the chat log.

Taara Apr 30th 2008 4:01AM
Since 2.4 I've been doing those Supply Bag quests. I heard from guildies they have got like 7 badges from 30 bags and so on. I've opened about 80 bags.
Every time I hoped to get that extra badge, and so far all I've got is 2 badges. I stopped counting on extra badges from the bags. They are so rare a drop for me that I mostly don't even do the quests in Shat anymore unless I really have nothing better to do. Doing that huge circle to finish all 4 of them is only clogging my bags with useless lvl 68 greens. I'd rather hit 8 dailys on QQ island, make 100g in about 30 minutes by doing so, and run the heroic daily for a guaranteed 5 badges or more (depends on the boss count). My personal badge drop rate from bags is around 2.5%. Not enjoying that.

Mowgile Apr 30th 2008 4:47AM
@#4: You are nearly correct. If you open 30 Shattered Sun Supplies you have a ~96% chance of getting *at least one* badge. On the one hand, humans are very good at seeing a pattern where there is none (I didn't get any badges this week at all! The drop rate is messed up! Idol of the Avian Heart has dropped 5 times in a row from Moroes! The drop rate is messed up!). This simply isn't accurate. As the article suggests, a drop rate is a probability, not a certainty. The probability of getting a perfect distribution of loot (say, one each of Herod's Shoulder, Raging Zerker's Helm, Scarlet Leggings, and Ravager) in four runs of Armory is actually quite small - about the same as the probability of getting all four drops the same. Yet when the loot distribution isn't perfect, we notice. On the other hand, anecdotal evidence with some drops - the ones I've noticed are the Crusader pattern, various BoP recipes, and any rare 1x quest drop, especially the Creeper Ichor for the Elixir of Suffering quest - suggests that the loot *isn't* normally distributed - i.e. the game isn't rolling a "dice" when a mob dies with an X% chance of it dropping the item. Typically the variance is unusually skewed - you'll either get the Crusader pattern on your first kill, or you'll spend a *long* time grinding for it.
That suggests to me that rare drops are spawned randomly based on real time (like mobs, gathering nodes, etc). It might be that one could predict the drops of, say, the Crusader pattern in the same way that one used to be able to predict the spawns of Kazzak in Blasted Lands. Of course, you'd need an overwhelming quantity of statistical evidence for that to be anything but conjecture.

dacamper Apr 30th 2008 6:35AM
I've started to see patterns where there are none on the Attumen loot table: after killing him each week for 3 months, the Worgen Claw Necklace finally dropped the one night we entered the instance 30 minutes early. Coincidence? Well of course it was, but through those 3 months I had to try hard not to imagine someone or something hated me. Perhaps seeing patterns in random occurrences is the root of superstition? Has anyone developed any WoW superstitions to ensure "that loot item drops", like always take 40 manna biscuits, always enter the instance at the same time, etc?

zappo Apr 30th 2008 9:33AM
"On the one hand, humans are very good at seeing a pattern where there is none" - this is exactly what makes 50% of the comments on Thottbot useless. That being said, I've done the daily quests since around 2 days after 2.4 came out. I have never received a badge.

Brommon Apr 30th 2008 4:02PM
Yes, DaCamper - actually WoW players (and MMO players in general) have come up with a wide array of absurd superstitions. The Daedalus Project actually did a study of these a while back. It's a hilarious read, to see what people will sometimes insist makes their loot drop better:
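The article's arithmetic (and the ~96%-in-30-bags figure from the comments) is easy to verify. This sketch computes the exact closed form and adds a quick Monte Carlo check of the "random events have no memory" point, using the article's 10% drop rate:

```python
import random

def chance_of_at_least_one(p_drop: float, n_opens: int) -> float:
    """P(at least one badge in n independent opens) = 1 - (1 - p)^n."""
    return 1.0 - (1.0 - p_drop) ** n_opens

exact_10 = chance_of_at_least_one(0.10, 10)   # ~0.651, the article's "about two thirds"
exact_30 = chance_of_at_least_one(0.10, 30)   # ~0.958, the commenter's "~96%"

# Monte Carlo sanity check: simulating memoryless independent opens
# should land near the closed-form answer.
random.seed(42)
trials = 100_000
hits = sum(
    any(random.random() < 0.10 for _ in range(10))
    for _ in range(trials)
)
approx_10 = hits / trials
print(round(exact_10, 3), round(exact_30, 3), round(approx_10, 3))
```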
The Emerging Threat to Public-Key Encryption

One of the perennial problems with symmetric-key encryption has been establishing a secure channel over which to pass the secret key. It has always been much easier to compromise the transmission of the keys than to try to find some weakness in the encryption algorithm. Military and government organisations have put in place elaborate methods of passing secret keys: they pass secrets more generally, so using similar channels to pass an encryption key is not a great leap.

However, as everyone has become more connected, and especially with the commercialisation of the Internet, encryption has become a requirement for the vast majority of networked users. The traditional methods of passing secret keys are thus impractical, if only because you might not actually know in advance who you want to communicate with in an encrypted fashion.

It was realised in the 1970s that encryption needed to be more accessible, so a great deal of work was done on algorithms that could ensure that a key passed over relatively insecure channels was not compromised. Leaders in the field were Whitfield Diffie and Martin Hellman. In doing this work, it was realised that there was another way of tackling the problem. In 1976, Diffie and Hellman published their seminal paper entitled "New Directions in Cryptography", and public-key encryption was born.

When the implementation of the early public-key encryption methods was compared to symmetric-key encryption, it was found that public-key encryption was significantly slower, and hence communication using public-key encryption alone was, at the very least, going to require far more processing power than would otherwise have been the case. But wait! The original problem being studied was secure key transmission, so why not use public-key encryption to securely transmit the symmetric key, and then use the faster, more efficient symmetric-key encryption algorithms?
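That hybrid arrangement can be sketched end to end. The sketch below uses deliberately toy stand-ins of my own choosing: tiny-prime RSA for the public-key step and a one-byte repeating-XOR "cipher" for the symmetric step. Real systems use large primes, padding schemes, and ciphers such as AES, all omitted here:

```python
# Hybrid key transport, toy-sized and illustrative only.
p, q = 61, 53
N, e = p * q, 17                      # public key (N = 3233)
d = pow(e, -1, (p - 1) * (q - 1))     # private key (modular inverse of e)

def xor_cipher(data: bytes, key: int) -> bytes:
    """Toy symmetric cipher: XOR every byte with a one-byte key."""
    return bytes(b ^ key for b in data)

# Sender: pick a symmetric session key, wrap it with the public key.
session_key = 200
wrapped = pow(session_key, e, N)      # only the private key can unwrap this
ciphertext = xor_cipher(b"attack at dawn", session_key)

# Receiver: unwrap the symmetric key, then decrypt the bulk data fast.
unwrapped = pow(wrapped, d, N)
plaintext = xor_cipher(ciphertext, unwrapped)
assert plaintext == b"attack at dawn"
print(plaintext.decode())
```

Only the small session key ever goes through the slow public-key operation; all the bulk traffic uses the cheap symmetric step, which is exactly the division of labour the essay describes.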
In essence, that is how most so-called public-key encryption works today.

The majority of public-key encryption algorithms rely upon a mathematical subtlety. One of the earliest was based upon prime numbers. If one has a number N that is derived from multiplying two prime numbers, and N is very large, it is practically impossible to calculate what the two constituent prime numbers were (a process known as factorisation). It's not that you can't calculate the two prime numbers; there are many algorithms for doing so. It is that it takes such a long time to successfully compute the algorithm (sometimes thousands of years) that by the time you finish, the information you can recover is worthless. Even with the huge budgets available to governments, it is possible to reduce these timescales only marginally.

Public-key algorithms are therefore able to publish N as part of the public-key element of the encryption process without fearing that its prime constituents, which form the private element, might be discovered. It's a fact of life that, even with the massive increases in computing power over recent years, traditional algorithms for factorising these large numbers mean public-key encryption is still quite secure.

Of course, "secure" is a relative word, and there are many ways of recovering the private element that was used to derive N. These do not involve some fiendishly clever mathematical method (although a small army of mathematicians is seeking one) but rather simple methods such as accessing the PC that holds the private element! As ever, the weakest link determines the strength of the chain.

Having said that, there is now an emerging threat to public-key encryption which does not rely upon such trivial methods: quantum computing, a field first introduced by Richard Feynman in 1982. To understand why quantum computing poses a threat, one first needs to understand a little about how it works. In the computers that we are familiar with, processing is done using bits.
A bit has two possible values: 0 and 1. A quantum computer uses qubits. A qubit can also have the values 0 and 1, but also any combination of the two simultaneously; in quantum physics this is called superposition. So, if you have 2 qubits you can have 4 possible states, and 3 qubits give 8 possible states, all simultaneously. In a bit-based computer you have the same number of possible states, but only one exists at any one time. It is the fact that these states can exist simultaneously in a quantum computer which is both counter-intuitive and extraordinarily powerful.

These qubits are manipulated using quantum logic gates, in the same way that conventional computation is done by manipulating bits using logic gates. The detailed maths is not important here. Suffice to say that you can operate on multiple simultaneous states, thereby increasing the amount of computation you can undertake over what you could otherwise do in a conventional computer. So, loosely speaking, a 2-qubit quantum computer can work on 4 states at once where a 2-bit conventional computer holds only one.

There are a few problems, though, in translating theory into practice. First, the algorithms developed over many years for conventional computers have been optimised for their architecture, and different algorithms are needed to run a quantum computer. Hence, trying to compare speeds of conventional and quantum computers is like comparing apples with bananas. However, one of the earliest algorithms developed for quantum computing was by Peter Shor. Shor's 1994 algorithm was specifically designed to factorise numbers into their prime number components. Shor's algorithm runs on a quantum computer in polynomial time: polynomial in the number of digits of N (roughly (log N)^3 operations), rather than the super-polynomial time the best classical factoring algorithms require. Even with the largest numbers in use today in public-key encryption, that means it is perfectly feasible to factorise the number in a meaningful timescale.
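The quantum core of Shor's algorithm finds the order r of a random base a modulo N; the surrounding reduction from order to factors is purely classical. The sketch below runs that classical reduction, with a brute-force order finder standing in for the quantum period-finding step (that substitution is mine, and it is exactly the part that is exponentially slow without a quantum computer):

```python
import math

def order(a: int, n: int) -> int:
    """Smallest r > 0 with a^r ≡ 1 (mod n). Brute force stands in here
    for the quantum period-finding step of Shor's algorithm."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_classical_part(n: int, a: int):
    """Given a base a coprime to n, try to split n using the order of a."""
    r = order(a, n)
    if r % 2 == 1:
        return None                 # odd order: retry with another base
    y = pow(a, r // 2, n)
    if y == n - 1:
        return None                 # trivial square root: retry
    return math.gcd(y - 1, n), math.gcd(y + 1, n)

# Factor 15 with base 7: order(7, 15) = 4, 7^2 = 49 ≡ 4 (mod 15),
# and gcd(3, 15) = 3, gcd(5, 15) = 5 recover the prime factors.
print(shor_classical_part(15, 7))  # -> (3, 5)
```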
Since Peter Shor developed his quantum computing algorithm many others have been developed, and it is worth noting that a significant number of these algorithms are aimed at breaking the underlying mathematics that supports more recent public-key encryption. The fact that methods other than the use of prime numbers have been developed is being very quickly followed by their quantum computing algorithm counterpart. Second, there is the much quoted, but less well understood, Heisenberg’s Uncertainty Principle. The bottom line is that this principle tells us that if you observe something then you affect it. Hence, measuring a qubit causes it to adopt one state but that state might not have been the state resulting from the computation but could have been altered by you observing it. So, the moment you try to measure the answer calculated by your superfast quantum computer it loses its ability to give you the correct answer. That would seem to render quantum computing rather pointless. But, there is another quantum effect that can be employed: quantum entanglement. This is where, if two objects have previously interacted, they stay “connected” such that by studying the state of one object you can determine the state of the other. Hence, you can determine the state of the qubit with your answer by studying another object with which it is entangled. Again, this is counter-intuitive, but has been proven, so all one really needs to know is that there is a way of getting your answer out of your quantum computer. Lastly, there is a small matter of implementation. Until recently this was a show stopper. Typically quantum computers are based on light, as photons are relatively easy to work with and measure. However, anyone who has seen an optical bench will know that they are enormous. They require great weight to prevent even the slightest vibration and they are prone to all manner of environmental factors. Hardly the small “chips” we are used to! 
Also, typically these implementations are aimed at running one algorithm: the algorithm is built into the design. Having said that, it wasn't that long ago that conventional computers with far less processing power than the average PC required air-conditioned luxury and a small army of attendants to keep them functioning. It did not take very long for the size to shrink dramatically and for the environmental requirements to be quite relaxed. Not surprising, then, that there is a company in Canada (D-Wave Systems Inc.) who already offer access to quantum-based computing. It's expensive and the size of a room, but we all know that won't be the case for long.

2011 brought about some major developments, which might well make 2012 the year quantum computing comes of age. Most significant of these was by a team at Bristol University, who developed a small chip which housed a quantum computer. It had only 2 qubits, but it was of a size that could be used for the mass market, and, crucially, it was programmable.

We are now entering a new era where programmable, relatively inexpensive, relatively small quantum computers are visible on the horizon, and we know that such computers have the potential to undermine the mathematics upon which current public-key encryption depends. Given all of that, maybe it's time to reconsider the role of public-key encryption and where it might be more sensible to rely wholly on symmetric-key encryption.

Cross-posted from Professor Alan Woodward
never learned how to change a cosine into fractional form

January 12th 2011, 12:59 PM
I was wondering if someone could help me out with how to change a cosine into fractional form. For example, cos 45 = sqrt(2)/2 and cos 150 = -sqrt(3)/2. I know that if I take the decimal form of the cosine I can square it, make a fraction out of it, and take the square root of the numerator over the square root of the denominator, but that gets really messy as well.

January 12th 2011, 01:04 PM
Are you saying... when solving $x = \cos 45^o$, how can you get a fractional answer rather than some nasty decimal?

January 12th 2011, 01:06 PM
yeah, I can do it the way I explained, but with most angles that will still give a nasty decimal

January 12th 2011, 01:10 PM
Only particular angles will give you something nice. In general these are 0, 30, 45, 60, 90 and multiples of these. You can achieve an exact answer for others, but start by working on these ones first. Have you seen the special triangles?

January 12th 2011, 01:14 PM
I haven't seen the special triangles.

January 12th 2011, 01:19 PM
They are my favourite triangles. Commit them to memory and you will be a trig-wiz! Special Triangles

January 12th 2011, 01:20 PM
Essentially you construct triangles with angles 60-30-90 and 45-45-90. Special right triangles - Wikipedia, the free encyclopedia

January 12th 2011, 01:29 PM
awesome, thanks a bunch :)

January 12th 2011, 01:56 PM
You can also google the unit circle.

January 13th 2011, 03:03 AM
A side note: most of those decimal answers you are getting are not exact answers. They are only approximations (although good approximations) to the actual answers. The actual answers are irrational and therefore cannot be written as finite decimals.
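The special-triangle values discussed in the thread are easy to sanity-check numerically; this sketch is mine, not from the thread:

```python
import math

# Exact forms from the 45-45-90 and 30-60-90 special triangles.
special_cosines = {
    30: math.sqrt(3) / 2,
    45: math.sqrt(2) / 2,
    60: 1 / 2,
    150: -math.sqrt(3) / 2,   # reference angle 30, second quadrant
}

for degrees, exact in special_cosines.items():
    assert math.isclose(math.cos(math.radians(degrees)), exact)
print("all special-angle values check out")
```

Note that 150 degrees lands in the second quadrant, so its cosine picks up a minus sign even though its reference angle is the familiar 30 degrees.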
{"url":"http://mathhelpforum.com/trigonometry/168162-never-learned-how-change-cosine-into-frational-form-print.html","timestamp":"2014-04-16T05:55:25Z","content_type":null,"content_length":"7186","record_id":"<urn:uuid:9d33a048-f1fa-41ba-af7c-84ffdf2d3c88>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00598-ip-10-147-4-33.ec2.internal.warc.gz"}
Mathematics Endorsement Course Requirements

Applicants to the UW Bothell Secondary and Middle Level Teacher Certification M.Ed. who plan to earn an endorsement in Mathematics must have completed coursework in the following areas prior to starting the fieldwork portion of the program. Courses must have been completed with a minimum grade of 2.5.

Please note: it is not necessary to have completed an entire course in the content area. One course may cover multiple content areas if content was addressed in depth. The following list contains examples of course content that meet the requirements for each subject area. Applicants may have completed the approved courses or courses with equivalent content.

Calculus – 3 quarter courses or 2 semester courses
Examples of course title and content:
• Calculus 1: First quarter in calculus of functions of a single variable. Emphasizes differential calculus. Emphasizes applications and problem solving using the tools of calculus. (UW course number MATH 124, UWB course number B CUSP 124, Washington State Community College Common Course Number MATH 151)
• Calculus 2: Second quarter in the calculus of functions of a single variable. Emphasizes integral calculus. Emphasizes applications and problem solving using the tools of calculus. (UW course number MATH 125, UWB course number B CUSP 125, Washington State Community College Common Course Number MATH 152)
• Calculus 3: Third quarter in calculus sequence. Introduction to Taylor polynomials and Taylor series, vector geometry in three dimensions, introduction to multivariable differential calculus, double integrals in Cartesian and polar coordinates. (UW course number MATH 126, UWB course number B CUSP 126, Washington State Community College Common Course Number MATH 153)

Logic and Problem Solving – 1 course (100 level or above)
Examples of course title and content:
• Intro to Logic: Elementary symbolic logic; the development, application, and theoretical properties of an artificial symbolic language designed to provide a clear representation of the logical structure of deductive arguments. (UW course number PHIL 120, Washington State Community College Common Course Number PHIL 120)
*If students take both MATH 444 and 445 at UW Seattle, MATH 444 can satisfy the Logic and Problem Solving course requirement.

Geometry – 1 course (200 level or above)
Examples of course title and content:
• Intro to Proofs: Introduces different methods for constructing simple proofs, including forwards/backwards proofs, contradiction, contraposition, and induction. Students will apply these methods to a variety of areas of mathematics, including simple number theory, relations, calculus concepts, and a study of infinity.
• Differential Geometry: Curves in 3-space, continuity and differentiability in 3-space, surfaces, tangent planes, first fundamental form, area, orientation, the Gauss map. (Ex: MATH 442 at UW Seattle)
• Special Topics in Geometry: Content selected from such topics as homotopy theory, topological surfaces, advanced differential geometry, projective geometry, hyperbolic geometry, spherical geometry, and combinatorial geometry. (Ex: MATH 443 at UW Seattle)

Algebra – 2 courses (1 course must be 200 level or above)
Examples of course title and content:
• Basic properties of functions and graphs, with emphasis on linear, quadratic, trigonometric, and exponential functions and their inverses. Emphasis on multi-step problem solving. (UW course number MATH 120, UWB course number B CUSP 123, Washington State Community College Common Course Numbers MATH 141 and MATH 142)
• Intro to Differential Equations: Introduces ordinary differential equations. Includes first- and second-order equations and Laplace transform. (UW course number MATH 307, UWB course number STMATH)
• Linear and/or Matrix Algebra: Systems of linear equations, vector spaces, matrices, subspaces, orthogonality, least squares, eigenvalues, eigenvectors, applications. For students in engineering, mathematics, and the sciences. (UW course number MATH 308/318, UWB course number STMATH 308)

Statistics – 2 courses in Statistics OR 1 course in Calculus-based Statistics/Probability
Examples of course title and content:
• Statistical Methods: Elementary concepts of probability and sampling; binomial and normal distributions. Basic concepts of hypothesis testing, estimation, and confidence intervals; t-tests and chi-square tests. Linear regression theory and the analysis of variance. (UW course number STAT 311, UWB course numbers B BUS 215 and BIS 315, Washington State Community College Common Course Number MATH 146)
• Concepts of probability and statistics. Conditional probability, independence, random variables, distribution functions. Descriptive statistics, transformations, sampling errors, confidence intervals, least squares and maximum likelihood. Exploratory data analysis and interactive computing. (UW course number STAT 390)

Discrete Mathematics – 1 course (200 level or above)
Examples of course title and content:
• Computer Programming: Basic programming-in-the-small abilities and concepts including procedural programming (methods, parameters, return values), basic control structures (sequence, if/else, for loop, while loop), file processing, arrays, and an introduction to defining objects. (UW course number CSE 142)

History/Foundations of Math – 1 course (300 level or above)
Examples of course title and content:
• History of Math: Survey of the development of mathematics from its earliest beginnings through the first half of the twentieth century. (UW course number MATH 420, UWB Special Topics)
{"url":"http://www.uwb.edu/secondarycertmed/mathendorsement","timestamp":"2014-04-18T01:09:33Z","content_type":null,"content_length":"39620","record_id":"<urn:uuid:3b054772-dfe5-4a84-942f-7e128d3cf2e2>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00034-ip-10-147-4-33.ec2.internal.warc.gz"}
mxEval {OpenMx} R Documentation

Function To Evaluate MxModel Values

Description
This function can be used to evaluate an arbitrary R expression that includes named entities from an MxModel object, or labels from an MxMatrix object.

Usage
mxEval(expression, model, compute = FALSE, show = FALSE)

Arguments
expression: An arbitrary R expression.
model: The model in which to evaluate the expression.
compute: If TRUE then compute the value of algebra expressions.
show: If TRUE then print the translated expression.

Details
The argument 'expression' is an arbitrary R expression. Any named entities that are used within the R expression are translated into their current value from the model. Any labels from the matrices within the model are translated into their current value from the model. Finally the expression is evaluated and the result is returned. To enable debugging, the 'show' argument has been provided.

The most common mistake when using this function is to include named entities in the model that are identical to R function names. For example, if a model contains a named entity named 'c', then the following mxEval call will return an error: mxEval(c(A, B, C), model).

If 'compute' is FALSE, then MxAlgebra expressions return their current value as computed by the optimization call (using mxRun). If the 'compute' argument is TRUE, then MxAlgebra expressions will be calculated in R. Any references to an objective function that has not yet been calculated will return a 1 x 1 matrix with a value of NA.

References
The OpenMx User's Guide can be found at http://openmx.psyc.virginia.edu/documentation.

Examples
matrixA <- mxMatrix("Full", nrow = 1, ncol = 1, values = 1, name = "A")
algebraB <- mxAlgebra(A + A, name = "B")
model <- mxModel(matrixA, algebraB)
model <- mxRun(model)
start <- mxEval(-pi * A, model)
## Not run:
mxEval(plot(sin, start, B * pi), model)
# The statement above is equivalent to: plot(sin, -pi, 2 * pi)
## End(Not run)

version 0.2.0-905
{"url":"http://openmx.psyc.virginia.edu/docs/OpenMx/0.2.0-905/_static/Rdoc/mxEval.html","timestamp":"2014-04-17T19:10:13Z","content_type":null,"content_length":"3368","record_id":"<urn:uuid:2b51ad12-b32d-46af-b1cb-3daab9f11215>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00354-ip-10-147-4-33.ec2.internal.warc.gz"}
The slimmest geometric lattices - Disc. and Comp. Geom, 2003
"... Abstract. We prove that the order complex of a geometric lattice has a convex ear decomposition. As a consequence, if ∆(L) is the order complex of a rank (r+1) geometric lattice L, then for all i ≤ r/2 the h-vector of ∆(L) satisfies h_{i-1} ≤ h_i and h_i ≤ h_{r-i}. We also obtain several inequalities f ..."
Cited by 6 (2 self) Add to MetaCart
Abstract. We prove that the order complex of a geometric lattice has a convex ear decomposition. As a consequence, if ∆(L) is the order complex of a rank (r+1) geometric lattice L, then for all i ≤ r/2 the h-vector of ∆(L) satisfies h_{i-1} ≤ h_i and h_i ≤ h_{r-i}. We also obtain several inequalities for the flag h-vector of ∆(L) by analyzing the weak Bruhat order of the symmetric group. As an application, we obtain a zonotopal cd-analogue of the Dowling-Wilson characterization of geometric lattices which minimize Whitney numbers of the second kind. In addition, we are able to give a combinatorial flag h-vector proof of h_{i-1} ≤ h_i when i ≤ (2/5)(r + 7/2). 1.
"... Abstract. Hyperplane arrangements form the geometric counterpart of combinatorial objects such as matroids. The shape of the sequence of Betti numbers of the complement of a hyperplane arrangement is of particular interest in combinatorics, where they are known, up to a sign, as Whitney numbers of t ..."
Add to MetaCart
Abstract. Hyperplane arrangements form the geometric counterpart of combinatorial objects such as matroids. The shape of the sequence of Betti numbers of the complement of a hyperplane arrangement is of particular interest in combinatorics, where they are known, up to a sign, as Whitney numbers of the first kind, and appear as the coefficients of chromatic, or characteristic, polynomials. We show that certain combinations, some non-linear, of these Betti numbers satisfy Schur positivity. At the same time, we study the higher degree resonance varieties of the arrangement. We draw some consequences, using homological algebra results and vector bundle techniques, of the fact that all resonance varieties are determinantal. 1.
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=4663475","timestamp":"2014-04-24T13:57:59Z","content_type":null,"content_length":"14747","record_id":"<urn:uuid:5653de65-309e-4e39-a7fd-f09cfab455bf>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00164-ip-10-147-4-33.ec2.internal.warc.gz"}
Algorithm for math operations in a Matrix

Joined: Jun 18, 2013
Posts: 1
Hi guys, this is not quite a programming question, rather a logic question. I have a 5x5 matrix like this:

a + b * c
* d + f +
g * h + i
+ j + k *
l + m * n

Can someone help me find an algorithm to find all the math operations possible in this matrix? Like: a+d+f and so on. I tried to make a graph data structure but could not find a solution. Can someone help me with this puzzle?

lowercase baba
Joined: Oct 02, 2003
Posts: 10916
1) Turn off your computer.
2) Get some paper, pencils, and erasers.
3) Play around with how YOU would do it, using only the above materials.
4) Once you think you know how to do it, try to explain it to a 10 year old child in such a way that THEY could do it.
Only when you have completed the above should you consider turning on your computer and writing a single line of Java.

I like... There are only two hard things in computer science: cache invalidation, naming things, and off-by-one errors.

subject: Algorithm for math operations in a Matrix
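Since the original question is ambiguous, here is one possible interpretation, sketched in Python rather than Java (an assumption on my part): treat the 5x5 grid as a graph whose cells alternate between operands (letters) and operators (+, *), and use depth-first search to enumerate every expression formed by a path through horizontally or vertically adjacent cells.

```python
# One hypothetical reading of the puzzle: an expression is a path of
# adjacent cells that alternates operand/operator and both starts and
# ends on an operand, e.g. "a+d+f" or "a+b*c".

GRID = [
    ["a", "+", "b", "*", "c"],
    ["*", "d", "+", "f", "+"],
    ["g", "*", "h", "+", "i"],
    ["+", "j", "+", "k", "*"],
    ["l", "+", "m", "*", "n"],
]

def expressions(max_len=5):
    """Return the set of expression strings built from adjacent-cell paths."""
    rows, cols = len(GRID), len(GRID[0])
    results = set()

    def dfs(r, c, path, visited):
        path = path + GRID[r][c]
        # A well-formed expression ends on an operand and has >= 2 terms.
        if GRID[r][c] not in "+*" and len(path) >= 3:
            results.add(path)
        if len(path) >= max_len:
            return
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and (nr, nc) not in visited:
                # Enforce operand/operator alternation along the path.
                if (GRID[r][c] in "+*") != (GRID[nr][nc] in "+*"):
                    dfs(nr, nc, path, visited | {(nr, nc)})

    for r in range(rows):
        for c in range(cols):
            if GRID[r][c] not in "+*":
                dfs(r, c, "", {(r, c)})
    return results

exprs = expressions()
```

With this grid, the poster's example "a+d+f" is among the results (a at the top-left, moving right to +, down to d, right to + and f), as is the whole first row "a+b*c".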
{"url":"http://www.coderanch.com/t/614021/gc/Algorithm-match-operatons-Matrix","timestamp":"2014-04-19T02:50:26Z","content_type":null,"content_length":"20548","record_id":"<urn:uuid:966f196d-85cb-46a4-b291-82d2138ed667>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00451-ip-10-147-4-33.ec2.internal.warc.gz"}
Posts about Anatoly Vershik on Peter Cameron's Blog

Tag Archives: Anatoly Vershik

I have just come to the end of the longest uninterrupted sequence of conferences I have ever had: five in a row, one of them a two-week conference. Since the start of July I have been at conferences in Egham, … Continue reading

Regenerative distributions on number partitions
Earlier this week, my new colleague Sasha Gnedin gave a talk on "Regenerative Combinatorial Structures". Regeneration is what certain newts do: if they lose their tail, a limb, or even an eye, they grow a … Continue reading

Anatoly Vershik is almost certainly the nearest thing to a universal mathematician that I know. The range of his interests is impossible to summarise: logic, algebra, combinatorics, analysis, probability, dynamical systems, mathematical physics, … I first met him in 2000, … Continue reading
{"url":"http://cameroncounts.wordpress.com/tag/anatoly-vershik/","timestamp":"2014-04-18T02:58:17Z","content_type":null,"content_length":"33475","record_id":"<urn:uuid:a3fe727e-f280-48af-8b8f-eb11557420db>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00534-ip-10-147-4-33.ec2.internal.warc.gz"}
New Website Launched
October 2nd, 2008

Well, after squatting on this domain for about a year, I finally decided that I might as well put a website up here again. The website is going to be much more math-oriented this time around since I'm a nerd like that. In that vein, my first entry here will simply be one of the only math-related things that I wrote on the old site (on November 22, 2005):

After hours of thought and consideration, I have come to the conclusion that the way in which math is taught sucks monkey fur.

Let us take, for example, a Numerical Methods assignment that I currently have sitting in front of me. One particular question (which is worth a whopping 0.0769% of my final mark) on this assignment requires me to find eigenvalues of a 3 × 3 matrix for which the characteristic polynomial does not factor. "But Nathan," I can hear you say, "that's simply a matter of plugging numbers into the cubic root formula! What's your problem, ho?" And though you are quite correct, allow me to print, in its entirety, said formula:

"But Nathan," I can hear you chirp up again, "why don't you just use the QR Algorithm or MATLAB or some other method to find the roots?" Well, it seems that this route of escape was thought of by the prof, so she specifically states that we are to compare our answer with the one obtained from MATLAB – indicating that we are indeed actually expected to find these roots by hand and get the exact answers – to obtain a whopping 0.0769% of our final mark (actually, considerably less than that – this is only one part of a multi-part question).

"But Nathan," I hear you say one final time, "why don't you talk about something interesting? I don't know what the hell the QR Algorithm is, nor could I care less." Shut up, I never talk about math on here, I've been generous until now, so let me rant. I'm getting sick and tired of so many courses managing to teach so little, while expecting so much. Why do we have to prove over and over again that we are capable of plugging numbers into longer and longer formulae, while not actually being required to demonstrate any real insight along the way?

So, tell me professors, what is the point of this? And what is the point of us having to draw nine linearizations to complete question #1 on a differential equations assignment? You don't believe that we know how to do it after the first eight times? Why do you feel the need to ask the same questions over and over again, while giving us nothing really insightful or different on the assignments?

Maybe someday down the line if/when I become a professor I'll understand (and perhaps even prove) that making assignments ridiculously repetitive and far more tedious than necessary is a fundamental law of the universe which keeps us all in harmony and prevents the Earth from being hurled into the sun. But, barring that realization, I make the following vow to my future students, should I become a professor: I will (try to) make assignments for my classes (as) interesting (as possible for a math class) and will not ask you to do questions that involve exceedingly gross algebra (unless you all get on my bad side by skipping lectures) for no good reason.

1. October 6th, 2008 at 01:10 | #1
You could grab Maple, copy-paste the formula and just have it evaluate it at the [a,b,c,d] you want. Assuming that's not its default algorithm anyway! Although who knows if that was possible back in the dark dreary days of three years ago. If something looks repetitive and boring to me nowadays I do in fact just resort to Maple. Although I don't know that most other people are as technologically savvy, so I'm not sure how they

2. December 1st, 2008 at 15:32 | #2
Are you back next semester? Will you be teaching? Seeing as half the professors seem to be going on sabbatical over the next few years?
C. Risi
p.s. I came across your website from Ph.D comics, which had an advertisement for academia.edu, which had a link to this website. Cool site though!

3. December 1st, 2008 at 23:19 | #3
Hey Chris,
I'll be back next semester, but I have no idea what I'll be doing as far as TAing/teaching goes. I hope to be TAing the 3rd year linear algebra course and spending the rest of my time in the learning centre, but we'll see where the powers that be put me.

1. No trackbacks yet.
{"url":"http://www.njohnston.ca/2008/10/new-website-launched/","timestamp":"2014-04-20T08:56:38Z","content_type":null,"content_length":"32559","record_id":"<urn:uuid:76ea5672-fa00-4acb-ac81-2941d8c8c7eb>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00099-ip-10-147-4-33.ec2.internal.warc.gz"}
A Pathway Into Number Theory

Preface to the second edition

1. The fundamental theorem of arithmetic
Division algorithm
Greatest common divisor and Euclidean algorithm
Unique factorisation into primes
Infinity of primes
Mersenne primes
Historical note
Notes and answers

2. Modular addition and Euler's φ function
Congruence classes and the Chinese remainder theorem
The groups (Z[n], +) and their generators
Euler's φ function
Summing Euler's function over divisors
Historical note
Notes and answers

3. Modular multiplication
Fermat's theorem
Wilson's theorem
Linear congruences
Fermat-Euler theorem
Simultaneous linear congruences
Lagrange's theorem for polynomials
Primitive roots
Chevalley's theorem
RSA codes
Historical note
Notes and answers

4. Quadratic residues
Quadratic residues and the Legendre symbol
Gauss' lemma
Law of quadratic reciprocity
Historical note
Notes and answers

5. The equation x^n + y^n = z^n, for n = 2, 3, 4
The equation x^2 + y^2 = z^2
The equation x^4 + y^4 = z^4
The equation x^2 + y^2 + z^2 = t^2
The equation x^3 + y^3 = z^3
Historical note
Notes and answers

6. Sums of squares
Sums of two squares
Sums of four squares
Sums of three squares
Triangular numbers
Historical note
Notes and answers

7. Partitions
Ferrers' graphs
Generating functions
Euler's theorem
Historical note
Notes and answers

8. Quadratic forms
Unimodular transformations
Equivalent quadratic forms
Proper representation
Reduced forms
Automorphs of definite quadratic forms
Historical note
Notes and answers

9. Geometry of numbers
Subgroups of a square lattice
Minkowski's theorem in two dimensions
Subgroups of a cubic lattice
Minkowski's theorem in three dimensions
Legendre's theorem on ax^2 + by^2 + cz^2 = 0
Historical note
Notes and answers

10. Continued fractions
Irrational square roots
Purely periodic continued fractions
Pell's equation
Lagrange's theorem on quadratic irrationals
Automorphs of the indefinite form ax^2 – by^2
Historical note
Notes and answers

11. Approximation of irrationals by rationals
Naive approach
Farey sequences
Hurwitz' theorem
Liouville's theorem
Historical note
Notes and answers
{"url":"http://www.maa.org/publications/maa-reviews/a-pathway-into-number-theory?device=desktop","timestamp":"2014-04-18T06:31:25Z","content_type":null,"content_length":"100466","record_id":"<urn:uuid:95a4d585-ad78-4513-b08c-4d659a0fdfcb>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00329-ip-10-147-4-33.ec2.internal.warc.gz"}
MAT1101 Discrete Mathematics for Computing
Semester 1, 2013 – External – Toowoomba
Units: 1
Faculty or Section: Faculty of Sciences
School or Department: Maths and Computing
Version produced: 21 April 2014
Examiner: Nicolas Jourdan
Moderator: Xiaohui Tao

Other requisites
Current skills at the level of Queensland Senior Secondary School Studies Mathematics B or equivalent are recommended.

Discrete methods underlie the areas of data structures, computational complexity and the analysis of algorithms. Continuing advances in technology – particularly in applications of computing – have enhanced the importance of discrete (or finite) mathematics for understanding not only the foundations of computer science but also the basis on which computational solutions to a wide variety of application problems rest. This course introduces the basic elements of discrete mathematics which provide a foundation for an understanding of algorithms and data structures used in computing. Topics covered include number systems, logic, relations, functions, induction, recursion, Boolean algebra and graph theory.

On successful completion of this course students will be able to:
1. demonstrate an understanding of how numeric and character data are stored in a computer;
2. demonstrate proficiency in converting simple algorithms into functional pseudo-code;
3. demonstrate proficiency with symbolic logic, in mathematical reasoning and the construction of proofs;
4. show familiarity with the basic notions of graphs and relationships.

Description                                                                      Weighting (%)
1. Computer representation of character and numeric data. Binary and
   hexadecimal systems. ASCII code. Integer and floating point
   representations.                                                              25.00
2. Functions and algorithms. Pseudo-code for binary/decimal and other
   conversions. Control structures for iteration and branching. Recursive
   functions. Proof by induction.                                                25.00
3. Truth tables and the laws of logic. Venn diagrams. Ordering and
   equivalence relationships. Digital circuits and Boolean algebra. Logical
   reduction and Karnaugh maps.                                                  25.00
4. Graphs and trees. Eulerian and Hamiltonian graphs. Spanning trees.
   Dijkstra's and Prim's algorithms. Expression trees. Huffman codes.            25.00

Text and materials required to be purchased or accessed
ALL textbooks and materials available to be purchased can be sourced from USQ's Online Bookshop (unless otherwise stated). (https://bookshop.usq.edu.au/bookweb/subject.cgi?year=2013&sem=01&subject1=
Please contact us for alternative purchase options from USQ Bookshop. (https://bookshop.usq.edu.au/contact/)
• Grossman, Peter 2009, Discrete Mathematics for Computing, 3rd edn, Palgrave Macmillan, Basingstoke, New York.
• A scientific calculator.
• All other study materials are available only from the course website which can be accessed through the USQStudyDesk.

Reference materials
Reference materials are materials that, if accessed by students, may improve their knowledge and understanding of the material in the course and enrich their learning experience.
• Epp, S 2011, Discrete Mathematics with Applications, 4th edn, Brooks/Cole, Pacific Grove, CA.
• Gersting, JL 2003, Mathematical Structures for Computer Science, 5th edn, WH Freeman, New York.
• Grimaldi, RP 2003, Discrete and Combinatorial Mathematics: an applied introduction, 5th edn, Addison-Wesley, Boston, Mass.
• Ross, KA & Wright, CRB 2003, Discrete Mathematics, 5th edn, Prentice Hall, Upper Saddle River, NJ.

Student workload requirements
Activity         Hours
Assessments      30.00
Examinations      2.00
Private Study   130.00

Assessment details
Description                   Marks out of   Wtg (%)   Due Date      Notes
ASSIGNMENT 1                  30             20        28 Mar 2013
ASSIGNMENT 2                  30             20        17 May 2013
2HR RESTRICTED EXAMINATION    100            60        End S1        (see note 1)
1. Please refer to the Examination Timetable when it is published to confirm the examination date.

Important assessment information
1. Attendance requirements: There are no attendance requirements for this course. However, it is the students' responsibility to study all material provided to them or required to be accessed by them to maximise their chance of meeting the objectives of the course and to be informed of course-related activities and administration.
2. Requirements for students to complete each assessment item satisfactorily: To complete an assessment item satisfactorily, students must obtain at least 50% of the marks available for that assessment item.
3. Penalties for late submission of required work: If students submit assignments after the due date without (prior) approval of the examiner then a penalty of 5% of the total marks gained by the student for the assignment may apply for each working day late, up to ten working days, at which time a mark of zero may be recorded.
4. Requirements for students to be awarded a passing grade in the course: To be assured of receiving a passing grade a student must achieve at least 50% of the total weighted marks available for the course.
5. Method used to combine assessment results to attain final grade: The final grades for students will be assigned on the basis of the aggregate of the weighted marks obtained for each of the summative assessment items in the course.
6. Examination information: The only materials that candidates may use in the restricted examination for this course are: writing materials (non-electronic and free from material which could give the student an unfair advantage in the examination); calculators which cannot hold textual information (i.e. no graphics or programmable calculators); one A4 sheet, written or typed on one or both sides, with any material the student wishes to have. Students whose first language is not English may take an appropriate unmarked non-electronic translation dictionary (but not a technical dictionary) into the examination. Dictionaries with any handwritten notes will not be permitted. Translation dictionaries will be subject to perusal and may be removed from the candidate's possession until appropriate disciplinary action is completed if found to contain material that could give the candidate an unfair advantage.
7. Examination period when Deferred/Supplementary examinations will be held: Any Deferred or Supplementary examinations for this course will be held during the next examination period.
8. University Student Policies: Students should read the USQ policies Definitions, Assessment and Student Academic Misconduct to avoid actions which might contravene University policies and practices. These policies can be found at http://policy.usq.edu.au.

Assessment notes
1. Students must retain a copy of each assignment submitted for assessment. This should be despatched to USQ within 24 hours of receipt of a request from the Examiner.
2. Students who, for medical, family/personal, or employment-related reasons, are unable to complete an assignment or to sit for the examination at the scheduled time may apply to defer the assessment item. Such a request must be accompanied by appropriate supporting documentation. One of the following temporary grades may be awarded: IDS (Incomplete - Deferred Examination); IDM (Incomplete - Deferred Make-up); IDB (Incomplete - Both Deferred Examination and Deferred Make-up).
3. In the normal course of events students should have access to e-mail and the internet for this course. This access is assumed in the running of the course. Alternative arrangements may be made in special circumstances on request.
4. The referencing system to be used in this course is supported by the Department. Information on this referencing system and advice on how to use it can be found in the course materials.
{"url":"http://www.usq.edu.au/course/specification/2013/MAT1101-S1-2013-EXT-TWMBA.html","timestamp":"2014-04-21T14:50:35Z","content_type":null,"content_length":"26012","record_id":"<urn:uuid:f840a42b-eb1f-4ed5-ba8a-a68f6cded2f1>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00405-ip-10-147-4-33.ec2.internal.warc.gz"}
Password Strength Meters

While password strength meters can help to warn users away from highly vulnerable passwords, they are not always accurate. A password strength meter cannot warn users away from reusing the same passwords. Because meters are usually based on heuristics (e.g., password length, uppercase, lowercase, numbers, special symbols) they can be easily fooled: a weak password may be ranked positively simply because it is long. Other password strength meters may give poor scores to strong passwords (e.g., four random words) simply because they do not include numbers and special symbols. Password strength meters may be helpful for many users, but without knowing the underlying password distribution it is not possible to design a password strength meter that will always be accurate.

One way to estimate the strength of a password is to look at the entropy of the underlying password distribution. The entropy of a distribution X is defined as

    H(X) = - Σ_x Pr[X = x] log2 Pr[X = x].

If a user selects a password from a distribution with 30 bits of entropy then an adversary will need about 2^30 guesses on average to crack the user's password. High entropy is a necessary condition for security (i.e., any password generator with low entropy is not secure). However, high entropy is not a sufficient condition for security. Consider a password generator G_1 that outputs the password mmmm with probability 1/2 and otherwise outputs a strong random password. While G_1 has high entropy (H(G_1) = n), it should be clear that G_1 is a highly insecure password generator. A user who uses G_1 will pick the password mmmm 50% of the time, so an adversary could crack the user's account 50% of the time by simply guessing mmmm! Even though entropy is a highly imperfect measure, it is often used to assess the security of passwords because it can often be estimated from empirical distributions.

Minimum Entropy

Minimum entropy is a better measure of password security. The minimum entropy of a distribution X is defined as

    H_min(X) = - log2 max_x Pr[X = x].

Our example bad password generator G_1 has low minimum entropy (H_min(G_1) = 1). Indeed, a high minimum entropy guarantee (e.g., H_min(G) = n) ensures that with high probability the adversary will need around 2^n guesses to recover the user's password. This means that the password will resist offline password cracking attacks with high probability. However, minimum entropy does not consider the correlation between passwords. Consider two generators, each of which outputs two passwords (e.g., one for site A and one for site B): G_1 picks one very strong (2n-bit) password which is used for both accounts, and G_2 picks two independent strong (n-bit) passwords. Both generators have equivalent minimum entropy. While all three of the passwords x, y, and z should be strong enough to resist password cracking attacks, generator G_1 is vulnerable to phishing attacks. Suppose that an adversary is able to obtain the password for site B (e.g., website B was a malicious phishing site Paypaul.com, or website B stored passwords in the clear like rockyou.com). This adversary will also be able to compromise the user's account at site A.
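Both entropy measures are straightforward to compute from a probability distribution. The sketch below (my own illustration, not from the article) evaluates Shannon entropy and min-entropy for a toy distribution in the spirit of the bad generator G_1, which outputs one fixed password half the time:

```python
import math

def shannon_entropy(probs):
    """H(X) = -sum_x p(x) * log2 p(x), in bits."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def min_entropy(probs):
    """H_min(X) = -log2(max_x p(x)), in bits."""
    return -math.log2(max(probs))

# Toy distribution: one password ("mmmm") with probability 1/2,
# plus 8 other passwords at probability 1/16 each.
probs = [0.5] + [1 / 16] * 8

H = shannon_entropy(probs)    # 0.5*1 + 8*(1/16)*4 = 2.5 bits
H_min = min_entropy(probs)    # -log2(0.5) = 1 bit
```

An attacker who always guesses the most likely password first succeeds with probability 2^(-H_min), so the 1-bit min-entropy exposes the weakness that the larger Shannon entropy hides.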
Pre-GED Mathematics DVD - Review
DVD: Pre-GED Mathematics (2004), Video Aided Instruction, Inc. Running Time: 5 hours 14 min.
Summary: Pre-GED Mathematics consists of 3 DVDs full of step-by-step teaching instructions. It begins with Order of Operations and ends with Coordinate Geometry. The lessons are taught by Harold Shane, Ph.D., a professor at Baruch College in New York.
Features I Like -
*** I like the free study guide. The PDF file is available online; however, you must provide a code that comes with the DVD. The study guide includes the problems that the instructor explains step by step. You can print and/or save it on your computer.
*** After Professor Shane explains how to work a particular type of problem, he shows how the problem may appear on a test and shows a strategy to solve it.
*** In general, I like DVDs because you can pause and repeat as often as you feel necessary.
To Be Desired - I would like to see additional problems for the student to work alone for practice. Each section has three to five problems. So, it's possible to watch the instructor work one or two problems, then pause the video to work the next few math problems by yourself, and continue the video to check your answers. Nevertheless, a few more problems would make this an excellent resource.
Table of Contents
Part 1: Introduction
Part 2: Algebra
Exercise A: Order of Operations
Exercise B: Signed Numbers
Exercise C: Operations with Monomials
Exercise D: Zero Power and Scientific Notation
Exercise E: Operations with Polynomials
Exercise F: Solving First-Degree Equations and Inequalities in One Variable
Exercise G: Factoring
Exercise H: Solving Quadratic Equations
Exercise I: Radicals
Part 3: Geometry
Exercise J: Angle Relationships
Exercise K: Parallel Lines
Exercise L: Angles in a Triangle
Exercise M: Similar Triangles
Exercise N: Pythagorean Theorem
Exercise O: Perimeter and Circumference
Exercise P: Area
Exercise Q: Volume
Part 4: Coordinate Geometry
Exercise R: Plotting Points
Exercise S: Distance, Midpoint, and Slope
In summary, five hours of quality tutoring can cost $125 - $300. So, yes, I believe the cost is worth it.
Really bignums
[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]
Really bignums
| From: Bradley Lucier <lucier@xxxxxxxxxxxxxxx>
| Date: Sun, 7 Aug 2005 22:34:31 -0500
| On Aug 7, 2005, at 12:21 PM, Aubrey Jaffer wrote:
| > There is a third way: report a violation of an implementation
| > restriction when trying to return numbers with more than, say,
| > 16000 bits. Practical calculation on numbers larger than that
| > would need FFT multiplication and other number-theoretic
| > algorithms, which is a lot of hair to support execution of simple
| > programming errors.
| If you're saying that any computations that need more than 16,000
| bits are simple programming errors, then I'd point you to
| computational number theorists and others who use much bigger
| numbers. It would be good if these problems could be done
| practically in Scheme.
FFT-multiplication time is O(n*log(n)) while regular multiplication time is O(n^2). If I understand big-O notation, for some k > 1000, products with k or more (base 65536) digits will take hundreds of times longer to compute using the n^2 algorithm than with the FFT algorithm. So yes; you can do number theory with simple arithmetic algorithms if you are very, very patient. Is that practical?
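A back-of-the-envelope check of that claim, as a sketch in Java (ignoring the constant factors that real bignum libraries care about): the operation-count ratio n^2 / (n log2 n) = n / log2 n already exceeds 100 once n passes roughly 1,000 digits.

```java
public class MulRatio {
    // Operation-count ratio of schoolbook O(n^2) multiplication to
    // FFT-style O(n log n): n^2 / (n log2 n) = n / log2 n.
    // Constant factors, which matter a lot in practice, are ignored.
    static double ratio(int digits) {
        return digits / (Math.log(digits) / Math.log(2.0));
    }

    public static void main(String[] args) {
        for (int k : new int[]{1000, 2000, 16000})
            System.out.printf("k = %5d digits -> about %.0fx more work%n", k, ratio(k));
    }
}
```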
Springtown, PA Algebra 1 Tutor Find a Springtown, PA Algebra 1 Tutor ...I teach/tutor all levels of math from General Math to Calculus. I will do all I can to help students get up to speed on their math courses. I have a lot of extra materials to use if necessary. 9 Subjects: including algebra 1, geometry, algebra 2, SAT math ...You cannot just walk into the test center with inadequate preparation even if you are confident of your mastery over the skills being assessed by Praxis tests. The Praxis tests consist of questions that are presented in a specific format. You need to be aware of this type of test format and you should have practice in answering the questions asked within the time allotted for answering them. 62 Subjects: including algebra 1, reading, English, calculus ...I'll help you translate the facts of your coursework into knowledge. I really want to work with students from middle school to adults returning to school. I look forward to working with you! I am preparing to teach middle school mathematics and science. 15 Subjects: including algebra 1, chemistry, GRE, algebra 2 ...I have a comprehensive science teacher certificate, a Masters of Chemistry Education degree and more than twenty-five years of experience teaching many different science courses at a variety of levels. Currently, I am a high school science teacher where my teaching load over the last several years has included mostly chemistry. I set high standards for myself and my students. 6 Subjects: including algebra 1, chemistry, algebra 2, prealgebra ...If you need help with mathematics, physics, or engineering, I'd be glad to help out. With dedication, every student succeeds, so don't despair! Learning new disciplines keeps me very aware of the struggles all students face.
14 Subjects: including algebra 1, calculus, physics, geometry
Chemistry: The Ideal Gas Law Video | MindBites About this Lesson • Type: Video Tutorial • Length: 8:39 • Media: Video/mp4 • Use: Watch Online & Download • Access Period: Unrestricted • Download: MP4 (iPod compatible) • Size: 93 MB • Posted: 07/14/2009 This lesson is part of the following series: Chemistry: Full Course (303 lessons, $198.00) Chemistry: Final Exam Test Prep and Review (49 lessons, $64.35) Chemistry: Gases (14 lessons, $20.79) Chemistry: Ideal Gas Law, Kinetic-Molecular Theory (5 lessons, $7.92) This lesson was selected from a broader, comprehensive course, Chemistry, taught by Professor Harman, Professor Yee, and Professor Sammakia. This course and others are available from Thinkwell, Inc. The full course can be found at http://www.thinkwell.com/student/product/chemistry. The full course covers atoms, molecules and ions, stoichiometry, reactions in aqueous solutions, gases, thermochemistry, Modern Atomic Theory, electron configurations, periodicity, chemical bonding, molecular geometry, bonding theory, oxidation-reduction reactions, condensed phases, solution properties, kinetics, acids and bases, organic reactions, thermodynamics, nuclear chemistry, metals, nonmetals, biochemistry, organic chemistry, and more. Dean Harman is a professor of chemistry at the University of Virginia, where he has been honored with several teaching awards. He heads the Harman Research Group, which specializes in the novel organic transformations made possible by electron-rich metal centers such as Os(II), Re(I), and W(0). He holds a Ph.D. from Stanford University. Gordon Yee is an associate professor of chemistry at Virginia Tech in Blacksburg, VA. He received his Ph.D. from Stanford University and completed postdoctoral work at DuPont. A widely published author, Professor Yee studies molecule-based magnetism.
Tarek Sammakia is a Professor of Chemistry at the University of Colorado at Boulder where he teaches organic chemistry to undergraduate and graduate students. He received his Ph.D. from Yale University and carried out postdoctoral research at Harvard University. He has received several national awards for his work in synthetic and mechanistic organic chemistry. About this Author 2174 lessons Founded in 1997, Thinkwell has succeeded in creating "next-generation" textbooks that help students learn and teachers teach. Capitalizing on the power of new technology, Thinkwell products prepare students more effectively for their coursework than any printed textbook can. Thinkwell has assembled a group of talented industry professionals who have shaped the company into the leading provider of technology-based textbooks. For more information about Thinkwell, please visit www.thinkwell.com or visit Thinkwell's Video Lesson Store at http://thinkwell.mindbites.com/. Thinkwell lessons feature a star-studded cast of outstanding university professors: Edward Burger (Pre-Algebra through... Recent Reviews This lesson has not been reviewed. Please purchase the lesson to review. 2854 - The Ideal Gas Law And now the moment you've all been waiting for. We've introduced Boyle's Law, Charles's Law, Avogadro's Law, and now we're going to combine all of those observations into a single expression. And that single expression called the Ideal Gas Law might very well be the only thing you even remember about chemistry when you're 50. Pressure times volume is equal to the number of moles times the constant, which we'll talk about in a bit, times the temperature in Kelvins: PV = nRT. And this expression embodies all of those other laws. You can see that there's still an inverse relationship between P and V. That's Boyle's Law. There's still a linear relationship between V and T. That's Charles's Law.
And there's a linear relationship between V and n. That's Avogadro's Law. Now empirically you determine the value of R. So you take a sample of gas. You measure its pressure. You measure its volume. You know how many moles. You measure the temperature. And from that, R is uniquely determined. And R turns out to be approximately 0.0821 with these units: liter-atmospheres per mole-Kelvin, which seem really strange. But if you think about the algebra, it's necessary to have units on R that allow you to cancel things out as you need to in order for this expression to be true. Now one of the questions you might have is what's an ideal gas? And the simple answer is that an ideal gas is just something that obeys the Ideal Gas Law. And it turns out that lots of gases, at modest pressures, around room temperature obey the Ideal Gas Law just fine. And that's why it's worth talking about it. But at a deeper level, what we're going to see is that the assumptions about the Ideal Gas Law involve things like the fact that the particles of gas don't really occupy very much volume compared to the size of the container. A way to simplify that is to say that the particles are assumed to be point particles. You may have heard the expression how many angels can dance on the head of a pin? Well in this case it's how many ideal gas particles can fit into a container? And the answer is infinitely many, because each individual particle doesn't occupy any space. And then the second thing that we assume--and we're going to talk about this later on some more--is that there aren't any interactions between the gas particles. They're neither attractive nor repulsive. So they basically don't even know that they're there, that each individual particle doesn't know that the collection of other particles is there. And those are the kinds of assumptions that we need in order to observe that the Ideal Gas Law holds.
In particular, when we have a real gas like a sample of hydrogen, it's going to obey the Ideal Gas Law when the assumptions about the Ideal Gas Law make some sense. So how can we use the Ideal Gas Law? And one of the things that we can do is we can calculate the volume of 1 mole of gas. And let's say let's calculate it at 0.00 degrees C and 1.00 atmosphere. Now the reason why I chose 0.00 degrees C and 1.00 atmosphere is because chemists have decided to give these two conditions a special name. And we call that special set of conditions STP for standard temperature and pressure. And standard temperature and pressure is just 1.00 atmospheres and 0.00 degrees C. Now remember that 0 degrees C in Kelvins is something entirely different: 273.15 K. But it's the temperature at which ice melts that we use when we define STP. So we're going to calculate the volume of 1 mole. One thing to remember is, when we calculate the volume of 1 mole here, it doesn't matter what kind of gas we're talking about. So any kind of gas, assuming it's obeying the Ideal Gas Law, is going to occupy a volume given by this expression. So how do we use the Ideal Gas Law? We can rearrange it to solve for volume: V = nRT/P, and let's plug in the values that we had on the previous page. Remember temperature has to be in Kelvins, and 273.15 K corresponds to 0 degrees Celsius. Plug this all in: V = (1.00 mol)(0.0821 L·atm/mol·K)(273.15 K)/(1.00 atm), which is about 22.4 liters. Now to give you some idea of how big 22.4 liters is it's about the volume of a toilet tank, the thing in the back of the toilet. That's about 22.4 liters. And that's the volume of a mole of gas at standard temperature and pressure. Let's look at another problem now. And the next problem we're going to look at is going to involve this. And what this is, it's basically a party canister full of helium. And helium is an interesting gas, because it has an interesting property. Don't try this at home, but... How many moles of helium gas are contained in a 10-liter cylinder at 25 degrees C and 120 atmospheres?
The effect doesn't last very long because I'm inhaling oxygen and displacing the helium. And we'll talk about why the helium should be displaced quickly later on. But let's go ahead now and solve this problem. Okay. Again we're going to use the Ideal Gas Law. We're interested in the number of moles contained in this cylinder. To remind you, the cylinder is this big. And we're going to have to make some approximations, because we don't know exactly what the size of the cylinder is. So let's say that it's 10 liters. That's probably pretty close. And it turns out that there's a gauge on the front, so we can read off the pressure on the inside of the cylinder. And converting the pressure units here to our metric units, it's about 120 atmospheres. So we'll use those pieces of information: V = 10 L (that's an approximation) and P = 120 atm. And we have to make an assumption about the temperature too. We'll assume it's at room temperature. In other words, the inside of the cylinder is the same as the temperature in the room. We'll rearrange the Ideal Gas Law to solve for the number of moles, which looks like this: n = PV/RT. Then we'll just plug in these values and solve for n. We have that R never changes, and then we need to convert the temperature to Kelvins. And you've seen this a lot now, that room temperature is 298 Kelvins. We work all of this out and we find out that there are 49 moles. We've made some approximations here because we were just guessing what the volume was. But this tells us that there are roughly 49 moles if our assumptions are valid. So the Ideal Gas Law allows us to bring together all of the stuff that we've learned up until now. And it allows us to quantitatively solve problems. Where before we were always looking at changes, now we can actually look at how different values of these numbers together with this new constant give us answers about the real world.
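The two worked examples in the transcript are easy to check numerically. Here is a small sketch (class and method names are ours; R = 0.08206 L·atm/(mol·K)):

```java
public class IdealGas {
    static final double R = 0.08206; // gas constant in L·atm/(mol·K)

    // V = nRT/P
    static double volume(double moles, double tKelvin, double pAtm) {
        return moles * R * tKelvin / pAtm;
    }

    // n = PV/RT
    static double moles(double pAtm, double vLiters, double tKelvin) {
        return pAtm * vLiters / (R * tKelvin);
    }

    public static void main(String[] args) {
        // Molar volume at STP (0.00 degrees C = 273.15 K, 1.00 atm)
        System.out.printf("V(1 mol, STP) = %.1f L%n", volume(1.0, 273.15, 1.0));
        // Helium cylinder from the transcript: 10 L at 25 degrees C and 120 atm
        System.out.printf("n(He) = %.0f mol%n", moles(120.0, 10.0, 298.15));
    }
}
```

Both results match the transcript: about 22.4 liters per mole at STP, and roughly 49 moles of helium in the cylinder.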
The Ideal Gas Law and Kinetic-Molecular Theory of Gases: The Ideal Gas Law
F.: Faster scalar multiplication on Koblitz curves combining point halving with the Frobenius endomorphism Results 1 - 10 of 13 , 2006 "... The paper is an examination of double-base decompositions of integers n, namely expansions loosely of the form Σ_{i,j} ±A^i B^j for some base pair {A, B}. This was examined in previous works [3, 4], in the case when A, B lie in N. ..." Cited by 15 (5 self) Add to MetaCart The paper is an examination of double-base decompositions of integers n, namely expansions loosely of the form Σ_{i,j} ±A^i B^j for some base pair {A, B}. This was examined in previous works [3, 4], in the case when A, B lie in N. - In: proceedings of Asiacrypt 2006. Lecture Notes in Comput. Sci , 2006 "... Abstract. It has been recently acknowledged [4, 6, 9] that the use of double-base representations of scalars n, that is, an expression of the form n = Σ_{e,s,t} (−1)^e A^s B^t, can speed up significantly scalar multiplication on those elliptic curves where multiplication by one base (say B) is fast. This ..." Cited by 9 (7 self) Add to MetaCart Abstract. It has been recently acknowledged [4, 6, 9] that the use of double-base representations of scalars n, that is, an expression of the form n = Σ_{e,s,t} (−1)^e A^s B^t, can speed up significantly scalar multiplication on those elliptic curves where multiplication by one base (say B) is fast. This is the case in particular of Koblitz curves and supersingular curves, where scalar multiplication can now be achieved in o(log n) curve additions. Previous literature dealt basically with supersingular curves (in characteristic 3, although the methods can be easily extended to arbitrary characteristic), where A, B ∈ N. Only [4] attempted to provide a similar method for Koblitz curves, where at least one base must be non-real, although their method does not seem practical for cryptographic sizes (it is only asymptotic), since the constants involved are too large.
We provide here a unifying theory by proposing an alternate recoding algorithm which works in all cases with optimal constants. Furthermore, it - AGCT 2003), Sémin. Congr , 2005 "... Abstract. — The two main systems used for public key cryptography are RSA and protocols based on the discrete logarithm problem in some cyclic group. We focus on the latter problem and state cryptographic protocols and mathematical background material. Résumé (Éléments mathématiques de la cryptograp ..." Cited by 6 (4 self) Add to MetaCart Abstract. — The two main systems used for public key cryptography are RSA and protocols based on the discrete logarithm problem in some cyclic group. We focus on the latter problem and state cryptographic protocols and mathematical background material. Résumé (Mathematical elements of public-key cryptography). — The two main public-key systems are RSA and the computation of discrete logarithms in a cyclic group. We focus on discrete logarithms and present the mathematical facts one needs to know in order to learn mathematical cryptography. 1. Data Security and Arithmetic. Cryptography is, in the true sense of the word, a classic discipline: we find it in Mesopotamia and Caesar used it. Typically, the historical examples involve secret services and the military. Information is exchanged amongst a limited community in which each member is to be trusted. Like Caesar's cipher, these systems were entirely symmetric. Thus, the communicating parties needed to have a common key which is used to encrypt and decrypt. The key exchange posed a problem (and gives a marvellous plot for spy novels) but the number of people involved was rather bounded. This has changed dramatically because of electronic communication in public networks.
2000 Mathematics Subject Classification. — 11T71. Key words and phrases. — Elliptic curve cryptography, mathematics of public key cryptography, hyperelliptic curves.
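The double-base decompositions discussed in these abstracts can be produced by a simple greedy strategy: repeatedly subtract the largest term 2^a·3^b not exceeding the remainder. The sketch below (class and method names are ours; it uses unsigned digits only, whereas the papers above also allow signs (−1)^e) illustrates the idea:

```java
import java.util.ArrayList;
import java.util.List;

public class DoubleBase {
    static long pow(long base, int e) {
        long r = 1;
        for (int i = 0; i < e; i++) r *= base;
        return r;
    }

    // Greedy double-base expansion: write n > 0 as a sum of terms 2^a * 3^b
    // by repeatedly subtracting the largest such term that still fits.
    static List<int[]> greedy(long n) {
        List<int[]> terms = new ArrayList<>();
        while (n > 0) {
            long best = 1;
            int ba = 0, bb = 0;
            for (long p2 = 1, a = 0; p2 <= n; p2 *= 2, a++)
                for (long t = p2, b = 0; t <= n; t *= 3, b++)
                    if (t > best) { best = t; ba = (int) a; bb = (int) b; }
            terms.add(new int[]{ba, bb});
            n -= best;
        }
        return terms;
    }

    // Sanity check: evaluate the expansion back to the integer it represents.
    static long value(List<int[]> terms) {
        long v = 0;
        for (int[] t : terms) v += pow(2, t[0]) * pow(3, t[1]);
        return v;
    }

    public static void main(String[] args) {
        long n = 841232;
        StringBuilder sb = new StringBuilder();
        for (int[] t : greedy(n))
            sb.append("2^").append(t[0]).append("*3^").append(t[1]).append(" + ");
        System.out.println(n + " = " + sb.substring(0, sb.length() - 3));
    }
}
```

The point of such expansions is that they tend to be much shorter than binary ones, which translates into fewer curve additions during scalar multiplication.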
The authors would like to thank the organizers of the conference for generous support, an interesting program and last but not least for a very inspiring and pleasant atmosphere. The second author acknowledges financial support by STORK - in Proceedings of SAC 2006 (Workshop on Selected Areas in Cryptography), Lecture Notes in Computer Science "... Abstract. This paper studies τ-adic expansions of scalars, which are important in the design of scalar multiplication algorithms for Koblitz Curves, but are also less understood than their binary counterparts. At Crypto ’97 Solinas introduced the width-w τ-adic non-adjacent form for use with Koblitz ..." Cited by 6 (6 self) Add to MetaCart Abstract. This paper studies τ-adic expansions of scalars, which are important in the design of scalar multiplication algorithms for Koblitz Curves, but are also less understood than their binary counterparts. At Crypto ’97 Solinas introduced the width-w τ-adic non-adjacent form for use with Koblitz curves. It is an expansion of integers z = Σ_{i=0}^{ℓ} z_i τ^i, where τ is a quadratic integer depending on the curve, such that z_i ≠ 0 implies z_{w+i−1} = ... = z_{i+1} = 0, like the sliding-window binary recodings of integers. We show that the digit sets described by Solinas, formed by elements of minimal norm in their residue classes, are uniquely determined. However, unlike for binary representations, syntactic constraints do not necessarily imply minimality of weight. Digit sets that permit recoding of all inputs are characterized, thus extending the line of research begun by Muir and Stinson at SAC 2003 to the Koblitz Curve setting. Two new digit sets are introduced with useful properties; one set makes precomputations easier, the second set is suitable for low-memory applications, generalising an approach started by Avanzi, Ciet, and Sica at PKC 2004 and continued by several authors since, including Okeya, Takagi and Vuillaume. Results by Solinas, and by Blake, Murty, and Xu are generalized.
Termination, optimality, and cryptographic applications are considered. The most important application is the ability to perform arbitrary windowed scalar multiplication on Koblitz curves without storing any precomputations first, thus reducing memory storage to just one point and the scalar itself. 1 , 2006 "... In this paper we prove the optimality and other properties of the τ-adic nonadjacent form: this expansion has been introduced in order to efficiently compute scalar multiplications on Koblitz curves. We also refine and extend results about double expansions of scalars introduced by Avanzi, Ciet an ..." Cited by 4 (4 self) Add to MetaCart In this paper we prove the optimality and other properties of the τ-adic nonadjacent form: this expansion has been introduced in order to efficiently compute scalar multiplications on Koblitz curves. We also refine and extend results about double expansions of scalars introduced by Avanzi, Ciet and Sica in order to further improve scalar multiplications. Our double expansions are optimal and their properties are carefully analysed. In particular we provide first and second order terms for the expected weight, determine the variance and prove a central limit theorem. Transducers for all the involved expansions are provided, as well as automata accepting all expansions of minimal weight. - In Proceedings of CHES 2005, Lecture Notes in Computer Science 3659 , 2005 "... Abstract. We present a new method for computing the scalar multiplication on Koblitz curves. Our method is as fast as the fastest known technique but requires much less memory. We propose two settings for our method. In the first setting, well-suited for hardware implementations, memory requirements ..." Cited by 4 (0 self) Add to MetaCart Abstract. We present a new method for computing the scalar multiplication on Koblitz curves. Our method is as fast as the fastest known technique but requires much less memory. We propose two settings for our method. 
In the first setting, well-suited for hardware implementations, memory requirements are reduced by 85%. In the second setting, well-suited for software implementations, our technique reduces the memory consumption by 70%. Thus, with much smaller memory usage, the proposed method yields the same efficiency as the fastest scalar multiplication schemes on Koblitz curves. "... Abstract. We discuss irreducible polynomials that can be used to speed up square root extraction in fields of characteristic two. The obvious applications are to point halving methods for elliptic curves and divisor halving methods for hyperelliptic curves. Irreducible polynomials P(X) such that the ..." Cited by 1 (0 self) Add to MetaCart Abstract. We discuss irreducible polynomials that can be used to speed up square root extraction in fields of characteristic two. The obvious applications are to point halving methods for elliptic curves and divisor halving methods for hyperelliptic curves. Irreducible polynomials P(X) such that the square root ζ of a zero x of P(X) is a sparse polynomial are considered and those for which ζ has minimal degree are characterized. We reveal a surprising connection between the minimality of this degree and the extremality of the number of trace-one elements in the polynomial base associated to P(X). We also show how to improve the speed of solving quadratic equations and that the increase in the time required to perform modular reduction is marginal and does not affect performance adversely. Experimental results confirm that the new polynomials maintain their promises; these results generalize work by Fong et al. to polynomials other than trinomials. Point halving gets a speed-up of 20% and the performance of scalar multiplication based on point halving is improved by at least 11%. - International Journal of Computer Science Issues , 2011 "... 
As a generalization of double-base chains, the multibase number system is very suitable for efficient computation of scalar multiplication of a point of an elliptic curve because of its shorter representation length and Hamming weight. In this paper, combined with the given formulas for computing the 7-fold of ..." Cited by 1 (0 self) Add to MetaCart As a generalization of double-base chains, the multibase number system is very suitable for efficient computation of scalar multiplication of a point of an elliptic curve because of its shorter representation length and Hamming weight. In this paper, combined with the given formulas for computing the 7-fold of an elliptic curve point P, an efficient scalar multiplication algorithm for elliptic curves is proposed using 2, 3 and 7 as bases of the multibase number system. The algorithm costs less compared with Shamir's trick and the interleaving-with-NAFs method. Key words: scalar multiplication, elliptic curve, double base number system, multibase number system, double chain, septupling. 1 "... The last years have witnessed tremendous developments in the field of curve based cryptography. First proposed in 1985 by Koblitz and Miller, elliptic curve cryptography (ECC) slowly proved itself to be a valid alternative to RSA. Later, also hyperelliptic curves have been added to the arsenal of cr ..." Add to MetaCart The last years have witnessed tremendous developments in the field of curve based cryptography. First proposed in 1985 by Koblitz and Miller, elliptic curve cryptography (ECC) slowly proved itself to be a valid alternative to RSA. Later, also hyperelliptic curves have been added to the arsenal of cryptographic primitives. Today curve based cryptography is a well established technology. In this survey we shall first very broadly review its development, and we shall then move to a survey of recent results dealing specifically with Koblitz curves. - In: SAC 2005. Volume 3897 of LNCS. (2005) 332–344 , 2005 "... 
In order to efficiently perform scalar multiplications on elliptic Koblitz curves, expansions of the scalar to a complex base associated with the Frobenius endomorphism are commonly used. One such expansion is the τ-adic NAF, introduced by Solinas. Some properties of this expansion, such as the average ..." Add to MetaCart In order to efficiently perform scalar multiplications on elliptic Koblitz curves, expansions of the scalar to a complex base associated with the Frobenius endomorphism are commonly used. One such expansion is the τ-adic NAF, introduced by Solinas. Some properties of this expansion, such as the average weight, are well known, but in the literature there is no proof of its optimality, i.e. that it always has minimal weight. In this paper we provide the first proof of this fact.
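The binary analogue of these expansions is easy to state: the ordinary non-adjacent form (NAF) recodes an integer over digits {−1, 0, 1} so that no two consecutive digits are nonzero. The τ-adic versions discussed above replace the base 2 by the complex quadratic integer τ. A minimal Java sketch of the binary case (class and method names are ours):

```java
import java.util.ArrayList;
import java.util.List;

public class Naf {
    // Non-adjacent form of n >= 0: signed digits in {-1, 0, 1},
    // least significant first; no two adjacent digits are nonzero.
    static List<Integer> naf(int n) {
        List<Integer> digits = new ArrayList<>();
        while (n != 0) {
            int d = 0;
            if ((n & 1) == 1) {
                d = 2 - (n & 3); // n mod 4 == 1 -> digit 1; n mod 4 == 3 -> digit -1
                n -= d;          // now n is divisible by 2
            }
            digits.add(d);
            n >>= 1;
        }
        return digits;
    }

    // Evaluate a signed-digit expansion back to an integer (sanity check).
    static int value(List<Integer> digits) {
        int v = 0;
        for (int i = digits.size() - 1; i >= 0; i--) v = 2 * v + digits.get(i);
        return v;
    }

    public static void main(String[] args) {
        for (int n : new int[]{7, 13, 255}) {
            List<Integer> d = naf(n);
            System.out.println(n + " -> " + d + " (value " + value(d) + ")");
        }
    }
}
```

In scalar multiplication, each nonzero digit costs a curve addition or subtraction, so the lower weight of the NAF (and of its τ-adic relatives) directly reduces the operation count.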
help with sample program
Join Date May 2013 Rep Power
hello, I am a new member here. I just want you to help me with this assignment, please.
Statistical Analysis of student grades.
1) Create an array of double of size 20
2) Ask the user to enter 20 grades
Enter 1st student's grade: 85.88 <enter>
Enter 2nd student's grade: 68.15 <enter>
Enter 3rd student's grade: 75.45 <enter>
Enter 20th student's grade: 95.20 <enter>
You are free to choose the grades.
3) Print the average and standard deviation of the grades, for example: average: 68.47 stdev: 19.45
4) Assign letter grades to the students.
top 10% will get A (two people in this case)
next 20% will get B (3rd, 4th, 5th, 6th people will get B)
next 40% will get C (7th, 8th, 9th, 10th, 11th, 12th, 13th, 14th will get C)
next 20% will get D (15th, 16th, 17th, 18th people will get D)
next 10% will get F (19th, 20th people will get F)
5) Print the letter grades:
1st student's letter grade is A
2nd student's letter grade is C
3rd student's letter grade is B
hope you got the way to calculate the stdev because I didn't find any way to put it in the first loop of inputting the grades.
Join Date Sep 2008 Voorschoten, the Netherlands Blog Entries Rep Power
You can calculate the 'ingredients' for the average and standard deviation while reading the individual grades Xi: calculate the sum(Xi) and the sum of the squares sum(Xi^2); when the loop finishes you know 'n', the number of grades entered by the user; so the average = sum(Xi)/n and the standard deviation = sqrt((sum(Xi^2) - sum(Xi)*sum(Xi)/n)/n).
kind regards, cenosillicaphobia: the fear for an empty beer glass
Join Date May 2013 Rep Power
yes, but now how can I include the sigma and then the sqrt of the multiplication? I'll try more and I'll write the whole code also here
Join Date Sep 2008 Voorschoten, the Netherlands Blog Entries Rep Power
You don't need a 'sigma'; you just have to add the Xi's and the Xi squared in your loop:
Java Code:
double Xi = 0;
double Xi2 = 0;
int n = 0;
while (<more to read>) {
    double x = <read a number from the user>;
    Xi += x;
    Xi2 += x * x;
    n++;
}
kind regards, cenosillicaphobia: the fear for an empty beer glass
Join Date May 2013 Rep Power
well thank you, I figured a way to calculate it, but how do I order my array decreasingly with the students and then assign letters to each?
public static void main(String[] args) throws Exception {
    Scanner in = new Scanner(System.in);
    System.out.println("input the number of students");
    int arr = in.nextInt(); // in case the number of students is less than 20
    double[] grades = new double[arr];
    double[] result = new double[arr + 1];
    int i, j = 0;
    double sum = 0.0;
    double v = 0.0;
    for (i = 0; i < grades.length; i++)
        System.out.println("input the grade of Student " + (i + 1));
    double average = sum / arr;
    System.out.println("the Average is " + average);
    for (i = 0; i < grades.length; i++)
        for (j = 0; j <= i; j++) {
            double temp = result[j]; result[j] = result[j + 1]; result[j + 1] = temp;
    v = v + (grades[i] - average) * (grades[i] - average);
    double std = Math.sqrt((1 / ((double) (arr - 1))) * v);
    System.out.println("the stdev is " + std);
Join Date Aug 2011 Rep Power
Well, you need to import class java.util.Arrays and use the sort method: Arrays.sort(<your array>). This method will sort the array in its natural order; in the case of numeric values the natural order is ascending (i.e. the array {2, 0, 1} after the sort will be {0, 1, 2}).
Note that in your case the original order of the grades in the array does matter :) P.S. Of course, if you're interested, you can write your own sort by implementing one of these algorithms: Last edited by kalata; 05-30-2013 at 03:55 PM.

Join Date May 2013 Rep Power

I don't understand anything; would you please help me finish this? public static void main(String[] args) throws Exception { Scanner in=new Scanner (System.in); System.out.println("input the number of students"); int n=in.nextInt(); //if in case Students number less than 20 double[] grades=new double [n]; int[] S=new int[n+1]; int i,j ; int s=1; double sum=0.0; double v=0.0; for( i=0;i<grades.length;i++) System.out.println("input the grade of Student "+(i+1)); double average=sum/n; System.out.println("the Average is "+average); for ( i=1; i<grades.length-1;i++){ for( j=0;j<grades.length-1;j++){ double temp=grades[i]; int tem=s=i; System.out.println("the student "+S[s]+"took the "+grades[i]); v =v+(grades[i] - average) * (grades[i] - average); double std =Math.sqrt((1/((double)(n)))*v); System.out.println("the stdev is "+std);

Join Date Jun 2013 Rep Power

I want to suggest https://www.udemy.com/java-basics-fo...nCode=TECHDIS0 to you. They are providing a free Java online course. Just join their course and solve all your problems. The course is spread over 128 lectures in 21 sections, with practice problems intended to enhance your practical knowledge of the concepts learnt throughout each section.

Join Date May 2013 Rep Power

I don't understand anything; would you please help me finish this?
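Building on the Arrays.sort suggestion: since sorting the grades in place loses track of which student each grade belongs to, one common workaround (my own sketch, not code from this thread) is to sort an array of indices instead, comparing the grades they point at:

```java
import java.util.Arrays;
import java.util.Comparator;

public class GradeRanking {
    // Returns the student indices ordered from highest grade to lowest;
    // the grades array itself is left untouched.
    static Integer[] rankDescending(double[] grades) {
        Integer[] order = new Integer[grades.length];
        for (int i = 0; i < order.length; i++) {
            order[i] = i;
        }
        // Sort the indices by the grades they refer to, highest first.
        Arrays.sort(order, Comparator.comparingDouble((Integer i) -> grades[i]).reversed());
        return order;
    }
}
```

order[0] is then the index of the top student, order[1] the runner-up, and so on, so letters can be assigned by position while each student's original number is still known.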
Java Code: public static void main(String[] args) throws Exception { Scanner in=new Scanner (System.in); System.out.println("input the number of students"); int n=in.nextInt(); //if in case Students number less than 20 double[] grades=new double [n]; int[] S=new int[n+1]; int i,j ; int s=1; double sum=0.0; double v=0.0; for( i=0;i<grades.length;i++) System.out.println("input the grade of Student "+(i+1)); double average=sum/n; System.out.println("the Average is "+average); for ( i=1; i<grades.length-1;i++){ for( j=0;j<grades.length-1;j++){ double temp=grades[i]; int tem=s=i; System.out.println("the student "+S[s]+"took the "+grades[i]); v =v+(grades[i] - average) * (grades[i] - average); double std =Math.sqrt((1/((double)(n)))*v); System.out.println("the stdev is "+std); Last edited by JosAH; 06-04-2013 at 01:08 PM. Reason: added [code] ... [/code] tags Join Date May 2013 Rep Power find it ;) Java Code: import java.util.*; * @author smart public class Count { * @param args the command line arguments public static void main(String[] args) throws Exception { Scanner in=new Scanner (System.in); System.out.println("input the number of students"); int n=in.nextInt(); //if in case Students number less than 20 double[] grades=new double [n]; int[] S=new int[n]; int s,i,j; double average ,v=0.0,sum=0.0; for( i=0;i<grades.length ;i++) System.out.println("input the grade of Student "+(i+1)); for (j=0; j<n-1;j++){ for(i=0 ,s=0;i<n-1;i++,s++){ double temp=grades[i]; int tem=S[s]; v =v+(grades[i] - average) * (grades[i] - average); double NA=n/10; //2 i<2 double NB=n/5; //4 2<=i<6 double NC=n*2/5; //8 6<=i<14 double NE=n/5; //4 14<=i<18 double NF=n/10; //2 18<=i<20 double std =Math.sqrt((1/((double)(n)))*v); System.out.println("the Average is "+average); System.out.println("the stdev is "+std); for(s=0,i=0;(s<n &&i <n);s++,i++){ System.out.println("the "+(i+1) + " student which is the "+(S[s]+1)+"th took " +grades[i]+"recieve 'A'"); else if (NA<=i && i<NB+NA){ 
System.out.println("the "+(i+1) + " student which is the "+(S[s]+1)+"th took " +grades[i]+"recieve 'B'"); else if (NB+NA<=i && i<NC+NB+NA){ System.out.println("the "+(i+1) + " student which is the "+(S[s]+1)+"th took " +grades[i]+"recieve 'C'"); else if (NC+NB+NA<=i && i<NC+NB+NA+NE){ System.out.println("the "+(i+1) + " student which is the "+(S[s]+1)+"th took " +grades[i]+"recieve 'E'"); else { System.out.println("the "+(i+1) + " student which is the "+(S[s]+1)+"th took " +grades[i]+"recieve 'F'");
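For reference, the percentile cutoffs being coded by hand in the thread can be isolated in one small method. This is my own sketch (names invented), written for class sizes divisible by 10, mirroring the 10/20/40/20/10 split in the original assignment:

```java
public class LetterGrades {
    // rank is the 0-based position in a highest-to-lowest ordering of n students.
    // Assumes n is a multiple of 10 so the percentage bands are exact.
    static char letterFor(int rank, int n) {
        int a = n / 10;          // top 10% get A
        int b = a + n / 5;       // next 20% get B
        int c = b + 2 * n / 5;   // next 40% get C
        int d = n - n / 10;      // next 20% get D; the rest get F
        if (rank < a) return 'A';
        if (rank < b) return 'B';
        if (rank < c) return 'C';
        if (rank < d) return 'D';
        return 'F';
    }
}
```

For n = 20 this gives ranks 0–1 an A, 2–5 a B, 6–13 a C, 14–17 a D and 18–19 an F, exactly the bands stated in the assignment.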
A Unified Iterative Treatment for Solutions of Problems of Split Feasibility and Equilibrium in Hilbert Spaces Abstract and Applied Analysis Volume 2013 (2013), Article ID 613928, 13 pages Research Article A Unified Iterative Treatment for Solutions of Problems of Split Feasibility and Equilibrium in Hilbert Spaces ^1Department of Accounting Information, Southern Taiwan University of Science and Technology, 1 Nantai Street, Yongkang District, Tainan 71005, Taiwan ^2Department of Industrial Management, National Pingtung University of Science and Technology, 1 Shuefu Road, Neipu, Pingtung 91201, Taiwan Received 21 May 2013; Revised 30 August 2013; Accepted 1 September 2013 Academic Editor: Simeon Reich Copyright © 2013 Young-Ye Huang and Chung-Chien Hong. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. We at first raise the so called split feasibility fixed point problem which covers the problems of split feasibility, convex feasibility, and equilibrium as special cases and then give two types of algorithms for finding solutions of this problem and establish the corresponding strong convergence theorems for the sequences generated by our algorithms. As a consequence, we apply them to study the split feasibility problem, the zero point problem of maximal monotone operators, and the equilibrium problem and to show that the unique minimum norm solutions of these problems can be obtained through our algorithms. Since the variational inequalities, convex differentiable optimization, and Nash equilibria in noncooperative games can be formulated as equilibrium problems, each type of our algorithms can be considered as a generalized methodology for solving the aforementioned problems. 1. 
Introduction Throughout this paper, denotes a real Hilbert space with inner product and the norm , the identity mapping on , the set of all natural numbers, and the set of all real numbers. For a self-mapping on , denotes the set of all fixed points of . If is a set-valued mapping; then denotes its domain, that is, . Let and be nonempty closed convex subsets of two Hilbert spaces and , respectively, and let be a bounded linear mapping. The split feasibility problem (SFP) is the problem of finding a point with the property The SFP was first introduced by Censor and Elfving [1] for modeling inverse problems which arise from phase retrievals and medical image reconstruction. Recently, it has been found that the SFP can also be used to model the intensity-modulated radiation therapy. For details, the readers are referred to Xu [2] and the references therein. Assume that the SFP has a solution. There are many iterative methods designed to approximate its solutions. The most popular algorithm is the algorithm introduced by Byrne [3, 4]. Start with any and generate a sequence through the iteration where , the adjoint of , and and are the metric projections onto and , respectively. The sequence generated by the algorithm (2) converges weakly to a solution of SFP(1), cf. [2–4]. Under the assumption that SFP(1) has a solution, it is known that a point solves SFP(1) if and only if is a fixed point of the operator cf. [2], where Xu also proposed the regularized method, and proved that the sequence converges strongly to the minimum norm solution of SFP(1) provided that the parameters and verify some suitable conditions. This regularized method was further investigated by Yao et al. [5] and Yao et al. [6]. Putting and , SFP(1) is of the forms: As a metric projection is firmly nonexpansive, it is reasonable to require the and in (5) to be firmly nonexpansive only and call it a split feasibility fixed point problem (SFFP). 
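The inline formulas in this section did not survive extraction. For orientation, here are the standard statements of the SFP and of Byrne's iteration, in the form given in the cited works of Byrne [3, 4] and Xu [2]; these are reproduced from the literature, not recovered from this page's typesetting:

```latex
% (1) Split feasibility problem:
\text{find } x^{*} \in C \ \text{ such that } \ Ax^{*} \in Q.

% (2) Byrne's CQ iteration, with stepsize 0 < \gamma < 2/\|A\|^{2}:
x_{n+1} = P_{C}\!\left( x_{n} - \gamma\, A^{*} (I - P_{Q}) A x_{n} \right),
\qquad n = 0, 1, 2, \ldots
```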
Many interesting problems in the literature can be described as SFFP. (i) A set-valued map is called monotone if for all and for any , . is said to be maximal monotone if its graph is not properly contained in the graph of any other monotone operator. A point is called a zero point of a maximal monotone operator if . The set of all zero points of is denoted by , which is equal to for any , where denotes the resolvent of a monotone operator ; that is, for any . It is known that for any , is firmly nonexpansive. Now, let and be two maximal monotone operators on and , respectively. Replacing and with and , respectively, in (1), the SFP becomes a SFFP: Putting , the previous SFFP is reduced to the common zero point problem of two maximal monotone operators: (ii) Let . An equilibrium problem is the problem of finding such that whose solution set is denoted by . For solving an equilibrium problem, we usually assume that the function satisfies the following conditions:(A1),for all;(A2)is monotone, that is,,for all;(A3)for all,;(A4) for allis convex and lower semicontinuous.Blum and Oettli [7] and Aoyama et al. [8] showed that there exists a unique such that Moreover, For , define by for all . Combettes and Hirstoaga [9] showed that there hold(a)is single-valued;(b)is firmly nonexpansive;(c);(d)is closed and convex. Now, let and be two functions satisfying conditions (A1)–(A4). Replacing and with and , respectively, in (1), the SFP becomes a SFFP: Putting , , and , the previous SFFP is reduced to the common equilibrium problem: (iii) When , and the bounded linear operator is the identity mapping, SFP(1) is reduced to the convex feasibility problem (CFP): which in turn can be described as a SFFP: where and . Although SFFP(5) contains CFP as a special case, it cannot cover the multiple-set split convex feasibility problem described in [10]. In this paper, we are concerned with iterative methods for SFFP(5). We derive some weak convergence theorems for SFFP(5) in Section 3. 
In Section 4, we describe SFFP(5) in a more general form. Letandbe two families of firmly nonexpansive self-mappings onand ,respectively, so thatandand obtain the following main result. Let,,,andbe sequences inwithandfor all. Let be a sequence in and let be a bounded sequence in . Suppose that the solution set ofSFFP(16)is nonempty. For any , start with an arbitrary and define a sequence byThen, the sequence converges strongly to provided that the following conditions are satisfied: (i),,;(ii),;(iii)there are two nonnegative real-valued Based on the concept of using contractions to approximate nonexpansive mappings, another type of algorithms for SFFP(5) is also introduced, and the corresponding strong convergence theorem for the sequence generated by such algorithm is given too. In Section 5, since resolvents of monotone operators are firmly nonexpansive, we replace the sequences and of firmly nonexpansive mappings in the previous condition (iii) by two sequences of resolvents of maximal monotone operators. Then, the proposed algorithm becomes a scheme to approach the minimum norm solution of zero point problem of maximal monotone operators and the equilibrium problem. It is worth noting that as Blum and Oettli [7] showed that the variational inequalities, convex differentiable optimization, and Nash equilibria in noncooperative games can be formulated as equilibrium problems, the proposed algorithm can be considered as a generalized methodology for solving all aforementioned problems. 2. Preliminaries In order to facilitate our investigation in this paper, we recall some basic facts. A mapping is said to be(i)nonexpansive if (ii)firmly nonexpansive if (iii)-averaged by if for some and some nonexpansive mapping ;(iv)-inverse strongly monotone (-ism), with , if If is nonexpansive, then the fixed point set of is closed and convex, cf. [11]. If is averaged, then is nonexpansive with . It is well known that is firmly nonexpansive if and only if it is -averaged, cf. 
[11], and so is . Here we would like to mention that the term “averaged mapping” originated in [12, 13]. In [12], Baillon el at. showed that if is a -averaged mapping by on a nonempty closed convex subset of a uniformly convex Banach space, then if and only if for all in . Moreover, in [13], Bruck and Reich showed that if the above satisfies and is odd, then converges strongly to a fixed point of . Let be a nonempty closed convex subset of . The metric projection from onto is the mapping that assigns each the unique point in with the property It is known that is firmly nonexpansive and characterized by the inequality: for any , We need some lemmas that will be quoted in the sequel. Lemma 1 (see [4]). If is a self-mapping on , then the following assertions hold.(a) is nonexpansive if and only if the complement is -averaged.(b)If is -ism and , then is ()-ism.(c) is -averaged if and only if is ()-ism.(d)If is -averaged and is -averaged, then the composite is -averaged.(e)If and are averaged on so that , then For , the resolvent of a maximal monotone operator on has the following properties. Lemma 2. Let be a maximal monotone operator on . Then,(a) is single-valued and firmly nonexpansive;(b) and ;(c) (the resolvent identity) for , the following identity holds: We referred readers to [14–24] for maximal monotone operators and their related algorithms. Lemma 3. Let . Then,(a);(b)for any , (c)for with , Lemma 4 (see [11] (demiclosedness principle)). Let be a nonexpansive self-mapping on and suppose that is a sequence in such that converges weakly to some and . Then, . Lemma 5 (see [23]). Let be a sequence of nonnegative real numbers satisfying where , , and verify the following conditions:(i), ;(ii);(iii) and .Then, . Lemma 6 (see [25]). Let be a sequence in that does not decrease at infinity in the sense that there exists a subsequence such that For any , define . Then, as and , for all . 3. 
Weak Convergence Theorems In this section, we at first transform SFFP(5) into a fixed point problem for the operator , where is any positive real number. And then use fixed point algorithms to solve SFFP(5). From now on until the end of this paper, unless we state specifically, and , (resp., and , ) are firmly nonexpansive self-mappings on (resp., ), and denotes a bounded linear operator from into with adjoint . Lemma 7. Let be a firmly nonexpansive self-mapping on with . Then, for any , one has Proof. Since and is firmly nonexpansive, we have and hence, Although , for all is similar to the characterization inequality (24) for the metric projection , as needs not to be in , it is in general different from . For example, let for all , which is obviously firmly nonexpansive with . Thus, for all , while for all . Proposition 8. For any , the operator is -averaged and is -averaged. Proof. Using the fact that is firmly nonexpansive, it is routine to show that is -ism, and so is -ism by Lemma 1(b). Thus, Lemma 1(c) shows that is -averaged. As is firmly nonexpansive, it is -averaged. Therefore, is -averaged by Lemma 1(d). Proposition 9. Let be the solution set of SFFP(5); that is, . For any , let . Suppose that . Then, . Proof. If solves SFFP(5), we have and . Now, note that implies and so which means that . Consequently, . This shows that . For the inverse inclusion, let be any member of and pick . It is readily seen from that Since , an application of Lemma 7 yields which together with (36) implies that This comes to conclude that , and hence once we note that . Finally, since by assumption, we have . Thus, follows from Lemma 1(e). Proposition 10 (see [2, 4]). If is an averaged self-mapping on with , then for any , the sequence converges weakly to a fixed point of . An immediate consequence of Propositions 8, 9, and 10 is the following convergence result. Theorem 11. Assume that SFFP(5) has a solution. 
Then, for any and starting with any point , the sequence generated by converges weakly to a solution of SFFP(5). Proposition 12 (see [2]). Let be a -averaged self-mapping on with and assume that is a sequence in such that Then, for any , the sequence generated by the Mann's algorithm converges weakly to a fixed point of . Applying Propositions 8, 9, and 12, we have the following result. Theorem 13. Assume that SFFP(5) has a solution and . Let be a sequence in with Then, for any , the sequence generated by the Mann's algorithm converges weakly to a solution of SFFP(5). 4. Strong Convergence Theorems In this section, we devise two algorithms, one for SFFP(16) and the other for SFFP(5). We deal with SFFP(16) firstly. To begin with, we need a lemma. Lemma 14. For any and all , one has Proof. Since is firmly nonexpansive, so is . Hence, for all , we have Consequently, Theorem 15. Let , , , and be sequences in with and for all . Let be a sequence in and let be a bounded sequence in . Suppose that the solution set of SFFP(16) is nonempty. For any , start with any and define a sequence by Then the sequence converges strongly to provided that the following conditions are satisfied: (i),,;(ii),;(iii)there are two nonnegative real-valued functions and on Proof. Put . For simplicity, put . In view of Proposition 9, is nonexpansive, so from which follows that is a bounded sequence. Taking into account Lemma 3, we get Meanwhile, we have by Lemma 14 that Therefore, we deduce that We now carry on with the proof by considering the following two cases: (I) is eventually decreasing and (II) is not eventually decreasing. Case I. Suppose that is eventually decreasing; that is, there is such that is decreasing. In this case, exists in . 
From inequality (52) we have which together with the boundedness of and conditions (i) and (ii) implies Then, an application of condition (iii) follows that for all , Since is bounded, it has a subsequence such that converges weakly to some and where the last inequality follows from (24) since by Lemma 4. Choose so that . From (52) we have Accordingly, because of (56) and condition (i), we can apply Lemma 5 to inequality (57) with , , , and to conclude that Case II. Suppose that is not eventually decreasing. In this case, by Lemma 6, there exists a nondecreasing sequence in such that and Then, it follows from (52) and (59) that Therefore, and then proceeding just as in the proof in Case I, we obtain which in conjunction with condition (iii) shows that for all and then follows that From (60) we have and thus, Letting and using (64) and condition (i), we obtain Also, since which together with (62) and condition (i) implies that , and so by virtue of (67). Consequently, we conclude that via (59) and (69). This completes the proof. This theorem says that the sequence converges strongly to a point of which is the nearest to . In particular, if is taken to be , then the limit point of the sequence is the unique minimum solution of SFFP(16). Corollary 16. Let , , and be sequences in with and for all . Let be a sequence in and let be a bounded sequence in . Suppose that the solution set of SFFP(16) is nonempty. For any , start with any and define a sequence by Then, the sequence converges strongly to provided that the following conditions are satisfied: (i),,;(ii),;(iii)there are two nonnegative real-valued functions and on with(iv)either or . Proof. Put and . Let and define a sequence iteratively by We have by Theorem 15. Since the limit follows by applying Lemma 5 to (74), and thus, If the sequence (resp., ) of firmly nonexpansive mappings consists of a single mapping (resp., ), then and obviously verify condition (iii), and hence, we have the following corollary. 
Corollary 17. Let , , , and be sequences in with and for all . Let be a sequence in and let be a bounded sequences in . Assume that the solution set of SFFP(5) is nonempty. For any , start with an arbitrary and define the sequence by Then, converges strongly to provided that the following conditions are satisfied: (i),,;(ii),. When the sequence is taken to be a constant , then because is an averaged mapping, we can apply Corollary 3.4 of Huang and Hong [26] to obtain the following result. Theorem 18. Let , , , and be sequences in with and for all . Suppose that and suppose that is a bounded sequence in . Assume that the solution set of SFFP(5) is nonempty. For any , start with an arbitrary and define the sequence by Then, converges strongly to provided that the following conditions are satisfied: (i),,;(ii),. Since both condition (ii) of Corollary 17 and Theorem 18 are equivalent provided that for every , Theorem 18 also follows from Corollary 17. We now turn to SFFP(5) for another algorithm, which essentially follows the argument of Wang and Xu [27]. For the sake of completeness, we still give a detailed proof. Theorem 19. Let be a sequence in . Suppose that and assume that the solution set of SFFP(5) is nonempty. Start with any and define a sequence by Then, the sequence converges strongly to the minimum norm solution of SFFP(5) provided that the following conditions are satisfied: (i);(ii);(iii)eitheror. Proof. Put, , and for all . Then, Take . From Proposition 9, we have Hence, from which follows that is bounded and so is . Choose so that for all . We have In view of conditions (i), (ii), and (iii), we can apply Lemma 5 to (82) to get and then from we see that Consequently, the demiclosedness principle ensures that each weak limit point of is a fixed point of the averaged mapping . And then we conclude from Proposition 9 that each weak limit point of lies in . Let be the minimum norm element of ; that is, . We shall show that converges strongly to . 
To see this, we compute as follows: If , then an application of Lemma 5 to (86) yields that . Hence, to complete the proof, it suffices to show that . For this, taking into account Proposition 8, we can write for some and some nonexpansive mapping . Then, from we obtain which ensures that , and hence, once we note that . Since is bounded, we can take a subsequence so that it is weakly convergent to and where the last inequality comes from the characterization inequality of a metric projection. Now, applying (89) and (90) to the equality we obtain and thus, This completes the proof. 5. Applications In this section, we shall apply the results in Section 4 to approximate zeros of maximal monotone operators and solutions of equilibrium problems. Theorem 20. Suppose that and are two maximal monotone operators on and , respectively, and suppose that is a bounded linear operator with adjoint . Suppose further that , , , and are sequences in with and for all . Let and be sequences in , a sequence in , and a bounded sequence in . Assume that the solution set of the problem is nonempty. For any , start with an arbitrary and define a sequence by Then, the sequence converges strongly to provided that the following conditions are satisfied: (i), , ;(ii), ;(iii), . Proof. For any , letting and
'Mental Abacus': An Unusual Calculating Ability | RCScience
The Mental Calculation World Cup is a brutal contest, and one that threatens to fry the neurons of the unprepared. Over the course of a competition, contestants might be asked to add a string of 10 different 10-digit numbers, multiply 18,467,941 by 73,465,135, find the square root of 530,179 and determine which day of the week corresponds to Aug. 12, 1721 – all without writing anything down.
Variant of vertex
pl. vertexes or vertices
1. the highest point; summit; apex, as the top point of the sun, moon, etc. above the horizon
2. Anat., Zool. the top or crown of the head
3. Geom.
   a. the point of intersection of the two sides of an angle
   b. a corner point of a triangle, square, cube, parallelepiped, or other geometric figure bounded by lines, planes, or lines and planes
4. Optics the point at the center of a lens at which the axis of symmetry intersects the curve of the lens
Origin of vertex: Classical Latin, the top, properly the turning point; from to turn: see verse
Statistics101 Info

Statistics101 executes programs written in an enhanced version of the easy-to-learn Resampling Stats language. Resampling Stats is a statistical simulation language. You write a program in the language, describing the process behind a probability or statistics problem. Statistics101 then executes that program, computing probability and statistics answers without using mysterious formulas. Statistics101 runs your Resampling Stats model thousands of times, each time with different random numbers or samples, keeping track of the results. When the program completes, you have your answer.

As a very simple example, say you wanted to know the probability of getting exactly two heads in a toss of three coins. You could toss three coins many times, counting the number of times you got exactly two heads and dividing by the number of tosses. That would take considerable effort and time. You could also calculate it precisely if you knew the correct formula. Instead, with Statistics101, you could model that process as follows:

URN (0 1) coin
REPEAT 1000
  SAMPLE 3 coin toss
  COUNT toss =1 heads
  SCORE heads results
END
COUNT results =2 successes
DIVIDE successes 1000 probability
PRINT probability

The program simulates 1000 tosses of three coins and prints out the resulting probability. The output looks like this:

probability: 0.368

Here's what the above program is doing:
1. Put the numbers 0 and 1, representing tails and heads, into an "urn" named "coin".
2. Repeat the following three commands 1000 times:
   a. Take three samples at random, with replacement, from the "urn". This is equivalent to three tosses of a coin.
   b. Count how many of the tosses were equal to 1 (i.e., heads).
   c. Record the number of heads in the "results" vector or list.
3. Count how many of the 1000 results in the results vector were equal to two, i.e., two heads.
4. Calculate the probability by dividing the number of successes by the number of trials (1000).
5. Print the probability.
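For readers who want to reproduce the experiment outside Statistics101, here is the same simulation in plain Java. This translation is mine and is not part of the Statistics101 documentation:

```java
import java.util.Random;

public class ThreeCoins {
    // Estimates P(exactly two heads in three tosses) by resampling,
    // mirroring the URN/REPEAT/SAMPLE/COUNT/SCORE program above.
    static double estimate(int trials, long seed) {
        Random rng = new Random(seed);
        int successes = 0;
        for (int t = 0; t < trials; t++) {
            int heads = 0;
            for (int c = 0; c < 3; c++) {
                heads += rng.nextInt(2); // draw 0 (tails) or 1 (heads) from the "urn"
            }
            if (heads == 2) {
                successes++;
            }
        }
        return (double) successes / trials;
    }

    public static void main(String[] args) {
        System.out.println("probability: " + estimate(1000, 1L));
    }
}
```

The exact answer is C(3,2)/2^3 = 3/8 = 0.375, so runs of a thousand trials land near the 0.368 shown above.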
For much more on how to apply Statistics101 to probability and statistics problems, see the links in the right column of this page.
Zentralblatt MATH
Publications of (and about) Paul Erdös
Zbl.No: 866.11019
Autor: Calkin, Neil J.; Erdös, Paul
Title: On a class of aperiodic sum-free sets. (In English)
Source: Math. Proc. Camb. Philos. Soc. 120, No.1, 1-5 (1996).
Review: A set S of positive integers is said to be sum-free if (S+S)\cap S = Ø, where S+S = {x+y | x,y in S}. A sum-free set S is complete iff it is constructed greedily from a finite set, i.e. there is an n' such that for all n > n', n in S\cup(S+S). Let us call S ultimately periodic (with respect to m) if there is an n^* such that for n > n^*, n in S <==> n+m in S. Let S(\alpha) = {n: {n\alpha} in (1/3,2/3)}. The authors note that for \alpha irrational S(\alpha) is aperiodic. The main result of the paper states: For every irrational \alpha, the set S(\alpha) is incomplete. The authors raise some open questions on this topic as well.
Reviewer: N.Hegyvari (Budapest)
Classif.: * 11B83 Special sequences of integers and polynomials
Keywords: additive problems; aperiodic sum-free sets; aperiodic; incomplete; sum-free sets
© European Mathematical Society & FIZ Karlsruhe & Springer-Verlag
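As a quick illustration of the construction (my own, not part of the review): for irrational α, any initial segment of S(α) can be generated and checked for sum-freeness directly, since the fractional part of (n+m)α lies in (2/3, 4/3) mod 1 whenever those of nα and mα lie in (1/3, 2/3):

```java
import java.util.HashSet;
import java.util.Set;

public class SumFree {
    // Collects the n <= limit whose fractional part of n*alpha lies in (1/3, 2/3).
    static Set<Integer> sOfAlpha(double alpha, int limit) {
        Set<Integer> s = new HashSet<>();
        for (int n = 1; n <= limit; n++) {
            double frac = n * alpha - Math.floor(n * alpha);
            if (frac > 1.0 / 3.0 && frac < 2.0 / 3.0) {
                s.add(n);
            }
        }
        return s;
    }

    // True if no element of s is the sum of two (not necessarily distinct) elements of s.
    static boolean isSumFree(Set<Integer> s) {
        for (int x : s) {
            for (int y : s) {
                if (s.contains(x + y)) {
                    return false;
                }
            }
        }
        return true;
    }
}
```

Floating-point fractional parts are adequate at this scale, since for irrational α the values never land exactly on the interval boundaries.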
GCE A Level 2012 Oct/Nov H2 Maths 9740 Paper 1 Suggested Answers & Solutions

It’s my annual hello to all of you again. While my sister has churned out her O Level A/E-Maths solutions with much fanfare, here’s my little contribution to all our A Level comrades this year. Click the button and grab it here! Latest version: • 1.1: Q6(iii) – amended working step n = lg1000/lg2 to compare 2^9 = 512 and 2^10 = 1024, in order to comply strictly with the “No Calculator” requirement in this question. Q11(i) – improved descriptions of the tangents (sorry, couldn’t write straight that night *sigh*) Access it if you’re having trouble accessing it on Facebook using your state-of-the-art smartphone. As usual, all yim yim jim jim scrutinizers please leave a gentle comment should you spot any mistake in this set of suggested solutions, which was produced well after my last class at the end of my tiring day *rubs eyes* Now go revise your • AP/GP • Integration by parts/substitution/Volumes of revolution • Summation (Method of Difference) • Complex Numbers (Argand diagram) All the best for your Paper 2!

Comments & Reactions

48 Comments

just wondering, how difficult do you think this year's H2 paper was?

@pheng: wa tis year's paper too easy.. RJC's tcher said to get A yall hav to score above 87 at least

@boo: Which "RJC's teacher" said that, may I ask? I'm from RJC myself, and if it's true that the grade boundary is 87, then even this school will have a ridiculously low distinction rate. Look at my comment below, and that of Carol.

Hi hi, are you gonna share the solutions for H1 Math too?

Hi sir, just a few qns. For qn ten my method of substituting h equals to some k was correct, but the substitution was wrong; approximately how many marks for method will I get? Additionally, for the MI qn I put n is Z and not Z+, is that ok, like will they minus marks?
and erm for the definite integral I just put "= answer (in 3 d.p.)" without the longer answer and the curly (≈) equal sign, is that ok, like will they minus marks? And uhm for the vectors qn 9 I got the first part correct, but because of a calculation error the second and third parts were wrong though the method was correct; approx how many method marks? Similarly for the initial vector qn I had a calculation error at the final step (the square root u etc last step), is that enough for two method marks? And for the integration qn I didn't say c is a constant, is that ok? And finally do u have any idea what's the grade range for an A? Any help would be sincerely appreciated, thanks. And sorry, one last one: for qn 8 I just said since the sign is negative it's always less than zero, but I accidentally put min point. Will I still get 1 method mark? Thank you.

hi sir can u approximate the A for this year?

For qn 9 (ii), is it possible to find the ratio of the lengths AN:NB using this method? Or why is it not acceptable? AN = ON - OA, NB = OB - ON, proceed to mod AN and NB, then compare the lengths? My answer was 1:5 if I didn't forget.. :/
There is really no way anyone can give a good approximation, because SEAB is very opaque when it comes to their moderation procedure, so we can't really know. The worst thing anyone can do now is to give a high estimate (e.g. 85) since this was an 'easy' paper, then everyone who did NOT get that grade will start fearing, and have decreased morale. Hello! May I ask what is the estimated mark for an A this year?:) For question 9(i), Why shouldn't the solution be: Vector equation of the line AB: r = (7,8,9) + λ[ -(7,8,9) + (-1,-8,-1)] = (7,8,9) + λ(-8,-16,-8) , where λ is a parameter. (Can this be left as the final answer?) @Jud: should be accepted. having factorised out the common factor doesnt change the answer. people usually factorise for sake of minimising careless mistakes when calculating, @Jud: Yup not a problem is 75 safe? can any1 estimate for me pls Hi, I would like to ask if answers that are not simplified will be accepted or penalized. For eg Q9iii, I didn't simplify my answer and left the Cartesian equation as over 2, -8 and 2 for the x y and z respectively. And for Q6iii also I left it as -10pie /3. @T: For Q9(iii), you mean you left your equation of the line as a column vector? If this is the case then it's not in cartesian equation form leh. For Q6(iii), generally it's a good idea to express your final answers for arguments as the principal argument but you may get away with leaving it as −10π/3 since the question did not specify requirement for the argument of z^n to be between −π and π in this case. Hi may I ask if they asked you not to use a calculator in question 6 and you used lg1000/lg2 to compute n which is also stated in your solutions provided, what will happen? @curious: For 6(iii), having yet to explore an alternative method, we'll all arrive at 2^n > 1000. Hmmm ... [DEL:maybe I should simply know that 2^10 = 1024 and remove the lg1000/lg2 part just to strictly "comply".
Heh.:DEL] Alright amended the step to compare 2^9 and 2^10 in version 1.1 of the solutions. I could guarantee 85 - 88 marks. Is it safe for an A? hi to all who asked about mark gauges.. i would like to paraphrase a quote from a previous forum, and the quote was from the cambridge examiners themselves: the grading for a levels is comparable to judging a student's ability to swim 100 metres. if you're able to clear the 100 metres, you'll get the A regardless of the number of people who are able to swim around the world. so i guess don't worry if you aren't able to hit a high score, just aim for as high as possible for tmr's paper! if you really want a gauge.. i think the RJ comment is not representative of wat is going on in SG la. over here at where i am the consensus is "ok but damn tedious", My estimate for my own score is 82 (provided got ecf..), but i've also seen ppl who couldn't do all the last three qns. so i guess this year the threshold will be lower.. @Carol: True. If it's 87 for an A, all the people taking h2 math in rj and hci sure A then people of yjc and mi get D? Lol @meme: ...i'm from hci btw haha. @Carol: im from yjc. practically half of the school said that p1 was difficult lol. @meme: well for me i was ridiculously careless.. at first i put the correct ans for smth and then changed it. 3 marks down the drain >< @Carol: LIKE! This quote really brings back memories of that long list of comments in 2010! @Mr Loi: oh yes Mr Loi, just some questions to ask: will there be method marks and ecf for question 10 if i forgot to include the base area. because i did everything else the same way and got an answer for r that differs by a digit.. and for question 11 i used parametric integration (the one where you differentiate y in terms of t and leave x as it is, then integrate in terms of t), and got 3pi (though with a negative sign so i took absolute value). would that be fine as well?
@Carol: As mentioned here, method marks and ECF are determined by the mark scheme, and that they are given when it was only the substitution of a wrong value in your appropriate steps that led to the wrong answer. While I think there's a good chance they're allocated here with [7] & [5] marks in parts (i) & (ii), I'm really hoping they'll be lenient in your case since omitting the base area would have affected the expressions for the surface area, its derivative, r and h. (unless you've gotten some of the above right?) For Q11, you seem to have done it the same way as me. If so, you shouldn't get −3π. Have you gotten your limits the other way round in the definite integral? @Mr Loi: Sorry for the long hiatus. I found h in terms of r using the volume expression, so that part would be the same as the answer scheme. Yeah I do hope method marks will be given, since there is really no difference in the method other than the base area, and quite a lot of people from my school had left out the base area when calculating as well.. As for question 11, I realised I had unwittingly integrated x in terms of y, and it so happened that the answer was the same in magnitude. So I suppose I will lose the method marks and hopefully get one mark for the right answer... Luckily paper 2 managed to pull me up, despite being a harder paper. Hi, will there be solutions for the Math H1 paper as well? @Adeline: Here. Will it be penalised if l = -1/3 is found using GP method instead of the usual way? @amos: For Q3, though it wasn't explicitly stated, I really think Cambridge's intention is to test your concept in utilizing the given expression of u[n] rather than coming up with your own G.P. method. @Mr Loi: For some reasons I did it using the GP way but ended up with the same answer. Is there a likelihood that marks will be deducted ><'' Hi Sir, may I know if today maths paper 2 solutions will be up? @Ang: Here. 
Hi, for the question that needs us to show that dy/dx = cot 0.5θ, could we do it this way? = (sin θ)/(1-cosθ) = (sin(0.5θ+0.5θ)) / (1 - cos(0.5θ+0.5θ)) then we go on to solving the equation using these 2 formulas: sin(A+B) = sinAcosB + cosAsinB and cos(A+B) = cosAcosB - sinAsinB if not, then why is it wrong? @Jess: Q11(i) Using the Addition Formulae is perfectly fine, though it looks a bit tedious @Mr Loi: Hi Sir, but the problem is that when I continued to solve it using the addition formula, the final answer I get is 2 sin 0.5θ cos 0.5θ. I am unable to get cot 0.5θ. @Jess: Hmmm ...
$\frac{\sin (\frac{\theta}{2}+\frac{\theta}{2})}{1-\cos (\frac{\theta}{2}+\frac{\theta}{2})}$
$=\frac{\sin (\frac{\theta}{2}) \cos(\frac{\theta}{2})+\sin (\frac{\theta}{2}) \cos(\frac{\theta}{2})}{1-[\cos (\frac{\theta}{2})\cos (\frac{\theta}{2})-\sin(\frac{\theta}{2})\sin(\frac{\theta}{2})]}$
$=\frac{2\sin (\frac{\theta}{2}) \cos(\frac{\theta}{2})}{1-\cos^2 (\frac{\theta}{2})+\sin^2(\frac{\theta}{2})}$
$=\frac{2\sin (\frac{\theta}{2}) \cos(\frac{\theta}{2})}{\sin^2 (\frac{\theta}{2})+\sin^2(\frac{\theta}{2})}$
$=\frac{2\sin (\frac{\theta}{2}) \cos(\frac{\theta}{2})}{2\sin^2 (\frac{\theta}{2})}$
$=\frac{\cos(\frac{\theta}{2})}{\sin (\frac{\theta}{2})}$
$= \cot (\frac{\theta}{2})$ (shown)
Did your steps tally? Are we really expected to do 2^10 without calculators? =\ Hi, I know it's kind of late haha but my friend wanna ask whether can switch step 1 with step 3 for qn 7 :) Hi Miss Loi, I know this is not the 2013 thread but I can't find it so can I ask you a general question regarding the grading system? Does Cambridge give us our marks (based on score and/or ability) or do they simply write down our scores and give them to MOE who will moderate them and create the bell curve?
I am asking this because I am afraid I am scoring a low 70s due to a lot of careless mistakes 5 Reactions #alevel #H2 Maths suggested answers & solutions done on a really quiet night: http://t.co/dL0j7RWj (no twang twang twang this time) #math @shurui http://t.co/Y7ZAcMGb
Gerrymandering - Rules Gerrymandering is the process of carving up an electoral district into strange shapes in order to derive a political advantage. The name comes from a cross between the name Gerry (from the early Massachusetts governor Elbridge Gerry) and the elongated shape of a salamander. The Problem For the eighth MATLAB programming contest, you are given the task of preparing for an upcoming election in the state of Rectanglia. As the director of redistricting, your job is to divide the state into N districts of equal population. Therefore, given a matrix A in which each element corresponds to the population in a given square mile, return a matrix B that indicates which voting district each square mile belongs in. For example, suppose, given the following census data for Rectanglia in matrix A, you are told to divide the state into N = 3 districts of equal population. So 3 people live in square (1,1) and 28 people live in square (3,2). A total of 120 people live in the entire state (it's a small state). Our goal, therefore, is to make three separate districts, each of which is connected, each with a population of 120/3, or 40 people. Voting districts must be contiguous, or connected in the four-neighbor sense (no diagonal connections). Thus, district 1 could not consist solely of square (1,1) and square (2,2). Here's one way to divide the state. It's easy to see that this solution, while not unique, can't be improved. We have successfully made sure that each district has exactly 40 people. We would label our districts using the matrix B as shown below. The overall score for your entry is a combination of how well your algorithm does and how fast it runs on a multi-problem test suite. How well your algorithm does (which we call the "result") is determined by considering the deviation of your solution from the perfect population in each district. 
If district i's population is given by pop_i, the total population is pop_total, and the number of districts to be assigned is N, then the result is computed by

result = 1/(2*pop_total) * sum over all districts i of abs( pop_total/N - pop_i )

This result corresponds to the number of people that must be redistributed in order to even out the district population. A result of zero is the best you can do, since no one needs to move (pop_i is equal to pop_total/N for all districts). The example shown above yields a perfect result of zero. Suppose instead you partitioned matrix A like so. So the B matrix looks like this. In this case, pop_1 is 3 + 0 + 0 + 0 + 2 + 12 + 10 + 2, or 29. pop_2 is 12 + 11 + 28, or 51, and pop_3 is still 40. The result is calculated as follows.

result = 1/(2*120) * ( abs(40 - 29) + abs(40 - 51) + abs(40 - 40) )

or result = (11 + 11)/240 = 11/120

That is, if we could magically move 11 people from district 2 to district 1, the result would be perfect. And 11, expressed as a ratio of the total population, is 11/120, or 0.0917. The average result for the entire test suite, along with the associated runtime of your entry, gets passed to our scoring algorithm before returning a final score. We don't publish the values of the constants k_1, k_2, and k_3 in the scoring equation, but they aren't hard to figure out. Developing Your Entry The files you need to get started on the contest are included in a ZIP-file available at the MATLAB Central File Exchange. If you download and uncompress this zip-file, you will have the files described below. The routine that does the algorithmic work is

function b = solver(a,n)
b = n*ones(size(a));
b(1:n) = 1:n;

Keep in mind that this function must have the right signature: two input arguments, one output argument. Variable names are unimportant. function b = solver(a,n) To test this function with the test suite in the zip-file, run

>> runcontest

Collaboration and Editing Existing Entries Once an entry has been submitted, it cannot be changed. However, any entry can be viewed, edited, and resubmitted as a new entry.
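For a concrete replay of the result metric outside MATLAB, here is a minimal sketch in Python (the function name and the tiny test grid are my own illustrations, not part of the contest kit):

```python
import numpy as np

def result(a, b, n):
    """Fraction of the population that must move to even out districts.

    a: population grid, b: district labels (1..n), n: number of districts.
    """
    total = a.sum()
    target = total / n
    # Sum the absolute deviation of each district's population from the target.
    deviation = sum(abs(target - a[b == d].sum()) for d in range(1, n + 1))
    return deviation / (2 * total)

# A made-up 40-person state: two districts of 20 each is a perfect split.
a = np.array([[10, 10],
              [10, 10]])
print(result(a, np.array([[1, 1], [2, 2]]), 2))  # 0.0
print(result(a, np.array([[1, 1], [1, 2]]), 2))  # 0.25, i.e. 10 of 40 must move
```

Applied to the 120-person example above, the same function reproduces 11/120 ≈ 0.0917 for the lopsided partition.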
You are free to view and modify any entries in the queue. The contest server maintains a history for each modified entry. If your modification of an existing entry improves its score, then you are the "author" for the purpose of determining the winners of this contest. We encourage you to examine and optimize existing entries. We also encourage you to discuss your solutions and strategies with others. You can do this by posting to the comp.soft-sys.matlab thread that we've started from our newsreader. Fine Print The allowable functions are those contained in the basic MATLAB package available in $MATLAB/toolbox/matlab, where $MATLAB is the root MATLAB directory. Functions from other toolboxes will not be available. Entries will be tested against MATLAB version 6.5.1 (R13sp1). The following are prohibited: • Java commands or object creation • eval, feval, etc. • Shell escape such as !, dos, unix • Handle Graphics commands • ActiveX commands • File I/O commands • Debugging commands • Printing commands • Simulink commands • Benchmark commands such as tic, toc, flops, clock • pause, error, clear Check our FAQ for answers to frequently asked questions about the contest. About named visibility periods Contests are divided into segments where some or all of the scores and code may be hidden for some users. Here are the segments for this contest: • Darkness - You can't see the code or scores for any of the entries. • Twilight - You can see scores but no code. • Daylight - You can see scores and code for all entries. • Finish - Contest end time.
Relations and sizes - Squares and square roots - In Depth Any number raised to the power of 2 can be modeled using a polygon--the square! That's why we call raising a number to the second power "squaring the number." The perfect squares are squares of whole numbers. Here are the first five perfect squares: 1, 4, 9, 16, and 25. We've shown a geometric model to verify each of these squares. The square root of a number n is a number that, when multiplied by itself, equals n. Here are the square roots of the perfect squares: √1 = 1, √4 = 2, √9 = 3, √16 = 4, and √25 = 5. This model shows the number 169 as a square. From the model, what is the square root of 169? We can count the number of units making up each side of the square. We find 13 units to a side, so 13 is the square root of 169.
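The "multiplied by itself" definition is easy to verify by computation; a quick sketch in Python (variable names are mine):

```python
import math

# The first five perfect squares: 1, 4, 9, 16, 25.
perfect_squares = [n * n for n in range(1, 6)]
print(perfect_squares)

# The square root of 169 is the side length of a 169-unit square.
side = math.isqrt(169)
print(side)         # 13
print(side * side)  # 169, confirming 13 is the square root
```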
Factorial of 1-20
February 16th, 2011, 01:10 AM
Factorial of 1-20
I have to display the factorials of numbers 1 - 20. I believe my code should work, but i keep getting the error:
FactorialTest.java:6: cannot find symbol
symbol : method factorial(int)
location: class FactorialTest
System.out.println( factorial( x ) );;
^ (pointing under the "f" in "factorial")
Any help on what's causing my error, and how to fix it, would be greatly appreciated. Thanks in advance.
Code Java:
public class Factorial
{
    public static long factorial(int n)
    {
        if (n <= 1)
            return 1;
        return n * factorial(n - 1);
    }
}
Code Java:
public class FactorialTest
{
    public static void main(String[] args)
    {
        for (int x = 1; x <= 20; x++)
            System.out.println( factorial( x ) );;
    }
}
February 16th, 2011, 08:49 AM
Re: Factorial of 1-20
Im just thinking about this and could be wrong but try putting the int x out of the for loop and use an int i in the for loop instead: like this
Code :
int x;
for (int i = 0; i < 20; i++)
That might work
February 16th, 2011, 12:33 PM
Re: Factorial of 1-20
The factorial method cannot be accessed from the FactorialTest class because the factorial method is inside of the Factorial class. There are two ways to fix this:
1) Create a Factorial object and invoke the factorial(int) method on that (and send it x), i.e. new Factorial().factorial(x);
2) Use the static approach, where you would say Factorial.factorial(x); instead of just factorial(x);
It is a scope issue that is occurring because you are accessing methods between differing classes.
Chemistry: Acid-Base Titrations Video | MindBites Chemistry: Acid-Base Titrations About this Lesson • Type: Video Tutorial • Length: 12:12 • Media: Video/mp4 • Use: Watch Online & Download • Access Period: Unrestricted • Download: MP4 (iPod compatible) • Size: 131 MB • Posted: 07/14/2009 This lesson is part of the following series: Chemistry: Full Course (303 lessons, $198.00) Chemistry: Reactions in Aqueous Solutions (10 lessons, $14.85) Chem: Titration Problems & Gravimetric Analysis (3 lessons, $4.95) This lesson was selected from a broader, comprehensive course, Chemistry, taught by Professor Harman, Professor Yee, and Professor Sammakia. This course and others are available from Thinkwell, Inc. The full course can be found at http://www.thinkwell.com/student/product/chemistry. The full course covers atoms, molecules and ions, stoichiometry, reactions in aqueous solutions, gases, thermochemistry, Modern Atomic Theory, electron configurations, periodicity, chemical bonding, molecular geometry, bonding theory, oxidation-reduction reactions, condensed phases, solution properties, kinetics, acids and bases, organic reactions, thermodynamics, nuclear chemistry, metals, nonmetals, biochemistry, organic chemistry, and more. Dean Harman is a professor of chemistry at the University of Virginia, where he has been honored with several teaching awards. He heads Harman Research Group, which specializes in the novel organic transformations made possible by electron-rich metal centers such as Os(II), RE(I), AND W(0). He holds a Ph.D. from Stanford University. Gordon Yee is an associate professor of chemistry at Virginia Tech in Blacksburg, VA. He received his Ph.D. from Stanford University and completed postdoctoral work at DuPont. A widely published author, Professor Yee studies molecule-based magnetism. Tarek Sammakia is a Professor of Chemistry at the University of Colorado at Boulder where he teaches organic chemistry to undergraduate and graduate students. 
He received his Ph.D. from Yale University and carried out postdoctoral research at Harvard University. He has received several national awards for his work in synthetic and mechanistic organic chemistry. About this Author 2174 lessons Founded in 1997, Thinkwell has succeeded in creating "next-generation" textbooks that help students learn and teachers teach. Capitalizing on the power of new technology, Thinkwell products prepare students more effectively for their coursework than any printed textbook can. Thinkwell has assembled a group of talented industry professionals who have shaped the company into the leading provider of technology-based textbooks. For more information about Thinkwell, please visit www.thinkwell.com or visit Thinkwell's Video Lesson Store at http://thinkwell.mindbites.com/. Thinkwell lessons feature a star-studded cast of outstanding university professors: Edward Burger (Pre-Algebra through... Recent Reviews This lesson has not been reviewed. Please purchase the lesson to review. I have here a solution containing an unknown amount of a base. Now I've added a little bit of universal indicator to let me know that I have a hydroxide concentration. You'll recall the universal indicator is going to be purple if we have an excess of hydroxide. What if I wanted to know exactly how much base I have in this solution? How could I find out? Well, qualitatively, I know that I can simply add a little bit of acid, and that that acid will undergo a reaction with the base in a neutralization reaction, again base reacting with acid to give us water. So we'll see this color change at the point at which the concentration of hydroxide is returned close to zero. But how is that going to help me understand exactly how much base I had in here to begin with?
Well what if I knew exactly how much acid I put in to just bring me to the point where I neutralized all of the hydroxide, but I was careful not to add so much that I went beyond that, that I'd have an excess of acid? So what I'm saying is I'm going to try to be very careful to add just enough acid that I exactly neutralize the base that I have. Does that help me know how much base I had? Not unless I know exactly how many moles of acid I added at that point. What I'm describing to you is a very common experiment in chemistry called a titration. The idea of a titration again is that if we have a known amount of acid that we add to a solution that we neutralize a base in that solution, and if I know exactly how much in moles of the acid I put in to reach that point of neutralization, then I know by this relationship that there is a one to one relationship between the acid and base in a neutralization step that I'll know exactly how many moles of base that I neutralized. I'll have exactly the same amount of base, in fact, of the acid that I put in. So let's look at this in a little more detail. Once again, let me walk you through what a titration experiment is, but in a little more detail. In a typical titration--and by the way, a titration does not only happen with acid-base reactions. This also can happen with a redox reaction. In the general experiment of a titration experiment, we know very accurately the concentration of a standardized solution. And we know very accurately the volume of a standardized solution that we're adding to an unknown. In the example I just showed you, this would be an acid. And we'd know the concentration of the acid. We'd know the volume change. And what we would be probing for would be the concentration of a base. We could do that in reverse. We could be using a standardized base to probe for an unknown acid. We could in fact use a substance that undergoes a redox reaction with some unknown redox partner. 
The ideas are exactly the same, as long as we can tell where the equivalence point is, the point at which we've exactly neutralized the solution. So also, as part of this experiment, is some type of an indicator letting us know when we've reached that point. In the case I'm showing you here, the indicator is simply a dye. But very often the indicator can be some type of an electronic device that let's us know precisely when we've reached the neutralization point. And usually in a titration experiment, what we're looking for is a percentage or an amount of some unknown substance. So once again we know concentration and we know volume. That means we know exactly the number of moles that we're adding. And if we know the number of moles we're adding, and if we know the relationship between the moles that we're adding, the substance that we're adding, and how it's reacting with what we're probing for, then we can determine the all-important knowledge of how much moles of the unknown material we have. And that's what we're ultimately shooting for here. What does this look like in practice? First of all let me quickly comment that the way we would actually do this is to use something known as a buret. A buret is simply a very finely graduated device to let us know very accurately small amounts of volume. So we would actually have this clamped up. We'd know a standard solution that we would put into a buret. And many of you are going to get a chance to do this type of an experiment in lab. We'd add a specified volume just until the point where we reach the neutralization point. We'd look at how much volume was used. We'd know the concentration. Therefore we'd know the number of moles that we've added. And that is the crucial point here. So let's do an example then. Suppose that we had an unknown acid. In this case our acid is oxalic acid, but what's unknown about it is how pure it is. 
So we have some impure sample containing mostly oxalic acid, but also a little bit of something else that's an unknown. And we want to know just how pure this material is. We could use a titration experiment to answer that question. First of all, what is oxalic acid? Oxalic acid, the molecular structure for it looks like this. What's important for us right now is just simply to note that the OH bonds here are polarized. And so this material is capable of losing up to two protons. So this is an example of what we call a diprotic acid, capable of reacting with two equivalents of hydroxide. This is very important to keep in mind, because what it means is that it's going to take 2 equivalents of base to consume only one equivalent of the acid. So 2 moles of this will react with 1 mole of this. We have to keep that in mind, because that's the chemical relationship that ties together how many moles of stuff we add, and what we actually have as our unknown. Now our next step, let's go back and take a look at this exact situation here. In this problem, we're going to have 1.034 grams of an impure acid. So this again is not the mass of the pure oxalic acid. This is the mass of everything, the oxalic acid plus whatever other junk is in this sample. What we're after is what is the concentration of the oxalic acid in here, what the percentage of purity is? Now what we have to work with is a standardized solution of sodium hydroxide that's .485 molar let's assume. And we know how much volume we needed to exactly neutralize. The experiment that I'm describing is what I showed you earlier, but just switched around, where I'm using a standardized base to titrate an unknown acid. Once neutralization has been reached, we're going to be able to know how many moles of the acid we had by knowing a couple of things. We know concentration and volume. That gives us moles of base. And then we know the relationship between the moles of base and the moles of acid that we're neutralizing. 
And remember that's a one to two ratio now, not a one to one ratio. So let's work out this calculation. So we're going to start with 0.03477 liters. That's the 34.77 milliliters of the base that we needed. And if we multiply that by the concentration of the base, 0.485 moles per liter, that's going to give us 0.0167 moles of hydroxide, because our units are going to cancel here. So multiplying volume by concentration is going to give us the moles of hydroxide that we added to exactly neutralize this acid. Now we know that that many moles of hydroxide, that it's going to take twice as much hydroxide for every one amount of our acid. So we need to factor that in. The next step is to take .0167 moles of hydroxide and multiply that by the factor that there is only 1 oxalic acid for every 2 equivalents of hydroxide that are used. So that then is going to give me 0.00836 moles of the oxalic acid, if we do that calculation, H2C2O4. We're going to use the content box here to keep track of our numbers. Let's stop and look at where we are now. We know the number of moles now of oxalic acid in our unknown sample. So all that remains for us to do is convert this to grams so we know the mass of oxalic acid, and then compare that to the total mass of our impure material. So we know the moles. Now let's take that .00836 moles of oxalic acid. We're going to multiply this by the molecular weight of oxalic acid. And that's going to give us 0.753 grams of oxalic acid. We're almost home. I now know the mass of oxalic acid that I have in my unknown sample, but remember that I started with considerably more mass than this. I started with a total mass of 1.034 grams. So the remainder is impurities. So finally I just want to determine what is my percentage of purity? We've done this type of calculation before. This is a percent by mass calculation. So what I want to do now is say the percent is going to be 100 times the mass of pure oxalic acid, 0.753 grams, divided by the total mass.
This is mass of pure material divided by the total mass that we had. That's going to give me our percentage. And that turns out to be 72.8%. So we've done it. We've determined that the concentration of the oxalic acid is only 72.8%, that that was our percentage of purity, in other words. So recapping, by knowing accurately the concentration of base, through a titration experiment we determined accurately moles of acid, and therefore the percent of that acid in our unknown material.
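The chain of unit conversions in this worked example can be sketched in a few lines of Python (variable names are mine; 90.03 g/mol is the standard molar mass of anhydrous oxalic acid, which the transcript does not state explicitly). Note that carrying full precision through these inputs yields about 73.4%, slightly different from the transcript's 72.8%, which follows from its rounded intermediate of 0.0167 mol:

```python
# Titration of an impure oxalic acid sample with standardized NaOH,
# using the values read out in the worked example.
vol_naoh = 0.03477      # L of NaOH used to reach neutralization
conc_naoh = 0.485       # mol/L, the standardized base
sample_mass = 1.034     # g of impure sample
molar_mass = 90.03      # g/mol, anhydrous H2C2O4 (assumed standard value)

mol_oh = vol_naoh * conc_naoh      # moles of hydroxide added
mol_acid = mol_oh / 2              # diprotic: 2 OH- consumed per oxalic acid
mass_acid = mol_acid * molar_mass  # grams of pure oxalic acid
purity = 100 * mass_acid / sample_mass

print(round(purity, 1))  # ~73.4 at full precision; the transcript's rounded
                         # intermediates give its quoted 72.8
```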
sample size - poisson dist'n?
November 7th 2008, 10:19 AM #1 Nov 2008
I am designing a survey questionnaire and need help estimating sample sizes. The survey will ask respondents about the number of a certain animal species seen in a given month, e.g., how many bears did you see? Because this is count data (# of animals seen/person), I don't believe methods for normal distributions apply? (Or do they?) Can someone point me to an equation that will estimate the required sample size for these data? (For the sake of an example, let's assume I want to estimate the mean with a margin of error of 5% and a 95% confidence interval. The population of people will vary, but let's use 200.) Thank you, Shawn Morrison
November 8th 2008, 01:51 AM #2 Grand Panjandrum Nov 2005
There are at least three methods that I can think of for doing this: 1. Compute the distribution of the mean number of sightings from a sample of size $N$, with mean number per sighter $\mu$, and then use that to determine the required sample size for the sort of mean number of sightings you are expecting (I suspect, but would have to do some research to confirm this, that with $N$ observers the distribution of the total number of sightings is Poisson with mean $N \mu$, where $\mu$ is the mean number of sightings from a single observer). 2. Generate approximate distributions for the mean using a bootstrap technique. 3. Use the Chebyshev or Vysochanskiï-Petunin inequalities to give conservative estimates of the required sample size.
November 12th 2008, 09:26 AM #3 Nov 2008
Thank you - I'll try those ideas.
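A rough sketch of method 2 in Python (the pilot mean `mu_guess`, the geometric growth schedule, and the bootstrap settings are all illustrative assumptions, not from the thread): simulate Poisson counts at increasing sample sizes until the bootstrap margin of error for the mean falls below a target.

```python
import math
import random

def poisson(mu, rng):
    # Knuth's method: count uniform draws until their product < e^-mu.
    L = math.exp(-mu)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p < L:
            return k
        k += 1

def bootstrap_half_width(sample, reps=500, rng=None):
    # Half-width of a 95% percentile-bootstrap interval for the mean.
    rng = rng or random.Random(0)
    n = len(sample)
    means = sorted(sum(rng.choice(sample) for _ in range(n)) / n
                   for _ in range(reps))
    return (means[int(0.975 * reps)] - means[int(0.025 * reps)]) / 2

def required_n(mu_guess, target):
    # Grow n geometrically until a simulated Poisson(mu_guess) sample
    # of size n gives a bootstrap margin of error below the target.
    rng = random.Random(1)
    n = 10
    while True:
        sample = [poisson(mu_guess, rng) for _ in range(n)]
        if bootstrap_half_width(sample, rng=rng) <= target:
            return n
        n = int(n * 1.3) + 1

# e.g. expecting ~2 sightings per person, margin of error 0.2 sightings:
n_needed = required_n(mu_guess=2.0, target=0.2)
```

Because the answer depends on a guessed mean, it is only a planning estimate; a small pilot survey would give a better `mu_guess`.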
{"url":"http://mathhelpforum.com/advanced-statistics/58214-sample-size-poisson-dist-n.html","timestamp":"2014-04-19T06:19:02Z","content_type":null,"content_length":"37786","record_id":"<urn:uuid:505a8af5-abbf-4669-8dd9-6c499e14ca3e>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00308-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Needs a Revolution, Too, writes Barry Garelick in response to The Atlantic story, The Writing Revolution. He first encountered reform math when his daughter was in second grade. . . . understanding takes precedence over procedure and process trumps content. In this world, memorization is looked down upon as “rote learning” and thus addition and subtraction facts are not drilled in the classroom–it’s something for students to learn at home. Inefficient methods for adding, subtracting, multiplying, and dividing are taught in the belief that such methods expose the conceptual underpinning of what is happening during these operations. The standard (and efficient) methods for these operations are delayed sometimes until 4th and 5th grades, when students are deemed ready to learn procedural fluency. Students are expected to “think like mathematicians” before acquiring the analytic tools necessary to do so, Garelick writes. Procedural skills are taught on a “just in time” basis. Such a process may eliminate what the education establishment views as tedious “drill and kill” exercises, but it results in poor learning and lack of mastery. Students generally work in groups with teachers who “facilitate” rather than providing direct instruction. As reform math has become the norm in K-6 classrooms, high school math teachers are trying to teach algebra to students who “do not know how to do simple mathematical procedures,” he writes. In math, as in writing, learning the fundamentals may not be fun or engaging. It may require practice. But students who skip the basics rarely develop the ability to “think like mathematicians” or write like “authors.” They’re confused. And bored.
{"url":"http://www.joannejacobs.com/tag/reform-math/","timestamp":"2014-04-20T11:07:24Z","content_type":null,"content_length":"24324","record_id":"<urn:uuid:40aaf133-7c53-42f9-a582-00acd99eca20>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00165-ip-10-147-4-33.ec2.internal.warc.gz"}
Problem Set 9
Due date: 11/20
The goal of this problem set is to design classes and methods in an object-oriented language. The design involves structural and generative recursion, plus accumulators. HtDC: 10.1, 10.9, 11.1, 14.2, 14.3, 14.4, 14.5, 15.7, 15.11; also solve the problems from the case-study sections 6.2 and 16.3 so that you know how to design draw methods in this Java-like language.
Required Problems:
Note: You must use the ProfessorJ Intermediate language level for all problems.
1. Design the method render for your data representation of Enums from problem 8.2. Since neither ProfessorJ/Intermediate nor Java includes images as first-class values, the method must render the image onto a canvas at a specified position. This implies that the method has the following signature: boolean render(Canvas c, Posn p) Your Example class should create a canvas and a position to render your sample enumerations. A second consequence is that you cannot actually test your method. The best you can do is invoke render on examples and inspect the canvas with your eyes. As for problem 7.1, the anchor for items in unordered (UL) lists must depend on their nesting depth. For the outermost unordered list use a (blue) disk, next a (red) circle, then a (green) square, and from then on (yellow) squares. For my solution, I used 2 pixels for the radius of disks/bullets and for the length of the side of squares. As this figure suggests, it is impossible to be faithful to the renderings of the solution to 7.1. Thus you have some freedom with the spacing of items etc., but your renderings must be recognizable.
2. Design a class-based representation of mobiles, as presented in problem 3.1. Then design the method isBalanced, which determines whether a mobile is balanced. You may find it instructive to solve this exercise in full Java using arrays or vector lists for the representation of beams.
3. Design a class-based representation of BERs, as presented in problem 5.1.
Then design the method evaluator, which determines the Boolean value of a BER. Note that this problem covers only the first part of problem 5.1 and does not mention fevaluate. Recall that evaluator in problem 5.1 produced either a boolean or a symbol. In a typed programming language, such as ProfessorJ/Intermediate (or Java), doing so is impossible. Instead, your evaluator function will produce a String: □ "true" if the result is true; □ "false" if the result is false; and □ "*error*" if the expression contains a variable (symbol). Also, ProfessorJ/Intermediate (or Java) doesn't support symbols; use String instead.
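The String-valued contract can be sketched outside ProfessorJ as well. Here is a minimal Python version; the `Lit`/`Var`/`Not`/`And` class names are illustrative guesses at the BER representation from problem 5.1, which is not shown on this page.

```python
# Minimal class-based BER sketch: evaluator returns "true", "false",
# or "*error*" when a variable occurs, mirroring the String contract.
class Lit:
    def __init__(self, value): self.value = value
    def evaluator(self): return "true" if self.value else "false"

class Var:
    def __init__(self, name): self.name = name
    def evaluator(self): return "*error*"  # unbound variable (symbol)

class Not:
    def __init__(self, arg): self.arg = arg
    def evaluator(self):
        r = self.arg.evaluator()
        if r == "*error*": return "*error*"
        return "false" if r == "true" else "true"

class And:
    def __init__(self, left, right): self.left, self.right = left, right
    def evaluator(self):
        l, r = self.left.evaluator(), self.right.evaluator()
        if "*error*" in (l, r): return "*error*"
        return "true" if l == r == "true" else "false"

# (true and (not x)) evaluates to "*error*" because x is a variable
result = And(Lit(True), Not(Var("x"))).evaluator()
```

In the assignment itself the same dispatch happens through dynamic method calls on the union of BER classes rather than Python's duck typing.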
{"url":"http://www.ccs.neu.edu/home/matthias/107-f08/Assignments/9.html","timestamp":"2014-04-21T15:18:40Z","content_type":null,"content_length":"6684","record_id":"<urn:uuid:24e78a41-d53a-4099-80a2-7fa8f4d20270>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00235-ip-10-147-4-33.ec2.internal.warc.gz"}
[plt-scheme] 2htdp/image From: Stephen Bloch (sbloch at adelphi.edu) Date: Wed Nov 18 16:29:22 EST 2009 On Nov 18, 2009, at 4:07 PM, Robby Findler wrote: > What you describe in the first paragraph (below) is what htdp/image > did, except that it didn't provide the second equality operator, only > the bitmap-based one. One of the things we found is that doing a full > bitmap comparison for equality was too expensive for systems and that > was part of the impetus for 2htdp/image. Gee, and I thought it was getting arbitrary rotations to work :-) Yes, full bitmap comparisons will be expensive: you have to (a) flatten the image to a bitmap, and (b) compare it bit by bit. But if you cache the flattened bitmap, at least you can amortize the time for (a) over multiple comparisons and displays. > Finally, what you say about matrices (in a followup message) doesn't > really work for rotation because you need to compute the bounding box > of the shape (to be prepared for future overlays and stuff) and thus > you actually need to do a linear time computation to find that bound > box (ie, apply the matrix to all of the points in the shape)-- it > isn't just (constant time) matrix multiply. But yes, thinking thru all > of these issues is what we've been doing recently. Hmm. We should be able to do better than "apply the matrix to all of the points in the shape." Do you really need a bounding box precomputed (as opposed to computing it only when you actually need it)? How about replacing the bounding box with a convex hull? To compute the convex hull after a rotation (or reflection or scaling or translation or skewing or whatever) you would simply apply the matrix to each of the vertices of the convex hull. In fact, like the "cached bitmap" I mentioned in a previous message, you don't need to store a convex hull at each node of the tree. 
When you apply any of the matrix-multiply operations (rotate, reflect, scale, translate, skew) to an image that has a known convex hull, just compose the new matrix with the existing one. The only times you need to actually COMPUTE a convex hull (by multiplying a matrix by all the vertices of the previous one) are when you're about to do a crop, an overlay, or a display. And since you don't actually need a precise convex hull, you can do a sort of "conservative convex hull" that merges any two vertices that are closer than a certain distance apart, so the hulls don't get enormous themselves. Stephen Bloch sbloch at adelphi.edu Posted on the users mailing list.
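The lazy-composition idea can be sketched in a few lines of illustrative Python (not the actual 2htdp/image implementation): each transform is composed into a single 3x3 matrix in constant time, and vertices are only pushed through the accumulated matrix when a bound is actually needed.

```python
# Sketch of deferred transforms: compose affine maps as 3x3 matrices
# (constant work per rotate/translate) and apply the accumulated
# matrix to the hull vertices only when a bound is requested.
import math

def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

IDENTITY = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]

def rotation(theta):
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def translation(dx, dy):
    return [[1, 0, dx], [0, 1, dy], [0, 0, 1]]

class Shape:
    def __init__(self, hull, m=None):
        self.hull = hull          # vertices of a convex hull
        self.m = m or IDENTITY    # pending (composed) transform

    def transformed(self, t):
        # O(1): just compose matrices, don't touch the vertices.
        return Shape(self.hull, mat_mul(t, self.m))

    def bounding_box(self):
        # O(hull size): apply the matrix only now.
        pts = [(self.m[0][0]*x + self.m[0][1]*y + self.m[0][2],
                self.m[1][0]*x + self.m[1][1]*y + self.m[1][2])
               for x, y in self.hull]
        xs, ys = [p[0] for p in pts], [p[1] for p in pts]
        return min(xs), min(ys), max(xs), max(ys)

unit_square = Shape([(0, 0), (1, 0), (1, 1), (0, 1)])
s = unit_square.transformed(rotation(math.pi / 4)).transformed(translation(2, 0))
```

Any chain of rotations, translations, scalings, and skews costs one matrix multiply per step, matching the point of the message: the linear-time vertex pass happens only at crop, overlay, or display time.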
{"url":"http://lists.racket-lang.org/users/archive/2009-November/036766.html","timestamp":"2014-04-21T02:09:00Z","content_type":null,"content_length":"7684","record_id":"<urn:uuid:0127bc22-12e4-4026-a37c-37cf37363141>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00084-ip-10-147-4-33.ec2.internal.warc.gz"}
Cryptology ePrint Archive: Report 2005/119
Index Calculus in Class Groups of Plane Curves of Small Degree
Claus Diem
Abstract: We present a novel index calculus algorithm for the discrete logarithm problem (DLP) in degree 0 class groups of curves over finite fields. A heuristic analysis of our algorithm indicates that asymptotically for varying q, ``essentially all'' instances of the DLP in degree 0 class groups of curves represented by plane models of a fixed degree d over $\mathbb{F}_q$ can be solved in an expected time of $\tilde{O}(q^{2 -2/(d-2)})$. A particular application is that heuristically, ``essentially all'' instances of the DLP in degree 0 class groups of non-hyperelliptic curves of genus 3 (represented by plane curves of degree 4) can be solved in an expected time of $\tilde{O}(q)$. We also provide a method to represent ``sufficiently general'' (non-hyperelliptic) curves of genus $g \geq 3$ by plane models of degree $g+1$. We conclude that on heuristic grounds the DLP in degree 0 class groups of ``sufficiently general'' curves of genus $g \geq 3$ (represented initially by plane models of bounded degree) can be solved in an expected time of $\tilde{O}(q^{2 -2/(g-1)})$.
Category / Keywords: public-key cryptography / discrete logarithm problem
Date: received 18 Apr 2005
Contact author: diem at iem uni-due de
Available format(s): Postscript (PS) | Compressed Postscript (PS.GZ) | PDF | BibTeX Citation
Version: 20050421:174335 (All versions of this report)
Discussion forum: Show discussion | Start new discussion
[ Cryptology ePrint archive ]
{"url":"http://eprint.iacr.org/2005/119/20050421:174335","timestamp":"2014-04-18T15:41:09Z","content_type":null,"content_length":"3029","record_id":"<urn:uuid:363eea1c-a5ba-4f8b-9d6f-f44e9d61bf17>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00118-ip-10-147-4-33.ec2.internal.warc.gz"}
Midtown, NJ ACT Tutor Find a Midtown, NJ ACT Tutor ...Rose High School. I have experience tutoring and familiarity with the following high school curriculum/tests: Algebra 2, Algebra 2 Honors, Trigonometry, Trigonometry Honors, PreCalc, PreCalc Honors, Calculus AB/BC, Physics, Physics Honors, Physics AP B, Physics AP C, SAT Math/Verbal/Writing, SAT 2 Ph... 9 Subjects: including ACT Math, calculus, physics, algebra 1 ...I have had tutoring experience with SAT math previously. I scored 790 on the SAT math portion. I am comfortable and familiar with the concepts covered in the exam. 15 Subjects: including ACT Math, English, calculus, GRE ...Having spent the last five years tutoring students in public speaking, I have a great deal of experience and look forward to continue tutoring over the summer. I am interested in tutoring in math (Algebra I to AP Calculus BC), English (SAT/ACT Prep to AP English Lang/Lit), introductory economics... 43 Subjects: including ACT Math, reading, English, algebra 1 ...I can't guarantee you similar results, but I CAN guarantee that I will provide you with the tools you need to succeed on your upcoming tests! I have tutored high school algebra both privately and for the Princeton Review. I have a bachelor's degree in physics. 20 Subjects: including ACT Math, English, algebra 2, grammar ...During our time together, we will focus on the following:-- basic organizational skills-- effective note taking-- time-management, scheduling, and prioritizing-- fundamental math skills (pre-algebra, geometry, expressions, inequalities, equations)-- advanced reading techniques (comprehension, act... 41 Subjects: including ACT Math, English, reading, writing
{"url":"http://www.purplemath.com/Midtown_NJ_ACT_tutors.php","timestamp":"2014-04-20T11:04:26Z","content_type":null,"content_length":"23714","record_id":"<urn:uuid:210ab143-bbeb-4c40-b15e-fb4277b46000>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00082-ip-10-147-4-33.ec2.internal.warc.gz"}
Need help finding function! Hi! This is my first ever post on PF. Thanks in advance for anyone who helps me out on this! I'm trying to find a general function that describes the curve in the attached image. As you can see, it is periodic and decays as it approaches infinity. y ≥ 0 at all times; f(0)=0; symmetric about the y axis. The period should increase as well - the distance between the first local maxima on each side of the y axis should be very small (nearly 0) but should increase at a very large rate. For example, label the highest local maxima p and p' and the second highest q and q'. The distance between p and p' is much, much smaller than the distance between p and q (similarly p' and q'). I'll post more drawings if need be. Thank you very much to whoever helps me out with this.
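One candidate family with all the stated properties (even, nonnegative, zero at the origin, decaying envelope, maxima spreading apart away from the axis) is f(x) = e^(-a|x|) sin²(b√|x|); this is an illustrative guess, not necessarily the curve in the missing attachment, and the constants a and b below are arbitrary.

```python
import math

def f(x, a=0.5, b=4.0):
    # Even, nonnegative, f(0) = 0, decaying envelope, and maxima
    # that spread apart as |x| grows (the period increases outward).
    return math.exp(-a * abs(x)) * math.sin(b * math.sqrt(abs(x))) ** 2

# The k-th local maximum sits near |x| = ((pi/2 + k*pi) / b)**2, so
# the gaps between consecutive maxima grow linearly with k.
peaks = [((math.pi / 2 + k * math.pi) / 4.0) ** 2 for k in range(4)]
gaps = [q - p for p, q in zip(peaks, peaks[1:])]
```

A large b pushes the innermost pair of maxima arbitrarily close to the axis while the outer spacing still grows, which matches the p/p' versus p/q description.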
{"url":"http://www.physicsforums.com/showthread.php?s=4d55fe33efe890504c5200c98650b325&p=4527942","timestamp":"2014-04-17T12:39:25Z","content_type":null,"content_length":"27631","record_id":"<urn:uuid:52da3621-d374-4590-8b63-31eefbeec9b7>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00455-ip-10-147-4-33.ec2.internal.warc.gz"}
Poulsbo Math Tutor Find a Poulsbo Math Tutor ...Being ahead of others in my grade at math shows my love for it and my ability to understand it. I won't just give my students the answers, but instead will push them to try and solve the problems on their own after I have shown them how to solve other examples. Of course I will be there to support them the entire time. 15 Subjects: including algebra 2, geometry, precalculus, prealgebra ...Tutoring is very fulfilling for me, and I always enjoy seeing the progress students make as they gain confidence and understanding of the material. I have worked with algebra students of middle school age through college. I focus on emphasizing the underlying logic of the concepts and building on... 17 Subjects: including prealgebra, English, linear algebra, algebra 1 ...I have worked as an IT industry professional for over 30 years. I have programmed in various languages, installed and configured hardware and software, and recently have focused on computer network security issues. I was actively certified as a Computer Information Systems Security Professional (CISSP) for about six years (until October 2010). I prepped a 10-year old recently in ISEE math. 43 Subjects: including trigonometry, linear algebra, computer science, discrete math ...During this time, I developed many methods for teaching this content, especially in the area of integers where many students struggle. I later went on to teach Algebra 1, and found that there were many of my students who needed extra time and review with their Prealgebra learning. It was then that I started offering lunch time tutoring for any and all students requiring extra help. 11 Subjects: including geometry, writing, trigonometry, algebra 1 Due to my academic achievements, I entered the University of Washington as a full-time college student at the age of 14 years old from the nationally famous Early Entrance Program and was featured on a Nightline ABC episode because of this program.
Since then, I have enrolled in a variety of classe... 42 Subjects: including algebra 1, algebra 2, ACT Math, calculus
{"url":"http://www.purplemath.com/poulsbo_wa_math_tutors.php","timestamp":"2014-04-18T05:51:47Z","content_type":null,"content_length":"23783","record_id":"<urn:uuid:48aee84e-707f-4ac3-85cd-be1990090267>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00126-ip-10-147-4-33.ec2.internal.warc.gz"}
Books:
Applied Linear Algebra: The Decoupling Principle. First edition published 2000 by Prentice Hall. (out of print) ISBN 0-13-085645-2
Applied Linear Algebra: The Decoupling Principle. Second edition published 2008 by the American Mathematical Society, ISBN-13: 978-0-8218-4441-0. ISBN-10: 0-8218-4441-5
Topology of Tiling Spaces. Published 2008 by the American Mathematical Society, ISBN-13: 978-0-8218-4727-5. ISBN-10: 0-8218-4727-9
Book Chapter: Linear Algebra and Mathematical Physics, chapter for the CRC Handbook of Linear Algebra, Leslie Hogben ed., 2007.
Research articles:
1. (with A. C. Sadun and A. A. Sadun) Solar Retinopathy - A Biophysical Analysis, Arch. Ophthalmol. 102 (1984) 1510-1512.
2. (with Zvi Bern, M. B. Halpern and Clifford Taubes) Continuum Regularization of QCD, Phys. Lett. B 185 (1985) 151-156.
3. (with Zvi Bern, M. B. Halpern and Clifford Taubes) Continuum Regularization of Quantum Field Theory I. Scalar Prototype, Nucl. Phys. B 284 (1987) 1-34.
4. (with Zvi Bern, M. B. Halpern and Clifford Taubes) Continuum Regularization of Quantum Field Theory II. Gauge Theory, Nucl. Phys. B 284 (1987) 35-91.
5. (with Zvi Bern and M. B. Halpern) Continuum Regularization of Quantum Field Theory III. The QCD4 $\beta$-function, Nucl. Phys. B 284 (1987) 92-102.
6. (with Zvi Bern and M. B. Halpern) Continuum Regularization of Quantum Field Theory IV. Langevin Renormalization, Z. Phys. C 35 (1987) 255-283.
7. Continuum Regularization of Quantum Field Theory V. Schwinger-Dyson Renormalization, Z. Phys. C 36 (1987) 407-424.
8. (with F. Gesztesy, D. Gurarie, H. Holden, M. Klaus, B. Simon and P. Vogl) Trapping and Cascading of Eigenvalues in the Large Coupling Limit, Commun. Math. Phys. 118 (1988) 597-634.
9. (with J. E. Avron, Jan Segert and Barry Simon) Topological Invariants in Fermi Systems with Time Reversal Invariance, Phys. Rev. Lett. 61 (1988), 1329-1332.
10. (with Jan Segert) Chern Numbers for Fermionic Quadrupole Systems, J. Phys. A. 22 (1989) L111-L115.
11. (with J. E. Avron, Jan Segert and Barry Simon) Chern Numbers, Quaternions and Berry's Phase in Fermi Systems, Commun. Math. Phys. 124 (1989), 595-624.
12. (with J. E. Avron) Chern Numbers and Adiabatic Transport in Networks with Leads, Phys. Rev. Letters 62 (1989) 3082-3084.
13. (with J. E. Avron) Adiabatic Quantum Transport in Networks with Macroscopic Components, Ann. of Physics 206 (1991) 440-493.
14. (with Jan Segert) Non-self-dual Yang-Mills Connections with Nonzero Chern Number, Bull. Amer. Math. Soc. 24 (1991) 163-170.
15. (with A. C. Sadun) Relativistic Dynamics of Expanding Sources, Astrophys. & Space Sci. 185 (1991), 21-36.
16. (with Jan Segert) Non-self-dual Yang-Mills connections with Quadrupole Symmetry, Comm. Math. Phys. 145 (1992), 363-391.
17. (with Jan Segert) Stationary Points of the Yang-Mills Action, Comm. Pure & Appl. Math. 45 (1992) 461-484.
18. (with J. E. Avron, M. Klein and A. Pnueli) Hall conductance, Adiabatic Charge Transport and Persistent Currents of Leaky Tori, Phys. Rev. Lett. 69 (1992), 128-131.
19. (with Jan Segert) Constructing Non-Self-Dual Yang-Mills Connections with Nonzero Chern Number, Proceedings of the Symposia in Pure Mathematics 54, Part 2 (1993), 529-537.
20. (with M. Vishik) The Spectrum of the Second Variation of the Energy for an Ideal Incompressible Fluid, Phys. Lett. A 182 (1993), 394-398.
21. A Symmetric Family of Yang-Mills Fields, Commun. Math. Phys. 163 (1994), 257-291.
22. (with C. Radin) The Isoperimetric Problem for Pinwheel Tilings, Commun. Math. Phys. 177 (1996), 255-263.
23. A Simple Geometric Representative of $\mu$ of a Point, Commun. Math Phys. 178 (1996), 107-113.
24. (with J. Avron) Adiabatic Curvature and the $S$-Matrix, Commun. Math. Phys. 181 (1996), 685-702.
25. (with D. Auckly) A Family of Mobius Invariant 2-Knot energies, in ``Geometric Topology: The Proceedings of the 1993 Georgia International Topology Conference'', W. Kazez, ed.
AMS/IP Studies in Advanced Mathematics 2, Part 1 (1997) 235-258.
26. Simple Type is Not a Boundary Phenomenon, in ``Geometry, Topology and Physics'', B. Apanasov, S. Bradlow, W. Rodrigues and K. Uhlenbeck, ed. (1997), 233-244.
27. Some Generalizations of the Pinwheel Tiling, Disc. Comp. Geom. 20 (1998), 79-110.
28. (with C. Radin) Subgroups of $SO(3)$ Associated with Tilings, J. Algebra 202 (1998), 611-633. Click here for pdf.
29. (with M. Speight) Geodesic Incompleteness in the $CP^1$ Model on a Compact Riemann Surface, Lett. Math. Phys. 43 (1998), 329-334.
30. (with C. Radin) An Algebraic Invariant for Substitution Tiling Systems, Geometriae Dedicata 73 (1998) 21-37. Click here for pdf.
31. (with C. Radin) On 2-generator subgroups of SO(3), Trans. Amer. Math. Soc. 351 (1999), 4469-4480. Click here for pdf.
32. (with J. Conway and C. Radin) On Angles Whose Squared Trigonometric Functions are Rational, Discrete Comput. Geom. 22 (1999), 321-332. Click here for pdf.
33. (with B. Draco and D. Van Wieren) Growth Rates in the Quaquaversal Tiling, Discrete Comput. Geom. 23 (2000), 419-435.
34. (with J. Conway and C. Radin) Relations in SO(3) Supported by Geodetic Angles, Discrete Comput. Geom. 23 (2000), 453-463. Click here for pdf.
35. (with J.E. Avron, A. Elgart and G.M. Graf) Geometry, Statistics and Asymptotics of Quantum Pumps, Physical Review B (Rapid Communications) 62 (2000) R10618-R10621. Click here for pdf.
36. (with L. Guijarro and G. Walschap) Parallel Connections over Symmetric Spaces, J. Geom. Anal. 11 (2001), 265-281.
37. (with D. Groisser) Simple Type and the Boundary of Moduli Space, J. Geom. Phys. 36 (2000), 324-384.
38. (with C. Radin) Isomorphisms of Hierarchical Structures, Ergodic Theory Dynam. Systems 21 (2001), 1239-1248. Click here for pdf.
39. (with J.E. Avron) Fredholm Indices and the Phase Diagram of Quantum Hall Systems, J. Math. Phys. 42 (2001), 1-14. Click here for pdf.
40. (with J.E. Avron, A. Elgart, and G.M.
Graf) Optimal Quantum Pumps, Physical Review Letters 87 (2001), 236601. Click here for pdf.
41. (with Jean Marie Linhart) Fast and Slow Blowup in the S^2 Sigma Model and (4+1)-Dimensional Yang-Mills Model, Nonlinearity 15 (2002) 219-238. Click here for pdf.
42. (with N. Ormes and C. Radin) A Homeomorphism Invariant for Substitution Tiling Spaces, Geometriae Dedicata 90 (2002), 153-182. Click here for pdf.
43. (with F. Rodriguez Villegas and J. F. Voloch) Blet, a Mathematical Puzzle. American Mathematical Monthly 109 (2002) 729-740. Click here for pdf.
44. (with J.E. Avron, A. Elgart, and G.M. Graf) Time-Energy Coherent States and Adiabatic Scattering, Journal of Mathematical Physics 43 (2002), 3415-3424. Click here for pdf.
45. (with R.F. Williams) Tiling Spaces are Cantor Set Fiber Bundles, Ergodic Theory and Dynamical Systems 23 (2003) 307-316. Click here for pdf.
46. (with A. Marini) Spherically Symmetric Solutions of a Boundary Value Problem for Monopoles, Journal of Mathematical Physics 44 (2003) 1071-1083. Click here for pdf.
47. (with A. Clark) When size matters: subshifts and their related tiling spaces, Ergodic Theory and Dynamical Systems 23 (2003) 1043-1057. Click here for pdf.
48. (with S. Keel) Oort's Conjecture for A_g x C, Journal of the American Mathematical Society 16 (2003) 887-900. Click here for pdf.
49. Tiling Spaces are Inverse Limits. Journal of Mathematical Physics 44 (2003) 5410-5414. Click here for pdf.
50. (with J.E. Avron, A. Elgart, G.M. Graf and K. Schnee) Adiabatic charge pumping in open quantum systems, Communications in Pure and Applied Math 57 (2004) 528-561. Click here for pdf.
51. (with J.E. Avron, A. Elgart and G.M. Graf) Transport and Dissipation in Quantum Pumps. Journal of Statistical Physics 116 (2004) 425-473. Click here for pdf.
52. (with C. Radin) Structure of the hard sphere solid, Phys. Rev. Lett. 94 (2005), paper 015502. Click here for pdf.
53. (with C. Holton and C. Radin) Conjugacies for Tiling Dynamical Systems.
Communications in Mathematical Physics 254 (2005) 343-359. Click here for pdf.
54. (with P. Buczek and J. Wolny) Periodic Diffraction Patterns for 1D Quasicrystals. Acta Physica Polonica B 36 (2005) 919-933.
55. (with L. Bowen, C. Holton and C. Radin) Uniqueness and Symmetry in Problems of Optimally Dense Packings. Math Phys. Electron. J. 11 (2005) paper 1.
56. (with H. Koch and C. Radin) The most stable structure for hard spheres, Physical Review E 72 (2005), paper 016708. Click here for pdf.
57. (with R. Kenyon and B. Solomyak) Topological Mixing for Substitutions on Two Letters, Ergodic Theory and Dynamical Systems 25 (2005) 1919-1934.
58. (with A. Clark) When Shape Matters: Deformations of Tiling Spaces, Ergodic Theory and Dynamical Systems 26 (2006) 69-86.
59. Tilings, tiling spaces and topology, Philosophical Magazine 86 (2006) 875-881.
60. Pattern-Equivariant Cohomology with Integer Coefficients, Ergodic Theory and Dynamical Systems 27 (2007), 1991-1998.
61. (with N.P. Frank) Topology of (Some) Tiling Spaces without Finite Local Complexity, Discrete and Continuous Dynamical Systems 23 (2009) 847-865.
62. (with M. Barge, B. Diamond, and J. Hunton) Cohomology of Substitution Tiling Spaces, Ergodic Theory and Dynamical Systems 30 (2010) 1607-1627.
63. (with B. Rand) An Approximation Theorem for Maps Between Tiling Spaces, Discrete and Continuous Dynamical Systems 29 (2011) 323-326.
64. (with Marcy Barge) Quotient Cohomology for Tiling Spaces, New York Journal of Mathematics 17 (2011) 579-599.
65. Exact Regularity and the Cohomology of Tiling Spaces, Ergodic Theory and Dynamical Systems 31 (2011) 1819-1834.
66. (with M. Barge, H. Bruin and L. Jones) Homological Pisot Substitutions and Exact Regularity, Israel Journal of Mathematics 188 (2012), 281-300.
67. (with Natalie Frank) Fusion: a general framework for hierarchical tilings of R^d, Geometriae Dedicata 10.1007/s10711-013-9893-7 (2013)
68.
(with Natalie Frank) Fusion tilings with infinite local complexity, Topology Proceedings 43 (2014) 235-276.
69. (with Johannes Kellendonk) Meyer sets, topological eigenvalues, and Cantor fiber bundles, Journal of the London Math Society (2013), online 10.1112/jlms/jdt062
70. (with Charles Radin) Phase transitions in a complex network, J. Phys. A: Math. Theor. 46 (2013) 305002
In Press:
71. (with Kui Ren and Charles Radin) The Asymptotics of Large Constrained Graphs, to appear in J. Phys. A.
72. (with Charles Radin) Singularities in the entropy of asymptotically large simple graphs.
{"url":"http://www.ma.utexas.edu/users/sadun/publist.html","timestamp":"2014-04-18T13:37:10Z","content_type":null,"content_length":"20875","record_id":"<urn:uuid:a18c3fa4-1acd-40cc-8153-36d68cae57c2>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00070-ip-10-147-4-33.ec2.internal.warc.gz"}
Power Supplies Linear Power Supplies Unregulated Linear Power Supplies When building unregulated linear power supplies, different rectifier circuits can be used. The most common circuits are the following: Dual Complementary Rectifier When a positive and negative DC output of the same voltage is required, a dual complementary rectifier is the best choice. The secondary windings are bifilar-wound for precisely matched resistance, coupling, and capacitance. Full-Wave Bridge The full wave bridge rectifier is the most cost-effective because the entire transformer secondary is used each half-cycle and no center tap is required. Full-Wave, Center-Tapped Circuits A full-wave, center-tapped rectifier is commonly used in high-current, low-voltage applications because there is only one voltage drop in the circuit. However, since only one secondary winding is used at a time, the transformer’s power rating must be about 30% greater than for a full-wave bridge transformer. Full-Wave Center-Tap with Choke Input Choke input filters are commonly used in high current applications because they reduce ripple and allow better utilization of the transformer’s power capacity. 
Regulated Linear Power Supplies
Regulated linear power supplies are used to provide a constant output voltage for different loads and varying input voltage.
How to Specify the Transformer
The formula to determine the transformer's AC voltage and current (simplified version) uses:
Vdc = Output DC voltage
Vreg = Voltage drop in the regulator = 3 Volt
Vrec = Voltage drop in the diodes = 0.7 Volt
Vrip = Ripple voltage = 10% of Vdc
Vnom = Nominal input voltage = 117 Volt
Vlow = Low line input voltage = 98 Volt
0.9 = Rectifier efficiency
Calculations for the transformer's AC voltage and current, when used in the various rectifier circuits:
Rectifier circuit          RMS voltage (V)          RMS current (A)
Dual complementary         VAC = 1.03 VDC + 3.47    IAC = 1.8 x IDC
Full-wave bridge           VAC = 1.03 VDC + 4.13    IAC = 1.8 x IDC
Full-wave center-tapped    VAC = 1.03 VDC + 3.47    IAC = 1.3 x IDC
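The table's simplified sizing formulas are easy to automate; here is a small sketch that applies them directly (the topology names and the 12 V / 2 A example are illustrative):

```python
# Transformer sizing per the simplified formulas in the table above:
# VAC = 1.03 * VDC + offset, IAC = factor * IDC, where the offset and
# factor depend on the rectifier topology.
COEFFS = {
    # topology: (voltage offset in V, current factor)
    "dual_complementary":      (3.47, 1.8),
    "full_wave_bridge":        (4.13, 1.8),
    "full_wave_center_tapped": (3.47, 1.3),
}

def transformer_rating(vdc, idc, topology):
    offset, factor = COEFFS[topology]
    vac = 1.03 * vdc + offset
    iac = factor * idc
    return vac, iac

# e.g. a 12 V / 2 A regulated output using a full-wave bridge:
vac, iac = transformer_rating(12.0, 2.0, "full_wave_bridge")
```

Note the bridge needs a higher secondary voltage (two diode drops) but a center-tapped winding needs a higher power rating, consistent with the trade-offs described earlier on the page.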
{"url":"http://www.powertronix.com/resources.php?page=linear","timestamp":"2014-04-18T18:11:48Z","content_type":null,"content_length":"15902","record_id":"<urn:uuid:1fac7892-bd4a-43a9-829f-bdc5fb5ec81c>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00640-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Help
March 23rd 2009, 12:18 AM #1 Oct 2008
Suppose $f$ is a real valued function whose domain contains the interval $[a,b]$. Assume that $f$ is unbounded on $[a,b]$.
(a) Prove that there exists a sequence $(y_n)$ in $[a,b]$ that converges to $y \in [a,b]$ such that for every $n \in \mathbb{N}$, $|f(y_n)| > n$. Isn't this just the definition of an unbounded sequence? Do we have to actually construct one? What would be the easiest way to do this?
(b) Using this unbounded sequence, how would we show that the set of Riemann sums of $f$ corresponding to a partition $P$ is an unbounded set of real numbers?
March 29th 2009, 04:40 PM #2 Mar 2009
GENERALLY speaking, there are two basic ways of forming sequences in analysis:
1) By using the axiom of Choice.
2) By using the inductive method.
The following steps are a summary of the process of forming the sequence in concern, by using the axiom of Choice.
Step 1) PROVE: For all nεΝ there exists an xε[a,b] such that |f(x)|>n, by using the fact that f is unbounded over [a,b].
Step 2) Form the sets $A_{n}$ = {x: xε[a,b], |f(x)|>n, nεN}.
Step 3) By using step 1, prove: For all nεΝ, $A_{n}\neq\emptyset$.
Step 4) Now use the axiom of Choice to form a sequence $\{x_{n}\}$ in [a,b] such that |f($x_{n}$)|>n, nεN.
Step 5) By the Bolzano-Weierstrass theorem, since the sequence $\{x_{n}\}$ is bounded in [a,b], there exists an accumulation point, v, in [a,b].
Step 6) Prove: For all nεN there exists a $y\neq v$, yε{$x_{n}$: nεN}, such that |y-v|<1/n, by using the fact that v is an accumulation point.
Step 7) Form the sets $S_{n}$ = {y: |y-v|<1/n, nεΝ, $y\neq v$, yε{$x_{n}$: nεN}}.
Step 8) Prove: For all nεN, $S_{n}\neq\emptyset$, by using step 6.
Step 9) Use the axiom of Choice again to form the sequence $\{y_{n}\}$ in [a,b] such that |f($y_{n}$)|>n, $y_{n}\neq v$ and |$y_{n}$-v|<1/n for all nεΝ.
Step 10) Prove $\lim_{n\rightarrow\infty}{y_{n}}=v$.
ALTERNATIVELY, by using the inductive process we can say: after we have proved that for all n belonging to the natural Nos there exists an x belonging to [a,b] such that |f(x)|>n:
For n=1, we pick $x_{1}$ such that |f($x_{1}$)|>1.
For n=2, we pick $x_{2}>x_{1}$ such that |f($x_{2}$)|>2.
Suppose we have found $x_{1}<x_{2}<\cdots<x_{n}$, all in [a,b], such that |f($x_{n}$)|>n. We select $x_{n+1}>x_{n}$. We can thus proceed inductively to construct the sequence $\{x_{n}\}$, which is increasing and bounded from above by b. Hence there exists a y in [a,b] such that $\lim_{n\rightarrow\infty}{x_{n}}=y$.
So we have proved the existence of a sequence $\{x_{n}\}$ in [a,b] such that |f($x_{n}$)|>n and $\lim_{n\rightarrow\infty}{x_{n}}=y$.
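A concrete instance of the construction, offered as an illustration rather than part of the proof: take f(x) = 1/x on [0, 1] (with f(0) = 0 so f is defined everywhere but still unbounded). The points x_n = 1/(n + 1) satisfy |f(x_n)| > n and converge to 0, the accumulation point the argument produces.

```python
# Illustrative check of the construction for f(x) = 1/x on [0, 1]:
# x_n = 1/(n+1) lies in (0, 1], |f(x_n)| = n+1 > n, and x_n -> 0.
def f(x):
    # Set f(0) = 0 so f is defined on all of [0, 1] yet unbounded.
    return 0.0 if x == 0 else 1.0 / x

xs = [1.0 / (n + 1) for n in range(1, 100)]
assert all(0 < x <= 1 for x in xs)
assert all(abs(f(x)) > n for n, x in zip(range(1, 100), xs))
assert xs[-1] < 0.02  # heading toward the accumulation point 0
```

The same picture explains part (b): putting the tag point x_n into the subinterval of a partition that contains 0 makes that single term of the Riemann sum as large as desired.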
UBC Mathematics Colloquium North Carolina State University, USA. Combinatorics and topology of stratified spaces Fri., March 19, 2010, 3:00pm, MATX 1100 Anders Björner characterized which finite, graded partially ordered sets (posets) are closure posets of finite, regular CW complexes, and he also observed that a finite, regular CW complex is homeomorphic to the order complex of its closure poset. One might therefore hope to use combinatorics to determine the topological structure of stratified spaces by studying their closure posets; however, it is possible for two different CW complexes with very different topological structure to have the same closure poset if one of them is not regular. I will talk about a new criterion for determining whether a finite CW complex is regular (with respect to a choice of characteristic functions); this will involve a mixture of combinatorics and topology. Along the way, I will review the notions from topology and combinatorics we will need. Finally I will discuss an application: the proof of a conjecture of Fomin and Shapiro, a special case of which says that the Schubert cell decomposition of the totally nonnegative part of the space of upper triangular matrices with 1's on the diagonal is a regular CW complex homeomorphic to a ball.
MathFiction: Getaway from Getawehi (Colin Kapp) (quoted from Getaway from Getawehi) "In all my books, twice one is two - and it's never before been in dispute" "But your books were written on Terra, not Getawehi. On Getawehi, they don't apply" "But that's insane! Mathematics is merely a system for expressing the properties and relationships of quantities. It's universal, not a local phenomenon. Once one is one, twice one is two...." "Not on Getawehi. It seems to be different here. Once one is one but twice one is only a bit over one and a half. One point five seven zero eight, to be more exact. And three times one is about two point three six." "It's long been suspected that our mathematics may not be universal. Dimensionless numbers, for example, although they have an accepted value in the part of the universe where we customarily use them, are more likely to be local coincidences than physical absolutes. But on Getawehi, we seem to have hit on something even more fundamental [...] It has something to do with unity." "Unity?" "Yes. Unity...one...a whole. I'm no mathematician but it seems to me there's a darn great hole in our idea of the structure of numbers. We've explored number structure up to infinity and several orders beyond - but something we've always taken for granted is the constant mathematical value of unity." "But it has to have a constant mathematical value! Once one is one...it can't be otherwise by its very definition." "So we've always assumed. But what if we happened to be wrong? What if there is a difference between the value of one as representing a whole thing and the value of one as a mathematical factor? They both seem to be the same in our corner of the universe but one used as a factor on Getawehi is demonstrably only 0.7854 of what it was on Terra."
Photoelectric homework help

First off, apologies to the boards. I didn't look for the "homework" subforum hard enough. (Being moved on my first post is embarrassing.) So, here's my question, typed out.

I have an emitter electrode of unknown composition. There is a source of controllable light to the side, pointed right at the electrode. The cathode ray created by the electrode will have to travel through a potential difference between metal plates before reaching the anode, where it is converted into a current. My goal is to identify the emitter electrode's composition. To determine this, I am given the following: I can vary the wavelength of the light created by the light source, and I can measure the minimum voltage (potential difference) required to stop the cathode ray from becoming a current completely. The results of said experiment:

wavelength (nm):     250   300   350   400   450   500
minimum voltage (V): 2.70  1.90  1.30  0.80  0.50  0.10

This is my attempt at solving it:

[(1/2)mv^2]_max = eV' = (Planck's constant)(c/wavelength) - (work function)

where V' is the minimum voltage. So eV' + (work function) = (Planck's constant)(c/wavelength). The only property here that belongs to the electrode is the work function, so...

(work function) = (Planck's constant)(c/wavelength) - eV'

Now, since everything here is measured in units of electron volts, I figure I might as well cancel all the e's from both sides:

(work function)/e = (Planck's constant)(c/wavelength)/e - V'

which makes things a bit simpler for the numbers. But it doesn't matter. So I stuck this in Excel, hoping to come out with a single work function. But instead, my results were pretty much repeats of the minimum potential difference. To which I said, "This can't be right." (I'm only solving up to the work function in my answers, because finding the material composition from the work function is fairly trivial for this problem.)

Thanks in advance for the help! (I need to learn LaTeX at some point.)
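The approach in the post can be checked in a few lines of Python; the constant hc ≈ 1239.84 eV·nm and the variable names below are my own choices, not from the thread:

```python
# Sketch of the poster's own formula, W = hc/lambda - e*V', worked in eV.
# HC = 1239.84 eV*nm is the standard value of hc; the data are from the post.
HC = 1239.84                                   # eV * nm
wavelengths = [250, 300, 350, 400, 450, 500]   # nm
v_stop = [2.70, 1.90, 1.30, 0.80, 0.50, 0.10]  # V, so e*V' in eV numerically

work_fns = [HC / lam - v for lam, v in zip(wavelengths, v_stop)]
avg_w = sum(work_fns) / len(work_fns)
print([round(w, 2) for w in work_fns])  # each wavelength gives roughly the same W
print(round(avg_w, 2))                  # about 2.3 eV
```

Each wavelength yields nearly the same work function, so the algebra in the post is sound; a flat column of repeated voltages would point to a spreadsheet formula error rather than a physics error.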
Sec 2.4 Change in Quantity and Subtracting Real Numbers

Change in Quantity: If you had 15 gal of gas in your tank and at the end of your trip you had only 2 gal left, then the gas quantity decreased from 15 to 2. This change is called the change in quantity. In our case the quantity of gasoline decreased from 15 to 2 gallons. The actual change in quantity can be calculated by subtracting the starting quantity from the ending quantity: 2 − 15 = −13. The change in quantity is the ending amount minus the beginning amount.

Change in Quantity = Ending Amount − Beginning Amount

If the quantity increased, the change is positive; if it decreased, the change is negative.

Subtraction of Real Numbers: Subtracting some amount of money has the same effect as incurring debt or spending. That means that subtracting is the same as adding negative numbers, which suggests that in general subtracting a number is the same as adding the opposite of that number:

a − b = a + (−b)

Example: Subtract: 5 − 2 = 5 + (−2) = 3, which is the same as 5 − 2 = 3, so 5 − 2 = 5 + (−2).

Subtracting a Negative Number: A helicopter starts from the bottom of a canyon 150 ft below sea level and rises to an altitude of 423 ft above sea level. Find the change in altitude.

Solution: Since the altitude changed from −150 ft (below sea level) to 423 ft (above sea level), the change in altitude is the ending altitude minus the starting altitude: 423 − (−150). The difference should be larger than 423, since the helicopter started from an altitude lower than sea level. If we recall that a − b = a + (−b), we get

423 − (−150) = 423 + (the opposite of −150) = 423 + 150 = 573 ft

The opposite of a number is denoted by a negative sign, so the opposite of 5 is −5. To denote the opposite of −5 we write −(−5). Since we know that the opposite of −5 is 5, we can observe that

−(−a) = a

The opposite of the opposite of a number is the number itself.
Example: Find the opposite of each number: a) −7. Opposite: 7. b) 5. Opposite: −5.

Subtracting a negative number: Changes of Increasing and Decreasing Quantities. An increasing quantity has a positive change. A decreasing quantity has a negative change.

The wolf population in the Greater Yellowstone area for various years is shown in the table:

Year | Population | Change in population from each year to the next
1996 | 40         | –

a. From what year to the next was the change in population the greatest? What was that change? The change was largest from 1999 to 2000; the change was 59.

b. From what year to the next was the change in population the least? What was that change? The change was least from 1998 to 1999; the change was 6.

Group Exploration: A set of points is given in the table (by x- and y-coordinates):

x | y | Change in y (current value minus previous)
0 | 3 | –

a. Plot the points from the table. Do they lie on one line?
b. Complete the third column in the table. Describe any patterns.
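The change-in-quantity rule above can be sketched in a few lines of Python (the function name is my own):

```python
def change_in_quantity(beginning, ending):
    # Change in Quantity = Ending Amount - Beginning Amount
    return ending - beginning

print(change_in_quantity(15, 2))      # gas tank: 15 gal down to 2 gal gives -13
print(change_in_quantity(-150, 423))  # helicopter: -150 ft up to 423 ft gives 573
```

A positive result signals an increasing quantity and a negative result a decreasing one, matching the rule stated in the section.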
ation of logic programs Results 1 - 10 of 27 - ACM Transactions on Programming Languages and Systems , 1989 "... Abstract: Mode and data dependency analyses find many applications in the generation of efficient executable code for logic programs. For example, mode information can be used to generate specialized unification instructions where permissible; to detect determinacy and functionality of programs; to generate index structures more intelligently; to reduce the amount of runtime tests in systems that support goal suspension; and in the integration of logic and functional languages. Data dependency information can be used for various source-level optimizing transformations, to improve backtracking behavior, and to parallelize logic programs. This paper describes and proves correct an algorithm for the static inference of modes and data dependencies in a program. The algorithm is shown to be quite efficient for programs commonly encountered in practice. , 1994 "... We provide here a systematic comparative study of the relative strength and expressive power of a number of methods for program analysis of Prolog. Among others we show that these methods can be arranged in the following hierarchy: mode analysis ⇒ type analysis ⇒ monotonic properties ⇒ ..." Cited by 80 (4 self) Add to MetaCart We provide here a systematic comparative study of the relative strength and expressive power of a number of methods for program analysis of Prolog.
Among others we show that these methods can be arranged in the following hierarchy: mode analysis ⇒ type analysis ⇒ monotonic properties ⇒ non-monotonic run-time properties. We also discuss a method allowing us to prove global run-time properties. - ACM TOPLAS , 1998 "... We provide simple conditions which allow us to conclude that in case of several well-known Prolog programs the unification algorithm can be replaced by iterated matching. The main tools used here are types and generic expressions for types. As already noticed by other researchers, such a replaceme ..." Cited by 79 (21 self) Add to MetaCart We provide simple conditions which allow us to conclude that in case of several well-known Prolog programs the unification algorithm can be replaced by iterated matching. The main tools used here are types and generic expressions for types. As already noticed by other researchers, such a replacement offers a possibility of improving the efficiency of a program's execution. - Journal of Logic Programming , 1988 "... In general, logic programs are undirected, i.e. there is no concept of "input" and "output" arguments to a procedure. An argument may be used either as an input or as an output argument, and programs may be executed either in a "forward" direction or in a "backward" direction. However, it is often t ..." Cited by 72 (7 self) Add to MetaCart In general, logic programs are undirected, i.e. there is no concept of "input" and "output" arguments to a procedure. An argument may be used either as an input or as an output argument, and programs may be executed either in a "forward" direction or in a "backward" direction. However, it is often the case that in a given program, a predicate is used with some of its arguments used consistently as input arguments and others as output arguments. Such mode information can be used by a compiler to effect various optimizations.
This paper considers the problem of automatically inferring the modes of the predicates in a program. The dataflow analysis we use is more powerful than approaches relying on syntactic characteristics of programs, e.g. [18]. Our work differs from that of Mellish [14, 15] in that (i) we give a sound and efficient treatment of variable aliasing in mode inference; (ii) by propagating instantiation information using state transformations rather than through - Proc. JICSLP , 1992 "... internet: ..." , 1992 "... This article reports some results on this correlation in the context of logic programs. A formal notion of the "precision" of an analysis algorithm is proposed, and this is used to characterize the worst-case computational complexity of a number of dataflow analyses with different degrees of precisi ..." Cited by 35 (4 self) Add to MetaCart This article reports some results on this correlation in the context of logic programs. A formal notion of the "precision" of an analysis algorithm is proposed, and this is used to characterize the worst-case computational complexity of a number of dataflow analyses with different degrees of precision. While this article considers the analysis of logic programs, the technique proposed, namely the use of "exactness sets" to study relationships between complexity and precision of analyses, is not specific to logic programming in any way, and is equally applicable to flow analyses of other language families. - in Proceedings of the North American Conference on Logic Programming , 1989 "... We investigate properties of logic programs that permit refinements in their fixpoint evaluation and shed light on the choice of control strategy. A fundamental aspect of a bottom-up computation is that we must constantly check to see if the fixpoint has been reached. If the computation iteratively ..." 
Cited by 26 (5 self) Add to MetaCart We investigate properties of logic programs that permit refinements in their fixpoint evaluation and shed light on the choice of control strategy. A fundamental aspect of a bottom-up computation is that we must constantly check to see if the fixpoint has been reached. If the computation iteratively applies all rules, bottom-up, until the fixpoint is reached, this amounts to checking if any new facts were produced after each iteration. Such a check also enhances efficiency in that duplicate facts need not be re-used in subsequent iterations, if we use the Seminaive fixpoint evaluation strategy. However, the cost of this check is a significant component of the cost of bottom-up fixpoint evaluation, and for many programs the full check is unnecessary. We identify properties of programs that enable us to infer that a much simpler check (namely, whether any fact was produced in the previous iteration) suffices. While it is in general undecidable whether a given program has these properties, we develop techniques to test sufficient conditions, and we illustrate these techniques on some simple programs that have these properties. The significance of our results lies in the significantly larger class of programs for which bottom-up evaluation methods, enhanced with the optimizations that we propose, become competitive with standard (top-down) implementations of logic programs. This increased efficiency is achieved without compromising the completeness of the bottom-up approach; this is in contrast to the incompleteness that accompanies the depth-first search strategy that is central to most top-down implementations. - International Conference on Logic Programming , 1999 "... At present, the field of declarative programming is split into two main areas based on different formalisms; namely, functional programming, which is based on lambda calculus, and logic programming, which is based on first-order logic.
There are currently several language proposals for integrating th ..." Cited by 20 (3 self) Add to MetaCart At present, the field of declarative programming is split into two main areas based on different formalisms; namely, functional programming, which is based on lambda calculus, and logic programming, which is based on firstorder logic. There are currently several language proposals for integrating the expressiveness of these two models of computation. In this thesis we work towards an integration of the methodology from the two research areas. To this end, we propose an algebraic approach to reasoning about logic programs, corresponding to the approach taken in functional programming. In the first half of the thesis we develop and discuss a framework which forms the basis for our algebraic analysis and transformation methods. The framework is based on an embedding of definite logic programs into lazy functional programs in Haskell, such that both the declarative and the operational semantics of the logic programs are preserved. In spite of its conciseness and apparent simplicity, the embedding proves to have many interesting properties and it gives rise to an algebraic semantics of logic programming. It also allows us to reason about logic programs in a simple calculational style, using rewriting and the algebraic laws of combinators. In the embedding, the meaning of a logic program arises compositionally from the meaning of its constituent subprograms and the combinators that connect them. In the second half of the thesis we explore applications of the embedding to the algebraic transformation of logic programs. A series of examples covers simple program derivations, where our techniques simplify some of the current techniques. 
Another set of examples explores applications of the more advanced program development techniques from the Algebra of Programming by Bird and de Moor [18], where we expand the techniques currently available for logic program derivation and optimisation. To my parents, Sandor and Erzsebet. And the end of all our exploring Will be to arrive where we started And know the place for the first time. - New Generation Computing "... This paper complements the previous paper "Making Exhaustive Search Programs Deterministic" which showed a systematic method for compiling a Horn-clause program for exhaustive search into a GHC program or a Prolog program with no backtracking. This time we present a systematic method for deriving a ..." Cited by 18 (4 self) Add to MetaCart This paper complements the previous paper "Making Exhaustive Search Programs Deterministic" which showed a systematic method for compiling a Horn-clause program for exhaustive search into a GHC program or a Prolog program with no backtracking. This time we present a systematic method for deriving a deterministic logic program that simulates coroutining execution of a generate-and-test logic program. The class of compilable programs is sufficiently general, and compiled programs proved to be efficient. The method can also be viewed as suggesting a method of compiling a naive logic program into (very) low-level languages. - Specification and Validation Methods for Programming Languages and Systems , 1994 "... We show here that verification of Prolog programs can be systematically carried out within a simple framework which comprises syntactic analysis, declarative semantics, modes and types. We apply these techniques to study termination, partial correctness, occur-check freedom, absence of errors and abs ..."
Cited by 15 (3 self) Add to MetaCart We show here that verification of Prolog programs can be systematically carried out within a simple framework which comprises syntactic analysis, declarative semantics, modes and types. We apply these techniques to study termination, partial correctness, occur-check freedom, absence of errors and absence of floundering. Finally, we discuss which aspects of these techniques can be automated. Notes. This research was partly supported by the ESPRIT Basic Research Action 6810 (Compulog 2). A preliminary, shorter, version of this paper appeared as Apt [3]. 1 Introduction 1.1 Motivation Prolog is 20 years old and so is logic programming. However, they were developed separately and these two developments never really merged. The first track is best exemplified by Sterling and Shapiro [36], which puts emphasis on programming style and techniques, and the second by Lloyd [25], which concentrates on the theoretical foundations. As a result of these separate developments, until recently little...
Fonts with single and double dotted letter R

Readily available? I doubt it. It's (to my knowledge) not part of any standard spec. If you're typesetting this, you could fairly simply produce these as /R/ with two different character styles.

You could contact Sindre Bremnes about his charming Telefon.

Some fonts have R dotaccent (Unicode 1E58/1E59) but I've never seen any with R dieresis. Use combining accents instead. Copy & paste the following into a word processor and it'll work with some system fonts such as Corbel, Consolas, Calibri, Cambria, Candara, Deja Vu, Segoe and Times New Roman.

Thanks all. I finally figured out how to do it with Word's equation (EQ) field code. Finding the right diacritic marks was the hard part.

You could also look at the SIL fonts such as Gentium and Andika. I checked the R with dot above (uni1E58) in Andika. It has it. R+dieresis: no. David and Frode's solutions sound the best to me.

Assuming your Word file is the final version, the EQ field is fine. But if you're subsequently sending it to a publisher/typesetter who is going to import it into some other application, it might break. Either the combining accents, or a regular spacing accent kerned back over the R (which you can still also do in Word), should at least survive the transfer into another system (and the latter would be less dependent on picking a font with a large character set).

Ray, how did you construct those characters? I get inconsistent results when I paste them into Word. Depending on what font I apply, I can sometimes see the double dots above the first two Rs, but the third and fourth Rs are always sans and always have a square dot no matter which font I apply.
However, they do make Word show symbols from the Latin Extended Additional area that normally don't show up there. They also make Thai show up in the Symbol list when it normally isn't there. Re DTW's suggestion to use combining accents or spacing accents, those don't seem to work (outside the EQ field code) unless Word has non-English or right-to-left language support installed. But I may be doing something wrong.

To avoid headaches maybe you should commission these two additions to the font(s) you're using? If there's a budget: hpapazian at gmail dot com

I think something must have happened to the encoding when I posted it. Try making your own. Combining diacriticals are accents with zero width and a left offset. If the font doesn't have proper combining diacriticals, this trick won't work... which is why you should stick to Corbel, Consolas, Calibri, Cambria, Candara, Deja Vu, Segoe or Times New Roman. I used OpenOffice to construct them. After each R, I used the insert symbol command, chose "combining diacriticals" and selected the appropriate accent.

If your dotted R is a rate of change, then your text is using Newton's notation. The dot and double dot are then mathematical accents and their centering differs from that of text accents. Here is an example taken from Vieth, Math typesetting in TeX: The good, the bad, the ugly. On the right is the V with the mathematical accent circumflex. Moreover, if R with a dot above is a mathematical formula, then the R should normally be in italics, except in France.

I have to say that this "proper" horizontal placement issue seems pedantic to me.

Well, those "accents" are operators; they are not semantically a component of the letter (at least in this particular case).
So you're saying they're supposed to look detached? If so, a slight shift is imperceptible, hence only really visible to the author! If the separation must be explicit I would argue for placement like an apostrophe's.

I am just saying there is no need to make them look like they are attached. Their position in mathematical fonts is determined by the "Top Accent Attachment" (see the FontForge Math documentation). I am just a user of mathematical fonts, with LaTeX. I never implemented those things, not even for LaTeX. You guessed correctly, derivatives are much more often denoted with apostrophes than with dots. The dot notation is used in some parts of mechanics, in particular in the so-called Hamilton equations, where the dotted x is semantically a variable and then the argumentation would get quite tricky.

> there is no need to make them look like they are attached.

But is there a need to make them look like they are detached? If so, then a slight shift is pointless; and if not, then the "text accent" is fine. BTW, just to be clear: I didn't mean to use an apostrophe; I meant to position any explicitly-detached qualifier beyond the right edge.

No, there is no effort to have them look detached, and they are top accents. Here is what I get with \dot{x}, \dot{p}, \dot{q}, \dot{r}, \dot{R} in LaTeX with the Computer Modern font. I must confess that, to my eye, the positioning is not very coherent. For me, most are fine, but one has its dot much too far to the right and another slightly too far to the left. I don't know how John did it with Cambria.

My contention then is that differentiating between a "text accent" (where placement is done optically by a human designer) and a "math accent" (where the location is some mathematical center) does more harm than good.
The Good, the Bad and the Pedantic.

I just looked at U+1D445 (MATHEMATICAL ITALIC CAPITAL R) in Cambria Math, and TopAccent is clearly quite to the right of the middle of the bounding box. There would be no need of TopAccent if it were always in the middle.

> in Cambria Math and TopAccent is clearly quite
> to the right of the middle of the bounding box.

Because it was done by a human; as you said yourself, automatic positioning is incoherent. The thing is: what's the point? What does it do besides making it look imbalanced? What are the chances somebody will say: "Oh, the accent is to the right, so it can't be this rarefied letter from a writing system I'll never run into, used right in the middle of a mathematical equation."

All I am saying is that I see no reason why a human being would systematically align TopAccent in the math font with the anchor that is used in the italic font to position combining accents.

Concerning TeX, I had almost forgotten that there are only two horizontal parameters given to each math character, its width and its italic correction. Even if they are fixed by a human being, that may not be enough to correctly position top accents.

@Michel, in math mode TeX positions the accents horizontally based on the kern of the base glyph with the tie accent (a weird Knuthian way to store that value), which is what the TopAccent value in the MS implementation is equivalent to.

If I typeset $\hat{H}$ in Computer Modern, the H is taken from CMMI10 and the hat is taken from CMR (the mathematical accents are always upright glyphs). To look at the metrics of the CMMI10 font, I need only type tftopl cmmi10.tfm in any shell window.
If I look at the lines concerning H, I get

(CHARACTER C H (CHARWD R 0.831251) (CHARHT R 0.683332) (CHARIC R 0.081248)

where I skip the comment. There are three values: CHARWD, the character width; CHARHT, the height; and CHARIC, called the italic correction. That is consistent with this description of the way \hat{H} is typeset, taken from Bogusław Jackowski's slides (GUST). Here are the two glyphs at stake and here is \hat{H}. No other parameter can be used because there is none. Now if you look at the information on OpenType math fonts mentioned in http://fontforge.sourceforge.net/math.html, each math character still has a height, a width and an italic correction, but there is an additional parameter for the top accent which allows more precision. Am I missing something?

Back to the dots, and back home where I could look at a few of my books, I can only conclude that those dots look like an editor's nightmare. I have two books (in French) published in Moscow in the seventies. In one of them, one gets within five consecutive lines the following dotted q. Both books put the dots very high above. In some American books I have, the dots are so small and so close to the letter that I find them hardly visible. And here is one from the "Winner of the 1990 Science Book Prize", published in the UK:

"Comrade, there is dot. What is problem?" And, to be even-handed: "Dude, there's the dots there dude. Chill."

@Michel: The comment under the second figure is wrong; "italic correction" should be "kern" (in CM fonts the skewchar is usually the tie accent). It actually makes no sense as it is, since the italic correction is a fixed value for each character regardless of what is next to it (though it only gets applied when switching from italic to upright). I just checked rule 12 in Appendix G of The TeXbook and it states:

rule 12. If the current item is an Acc atom ...
If the nucleus is not a single character, let $s=0$; otherwise set $s$ to the kern amount for the nucleus followed by the \skewchar of its font.

If you check the kerning at the top of the PL file you will see:

(LABEL C H) (KRN O 177 R 0.055557)

The character with octal code 177 is the skewchar. In the MS implementation a dedicated field is used instead of (mis)using kerning (I guess Knuth just wanted to keep the TFM file compact, for the obvious reason of that time).

Hrant, it isn't a question of making the accent-like mathematical operators look either attached or detached from whatever is below them, but of the specific conventions of alignment of operators and operands in mathematical typesetting. In mathematical typesetting, one is frequently dealing with both horizontal and vertical layout, and generally speaking what is true of alignment of larger, more complex expressions is true of alignment of smaller combinations within them. Hence, the centre alignment of the accent-like operators to the top of whatever is below them is the same as the alignment of the top part of an expression to the lower part. This means, among other things, that the distance between the accent-like operator and what is below does not affect the horizontal offset above an italic form. So, for instance, looking at Michel's italic H example, you can see that if the accent were raised, or if something else were placed above it, this would be a straight vertical shift or alignment, not a slanted move or offset.

PS. Did you get a copy of the MS Mathematical Typesetting book? You can order one from Tiro for $8.50 shipping and handling (note, however, that we currently have a postal strike on in Canada).

Am I missing something? Not really. Accent positioning is refined in a similar manner to GPOS accent positioning.
Accents each have a center position defined via 'Top Accent Attachment' (optically centered in most cases), and each base can also have a center position defined; these two positions are aligned on the x-axis, and are displayed. If a base glyph does not have an entry, then accents are simply centered. For those who don't know, accents can also be mapped to alternates via the [flac] feature ('flattened' accents, analogous to cap-variants of accents) and also to horizontal, or wide, variants for placement over wider glyphs (the Math handler measures the base glyph and then uses the most appropriate accent variant). Note that the Top Accent Attachment is really only an x-axis value, unlike a GPOS anchor, which has both x- and y-coordinates. y-positioning is achieved dynamically by measuring the base glyph as well as other elements (e.g. stacked accents), whether you're in a display mode or not, etc.

As to fonts that support what Gus wants, he's probably already figured that out, which if you are using (Windows) Word 2007/2010 would be Cambria Math (Word defaults to this when you enter a Math zone) or Maxwell Math, when it's released. For outside a Math zone, one would need a font with fairly robust GPOS accent positioning, such as Times or Arial or...?

@Khaled: Thanks. So there is another parameter in the kern table, which is s = 56. Good! Now, here are some calculations to check:

    WD = 831, IC = 81
    (WD + IC)/2 = 912/2 = 456
    456 + s = 456 + 56 = 512

and it is indeed true that the hat of \hat{H} is centered at 512, as one can see from this output from LaTeXiT edited with Inkscape. The GUST graphic was wrong in more than one way. Now, what is the side effect if I change the kern of H with the skewchar?

@Ross: for the math mode (zone) there is also the free XITS Math and Neo Euler, and the commercial Lucida Math (when it is released; the name is not final yet).
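The accent-centring arithmetic above can be wrapped in a couple of lines (a sketch; the function name and the thousandths-of-an-em units are my own convention, but the formula (WD + IC)/2 + s is the one from Appendix G quoted earlier):

```python
def hat_position(wd, ic, skew_kern):
    """Horizontal centre of an accent placed over a glyph, in the same
    units as the inputs (here, thousandths of an em).

    TeX centres the accent over the point (wd + ic)/2 + s, where s is
    the kern between the glyph and the font's \\skewchar.
    """
    return (wd + ic) / 2 + skew_kern

# Values for Computer Modern italic H, read off the PL file quoted above
# (CHARWD 0.831251, CHARIC 0.081248, skew kern 0.055557, all rounded to
# thousandths of an em):
center = hat_position(831, 81, 56)   # 512, matching the measured output
```

With the skew kern zeroed out, the same function gives 456, which is where the naive (WD + IC)/2 rule alone would put the hat.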
@Michel: I don't think that kern value has any use in TeX other than controlling accent placement, as kerning between, say, H and the tie accent makes no sense at all (in TeX, accents precede the accentee, so such a kern would never be applied). Theoretically someone might build a font where the skewchar is some other character that gets kerned, but it does not make much sense and I've not seen such a font.

The reason operators are that high in the Russian books seems to be to make sure they are all aligned. That avoids weird-looking combinations like

Khaled, thanks.
Cleveland, TX Algebra Tutor

Find a Cleveland, TX Algebra Tutor

...I am also a Spanish teacher in Conroe, Texas, at a private school, teaching beginning-level Spanish. I have coached Junior Varsity high school volleyball for 3 years in Miami, FL, and this last year I coached a Varsity high school volleyball team for Calvary Baptist School in Conroe, TX. I have 4 years of total experience in coaching volleyball.
24 Subjects: including algebra 1, Spanish, English, ESL/ESOL

...I have taught Algebra II for over 4 years with a high success rate. All of my students continued to Precalculus and were successful in both subjects. I teach several different methods so that the students can have options on how to solve the problems.
8 Subjects: including algebra 1, algebra 2, physics, geometry

...I received my Master of Science degree from Southeast Missouri State University in Cape Girardeau, MO (about 115 miles south of St. Louis). My thesis covered the biochemistry, ecology, and economics of biofuel production and its impact on the environment. While at SEMO I taught lab sections and occasionally gave classroom lectures for intro to botany.
20 Subjects: including algebra 1, algebra 2, chemistry, biology

...I am an elite athlete with multiple recognizable awards (the McDonald's All American Nominee, for example). I have college-level experience. I also coached basketball for more than four years, one year as the assistant coach at the high-school level in ND.
9 Subjects: including algebra 1, elementary math, study skills, basketball

...I am certified in reading and elementary math; therefore, this would translate to all elementary subjects. I am an accomplished pianist with a bachelor's degree in piano and organ performance from National College, Kansas City, Missouri. I have taught piano lessons for many years.
49 Subjects: including algebra 1, algebra 2, English, reading
Finding height of cone

September 8th 2009, 11:20 AM, #1:
The radius of a traffic cone is 14 centimeters and the lateral surface of the cone is 1617 square centimeters. Find the height of the cone. Don't know where to go from this.

September 8th 2009, 11:52 AM, #2 (A riddle wrapped in an enigma; Big Stone Gap, Virginia):
Hi A Beautiful Mind,

The L.A. of a cone is given by $\pi r l$, where r = radius and l = slant height. Therefore,

$1617=\pi (14)l$

$l=\frac{1617}{14 \pi}$

Use the Pythagorean Theorem to find the height of the cone:

$\left(\frac{1617}{14 \pi}\right)^2=h^2+14^2$
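Carrying the reply's two formulas through to a number (a quick sketch; the function name is mine):

```python
import math

def cone_height(radius, lateral_area):
    """Height of a right circular cone from its radius and lateral
    surface area, using L.A. = pi * r * l and l^2 = h^2 + r^2."""
    slant = lateral_area / (math.pi * radius)
    return math.sqrt(slant ** 2 - radius ** 2)

h = cone_height(14, 1617)   # the numbers from the problem; just under 34 cm
```

Plugging h back into L.A. = pi * r * sqrt(h^2 + r^2) recovers the given 1617 square centimeters, which is a handy sanity check.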
Which equation is not equivalent to the formula r = st?
a. s = r/t
b. t = r/s
c. s = t/r
d. r = ts

Think of them like this: write each one as a proportion, cross multiply, and see what you get.

The correct answer is C. We know this because we can divide: take r = st and divide by s to get t = r/s, which is choice B, so B is equivalent. D is equivalent because multiplication is commutative. Next, take r = st and divide by t; you get s = r/t, which is choice A. By the process of elimination, C is the one that is not equivalent to r = st. You can use the process of elimination at first, but the more you work with this, the more it will just come naturally!
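The elimination argument can also be spot-checked numerically (the sample values are arbitrary, chosen only so that r = st holds):

```python
# Pick arbitrary nonzero values with r = s * t.
s, t = 2.0, 3.0
r = s * t                # r = 6.0

check_a = (s == r / t)   # True:  (a) is equivalent
check_b = (t == r / s)   # True:  (b) is equivalent
check_c = (s == t / r)   # False: t/r = 0.5 here, but s = 2.0
check_d = (r == t * s)   # True:  (d) holds since multiplication commutes
```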
West Roxbury Geometry Tutor

Find a West Roxbury Geometry Tutor

...That means my focus is always on creative and critical thinking, not rule-following, rote memorization, or ad hoc "strategies" (a word invented by the test-taking industry to sell products and fool you into thinking you're being told something useful). At the end of the day, the only thing I try...
47 Subjects: including geometry, English, chemistry, reading

...All of this is helped by heavy doses of encouragement; by identifying, tracking, and celebrating tangible progress towards goals; and by constant subtle and/or explicit reminders of why the work at hand is, in fact, worth doing (which it invariably is). I have enjoyed this work a great deal and ...
26 Subjects: including geometry, English, reading, ESL/ESOL

...My tutoring work for the Lexington public school system over 14 years, run for most of those years by the Special Education department, also dealt with many students with ADD/ADHD issues. ...
34 Subjects: including geometry, reading, English, writing

I am a certified math teacher (grades 8-12) and a former high school teacher. Currently I work as a college adjunct professor and teach college algebra and statistics. I enjoy tutoring and have tutored a wide range of students, from middle school to college level.
14 Subjects: including geometry, statistics, algebra 1, algebra 2

...Currently, I also teach Russian privately, but not as much as I used to. My experience in teaching Russian is broad, but my key teaching strategy is universal. I always do my best to achieve full understanding, starting with the initial preparation level and taking into account the individual features of a student.
23 Subjects: including geometry, English, biology, calculus
why is superdeterminism not the universally accepted explanation of nonlocality?

OK, and I think one thing that leads to misunderstanding is a terminology issue. You're using local determinism to refer to a philosophical stance, while you're using local realism to refer to a particular formal model which tries to implement this philosophical stance.

Yes, I think it's a good idea to keep the technical physics meaning of local realism separate from the philosophical meaning of local determinism. I'm using both local realism and local determinism, pretty much interchangeably, to refer to the philosophical stance, not to any formal model or formal constraint. So just keep that in mind when reading my posts.

I'll keep that in mind wrt your posts. But I think it would be a good idea to separate the two.

I am trying to prove that ANY believer in local determinism MUST disagree with some of the predictions of QM.

OK, it's clear to me now that that's what you're trying to prove. The reason that quantum mechanics is able to have both perfect correlation at identical angles and nonlinear correlations as a function of angle is that QM does not say that the decision about whether the photon goes through the polarizer or not is predetermined by a function P(θ).

Bell showed that the view that individual detection is determined by some (LR) function guiding photon behavior is compatible with QM. A LR model of individual detection isn't a problem, and isn't ruled out. It's trying to model coincidental detection in terms of the function that determines individual detection that's a problem, and is ruled out. The crux of why I think one can be a local determinist while still believing that Bell-type LR models of quantum entanglement are ruled out is the assumption that what determines individual detection is not the same underlying parameter as what determines coincidental detection.
The assumption regarding individual detection is that it's determined by the value of some locally produced (e.g., via common emission source) property (e.g., the electrical vector) of the photon incident on the polarizing filter. It's further assumed that this is varying randomly from entangled pair to entangled pair. So, there is a 50% reduction in detection rate at each of the individual detectors with the polarizers in place (compared to no polarizers), and a random accumulation of detections. (Wrt individual detection, LR and QM predictions are the same.)

The assumption regarding coincidental detection is that, wrt each entangled pair, what is being measured by the joint polarizer settings is the locally produced (e.g., via common emission source) relationship between the polarizer-incident photons of a pair. Because A and B always record identical results, (1,1) or (0,0), wrt a given coincidence interval when the polarizers are aligned, and because the rate of coincidental detection varies predictably with θ in the ideal, it's assumed that the underlying parameter (the locally produced relationship between the photons of a pair) determining coincidental detection isn't varying from pair to pair. It might be further assumed that the value of the relevant property is the same for each photon of a given pair (i.e., that the separated polarizers are measuring exactly the same value of the same property wrt any given pair). But that value only matters wrt individual detection, not wrt coincidental detection.

And here's the problem. The LR program requires that coincidental detection be modeled in terms of the underlying parameter that determines individual detection. But how can it do that if the underlying parameter that determines coincidental detection is different from the underlying parameter that determines individual detection?
There have been attempts to model entanglement this way (i.e., in terms of an unchanging underlying parameter that doesn't vary from entangled pair to entangled pair), but they've been rejected as non-Bell-type LR models.

Regarding your 12-step LR reasoning (reproduced below), the problem begins in trying to understand coincidental detection in terms of step 2. I hope the above makes it clearer why I think that one can believe that the LR program (regarding the modelling of quantum entanglement) is kaput, while still believing that the best working assumptions are that our universe is evolving locally deterministically. And so, no need for superdeterministic theories of quantum entanglement.

1. If you have an unpolarized photon, and you put it through a detector, it will have a 50-50 chance of going through, regardless of the angle it's oriented at.

2. A local realist would say that the photon doesn't just randomly go through or not go through the detector oriented at an angle θ; he would say that each unpolarized photon has its own function P(θ) which is guiding its behavior: it goes through if P(θ)=1 and it doesn't go through if P(θ)=0.

3. Unfortunately, for any given unpolarized photon we can only find out one value of P(θ), because after we send it through a detector and it successfully goes through, it will now be polarized in the direction of the detector and it will "forget" the function P(θ).

4. If you have a pair of entangled photons and you put one of them through a detector, it will have a 50-50 chance of going through, regardless of the angle it's oriented at, just like an unpolarized photon.

5. Just as above, the local realist would say that the photon is acting according to some function P(θ) which tells it what to do.

6. If you have a pair of entangled photons and you put both of them through detectors that are turned to the same angle, then they will either both go through or both not go through.

7.
Since the local realist does not believe that the two photons can coordinate their behavior by communicating instantaneously, he concludes the reason they're doing the same thing at the same angle is that they're both using the same function P(θ).

8. He is in a better position than he was before, because now he can find out the values of the function P(θ) at two different angles, by putting one photon through one angle and the other photon through a different angle.

9. If the entangled photons are put through detectors 30° apart, they have a 25% chance of not matching.

10. The local realist concludes that for any angle θ, the probability that P(θ±30°)≠P(θ) is 25%, or to put it another way, the probability that P(θ±30°)=P(θ) is 75%.

11. So 75% of the time P(-30°)=P(0°), and 75% of the time P(0°)=P(30°), so there's no way that P(-30°)≠P(30°) 75% of the time.

12. Yet when the entangled photons are put through detectors 60° apart, they have a 75% chance of not matching, so the local realist is very confused.
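Steps 9 through 12 can be made concrete with a short enumeration (my own sketch, not from the thread; the sin²Δ mismatch law is the standard QM prediction for this polarizer arrangement):

```python
from itertools import product
import math

# A local-realist photon pair carries a predetermined answer P(theta) in {0, 1}
# for every analyser angle.  Enumerate all assignments for -30, 0, +30 degrees.
for p_m30, p_0, p_p30 in product((0, 1), repeat=3):
    m_ab = int(p_m30 != p_0)     # mismatch between -30 and 0   (30 deg apart)
    m_bc = int(p_0 != p_p30)     # mismatch between 0 and +30   (30 deg apart)
    m_ac = int(p_m30 != p_p30)   # mismatch between -30 and +30 (60 deg apart)
    # Pointwise, a 60-degree mismatch forces at least one 30-degree mismatch,
    # so the bound below holds for every assignment, and therefore for any
    # statistical mixture of assignments a local realist could cook up:
    assert m_ac <= m_ab + m_bc

# Quantum mechanics: the mismatch probability is sin^2(delta) for the
# entangled-photon setup described above.
mismatch_30 = math.sin(math.radians(30)) ** 2   # 0.25, as in step 9
mismatch_60 = math.sin(math.radians(60)) ** 2   # 0.75, as in step 12

# The QM 60-degree value exceeds the local-realist bound 0.25 + 0.25 = 0.5:
bell_violated = mismatch_60 > mismatch_30 + mismatch_30
```

The loop shows the local realist's ceiling (50% mismatch at 60°), and the last line shows QM sailing past it, which is exactly the confusion in step 12.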
Q: sample code for parsing algebraic expressions?

From: Charlie Zender <zender@ncar.ucar.edu>
Newsgroups: comp.programming, comp.compilers
Date: 9 Dec 1995 20:01:01 -0500
Organization: National Center for Atmospheric Research
Keywords: parse, question, comment

I need to parse algebraic expressions for a data processor I'm writing. That is, I need a function to take infix algebraic expressions like x^2+3*(x+2) and return a parsed list of (binary, unary) operator expressions in the correct order. The algebraic expressions will be known at run time, so the code I need does not have to work like a calculator and take expressions in real time.

Here's what I'm hoping the function will do (function results in quotes):

    input to function -> "x^2+3*x+9"
    output from function ->
        "x,2 binary exponentiate"   (goes on stack)
        "x,3 binary multiply"       (goes on stack)
        "binary add"                (adds last two numbers on stack together, places result on stack)
        "9 binary add"              (add 9 to last number on stack)

Clearly every compiler that knows math has a parser like this in it somewhere, but I don't know what's most appropriate for my uses. Does anyone have recommendations on how to do this? Sample code? Books? Articles? Should I use yacc/bison? Should I write it in C?

Thanks in advance (please email as well as post any responses),

Charlie Zender          Voice, FAX: (303) 497-1612, 497-1324
NCAR/CGD/CMS            E-mail: zender@ncar.ucar.edu
P.O. Box 3000           URL: http://www.cgd.ucar.edu/cms/zender
Boulder CO 80307-3000   PGP: finger -l zender@heinlein.cgd.ucar.edu

[Our book "lex&yacc, 2nd ed" has the usual calculator example, but someone might have canned infix to RPN code. -John]
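The usual answer to this kind of request is Dijkstra's shunting-yard algorithm, which converts infix directly to the postfix (RPN) order the poster describes. Here is a hedged minimal sketch (my own, not from the archive; simplified assumptions: single-letter variables, nonnegative integer constants, no unary minus), whose output token list maps straightforwardly onto the stack-style listing above:

```python
import re

PREC = {'+': 1, '-': 1, '*': 2, '/': 2, '^': 3}
RIGHT_ASSOC = {'^'}   # exponentiation groups right-to-left

def to_rpn(expr):
    """Convert an infix expression string to a list of RPN tokens."""
    tokens = re.findall(r'\d+|[A-Za-z]|[-+*/^()]', expr)
    out, ops = [], []
    for tok in tokens:
        if tok not in PREC and tok not in '()':
            out.append(tok)                     # operand: number or variable
        elif tok == '(':
            ops.append(tok)
        elif tok == ')':
            while ops[-1] != '(':
                out.append(ops.pop())           # flush back to the matching '('
            ops.pop()                           # discard the '(' itself
        else:                                   # binary operator
            while (ops and ops[-1] != '(' and
                   (PREC[ops[-1]] > PREC[tok] or
                    (PREC[ops[-1]] == PREC[tok] and tok not in RIGHT_ASSOC))):
                out.append(ops.pop())           # pop higher/equal-precedence ops
            ops.append(tok)
    while ops:
        out.append(ops.pop())                   # flush remaining operators
    return out
```

For example, `to_rpn('x^2+3*x+9')` yields `['x', '2', '^', '3', 'x', '*', '+', '9', '+']`, i.e. exponentiate, multiply, add, add, in exactly the order the poster wants to execute them. A yacc/bison grammar would of course do the same job, as the moderator's note suggests.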
Math Forum Discussions

Topic: guessing on MC
Replies: 6    Last Post: Apr 12, 1998 5:55 PM

Re: guessing on MC
Posted: Apr 10, 1998 4:30 PM

>2. If there were no penalty then, in your example, 20 MC each with five
>choices, you guess on all and you get 4 correct, your score is 4. Using the
>penalty formula your score is 0, zip, nada. So guessing at random will not
>help or hurt you.

I disagree. The PROBABILITY is that "guessing at random will not help or hurt you", but we all know that probability is not a guarantee. For thousands of students guessing on several questions each, the scoring scheme balances good guesses against bad, but for any INDIVIDUAL student, unless s/he is particularly lucky (or, at least, not particularly UNlucky), guessing can significantly damage an otherwise good score. If that student could take the test over and over, then guessing would make sense to me. With one shot at a performance, however, that evaluates a year's worth of work and that can be worth a college credit, I advise my students not to take the risk of guessing unless they can eliminate enough answers on a question (which is seldom the case) to swing the probability HEAVILY in their favor.

>Lin McMullin
>Ballston Spa, NY

Wayne Murrah, Chairman, Math Dept.
Porter-Gaud School
300 Albemarle Rd.
Charleston, SC 29414
School: wmurrah@porter.portergaud.edu
Home: wmurrah@awod.com
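Both posters' points can be illustrated with a quick simulation (parameters taken from the quoted example: 20 questions, 5 choices; R - W/4 is the standard penalty formula for 5-choice tests). The mean guessing score is essentially zero, yet individual students' scores scatter several points in either direction:

```python
import random

def penalty_score(right, wrong):
    """Raw score under the usual 5-choice penalty formula: R - W/4."""
    return right - wrong / 4.0

def simulate_guessers(students=10000, questions=20, choices=5, seed=1):
    """Every student guesses blindly on every question; return all scores."""
    rng = random.Random(seed)
    scores = []
    for _ in range(students):
        right = sum(rng.randrange(choices) == 0 for _ in range(questions))
        scores.append(penalty_score(right, questions - right))
    return scores

scores = simulate_guessers()
mean = sum(scores) / len(scores)   # very close to 0, as Lin says
# ...but min(scores) is well below 0 and max(scores) well above it,
# which is Wayne's point about individual risk.
```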
Determine The Frequency, Omega Such That The Inductive ... | Chegg.com

Image text transcribed for accessibility:

(a) Determine the frequency, omega, such that the inductive reactance and the capacitive reactance are equal for the circuit shown in Figure 8.

(b) For the frequency determined in (a), calculate the voltages VL, VR, and VC.
    VL = ______________  VR = ______________  VC = ______________

(c) Show that KVL is satisfied by showing Vs = VL + VR + VC.

Electrical Engineering
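The component values live in Figure 8, which is not reproduced here, so the sketch below uses assumed values purely to illustrate the method: the two reactances are equal at omega = 1/sqrt(LC), where the series impedance is purely resistive and the inductor and capacitor voltages cancel exactly.

```python
import math, cmath

# Illustrative component values (assumptions; the originals are in Figure 8):
Vs = 10.0      # source amplitude, volts
R  = 100.0     # ohms
L  = 10e-3     # henries
C  = 1e-6      # farads

# (a) Inductive and capacitive reactance are equal when wL = 1/(wC):
omega = 1.0 / math.sqrt(L * C)          # 1e4 rad/s for these values

# (b) Phasor voltages around the series loop at that frequency:
Z_L = 1j * omega * L                    # +j100 ohms here
Z_C = 1.0 / (1j * omega * C)            # -j100 ohms here
I   = Vs / (R + Z_L + Z_C)              # Z_L + Z_C = 0 at this frequency
V_L, V_R, V_C = I * Z_L, I * R, I * Z_C

# (c) KVL: the three drops sum to the source voltage.
kvl_ok = cmath.isclose(V_L + V_R + V_C, Vs)
```

Note that V_L and V_C are equal in magnitude and opposite in phase at this frequency, so V_R alone equals Vs, which is why showing KVL closes the problem.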
Investigating Graph Algorithms in the BSP Model on the Cray XMT

David A. Bader
IEEE Fellow, AAAS Fellow
College of Computing, Georgia Tech, Atlanta, GA 30332

Implementing parallel graph algorithms in large shared-memory machines, such as the Cray XMT, can be challenging for programmers. Synchronization, deadlock, hotspotting, and other issues can be barriers to obtaining linear scalability. Alternative programming models, such as the bulk synchronous parallel (BSP) programming model used in Google's Pregel, have been proposed for large graph analytics. This model eases programmer complexity by eliminating deadlock and simplifying data sharing. We investigate the algorithmic effects of the BSP model for graph algorithms and compare performance and scalability with hand-tuned, open source software using GraphCT. We analyze the innermost iterations of these algorithms to understand the differences in scalability between BSP and shared-memory algorithms. We show that scalable performance can be obtained with graph algorithms in the BSP model on the Cray XMT. These algorithms perform within a factor of 10 of hand-tuned C code.

Publication History

Versions of this paper appeared as:
1. D. Ediger and D.A. Bader, "Investigating Graph Algorithms in the BSP Model on the Cray XMT," 7th Workshop on Multithreaded Architectures and Applications (MTAAP), Boston, MA, May 24, 2013.

Last updated: May 24, 2013
Physics Forums - Renormalization Group for dummies

Unfortunately the jig is up on this one: you need some math. Here is the simplest explanation I know.

There is a trick in applied math called perturbation theory. The idea is that you expand your solutions in a power series in a parameter that is small, and you calculate your solution term by term, getting better accuracy with each term. The issue is that the coupling constant is thought to be small, so you expand in it. The first term is fine. You then calculate the second term and, uh oh, it's infinite. Bummer. What's wrong? It turns out the coupling constant in fact is not small, but rather is itself infinite, so it's a really bad choice of expansion parameter.

OK, how do you get around it? What you do is regularize the equations, so that they take the form of a limit depending on a parameter called the regulator. You then choose a different parameter to expand in, called the renormalized parameter, and you fix its value by saying it's the value you would get from measurement, so you know it's finite when you take the limit. If you do that, you immediately see the original problem: the coupling constant secretly depends on the regulator, so when you take the limit it blows up to infinity. The "infinity minus infinity" thing is really historical, from before they worked out exactly what was going on; it is resolved by what is known as the effective field theory approach.

Thanks. I kind of get the concept now. Anyway: in a power series $y = y_0 + \epsilon y_1 + \epsilon^2 y_2 + \cdots$, is $\epsilon$ equivalent to the coupling constant, which must be very small (like 1/137) and present in each term of the series (although I know it is in more complex form)?
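A toy version of the point about expansion parameters (my own example, nothing to do with QFT specifically): expand the positive root of x^2 + eps*x - 1 = 0 in powers of eps. Substituting x = x0 + eps*x1 + eps^2*x2 + ... and matching powers gives x0 = 1, x1 = -1/2, x2 = 1/8. The series is excellent while eps is genuinely small, and useless when it is not, which is exactly the trouble with expanding in a coupling that turns out not to be small:

```python
import math

def exact_root(eps):
    """Positive root of x**2 + eps*x - 1 = 0, from the quadratic formula."""
    return (-eps + math.sqrt(eps ** 2 + 4)) / 2

def perturbative_root(eps):
    """Same root from the first three terms of the perturbation series:
    x = 1 - eps/2 + eps**2/8 + O(eps**3)."""
    return 1 - eps / 2 + eps ** 2 / 8

# Small parameter: the truncated series is accurate to a few parts in 10^7.
err_small = abs(exact_root(0.1) - perturbative_root(0.1))

# "Small" parameter that isn't small: the series misses by more than 1.
err_large = abs(exact_root(5.0) - perturbative_root(5.0))
```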
Gearing basics

Here to answer many often-encountered gear questions is a quick reference that includes formulas and definitions.

Gears are toothed wheels that transmit motion from one shaft to another and determine the speed, torque, and direction of rotation of machine elements. Gears can be grouped into two major categories: those that operate in pairs to connect parallel shafts and those that connect nonparallel shafts. Parallel types include spur and helical. Nonparallel types include bevel, hypoid, and worm.

Types of gears

Spur gears. Cylindrical gears with teeth that are straight and parallel to the axis of rotation.

Helical gears. Cylindrical gears with teeth at an angle to the axis of rotation.

External gear. A gear with teeth on the outside of a cylinder or cone.

Internal gear. A gear with teeth on the inside of a hollow cylinder. Both spur and helical gears can be made as internal gears. The mating gear for an internal gear must be an external gear.

Bevel gears. Gears with teeth on the outside of a cone-shaped body. Teeth may be straight or spiral. Normally used at right angles (perpendicular to each other).

Worm gears. Gearsets in which one member of the pair has teeth wrapped around a cylinder like screw threads. Normally, this gear, called the worm, is at a right angle to the mating gear. Worm gears may be cylindrical, single-enveloping, or double-enveloping. Enveloping designs have curved worm or gear-tooth shapes to obtain more tooth contact area.

Face gears. Gears with teeth on the end of the cylinder.

Hypoid gears. Similar to bevel gears, but they operate on nonintersecting axes.

Pinion. Where two gears run together, the one with the smaller number of teeth is called the pinion.

Rack. A gear with teeth spaced along a straight line, and suitable for straight-line motion.

Elements of gear teeth

Tooth surface. The side of a gear tooth.

Tooth profile. One side of a tooth in a cross section between the outside circle and the root circle.

Involute.
A tooth profile generated from the involute of a circle. A common tooth shape for spur gears.

Base circle. The circle from which involute tooth profiles are derived.

Flank. The working or contacting side of a tooth. Usually has an involute profile in a transverse section.

Top land. The top surface of a gear tooth.

Bottom land. The surface at the bottom of the space between adjacent teeth.

Crown. A modification consisting of a slight outward bulge in the center of the tooth flank. The tooth becomes gradually thinner toward each end. A fully crowned tooth has a little material removed at the tip and root areas also. The purpose of crowning is to ensure that the center of the flank carries its full share of the load even if the gears are slightly misaligned or distorted.

Root circle. A tangent to the bottom of the tooth spaces in a cross section.

Pitch circle. A circle that contains the pitch point. Pitch circles are tangent in mating gears. A circle at which gear teeth theoretically roll without slipping.

Gear center. The center of the pitch circle.

Line of centers. A line connecting the centers of the pitch circles of two engaging gears. It is also the common perpendicular of the axes in crossed helical gears and worm gears.

Pitch point. The point of a tooth profile which lies on the pitch circle of the gear. As the pitch point of one gear contacts its mating gear, the contact occurs at the pitch point of the mating gear. This common pitch point lies on a line connecting the two gear centers.

Path of action. A curve along which contact occurs during the engagement of two tooth profiles.

Line of action. The path of action for involute gears. It is the straight line passing through the pitch point and tangent to the base circle.

Line of contact. The line or curve along which two tooth surfaces are tangent to each other.

Point of contact. Any point at which two tooth profiles touch each other.

Linear and circular dimensions

Center distance.
The distance between parallel axes of spur gears or parallel helical gears, or the crossed axes of crossed helical gears or of worms and worm gears. Also, it is the distance between the centers of the pitch circles.

Offset. The perpendicular distance between the axes of hypoid gears or offset face gears.

Pitch. The distance between similar, equally spaced tooth surfaces along a given line or curve.

Axial pitch. Linear pitch in an axial plane and in a pitch surface. In helical gears and worms, axial pitch has the same value at all diameters. In other gears, axial pitch may be confined to the pitch surface and may be a circular measurement.

Base pitch. In an involute gear, the pitch on the base circle or along the line of action. Corresponding sides of involute gear teeth are parallel curves, and the base pitch is the constant and fundamental distance between them measured along the base circle.

Axial base pitch. The base pitch of helical involute tooth surfaces in an axial plane.

Lead. The axial advance of a thread or a helical spiral in 360 deg (one turn about the shaft axis).

Circular pitch. Distance along the pitch circle between corresponding profiles of adjacent teeth.

Outside diameter. The diameter of the outer circle of a gear. In a bevel gear, it is the diameter of the crown circle. In a throated worm gear, it is the maximum diameter of the blank.

Face width. Length of the tooth in an axial plane.

Circular tooth thickness. Length of arc between the two sides of a gear tooth on the pitch circle.

Chordal tooth thickness. Length of the chord subtending a circular tooth thickness arc.

Backlash. Amount by which the width of a tooth space exceeds the thickness of a mating gear tooth on the operating pitch circles. Normally thought of as the freedom of one gear to move while the mating gear is held stationary.

Angular dimensions

Helix angle. The inclination of the tooth in a lengthwise direction.
If the helix angle is 0 deg, the tooth is parallel to the axis of the gear and is really a spur gear tooth. Lead angle. The inclination of a thread at the pitch line from a Iine 90 deg to the shaft axis. Shaft angle. The angle between the axes of two nonparallel gear shafts. Pitch angle. In bevel gears, the angle between an element of a pitch cone and its axis. Angular pitch. The angle subtended by the circular pitch, usually expressed in radians. Pressure angle. The angle at a pitch point between the line of pressure which is normal to the tooth surface, and the plane tangent to the pitch surface. Profile angle. The angle at a specified pitch point between a line tangent to a tooth surface and the line normal to the pitch surface. Gear tooth ratio. The ratio of the larger to the smaller number of teeth in a pair of gears. Diametral pitch. The ratio of the number of teeth to the pitch diameter in inches. As tooth size increases, diametral pitch decreases. Contact ratio. To assure smooth, continuous tooth action as one pair of teeth passes out of action, a succeeding pair must have already started action. It is desired to have as much overlap as possible. The contact ratio is a measure of this overlapping action and can be thought of as the average number of teeth in contact. Hunting ratio. A ratio of numbers of gear and pinion teeth which ensures that each tooth in the pinion will contact every tooth in the gear before it contacts any gear tooth a second time. (13 to 48 is a hunting ratio; 12 to 48 is not.) General terms Runout. A measure of eccentricity relative to the axis of rotation. Runout is measured in a radial direction and the amount is the difference between the highest and lowest reading in 360 deg, or one turn. For gear teeth, runout is usually checked by measurement over pins put between the teeth, by a ball probe, or using a master gear. 
Cylindrical surfaces are checked for runout by a measuring probe that reads in a radial direction as the part is turned on its axis. Undercut. When part of the involute profile of a tooth is cut away near its base, the tooth is said to be undercut. Undercutting becomes a problem when the number of teeth is small. Flash temperature. The temperature at which a tooth surface is calculated to be hot enough to destroy the oil film and allow instantaneous welding at the contact point. Working depth. The depth of engagement of two gears. Full depth tee th. Those in which the working depth equals 2.000 divided by normal diametral pitch. Tip relief. A modification of a tooth profile, whereby a small amount of material is removed near the tip of the tooth. Technical references Gear Nomenclature, Definitions of Terms with Symbols (ANSI/AGMA 1012-F90), American Gear Manufacturers Association, Alexandria, Va., 1990. Darle W. Dudley, Hand book of Practical Gear Design, McGraw-Hill Inc., New York, 1984. Robert O. Parmley, Mechanical Components Handbook, McGraw-Hill Inc., New York, 1985. Raymond J. Drago, Fundamentals of Gear Design, Butterworths, Stoneham, Mass., 1988. P. M. Dean, Jr., Gear Manufacturing and Performance, American Society for Metals, Metals Park, Ohio, 1974 Related article Discuss this Article 0 Post new comment
Valid example of proof by contradiction?

#1 (May 9th 2011, 11:30 PM):

I'm studying "How To Prove It" (Velleman) and I'm on exercise 3.7. The solution I gave is a "proof by contradiction" and I would like to verify that the method of proof and result are valid.

$\mbox{Suppose that }a\mbox{ is a real number. Prove that if }a^3 > a\mbox{ then }a^5 > a\mbox{.}\\
\mbox{Suppose }a^5 \leq a\mbox{. Then }a^5 - a \leq 0\mbox{ and }a^5 - a = (a^3 - a)(a^2 + 1) \leq 0\mbox{.}\\
\mbox{However, we know that }a^3 > a\mbox{ so the first factor }a^3 - a > 0\mbox{.}\\
\mbox{The second factor is also positive since }x \ne 0 \rightarrow x^2 > 0\mbox{ for all }x \in \mathbb{R}\mbox{,}\\
\mbox{which implies that }a^2 + 1 > 0\mbox{ as well.}\\
\mbox{This leads to a contradiction however, therefore }a^3 > a \rightarrow a^5 > a\mbox{.}$

Is this a well-formed proof? How explicit do I have to be? For example, is it OK for me to leave out the fact that for a, b in R, a > 0 and b > 0 implies that ab > 0? If so, then what else could I have left out? Maybe these things will be more clear once I am working on more complicated proofs?

#2 (May 9th 2011, 11:58 PM):

Yes, it is.

Quote: "How explicit do I have to be? For example, is it OK for me to leave out the fact that for a, b in R, a > 0 and b > 0 implies that ab > 0? If so, then what else could I have left out?"

Depends on the context. For example, I suppose you are allowed to use all the properties appearing in your proof.

#3 (May 10th 2011, 01:01 AM):

Thanks for the feedback! Could you give me an example of a situation where you can't use certain properties (besides the obvious ones of proving that particular property, or using a property that is an extension of the one you are proving)?

#4 (May 10th 2011, 01:10 AM):

I agree that your proof is fine. To be extremely picky, I would only make some stylistic changes.

Suppose that a is a real number. Prove that if a^3 > a then a^5 > a.

Suppose that a^3 > a, but a^5 <= a. Then a^5 - a <= 0 and a^5 - a = (a^3 - a)(a^2 + 1) <= 0. However, we know that a^3 > a, so the first factor a^3 - a > 0. The second factor is also positive since x^2 >= 0 for all x in R, which implies that a^2 + 1 > 0 as well. This leads to a contradiction with the fact that (a^3 - a)(a^2 + 1) <= 0, however; therefore, a^3 > a -> a^5 > a.

In particular, one feature of a good proof is uniform complexity, when the amount of reasoning required to go from one statement to the next is about the same throughout the proof. I hate it when one particular step in some textbook proofs is much more complicated than others; it has to be broken into several steps. Here the "proof speed" is supposed to be pretty low, i.e., even rather simple steps need to be explained. I found the following sentence:

Quote: "The second factor is also positive since x != 0 -> x^2 > 0 for all x in R, which implies that a^2 + 1 > 0 as well."

to be more complicated than others. First, it was not stated that a != 0 (though it is obvious from the fact that a^3 - a > 0) and, second, a^2 + 1 > 0 even when a = 0.

#5 (May 10th 2011, 01:24 AM):

Thanks for the excellent feedback! It gives me a much better idea of what's involved in a proof. My original conception was that a proof must follow some very specific steps, otherwise it isn't a valid proof (maybe this idea comes from my background in programming). However, it seems that as long as there are no errors in logic, simply transforming the premise into the conclusion with a well-worded explanation is fine.

#6 (May 10th 2011, 01:54 AM):

An alternate, and much (in my opinion) more direct contradiction would be:

...then (a^3 - a)(a^2 + 1) ≤ 0. Since a^2 + 1 > 0 for all a, we have that a^3 - a ≤ 0, that is, a^3 ≤ a, contradicting that a^3 > a. (Assuming, of course, that one has proved already that a^2 ≥ 0, so that a^2 + 1 > a^2 ≥ 0.)

#7 (May 10th 2011, 02:26 AM):

Quote: "My original conception was that a proof must follow some very specific steps, otherwise it isn't a valid proof (maybe this idea comes from my background in programming)."

Ultimately, this is correct. In fact, you'll be happy to know that proofs are programs. However, when proofs are intended to be read by people, some steps are omitted and proofs are presented as you say.
Making Buildings from .MAP Files Part 1 Valve Hammer Editor is a pretty common tool for making buildings and structures for games these days, and in some cases entire levels. It exports .MAP files containing solids, called “brushes”, which make up the solid areas of the building/structure/map. There are a few tools that can convert these to .BSP files for use in Quake, Quake II, etc., or into other formats, such as those used by Starsiege: Tribes. This series of tutorials will explain how to load .MAP files, create accurate triangle representations of the solids (many people believe that .MAP files contain triangles—they are wrong), build an efficient BSP representing the data, generate and use portals/PVS’s, generate UV coordinates, generate a BVH for collision detection, and even explain a simple physics implementation that will allow you to have some fun with the .MAP files you loaded with your own engine. I will present stable, fast, and robust routines, including the use of explicit stacking while building the BSP tree to avoid stack overflow on huge levels. In some cases I will use things to improve my own speed which most people might not have available to them, such as stack allocators. In these cases I will also explain how it can easily be done in an alternative way, although at the cost of speed. By the end of the series you will be able to load any .MAP file from Valve Hammer Editor into your own engine and have fun rolling balls around the map or even taking the next small step up and putting a walking character into it. Our first goal is simply to get a graphical representation of the solids inside the file. You should start off by understanding the .MAP format. Here is a sample line from a solid in a .MAP file. ( 130 247 48 ) ( 257 247 48 ) ( 257 247 61 ) 143 [ 1 0 0 -18.2162 ] [ 0 0 -1 0.833384 ] 0 0.246667 0.24 The most important thing to note is that those 3 sets of points are not vertices. 
They are a plane described by 3 points, and those 3 points coincidentally are vertices on the actual solid, but don't really have to be. The plane described by those 3 points is the plane for a single face of the solid. A face is just one side of the solid, so a cube has 6 faces, and would have 6 entries for that solid in the .MAP file. So if you try to read this data as just triangle data, you would basically end up with a cube with each face missing one triangle. For the time being, let's ignore all of the rest of the data. Our only objective for this tutorial is to get the triangles that correctly create a graphical representation of the solids. I will not cover how to load and parse the data out of the .MAP file. It's text and you can do it any way you like. Flex/Bison is recommended, or just roll your own parser. Let's Get Started So you have a parser and you know that those aren't triangles in the .MAP file, they are planes pretending to be triangles. Now you need some utility functions. For the sake of accuracy, I strongly recommend using doubles rather than floats. I have vector, plane, ray, line-segment, etc. templates that allow me to specify whatever type I want, and I strongly suggest you do the same. The data will end up as 32-bit floating-point values, but while we are doing plane math, polygon splitting, etc., it will be to your advantage to have them all being 64-bit precision. I am also not going to provide code for all of the base types you will need, but I will for some that are important. A vector class is a vector class is a vector class. They all have a Dot() and a Cross(). In my code, the * operator on a vector is the Dot() function, and % is Cross(). A line segment is just 2 vectors, p and q. p is the starting point and q is the end point. A ray is just a point and a normal. A plane is just a normal and a distance in that direction. But it has some important features that make it worth posting.
Specifically, we need a way to convert from 3 points to a plane in standard form.

template <typename _tType, typename _tVec3Type>
class CPlane3Base {
	// All is public. This class has no secrets.
public :
	// == Various constructors.
	LSE_INLINE LSE_CALLCTOR CPlane3Base() {
	}
	LSE_INLINE LSE_CALLCTOR CPlane3Base( const CPlane3Base<_tType, _tVec3Type> &_pPlane ) {
		(*this) = _pPlane;
	}
	LSE_INLINE LSE_CALLCTOR CPlane3Base( const _tVec3Type &_vPoint0, const _tVec3Type &_vPoint1, const _tVec3Type &_vPoint2 ) {
		n = (_vPoint1 - _vPoint0) % (_vPoint2 - _vPoint0);
		dist = n.Dot( _vPoint0 );
	}

	// == Members.
	/** Plane normal. */
	_tVec3Type n;

	/** Plane distance, signed. */
	_tType dist;
};

typedef CPlane3Base<LSDOUBLE, CVector3High> CPlane3High;

Now we have a way to convert the 3 points in the .MAP file to a plane. Going back to the cube example, when you have done this for all 6 faces, you will have 6 planes that represent the solid region of the cube. This is called an H-representation, where H stands for "half space", which is what planes usually are. They divide space in half, one side being good and the other side evil, or solid. Note that in .MAP files, the normal points towards the inside of the solid, and I have opted to reverse it so it points outwards. The following plane negation operator was used.

/**
 * Negation operator.
 *
 * \return Returns the negated form of this plane.
 */
CPlane3Base<_tType, _tVec3Type> LSE_CALL operator - () const {
	return CPlane3Base<_tType, _tVec3Type>( -n, -dist );
}

So How do We Get Triangle Data Out of a Bunch of Planes?

Let's consider the case of a cube. The following image shows the top plane of the cube, shown in blue. The bottom side is not represented at all because it does not intersect the blue plane, which represents the top of the cube. The plane for the top side is shown in blue, but the part that has been "sectioned off" by the side planes is shown in red. We want to determine the line segments that surround the red part.
By connecting those line segments we create a closed convex polygon, and once we have a polygon it will be possible to then create triangles.

There are 2 helper functions we will need. One gets a ray that represents the intersection of 2 planes, and 1 gets the point that represents the intersection of 3 planes. Again, templates are recommended so that you can use 64-bit doubles for your toolchains and 32-bit floats for your game engine.

/**
 * Gets the intersection point of 3 planes if there is one. If not, false is returned.
 *
 * \param _pP0 Plane 1.
 * \param _pP1 Plane 2.
 * \param _pP2 Plane 3.
 * \param _vRet The returned intersection point, valid only if true is returned.
 * \return Returns true if the 3 planes intersect at a single point.
 */
template <typename _tType, typename _tPlaneType, typename _tVecType>
static LSE_INLINE LSBOOL LSE_CALL CIntersect::ThreePlanes( const _tPlaneType &_pP0, const _tPlaneType &_pP1, const _tPlaneType &_pP2, _tVecType &_vRet ) {
	_tVecType vU = _pP1.n % _pP2.n;
	_tType fDenom = _pP0.n * vU;
	if ( CMathLib::AbsT( fDenom ) <= static_cast<_tType>(LSM_REAL_EPSILON) ) { return false; }
	_vRet = (vU * _pP0.dist + _pP0.n.Cross( _pP1.n * _pP2.dist - _pP2.n * _pP1.dist )) / fDenom;
	return true;
}

/**
 * If the given planes intersect, the ray defining the intersection is returned. Planes must be normalized.
 *
 * \param _pP0 Plane 1.
 * \param _pP1 Plane 2.
 * \param _rRay The return ray.
 * \return Returns true if the 2 planes intersect.
 */
template <typename _tType, typename _tPlaneType, typename _tRayType>
static LSE_INLINE LSBOOL LSE_CALL CIntersect::TwoPlanes( const _tPlaneType &_pP0, const _tPlaneType &_pP1, _tRayType &_rRay ) {
	// Direction of the intersection.
	_rRay.dir = _pP0.n % _pP1.n;

	// If it is (near) 0, the planes are parallel (and separated) or coincident and not considered
	//	intersecting.
	if ( _rRay.dir.Dot( _rRay.dir ) < static_cast<_tType>(LSM_REAL_EPSILON) ) { return false; }

	// Finish the ray by computing a point on the intersection line.
	_rRay.p = ((_pP1.n * _pP0.dist) - (_pP0.n * _pP1.dist)) % _rRay.dir;
	return true;
}

Remember, rays are a point and a direction, both vector types, and the % operator is cross-product while the * operator is dot-product. With these 2 helper functions available, it becomes easy to explain the algorithm while showing the code for it as I go.

Essentially, each plane on the original solid represents a single polygon with n sides. So we start with one plane and go from there. We then check every other plane for intersection with the first plane, using CIntersect::TwoPlanes(). Each plane that intersects with the first plane is most likely a side to the final polygon. There are cases where a second plane will intersect the first plane without actually contributing a line segment to the final polygon, but that is handled in a post-processing step later. The code for starting with a given face and finding each 2nd plane follows.

/**
 * Gets the edges for a given polygon. If the polygon is not NULL, the edges are added to it as they are found.
 *
 * \param _pp3hPoly The polygon to which to add edges. If NULL, no edges are added.
 * \param _sSolid The solid containing all of the faces that will be used to find the edges.
 * \param _ui32Face The face on the solid to be converted to a polygon.
 * \return Returns the number of edges found for the given face on the given solid.
 */
LSUINT32 LSE_CALL CTrisFromSolids::GetPolyEdges( CPolygon3High * _pp3hPoly, const CSolid &_sSolid, LSUINT32 _ui32Face ) {
	LSUINT32 ui32Total = 0UL;
	// Check all other planes aside from the one given.
	for ( LSUINT32 I = 0UL; I < _sSolid.Planes().Length(); ++I ) {
		if ( I == _ui32Face ) { continue; }
		CRay3High r3hRay;
		if ( CIntersect::TwoPlanes<LSDOUBLE, CPlane3High, CRay3High>( _sSolid.Planes()[_ui32Face].phPlane, _sSolid.Planes()[I].phPlane, r3hRay ) ) {

_ui32Face here is the first plane. This code then seeks all 2nd planes. Note that CPolygon3High is basically an array of line segments.
In my implementation, for speed, a polygon is told it will have a maximum number of sides, then it allocates those sides from a stack allocator, and then this function will fill in those sides. Which basically means this function has to be called twice: once to find out how many sides are on the polygon, then again to put those sides into the polygon. While it may seem slow, in practice this is the fastest way to go, rather than having each polygon use a variable-sized std::vector of sides. A stack allocator provides instant allocation of memory, with the loss of any ability to deallocate or reallocate that memory. It is all freed in one sweep when the stack allocator is destroyed (by which point you must also ensure that anything using that memory is already destroyed and dead). Due to these limitations, you need to know before-hand how much to allocate. It turns out that doing this math twice, plus eliminating deallocation time, is actually vastly faster than otherwise. I have a .MAP file with over 30,000 solids, all of which have about 6 polygons, for a total of 180,000 polygons, and in turn 360,000 calls to this routine. It finishes in less than 0.5 seconds. If all of those calls resulted in allocations, you would have much less RAM and the total time would likely be more than 15 seconds, including deallocation. Having found a second plane, we now need to find a point on that plane. The objective is to find the start point p and the end point q that create the shortest line segment. We do this by searching over all 3rd planes that intersect both of the first 2 planes. The following image shows the ray (purple) for the 2 planes that have intersected, as well as 2 other planes that intersect both of the first 2 planes. These are the 3rd planes—one in yellow and one in green. Both of the 3rd planes have their normals represented by a red arrow.
Both the yellow and green planes form a 3rd plane, and thus a point on the current edge of the polygon, but how do we know which point is the p and which is the q? If the plane is facing in the same direction as the ray, it is an end point q, otherwise it is a start point p. We scan for the q that is closest to p, and the p that is closest to q. While this sounds complicated, it is nothing more than 2 dot products. One tells us if it is a p or a q, and the other tells us how far in the direction of the ray the point is. The code follows.

		LSBOOL bLeftFound = false, bRightFound = false;
		LSDOUBLE dDistLeft = 0.0, dDistRight = 0.0;
		CLineSeg3High ls3hThisSeg( CVector3High( 0.0, 0.0, 0.0 ), CVector3High( 0.0, 0.0, 0.0 ) );
		// Find a 3rd plane that intersects this point.
		for ( LSUINT32 J = 0UL; J < _sSolid.Planes().Length(); ++J ) {
			if ( J == I || J == _ui32Face ) { continue; }
			CVector3High v3hIntersect;
			if ( CIntersect::ThreePlanes<LSDOUBLE, CPlane3High, CVector3High>( _sSolid.Planes()[_ui32Face].phPlane,
				_sSolid.Planes()[I].phPlane,
				_sSolid.Planes()[J].phPlane,
				v3hIntersect ) ) {
				LSDOUBLE dDir = _sSolid.Planes()[J].phPlane.n * r3hRay.dir;
				LSDOUBLE dDist = r3hRay.dir * v3hIntersect;
				if ( dDir > 0.0 ) {
					// Right side.
					if ( !bRightFound || dDist < dDistRight ) {
						dDistRight = dDist;
						bRightFound = true;
						ls3hThisSeg.q = v3hIntersect;
					}
				}
				else {
					dDist = -dDist;
					// Left side.
					if ( !bLeftFound || dDist < dDistLeft ) {
						dDistLeft = dDist;
						bLeftFound = true;
						ls3hThisSeg.p = v3hIntersect;
					}
				}
			}
		}
		if ( !bLeftFound || !bRightFound ) { continue; }
		if ( _pp3hPoly ) {
			_pp3hPoly->Segments()[ui32Total] = ls3hThisSeg;
		}
		// Count this edge whether or not it was stored.
		++ui32Total;
		}
	}
	return ui32Total;
}

Remember that the * operator, used to get dDir and dDist, is a vector dot product. Update: Since I reversed the direction of the .MAP planes, dDir < 0.0 became dDir > 0.0, and the line-segment's p and q were swapped from my original posting of this (to the current code shown here).
As I have explained, in my implementation, this is repeated twice—once to get the total sides on the polygon, and again to actually fill in those sides. Given a solid, the loop looks like this:

// For each face, find out how many edges there are, then add the edges.
for ( LSUINT32 I = m_ui32TotalPolies; I--; ) {
	LSUINT32 ui32Total = GetPolyEdges( NULL, _sSolid, I );
	if ( !m_pp3hPolygons[I].SetTotalSides( ui32Total, _psaAllocator ) ) { return false; }
	GetPolyEdges( &m_pp3hPolygons[I], _sSolid, I );
	if ( !m_pp3hPolygons[I].Finalize( 0.0001 ) ) { return false; }
	m_pp3hPolygons[I].Plane() = _sSolid.Planes()[I].phPlane;
}

m_ui32TotalPolies here is the same as the number of sides on the solid, which for our cube example is 6. Note that I count down to 0 instead of starting at 0 and going up. This allows each iteration of the for () loop to compare against 0 instead of against m_ui32TotalPolies, which is basically a very minor optimization. The output code is smaller and comparing against 0 is always the fastest compare there is. Smaller code means fewer instruction cache misses as well. Notice as well that I set the normal of the polygon to the normal of the solid face. Due to rounding errors, we might get a polygon with some vertices exactly on this plane, slightly in front of it, or slightly behind it. When you set the normal for a polygon you have 2 ways of doing it. Either set the normal based off the object that created it, or by calculating the normals of all the triangles in the polygon and reaching an average. At this point we don't even have triangles, but aside from that it is simply more robust to set the normal of the polygon to match the normal of the face from which it was created.

What Does CPolygon3High::Finalize() Do?

As mentioned, it is possible for the previous routine to add segments to a polygon that don't actually belong. These can be handled easily by a post-processing routine, which in my case is a call to Finalize().
It basically removes any segments that don't make sense, and then sequences the remaining segments so that every end point matches a starting point of another segment. Segments that don't make sense are those that either don't have a connection on both ends or are shorter than epsilon in length. The code is fairly straight-forward:

/**
 * Finalizes the polygon such that the end of each segment connects to the start of another segment.
 *	If false is returned, at least one segment was unable to be connected.
 *
 * \param _tEpsilon The epsilon for snapping points together.
 * \return Returns true if all end points of the polygon's edges connect with 1 starting point.
 */
LSBOOL LSE_CALL Finalize( _tType _tEpsilon ) {
	assert( m_ui32Total >= 3UL );
	_tSeg3Type tTemp;
	// Remove any segments that have any missing connections.
	for ( LSUINT32 I = 0UL; I < m_ui32Total; ) {
		// Check P against Q.
		LSBOOL bHasP = false, bHasQ = false;
		_tType tLen = (m_ptSegs[I].q - m_ptSegs[I].p).LenSq();
		if ( tLen >= _tEpsilon * _tEpsilon ) {
			for ( LSUINT32 J = 0UL; J < m_ui32Total; ++J ) {
				if ( J == I ) { continue; }
				_tType tDist = (m_ptSegs[I].p - m_ptSegs[J].q).Len();
				if ( tDist <= _tEpsilon ) {
					bHasP = true;
					break;
				}
			}
			if ( bHasP ) {
				// Check its Q.
				for ( LSUINT32 J = 0UL; J < m_ui32Total; ++J ) {
					if ( J == I ) { continue; }
					_tType tDist = (m_ptSegs[I].q - m_ptSegs[J].p).Len();
					if ( tDist <= _tEpsilon ) {
						bHasQ = true;
						break;
					}
				}
			}
		}
		if ( !bHasP || !bHasQ ) {
			Remove( I );
			if ( m_ui32Total <= 2UL ) { return false; }
		}
		else { ++I; }
	}
	// Sequence the remaining segments.
	for ( LSUINT32 I = 0UL; I < m_ui32Total; ++I ) {
		// Find the starting point that connects to the end point of the Ith edge.
		for ( LSUINT32 J = I + 1UL; J < m_ui32Total; ++J ) {
			_tType tDist = (m_ptSegs[J].p - m_ptSegs[I].q).Len();
			if ( tDist <= _tEpsilon ) {
				if ( J != I + 1UL ) {
					tTemp = m_ptSegs[I+1];
					m_ptSegs[I+1] = m_ptSegs[J];
					m_ptSegs[J] = tTemp;
				}
				// If J == I + 1UL there is no need to swap here.
				break;
			}
		}
	}
	// Return true if we came full circle.
	return (m_ptSegs[0].p - m_ptSegs[m_ui32Total-1].q).Len() <= _tEpsilon;
}

We Now Have a Bunch of Polygons. What Next?

In my case, the CTrisFromSolids class is a 1-to-1 representation of a solid. That is, each solid has a single instance of a CTrisFromSolids class associated with it. That class in turn has one polygon for each face of the solid it represents. For general-purpose polygons, conversion to triangles means ear clipping. Luckily, this is only necessary if you have no guarantee that your polygons are convex. In our case, we know all of our polygons are convex, so triangulating our polygons is much simpler. All we have to do is make a triangle list by treating each polygon on each solid as a triangle fan.

Although I will present the code for this, keep in mind that at this point our only objective is to get debug visuals that confirm we are on the right track. Don't spend much time on this routine, and don't throw away your polygon data yet. You will need it for building your BSP in the next tutorial, which will involve splitting polygons across planes and more fun.

Basically, if you have a bunch of convex polygons, composed of line segments whose starting points are denoted as p, then the following code will export all of the triangles (as a triangle list) to a file. The file is the most basic format for you to load and display to ensure you have the correct results. Remember that in my code, a CTrisFromSolids is the class that represents a single solid, but in polygons rather than planes. First you need to create vertices from the polygons:

/**
 * Generates triangle vertices from its polygon data.
 *
 * \param _psaAllocator The stack allocator to be used for allocating the vertices.
 * \return Returns true if there was enough memory to allocate the vertex data.
 */
LSBOOL LSE_CALL CTrisFromSolids::GenerateTriangles( CStackAllocator * _psaAllocator ) {
	if ( !m_psSolid ) { return false; }
	// Since all the faces are convex, we don't need to do ear clipping, plus we can easily calculate
	//	ahead of time the number of triangles there will be.
	LSUINT32 ui32Tris = TotalTris() * 3UL;
	m_pvVerts = static_cast<LSL_VERTEX *>(_psaAllocator->Alloc( sizeof( LSL_VERTEX ) * ui32Tris, 4UL ));
	if ( !m_pvVerts ) { return false; }

	LSUINT32 ui32Index = 0UL;
	// For each polygon.
	for ( LSUINT32 I = m_ui32TotalPolies; I--; ) {
		LSUINT32 ui32End = m_pp3hPolygons[I].TotalSegments() - 1UL;
		for ( LSUINT32 J = 1UL; J < ui32End; ++J ) {
			m_pvVerts[ui32Index].v3hPos = m_pp3hPolygons[I].Segments()[0].p;
			m_pvVerts[ui32Index].v3hNormal = -m_pp3hPolygons[I].Plane().n;
			m_pvVerts[ui32Index].fUv[0] = m_pvVerts[ui32Index].fUv[1] = 0.0f;
			++ui32Index;

			m_pvVerts[ui32Index].v3hPos = m_pp3hPolygons[I].Segments()[J+1].p;
			m_pvVerts[ui32Index].v3hNormal = -m_pp3hPolygons[I].Plane().n;
			m_pvVerts[ui32Index].fUv[0] = m_pvVerts[ui32Index].fUv[1] = 0.0f;
			++ui32Index;

			m_pvVerts[ui32Index].v3hPos = m_pp3hPolygons[I].Segments()[J].p;
			m_pvVerts[ui32Index].v3hNormal = -m_pp3hPolygons[I].Plane().n;
			m_pvVerts[ui32Index].fUv[0] = m_pvVerts[ui32Index].fUv[1] = 0.0f;
			++ui32Index;
		}
	}
	return true;
}

Then you can run over each solid/CTrisFromSolids object and export the vertex and normal. We have not generated UV coordinates yet, and we won't be able to do so until far in the future, so just export the vertex and normal for now. Remember, this is just for debugging so that you know you are fine up to this point.

if ( !fsOut.WriteUInt32( ui32Triangles * 3UL ) ) { return LSSTD_E_NODISKSPACE; }
CSolidLeafBspHigh::OutputVertices( slbbBsp.Root(), fsOut );
for ( LSUINT32 I = ui32Solids; I--; ) {
	LSUINT32 ui32Total = ptfsTris[I].TotalTris() * 3UL;
	for ( LSUINT32 J = 0UL; J < ui32Total; ++J ) {
		if ( !fsOut.WriteFloat( static_cast<LSFLOAT>(ptfsTris[I].Verts()[J].v3hPos.x) ) ) { return LSSTD_E_NODISKSPACE; }
		if ( !fsOut.WriteFloat( static_cast<LSFLOAT>(ptfsTris[I].Verts()[J].v3hPos.y) ) ) { return LSSTD_E_NODISKSPACE; }
		if ( !fsOut.WriteFloat( static_cast<LSFLOAT>(ptfsTris[I].Verts()[J].v3hPos.z) ) ) { return LSSTD_E_NODISKSPACE; }
		if ( !fsOut.WriteFloat( static_cast<LSFLOAT>(ptfsTris[I].Verts()[J].v3hNormal.x) ) ) { return LSSTD_E_NODISKSPACE; }
		if ( !fsOut.WriteFloat( static_cast<LSFLOAT>(ptfsTris[I].Verts()[J].v3hNormal.y) ) ) { return LSSTD_E_NODISKSPACE; }
		if ( !fsOut.WriteFloat( static_cast<LSFLOAT>(ptfsTris[I].Verts()[J].v3hNormal.z) ) ) { return LSSTD_E_NODISKSPACE; }
	}
}

fsOut is a file stream here. How you implement this really doesn't matter. You just need to export the vertices and normals to a file and then load them so that you can see the result. If you have done everything correctly, you should see results such as these:

What? This map seems familiar? I have no idea why.

Today we learned how to get the basic graphics data out of a .MAP file. The polygons—not the triangles—are still necessary for the next tutorial, so don't throw those away. The graphics data we generated today is only for debug, hence the strange coloring in my final screenshot, but don't worry—not only will the final screenshots look better than the Worldcraft screenshots, they will also be more functional, and you can include them in your own game projects, with physics as a bonus.

Coming Up

In the next tutorial we will use BSP's to split our maps into little chunks.
Like in the following:

I have no idea why this map would look familiar to anyone, but in any case this was the result of BSP splitting, which can't be detected here. The little specks you can see are basically parts of other solids that are showing through. When we get to hidden surface removal, those will disappear and your triangle counts will decrease significantly. I will explain how to find the optimal triangle count in future tutorials. For now, enjoy what is next to come as you begin loading your own .MAP files. At the end of the tutorials, I will release these GoldenEye 007 .MAP files for you to load and enjoy. But until then, don't ask how they came into being.

L. Spiro

3 Awesome Comments So Far

1. brad the programmer, June 26, 2012 at 5:57 AM #

Fantastic tutorial. Really inspired me! Can't wait for the rest of the series.

2. Wilds, July 16, 2012 at 10:40 PM #

I love these tutorial series! I hope there will be more to come! I was wondering what version of WorldCraft you are using, as the new Hammer is for Source.

L. Spiro (reply), July 16, 2012 at 10:54 PM #

Thank you. There will be more tutorials, but there will also be a delay for 2 reasons. #1: My harddrive completely crashed last week and I am still picking up the pieces. I did not lose any engine code, but I have a lot to reinstall. I haven't even reinstalled Valve Hammer Editor or WorldCraft yet. #2: I decided to get full DirectX 11 support before I continue on any other part of the engine, because it will be easier to add features if all targets are fully supported. Luckily, I am almost done with that. DirectX 11 is about 95% there. To answer your question, I am using both Valve Hammer Editor (current version) and WorldCraft 3.3. WorldCraft 3.3 can export .MAP files while Valve Hammer Editor can export .VMF files. Both files are basically the same, but .VMF has some extra features which are easy to handle.
In any case, my tutorials can be used with either format. Whether it is .MAP or .VMF, you just have to export the faces of each solid etc. After that, the BSP tree, portal generation, and PVS system work the same, which means you should be able to load Half-Life 2 maps into your own engine by the time my series is done. L. Spiro
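As an aside on the loading side, which the tutorial leaves to the reader: below is a minimal parser sketch in Python rather than engine C++. Nothing here is engine code; the function name `read_vertex_dump` and the exact layout are my assumptions. It assumes the dump is simply a little-endian uint32 vertex count followed by six 32-bit floats per vertex (position, then normal), matching the WriteUInt32/WriteFloat calls shown in the post.

```python
import struct

def read_vertex_dump(data):
    """Parse a debug vertex dump.

    Assumed layout (little-endian):
      uint32 vertex count
      per vertex: 6 x float32 (pos.x, pos.y, pos.z, nrm.x, nrm.y, nrm.z)
    """
    (count,) = struct.unpack_from("<I", data, 0)
    verts = []
    offset = 4
    for _ in range(count):
        vals = struct.unpack_from("<6f", data, offset)
        verts.append((vals[0:3], vals[3:6]))  # (position, normal)
        offset += 24  # 6 floats * 4 bytes
    return verts

# Round-trip one vertex to sanity-check the assumed layout.
blob = struct.pack("<I", 1) + struct.pack("<6f", 1.0, 2.0, 3.0, 0.0, 1.0, 0.0)
verts = read_vertex_dump(blob)
print(verts)  # [((1.0, 2.0, 3.0), (0.0, 1.0, 0.0))]
```

If your engine writes a different header (for example a triangle count rather than a vertex count), adjust the multiplier accordingly; the point is only that the file is a flat, interleaved stream.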
Ekeland, Applied Nonlinear Analysis
Results 1 - 10 of 109

, 1993 "... Lagrange multipliers used to be viewed as auxiliary variables introduced in a problem of constrained minimization in order to write first-order optimality conditions formally as a system of equations. Modern applications, with their emphasis on numerical methods and more complicated side conditions ..." Cited by 89 (7 self)
Lagrange multipliers used to be viewed as auxiliary variables introduced in a problem of constrained minimization in order to write first-order optimality conditions formally as a system of equations. Modern applications, with their emphasis on numerical methods and more complicated side conditions than equations, have demanded deeper understanding of the concept and how it fits into a larger theoretical picture. A major line of research has been the nonsmooth geometry of one-sided tangent and normal vectors to the set of points satisfying the given constraints. Another has been the game-theoretic role of multiplier vectors as solutions to a dual problem. Interpretations as generalized derivatives of the optimal value with respect to problem parameters have also been explored. Lagrange multipliers are now being seen as arising from a general rule for the subdifferentiation of a nonsmooth objective function which allows black-and-white constraints to be replaced by penalty expressions. This paper traces such themes in the current theory of Lagrange multipliers, providing along the way a freestanding exposition of basic nonsmooth analysis as motivated by and applied to this subject.

, 2004 "... We consider optimization problems involving convex risk functions. By employing techniques of convex analysis and optimization theory in vector spaces of measurable functions we develop new representation theorems for risk models, and optimality and duality theory for problems involving risk functio ..."
Cited by 52 (11 self)
We consider optimization problems involving convex risk functions. By employing techniques of convex analysis and optimization theory in vector spaces of measurable functions we develop new representation theorems for risk models, and optimality and duality theory for problems involving risk functions.

- Communications in Mathematical Sciences , 2005 "... Abstract. In this paper we generalize the iterated refinement method, introduced by the authors in [8], to a time-continuous inverse scale-space formulation. The iterated refinement procedure yields a sequence of convex variational problems, evolving toward the noisy image. The inverse scale space m ..." Cited by 47 (12 self)
Abstract. In this paper we generalize the iterated refinement method, introduced by the authors in [8], to a time-continuous inverse scale-space formulation. The iterated refinement procedure yields a sequence of convex variational problems, evolving toward the noisy image. The inverse scale space method arises as a limit for a penalization parameter tending to zero, while the number of iteration steps tends to infinity. For the limiting flow, similar properties as for the iterated refinement procedure hold. Specifically, when a discrepancy principle is used as the stopping criterion, the error between the reconstruction and the noise-free image decreases until termination, even if only the noisy image is available and a bound on the variance of the noise is known. The inverse flow is computed directly for one-dimensional signals, yielding high quality restorations. In higher spatial dimensions, we introduce a relaxation technique using two evolution equations. These equations allow accurate, efficient and straightforward implementation.

- SIAM REVIEW , 1996 "... This paper presents an overview of some recent and significant progress in the theory of optimization with perturbations.
We put the emphasis on methods based on upper and lower estimates of the value of the perturbed problems. These methods allow one to compute expansions of the value function and app ..." Cited by 46 (10 self)
This paper presents an overview of some recent and significant progress in the theory of optimization with perturbations. We put the emphasis on methods based on upper and lower estimates of the value of the perturbed problems. These methods allow one to compute expansions of the value function and approximate solutions in situations where the set of Lagrange multipliers may be unbounded, or even empty. We give rather complete results for nonlinear programming problems, and describe some partial extensions of the method to more general problems. We illustrate the results by computing the equilibrium position of a chain that is almost vertical or horizontal.

- Annals of Statistics , 1988 "... Abstract. We study the asymptotic behavior of the statistical estimators that maximize a not necessarily differentiable criterion function, possibly subject to side constraints (equalities and inequalities). The consistency results generalize those of Wald and Huber. Conditions are also given under ..." Cited by 40 (1 self)
Abstract. We study the asymptotic behavior of the statistical estimators that maximize a not necessarily differentiable criterion function, possibly subject to side constraints (equalities and inequalities). The consistency results generalize those of Wald and Huber. Conditions are also given under which one is still able to obtain asymptotic normality. The analysis brings to the fore the relationship between the problem of finding statistical estimators and that of finding the optimal solutions of stochastic optimization problems with partial information. The last section is devoted to the properties of the saddle points of the associated Lagrangians.

- SIAM J. Sci. Comp "...
Most minimax theorems in critical point theory require one to solve a two-level global optimization problem and therefore are not suited to algorithm implementation. The objective of this research is to develop numerical algorithms and corresponding mathematical theory for finding multiple saddle points i ..." Cited by 20 (15 self)
Most minimax theorems in critical point theory require one to solve a two-level global optimization problem and therefore are not suited to algorithm implementation. The objective of this research is to develop numerical algorithms and corresponding mathematical theory for finding multiple saddle points in a stable way. In this paper, inspired by the numerical works of Choi-McKenna and Ding-Costa-Chen, and the idea to define a solution submanifold, some local minimax theorems are established, which require solving only a two-level local optimization problem. Based on the local theory, a new local numerical minimax method for finding multiple saddle points is developed. The local theory is applied and the numerical method is implemented successfully to solve a class of semilinear elliptic boundary value problems for multiple solutions on some non-convex, non star-shaped and multi-connected domains. Numerical solutions are illustrated by their graphics for visualization. In a subsequent paper [20], we establish some convergence results for the algorithm.

, 1996 "... . Various search directions used in interior-point-algorithms for the SDP (semidefinite program) and the monotone SDLCP (semidefinite linear complementarity problem) are characterized by the intersection of a maximal monotone affine subspace and a maximal and strictly antitone affine subspace. This ..." Cited by 19 (3 self)
Various search directions used in interior-point-algorithms for the SDP (semidefinite program) and the monotone SDLCP (semidefinite linear complementarity problem) are characterized by the intersection of a maximal monotone affine subspace and a maximal and strictly antitone affine subspace. This observation provides a unified geometric view over the existence of those search directions. Key words Interior-Point Algorithm, Semidefinite Program, Semidefinite Linear Complementarity Problem, Monotonicity y Department of Mathematics, Kanagawa University, Rokkakubashi 3-27-1, Kanagawa-ku, Yokohama 221, Japan. z Department of Mathematics and Physics, The National Defense Academy, Hashirimizu 1-10-20, Yokosuka, 239, Japan. ] Department of Mathematical and Computing Sciences, Tokyo Institute of Technology, 2-12-1 Oh-Okayama, Meguro-ku, Tokyo 152, Japan. Research Report B-310, Department of Mathematical and Computing Sciences, Tokyo Institute of Technology, Tokyo, Japan 1. Introduction. ...

- PACIFIC J. MATH , 1999 "... The concept of a monotone operator — which covers both linear positive semi-definite operators and subdifferentials of convex functions — is fundamental in various branches of mathematics. Over the last few decades, several stronger notions of monotonicity have been introduced: Gossez’s maximal mono ..." Cited by 17 (9 self)
The concept of a monotone operator — which covers both linear positive semi-definite operators and subdifferentials of convex functions — is fundamental in various branches of mathematics. Over the last few decades, several stronger notions of monotonicity have been introduced: Gossez’s maximal monotonicity of dense type, Fitzpatrick and Phelps’s local maximal monotonicity, and Simons’s monotonicity of type (NI). While these monotonicities are automatic for maximal monotone operators in reflexive Banach spaces and for subdifferentials of convex functions, their precise relationship is largely unknown.
Here, it is shown — within the beautiful framework of Convex Analysis — that for continuous linear monotone operators, all these notions coincide and are equivalent to the monotonicity of the conjugate operator. This condition is further

- COMM. CONTEMP. MATH , 2001 "... The classical notions of essential smoothness, essential strict convexity, and Legendreness for convex functions are extended from Euclidean to Banach spaces. A pertinent duality theory is developed and several useful characterizations are given. The proofs rely on new results on the more subtle beh ..." Cited by 17 (12 self)
The classical notions of essential smoothness, essential strict convexity, and Legendreness for convex functions are extended from Euclidean to Banach spaces. A pertinent duality theory is developed and several useful characterizations are given. The proofs rely on new results on the more subtle behavior of subdifferentials and directional derivatives at boundary points of the domain. In weak Asplund spaces, a new formula allows the recovery of the subdifferential from nearby gradients. Finally, it is shown that every Legendre function on a reflexive Banach space is zone consistent, a fundamental property in the analysis of optimization algorithms based on Bregman distances. Numerous illustrating examples are provided.
help with a problem of probability combinations
January 4th 2010, 04:42 PM #1 Jan 2010

My first post, let's see. I was solving a problem of probability combinations, but I had a little problem with one question. Here is the problem: "A student must take a test that consists of 3 questions randomly selected from a list of 100 questions (each question has the same probability of being selected). To pass the test he needs to answer at least two questions correctly. What is the probability that the student passes the test if he only knows the answers to 90 questions on the list?"

I started getting the following:
(i) (100 combination 3) = 161700 ........... number of possible groups of 3 questions
(ii) (90 combination 3) = 117480 ............ number of possible groups of 3 questions the student knows

I really did not know which the total probability should be: 1/100 (for each question) or 1/(100 combination 3) (for each group of questions). Then: how do the 10 questions that the student does not know affect the probability? Very grateful in advance for a response.

The entire space consists of all possible combinations of choosing three questions out of 100. There are, however, 90C3 + 90C2*10 possible combinations that will allow him to pass the test, so:
Total # of possible 3-question combinations: 100 choose 3
Total # of possible 3-question combinations where the student knows all answers: 90C3
Total # of possible 3-question combinations where the student knows two answers and doesn't know the last one: (90 choose 2) x 10
Probability he will be given a 3-question combination he can pass: $\frac{90C3+90C2*10}{100C3}$
This assumes that the test is premade, and it's simply a matter of selecting one of the premade tests. If he is actually picking questions then it's much easier:
Last edited by ANDS!; January 4th 2010 at 05:32 PM. Reason: Left out notation

The entire space consists of all possible combinations of choosing three questions out of 100.
There are, however, 90C3 + 90C2*10 possible combinations that will allow him to pass the test, so:
Total # of possible 3-question combinations: 100 choose 3
Total # of possible 3-question combinations where the student knows all answers: 90C3
Total # of possible 3-question combinations where the student knows two answers and doesn't know the last one: (90 choose 2) x 10
Probability he will be given a 3-question combination he can pass: $\frac{90C3+90C2*10}{100C3}$
This assumes that the test is premade, and it's simply a matter of selecting one of the premade tests. If he is actually picking questions then it's much easier: $\frac{90}{100}\frac{89}{99}+\frac{90}{100}\frac{89}{99}\frac{88}{98}$

I understand that 1-q = 90C2 ... but why "*10"?

There are ten questions he does not know - therefore, if you have 90C2, you need to multiply that number by 10 to get the overall number of 3-question tests he can get where he knows two of the answers (90C2) and doesn't know the last one (10 of those).

lol. it is true, I did not think of that. thank you very much

Argh. My mistake on the second "scenario": it should actually be a binomial, with $(100C3)(\frac{90}{100})^2(\frac{10}{100})+(100C3)(\frac{90}{100})^3(\frac{10}{100})^0$

Hello, killertapia! Welcome aboard!

A student takes a test that has 3 questions randomly selected from a list of 100 questions (each question has the same probability of being selected). To pass the test he needs to answer at least two questions correctly. What is the probability that the student passes the test if he knows the answers to only 90 questions on the list?

I got the same answer as ANDS! What is the probability that he fails the test? There are $_{100}C_{3} \:=\:161,\!700$ possible tests. How many of these tests contain no answers he knows? There are $_{10}C_3 \:=\:120$ such tests. How many of these tests have exactly one answer he knows? There are $\left(_{90}C_1\right)\left(_{10}C_2\right) \:=\:4,\!050$ such tests. Hence, he fails on
$120 + 4,\!050 \:=\:4,\!170$ of the tests. And $P(\text{fail}) \:=\:\frac{4,\!170}{161,\!700} \:=\:\frac{139}{5390}$. Therefore $P(\text{pass}) \;=\;1 - \frac{139}{5390} \;=\;\frac{5251}{5390}$.

I'm sorry, but are you sure that is right? I mean, I know how the binomial function works, but with this data I get a very large number. On the other hand, thanks for the other scenario; with it I get the correct answer.
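The counting above is easy to confirm numerically. Here is a small brute-force sketch in Python (not from the thread): it enumerates every possible 3-question test with `itertools.combinations` and checks that the pass probability matches both the closed form and the fraction 5251/5390.

```python
from fractions import Fraction
from itertools import combinations
from math import comb

# Questions 0..89 are known to the student; 90..99 are not.
known = set(range(90))

# Count the tests (3-subsets of the 100 questions) with >= 2 known answers.
passing = sum(1 for test in combinations(range(100), 3)
              if len(known.intersection(test)) >= 2)

p_pass = Fraction(passing, comb(100, 3))
print(p_pass, float(p_pass))  # 5251/5390, roughly 0.9742

# Same count via the closed form used in the thread.
assert passing == comb(90, 3) + comb(90, 2) * 10
```

The brute force and the combinatorial formula agree, which also settles the original poster's question: the right sample space is the 100-choose-3 groups of questions, not the individual questions.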
expected value June 20th 2011, 11:38 PM #1 Jun 2011 expected value a garage knows that they have three charged and two dead batteries placed together by mistake. the random variable X is the number of batteries that have to be tested before all batteries are correctly identified as dead or charged. Use a probability tree to complete the table for the probability function of X. x= 0, P(X=x)=? then keep doing it until x reaches five. Re: expected value a garage knows that they have three charged and two dead batteries placed together by mistake. the random variable X is the number of batteries that have to be tested before all batteries are correctly identified as dead or charged. Use a probability tree to complete the table for the probability function of X. x= 0, P(X=x)=? then keep doing it until x reaches five. Well, did you draw the tree as instructed? What is the trouble doing that? Now use the tree to calculate the required probabilities. If more help is needed, please show all that you've done and say where exactly you're stuck. June 21st 2011, 01:35 AM #2
A charge falls from infinity to within r of another charge, find velocity.

1. The problem statement, all variables and given/known data
Velocity of an electron that falls to r from infinity? An electron falls from infinity to r = 10^-8 m from a charge q1 = 4.8x10^-19 C. What is the velocity of the electron?

2. Relevant equations

3. The attempt at a solution
Potential energy change U. Coulomb's constant k = 9x10^9 N·m²/C², electron charge q2 = 1.602x10^-19 C, electron mass m = 9.11x10^-31 kg, kinetic energy Ek = ½mv².
Set U = Ek => v = (2U/m)^0.5 = 1.23x10^5 m/s
Anyone see where I went wrong? Thanks.
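For what it's worth, here is a quick numerical check of the energy-conservation route (my sketch, not from the thread), using U = k*q1*q2/r and U = ½mv² with the values in the post:

```python
import math

# Values from the post, SI units; k is the Coulomb constant, 9.0e9 N*m^2/C^2.
k = 9.0e9
q1 = 4.8e-19    # source charge, C
q2 = 1.602e-19  # electron charge magnitude, C
m = 9.11e-31    # electron mass, kg
r = 1.0e-8      # final separation, m

U = k * q1 * q2 / r       # energy gained falling in from infinity, J
v = math.sqrt(2 * U / m)  # from U = (1/2) m v^2

print(f"U = {U:.3e} J")   # ~6.921e-20 J
print(f"v = {v:.3e} m/s") # ~3.898e+05 m/s
```

This gives roughly 3.9x10^5 m/s. The post's 1.23x10^5 m/s differs by almost exactly a factor of sqrt(10), about 3.16, which suggests a power of ten slipped under the square root somewhere in the arithmetic.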
Department of Physics and Astronomy
ARTHUR E. CHAMPAGNE, Chair
Bruce W. Carney (32) Optical Observational Astrophysics
Gerald N. Cecil (47) Experimental Astrophysics
Arthur E. Champagne (51) Experimental Nuclear Physics and Astrophysics
Thomas B. Clegg (5) Nuclear Physics, Polarization Phenomena
J. Christopher Clemens (64) Observational Astronomy, Astrophysics, Astronomical Instrumentation
Louise A. Dolan (49) Theoretical Particle Physics, Quantum Gravity
Jonathan Engel (57) Theoretical Nuclear Physics
Charles R. Evans (48) Gravity, Relativity, Theoretical Astrophysics
Christian G. Iliadis (61) Experimental Nuclear Astrophysics
Hugon J. Karwowski (37) Experimental Nuclear Physics and Astrophysics
Dmitri V. Khveshchenko (1) Theoretical Physics
Jianping Lu (56) Condensed Matter Theory, Nanotechnology, Medical Physics
Laurie E. McNeil (36) Experimental Condensed Matter and Materials Physics
Y. Jack Ng (30) Theoretical Particle Physics, Gravitation
Lu-Chang Qin (27) Materials Science, Nanotechnology
Richard Superfine (55) Experimental Studies of Interfaces, Biophysics
Frank Tsui (59) Experimental Condensed Matter and Materials Physics
Sean Washburn (50) Experimental Condensed Matter and Materials Physics
John Wilkerson (12) Experimental Neutrino Physics and Fundamental Symmetries
Yue Wu (54) Nuclear Magnetic Resonance, Electron Spin Resonance in Solids
Otto E. Zhou (62) Materials Science, Nanotechnology
Associate Professors
Laura Mersini (19) Theoretical Cosmology
Daniel E. Reichart (13) Gamma Ray Bursts, Early Universe, Interstellar Extinction, Galaxy Clusters
Assistant Professors
Rosa Tamara Branca, NMR Imaging
Joaquin Drut, Theory of Strongly Interacting Systems
Fabian Heitsch (26) Computational Astrophysics
Reyco Henning (11) Neutrino Physics, Particle Astrophysics
Sheila Kannappan (14) Observational Extragalactic Astronomy
Rene Lopez (25) Experimental Condensed Matter Physics
Amy Oldenburg, Biophotonics and Biomechanics
Research Professors
Russell M. Taylor II, Nanotechnology, Computer Imaging
Michael R. Falvo, Biophysics, Nanomechanics
Research Associate Professors
Alfred Kleinhammes, Condensed Matter Physics, Materials Science
Nalin R. Parikh (58) Solid State Physics, Materials Science
Research Assistant Professor
E. Timothy O'Brien, Physics Related to Biology, Light Microscopy, Biological Sample Preparation
Adjunct Professors
William W. Clark III, Electronics, Optics
Richard T. Hammond, General Relativity, Gravity, Optics
Ryan M. Rohm, Quantum Field Theory, Theoretical Particle Physics
Jie Tang, Materials Physics, Nanomaterials
Adjunct Associate Professor
John D. Hunn, Applied Condensed Matter Physics
Adjunct Assistant Professors
Bower, Nanotechnology
Yueh Lee, Nanotechnology
Professors Emeriti
C. Victor Briscoe, Sang-Il Choi, Wayne Christiansen, Morris S. Davis, Kian S. Dy, John Hernandez, William M. Hooke, Paul S. Hubbard, Horst Kessemeier, Edward J. Ludwig, J. Ross Macdonald, Eugen Merzbacher, James Rose, Larry Rowan, Dietrich Schroeer, Stephen M. Shafroth, Lawrence M. Slifkin, William J. Thompson, Hendrik Van Dam, James W. York Jr.

The Department of Physics and Astronomy offers graduate work leading to the degrees of master of science and doctor of philosophy. The active fields of research are biophysics, medical physics, condensed-matter physics, materials physics, nanotechnology, nuclear physics, neutrino physics and nuclear astrophysics, quantum field theory, theoretical particle physics, general relativity and gravitation, extragalactic and stellar astronomy, and astrophysics. Students can also work in the UNC–Chapel Hill biophysics program, or they can study under any advisor so long as the research project is supervised by a committee that contains a majority of UNC–Chapel Hill Physics and Astronomy faculty. The graduate courses are designed to give students a broad foundation and to introduce them to the special fields in which the research interests of the department lie.
The general regulations of The Graduate School govern the work for the degrees of master of science and doctor of philosophy. To begin a graduate program in physics or astrophysics, the student should have completed most of the requirements for the degree of bachelor of science with a major in physics at the University, or their equivalent elsewhere. The minimum prerequisite for graduate study consists of the basic undergraduate courses PHYS 116, 117, 128, 128L, 301, 302, 341, 415, 311, and 312, together with MATH 232, 233, and 528. At the end of the spring semester a student must take the Ph.D. written examination. The examination is based upon the graduate student's first-year course work and will cover dynamics, quantum mechanics, statistical mechanics, and electromagnetic theory.

The M.S. degree in physics may be taken with or without thesis. However, even if a thesis is not submitted, a student must work with a research group for at least one semester, in order to learn the research techniques in a field of physics or astronomy. If the research is theoretical, the student must also gain experimental experience. A minor is not required for the M.S. degree, but one may be chosen in accord with the regular graduate requirements for this option. The equivalent of one semester teaching experience is required of all M.S. degree candidates. The M.S. astrophysics track must include ASTR 701 and a minimum of six hours from ASTR 519, 702, 703 or 704.

The requirements for a Ph.D. in physics are a) successful completion of the following core courses in the department, or completion of their equivalents elsewhere as an undergraduate or graduate student: 701, 711, 712, 741, 721, and 722; b) passing the Ph.D.
written examination based on core graduate courses in physics as listed in a), c) gaining experimental experience either through master's or doctoral research, or (if student's research is theoretical) by performing an experimental project deemed adequate by the director of graduate studies, d) taking a course outside his or her field of specialization from a list approved by the director of graduate studies and e) passing at least three other advanced graduate-level courses appropriate to his or her field of specialization. A Ph.D. candidate must also take a preliminary doctoral oral examination within the first three years of graduate study in physics at UNC–Chapel Hill. The oral examination is concerned mainly with the student's dissertation research project. A minor is not required, but may be elected, in which case requirement c) above is replaced by the requirement that the student pass at least five graduate-level courses selected from no more than two departments, with no fewer than two courses in either department. The minor program must be approved in advance by the minor department. Teaching experience, as part of professional training, is required of all doctoral candidates. This experience can be gained through laboratory or lecture instruction as a teaching assistant, either for two semesters or until teaching competence is acquired. The astrophysics Ph.D. track requirements are similar except that the course requirements are PHYS 701, 711, 721, 741 and ASTR 701, 702, 703, 704, 705 and an additional 700-level course. To gain familiarity with experimental astrophysics or observational astronomy, a student must pass ASTR 519/719, earn an M.S. degree which involves experimental or observational research in astrophysics, or perform other experimental/observational research deemed suitable by the director of graduate studies.

Research Interests

Astronomy and Astrophysics.
Research includes the formation, structure, and evolution of stars, our Milky Way galaxy, other galaxies, gamma ray bursters and cosmology. Theory involves numerical relativity and sources of gravitational radiation, stellar seismology and quasars, and interstellar medium physics. UNC–Chapel Hill has guaranteed observing time on the 4.1-meter SOAR Telescope in Chile, which began regular operations in 2004, and on the 11-meter SALT Telescope in South Africa, which began operations in 2005. UNC–Chapel Hill operates a number of smaller robotic telescopes as well.

Biological and Medical Physics. Experimental studies include manipulation and force measurement techniques with applications to DNA, molecular motors, cells, and cilia; hydration effects in adsorption of biochemicals. There is also a strong focus on theoretical and experimental translational research in medical imaging technologies, including radiotherapy instruments based on carbon nanotube X-ray emitters such as single-cell irradiation and in vivo micro-CT; optical coherence tomography using nanoparticle molecular imaging agents; systems level implementation of tomographic imaging instruments.

Condensed-Matter Physics. Experimental and theoretical studies of nanomaterials. Atomic scale studies of devices and nanoelectromechanical systems, including quantum computation and transport, actuating nanomotors and sensors, amorphous materials, semiconductors, superconductors, the optical properties of solids, charge transport in solids and fluids, epitaxial growth, magnetic materials and heterostructures.

Field Theory, Particle Physics, Cosmology, Gravitation and Relativity. Research includes gauge field theories, quantum chromodynamics, electroweak theory, grand unified theories, string theory, supersymmetry, supergravity, quantum gravity, theoretical cosmology, numerical relativity, gravitational radiation, and relativistic astrophysics.

Materials Science and Materials Physics.
Experimental and theoretical research in the design, synthesis, integration, and characterization of novel solid state materials, including nanostructured materials such as quantum dots, carbon nanotubes and nanorods, quasi-crystals, and metallic glass. Applications of novel materials for solar energy, electron field emission, probes and sensors, and data storage. Applications include flat-panel displays, an X-ray system for biomedical imaging, and rechargeable batteries.

Nuclear Physics. Experimental and theoretical work includes neutrino oscillations and neutrino mass measurements, fundamental symmetries and weak interactions in supernovae. The structure and evolution of stars are investigated using nuclear probes. The origin of the elements in the universe is studied using local accelerator facilities. The nature of the nuclear force and properties of few-body systems. Polarized beams of light ions and gamma-rays and polarized 3He target. Applied nuclear physics.

Facilities and Equipment

Research in physics and astronomy is carried out in laboratories on and off the Chapel Hill campus. Within Phillips Hall and Chapman Hall there are several major research laboratories including the "nanomanipulator" (a combination of a scanning electron microscope, an atomic force microscope, and sophisticated visualization graphics), the Keck Laboratory for Atomic Imaging and Manipulation, which includes two transmission electron microscopes, and the Goodman Laboratory for Astronomical Instrumentation. Other facilities include apparatus for nuclear magnetic resonance studies, scanning probe microscopes, and Raman and optical spectrometers. For synthesis and fabrication, major facilities include molecular beam epitaxy, microwave plasma-enhanced chemical vapor deposition, laser ablation, and photolithography and reactive ion etching. Resources for highly parallel computing are provided by UNC's Information and Technology Services, as well as by national centers.
The department is a partner in the Triangle Universities Nuclear Laboratory and plays a major role in experiments using the Laboratory for Experimental Nuclear Astrophysics (LENA), Tandem Accelerator, and the High-Intensity Gamma-Ray Source at the Free Electron Laser facility. UNC–Chapel Hill has an active program in low-background physics at the KURF underground facility near Blacksburg, VA. UNC–Chapel Hill has a 0.6-meter on-campus telescope, and is a major partner in the 4.1-meter SOAR Telescope in Chile and the 11-meter Southern African Large Telescope (SALT) in South Africa. The department operates the PROMPT array of robotic telescopes in Chile and manages the SkyNet array of robotic telescopes. Numerous national laboratories, including Oak Ridge, Brookhaven, NIST, Los Alamos and Argonne, as well as KamLAND, NRAO, NOAO, the Hubble Space Telescope, and the Chandra X-ray Observatory, are also vital parts of our research efforts.

Fellowships and Assistantships

Many teaching assistantships (with stipends of $17,100 for nine months) are available to qualified graduate students. Summer employment is usually available. The duties of assistants include supervising laboratory classes in elementary physics or astronomy, assisting in the supervision of advanced laboratories, teaching recitation sections, and grading papers. Graduate School fellowships are available for well-qualified applicants to the department's graduate program. Teaching assistants can usually be supported in the summer by teaching or research. Research assistantships are also offered, especially to those who have completed a year or two of graduate work. The stipend is $22,800 for the calendar year. Application forms for admission, including graduate appointments, should be completed online at gradschool.unc.edu/admissions.

Courses for Graduate and Advanced Undergraduate Students

501 Astrophysics I (Stellar Astrophysics) (3). Prerequisites, MATH 383 and PHYS 128.
Permission of the instructor for students lacking the prerequisites. An introduction to the study of stellar structure and evolution. Topics covered include observational techniques, stellar structure and energy transport, nuclear energy sources, evolution off the main-sequence, and supernovae. 502 Astrophysics II (Interstellar Matter and Galaxies) (3). Prerequisites, MATH 383 and PHYS 128. Permission of the instructor for students lacking the prerequisites. An introduction to the study of the structure and contents of galaxies. Topics covered include the interstellar medium, interstellar hydrodynamics, supersonic flow and shock formation, star formation, galactic evolution, the expanding universe, and cosmology. 503 Structure and Evolution of Galaxies (3). Prerequisites, ASTR 301, MATH 383, and PHYS 128. Internal dynamics and structure of galaxies; physics of star formation, active galactic nuclei, and galaxy interactions; large-scale clustering and environment-dependent physical processes; evolution of the galaxy population over cosmic time. 505 Physics of Interstellar Gas (3). Prerequisites, ASTR 301, MATH 383, and PHYS 128. Surveys the physical processes governing the interstellar medium (ISM), which takes up the "refuse" of old stars while providing fuel for young stars forming. Covers the processes regulating the galactic gas budget and the corresponding observational diagnostics. Topics: radiative transfer, line formation mechanisms, continuum radiation, gas dynamics, star formation. 519 Observational Astronomy (4). Prerequisite, ASTR 101. Permission of the instructor for students lacking the prerequisite. A course designed to familiarize the student with observational techniques in optical and radio astronomy, including application of photography, spectroscopy, photometry, and radio methods. Three lecture and three laboratory hours a week. Courses for Graduate and Advanced Undergraduate Students 405 Biological Physics (BIOL 431) (3). 
Prerequisites, PHYS 116 and 117. How diffusion, entropy, electrostatics, and hydrophobicity generate order and force in biology. Topics include DNA manipulation, intracellular transport, cell division, molecular motors, single molecule biophysics techniques, nerve impulses, neuroscience. 410 Teaching and Learning Physics (4). Prerequisites, PHYS 116 and 117. Permission of the instructor for students lacking the prerequisites. Learning how to teach physics using current research-based methods. Includes extensive fieldwork in high school and college environments. Meets part of the licensure requirements for North Carolina public school teaching. 415 Optics (3). Prerequisites, PHYS 311 and 312. Permission of the instructor for students lacking the prerequisites. Elements of geometrical optics; Huygens' principles, interference, diffraction, and polarization. Elements of the electromagnetic theory of light; Fresnel's equations, dispersion, absorption, and scattering. Photons. Lasers and quantum optics. 422 Physics of the Earth's Interior (GEOL 422) (3). Prerequisites, MATH 383 and either PHYS 201 and 211, or 301 and 311. Origin of the solar system: the nebular hypothesis. Evolution of the earth and its accretionary history. Earthquakes: plate tectonics and the interior of the earth. The earth's magnetic field. Mantle convection. 424 General Physics I (4). PHYS 104 equivalent, specifically for certification of high school teachers. 425 General Physics II (4). PHYS 105 equivalent, specifically for certification of high school teachers. 471 Physics of Solid State Electronic Devices (3). Prerequisite, PHYS 117; pre- or corequisite, PHYS 211 or 311. Properties of crystal lattices, electrons in energy bands, behavior of majority and minority charge carriers, PN junctions related to the structure and function of semiconductor diodes, transistors, display devices. 472 Chemistry and Physics of Electronic Materials Processing (APPL 472, CHEM 472, MTSC 472) (3). 
Prerequisite, CHEM 482 or PHYS 117. Permission of the instructor. A survey of materials processing and characterization used in fabricating microelectronic devices. Crystal growth, thin film deposition and etching, and microlithography. 481L Advanced Laboratory I (2). Prerequisite, PHYS 351 or 352. Permission of the instructor for students lacking the prerequisite. Selected experiments illustrating modern techniques such as the use of laser technology to study the interaction of electromagnetic fields and matter. Six laboratory hours a week. 482L Advanced Laboratory II (2). Prerequisite, PHYS 481. Permission of the instructor for students lacking the prerequisite. Independent laboratory research projects. Scientific writing and oral presentations, abstracts, and reports. Six laboratory hours per week. 491L Materials Laboratory I (APPL 491L) (2). Prerequisites, APPL 470 and PHYS 351. Structure determination and measurement of the optical, electrical, and magnetic properties of solids. 492L Materials Laboratory II (APPL 492L) (2). Prerequisite, APPL 491L or PHYS 491L. Continuation of PHYS 491L with emphasis on low- and high-temperature behavior, the physical and chemical behavior of lattice imperfections and amorphous materials, and the nature of radiation damage. 510 Seminar for Physics and Astronomy Teaching Assistants (1). How students learn and understand physics and astronomy. How to teach using current research-based methods. 521 Applications of Quantum Mechanics (3). Prerequisite, PHYS 321. Emphasizes atomic physics but includes topics from nuclear, solid state, and particle physics, such as energy levels, the periodic system, selection rules, and fundamentals of spectroscopy. 543 Nuclear Physics (3). Prerequisite, PHYS 321. Permission of the instructor for students lacking the prerequisite. Structure of nucleons and nuclei, nuclear models, forces and interactions, nuclear 545 Introductory Elementary Particle Physics (3). Prerequisites, PHYS 312 and 321. 
Relativistic kinematics, symmetries and conservation laws, elementary particles and bound states, gauge theories, quantum electrodynamics, chromodynamics, electroweak unification, standard model and beyond. 573 Introductory Solid State Physics (MTSC 573) (3). Prerequisite, PHYS 321. Permission of the instructor for students lacking the prerequisite. Crystal symmetry, types of crystalline solids; electron and mechanical waves in crystals, electrical and magnetic properties of solids, semiconductors; low temperature phenomena; imperfections in nearly perfect crystals. 594 Nonlinear Dynamics (MATH 594) (3). Prerequisite, MATH 383. Permission of the instructor for students lacking the prerequisite. Interdisciplinary introduction to nonlinear dynamics and chaos. Fixed points, bifurcations, strange attractors, with applications to physics, biology, chemistry, finance. 631 Mathematical Methods of Theoretical Physics I (3). Prerequisites, MATH 383 and PHYS 128. Vector fields, curvilinear coordinates, functions of complex variables, linear differential equations of second order, Fourier series, integral transforms, delta sequence. 632 Mathematical Methods of Theoretical Physics II (3). Prerequisite, PHYS 631. Permission of the instructor for students lacking the prerequisite. Partial differential equations, special functions, Green functions, variational methods, traveling waves, and scattering. 633 Scientific Programming (3). Prerequisite, MATH 528 or 529, or PHYS 631 or 632. Required preparation, elementary Fortran, C, or Pascal programming. Structured programming in Fortran or Pascal; use of secondary storage and program packages; numerical methods for advanced problems, error propagation and computational efficiency; symbolic mathematics by computer. 660 Fluid Dynamics (ENVR 452, GEOL 560, MASC 560) (3). See MASC 560 for description. 671L Independent Laboratory I (3). Prerequisites, PHYS 301 and 312. Permission of the instructor for students lacking the prerequisites. 
Six laboratory hours a week. 672L Independent Laboratory II (3). Prerequisites, PHYS 301 and 312. Permission of the instructor for students lacking the prerequisites. Six laboratory hours a week. Courses for Graduate Students 701 Stellar Interiors, Evolution, and Populations (3). Stellar structure and evolution including: equations of stellar structure, stellar models, star and planet formation, fusion and nucleosynthesis, stellar evolution, stellar remnants, and the comparison of theory to observations. 702 High Energy Astrophysics (3). Prerequisites, PHYS 711 and 721. White dwarfs and neutron stars: physical properties and observational manifestations. Extragalactic radio sources, relativistic jets and supermassive black holes. Particle acceleration and radiative processes in hot plasmas. Accretion phenomena. X-ray and gamma-ray astrophysics. 703 Structure and Evolution of Galaxies (3). Internal dynamics and structure of galaxies; physics of star formation, active galactic nuclei, and galaxy interactions; large-scale clustering and environment-dependent physical processes; evolution of the galaxy population over cosmic time. 704 Cosmology (3). Corequisite, PHYS 701. General relativity and cosmological world models; thermal history of the early universe, nucleosynthesis, and the cosmic microwave background; growth of structure through cosmic time. 705 Astrophysical Atmospheres (3). Prerequisites PHYS 711 and 721. Radiative transfer, opacities, spectral line formation, energy transport, models, chemical abundance determination, interstellar chemistry, magnetic fields. Applications to observations of planetary, stellar and solar, galactic (ISM) and intergalactic gaseous atmospheres. 719 Astronomical Data (4). Required preparation, physics-based cosmology course or permission of the instructor. 
A course designed to familiarize the student with observational techniques in optical and radio astronomy, including application of photography, spectroscopy, photometry, and radio methods. Three lecture and three laboratory hours a week. 891 Seminar in Astrophysics (1–21). Recent observational and theoretical developments in stellar, galactic, and extragalactic astrophysics. Courses for Graduate Students *The PHYS 821 and PHYS 896 sequence alternates with PHYS 822 and 823. 701 Classical Dynamics (3). Required preparation, advanced undergraduate mechanics. Variational principles, Lagrangian and Hamiltonian mechanics. Symmetries and conservation laws. Two-body problems, perturbations, and small oscillations, rigid-body motion. Relation of classical to quantum mechanics. 711 Electromagnetic Theory I (3). Prerequisites, PHYS 631 and 632. Electrostatics, magnetostatics, time-varying fields, Maxwell's equations. 712 Electromagnetic Theory II (3). Prerequisite, PHYS 711. Plane electromagnetic waves and wave propagation, wave guides and resonant cavities, simple radiating systems, scattering and diffraction, special theory of relativity, radiation by moving charges. 715 Visualization in Science (COMP 715, MTSC 715) (3). See COMP 715 for description. 721 Quantum Mechanics (3). Prerequisite, PHYS 321. Review of nonrelativistic quantum mechanics. Spin, angular momentum, perturbation theory, scattering, identical particles, Hartree-Fock method, Dirac equation, radiation theory. 722 Quantum Mechanics (3). Prerequisite, PHYS 321. Review of nonrelativistic quantum mechanics. Spin, angular momentum, perturbation theory, scattering, identical particles, Hartree-Fock method, Dirac equation, radiation theory. 741 Statistical Mechanics (3). Prerequisites, PHYS 701 and 721. Classical and quantal statistical mechanics, ensembles, partition functions, ideal Fermi and Bose gases. 771L Advanced Spectroscopic Techniques (3). Prerequisite, PHYS 301 or 312. 
Permission of the instructor for students lacking the prerequisite. Advanced spectroscopic techniques, including Rutherford backscattering-channeling, perturbed angular correlation, Raman scattering, electron paramagnetic resonance, nuclear magnetic resonance, optical absorption, and Hall effect. Two hours of lecture and three hours of laboratory a week. 772L Advanced Spectroscopic Techniques (3). Prerequisite, PHYS 301 or 312. Permission of the instructor for students lacking the prerequisite. Advanced spectroscopic techniques, including Rutherford backscattering-channeling, perturbed angular correlation, Raman scattering, electron paramagnetic resonance, nuclear magnetic resonance, optical absorption and Hall effect. One hour of lecture and five hours of laboratory a week. *821 Advanced Quantum Mechanics (3). Prerequisite, PHYS 722. Advanced angular momentum, atomic and molecular theory, many-body theory, quantum field theory. *822 Field Theory (3). Prerequisite, PHYS 722. Quantum field theory, path integrals, gauge invariance, renormalization group, Higgs mechanism, electroweak theory, quantum chromodynamics, Standard Model, unified field theories. *823 Field Theory (3). Prerequisite, PHYS 722. Quantum field theory, path integrals, gauge invariance, renormalization group, Higgs mechanism, electroweak theory, quantum chromodynamics, Standard Model, unified field theories. 824 Group Theory and its Applications (3). Required preparation, knowledge of matrices, mechanics and quantum mechanics. Discrete and continuous groups. Representation theory. Application to atomic, molecular, solid state, nuclear and particle physics. 827 Principles of Chemical Physics (CHEM 788) (3). Prerequisite, CHEM 781 or PHYS 321. Permission of the instructor for students lacking the prerequisite. The quantum mechanics of molecules and their aggregates. Atomic orbitals, Hartree-Fock methods for atoms and molecules. Special topics of interest to the instructor and research students. 
829 Principles of Magnetic Resonance (3). Prerequisite, CHEM 781 or PHYS 721. Permission of the instructor for students lacking the prerequisite. 831 Differential Geometry in Modern Physics (3). Prerequisites, PHYS 701, 711, and 712. Applications to electrodynamics, general relativity and nonabelian gauge theories of methods of differential geometry, including tensors, spinors, differential forms, connections and curvature, covariant exterior derivatives, and Lie derivatives. 832 General Theory of Relativity (3). Prerequisite, PHYS 831. Permission of the instructor for students lacking the prerequisite. Differential geometry of space-time. Tensor fields and forms. Curvature, geodesics. Einstein's gravitational field equations. Tests of Einstein's theory. Applications to astrophysics and cosmology. 861 Nuclear Physics (3). Prerequisites, PHYS 543 and 721. Nuclear reactions, scattering, Nuclear structure, Nuclear astrophysics. 862 Nuclear Physics (3). Prerequisites, PHYS 543 and 721. Overview of Standard Model of particle physics. Fundamental symmetries and weak interactions. Neutrino physics. Particle-astrophysics and 871 Solid State Physics (MTSC 871) (3). Prerequisite, PHYS 321. Topics considered include those of PHYS 573, but at a more advanced level, and in addition a detailed discussion of the interaction of waves (electromagnetic, elastic and electron waves) with periodic structures, e.g., X-ray diffraction, phonons, band theory of metals and semiconductors. 872 Solid State Physics (MTSC 872) (3). Prerequisite, PHYS 321. Topics considered include those of PHYS 573, but at a more advanced level, and in addition a detailed discussion of the interaction of waves (electromagnetic, elastic, and electron waves) with periodic structures, e.g., X-ray diffraction, phonons, band theory of metals and semiconductors. 873 Theory of the Solid State (3). Prerequisite, PHYS 722. Calculation of one-electron energy band structure. Electron-hole correlation effect and excitons. 
Theory of spin waves. Many-body techniques in solid state problems including theory of superconductivity. 883 Current Advances in Physics (3). Permission of the instructor. In recent years, elementary particle physics, amorphous solids, neutrinos, and electron microscopy have been among the topics 893 Seminar in Solid State Physics (1–21). Research topics in condensed-matter physics, with emphasis on current experimental and theoretical studies. 895 Seminar in Nuclear Physics (1–21). Current research topics in low-energy nuclear physics, especially as related to the interests of the Triangle Universities Nuclear Laboratory. *896 Seminar in Particle Physics (1–21). Symmetries, gauge theories, asymptotic freedom, unified theories of weak and electromagnetic interactions, and recent developments in field theory. 897 Seminar in Theoretical Physics (1–21). Topics from current theoretical research including, but not restricted to, field theory, particle physics, gravitation, and relativity. 899 Seminar in Professional Practice (1–21). Required preparation, Ph.D. written exam passed. The role and responsibilities of a physicist in the industrial or corporate environment and as a 901 Research (1–21). 10 or more laboratory or computation hours a week. 992 Master's Research Project (3–6). 993 Master's Thesis (3–6). 994 Doctoral Dissertation (3–9).
Medical math question (OpenStudy): The recommended dose for Theo-Dur is 6 mg/kg/day. It is available as 200 mg tablets. How many tablets should a 156 lb person take per day?
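A worked sketch of the dosage question above. This assumes the common clinical conversion 1 kg = 2.2 lb (not stated in the question itself), and the function name is ours, for illustration only; in practice the exact answer would be rounded to a practical tablet count.

```python
# Illustrative dosage arithmetic (assumption: 1 kg = 2.2 lb).

def tablets_per_day(weight_lb, mg_per_kg_per_day, tablet_mg):
    """Return the exact (unrounded) number of tablets per day."""
    weight_kg = weight_lb / 2.2                 # convert pounds to kilograms
    daily_dose_mg = weight_kg * mg_per_kg_per_day
    return daily_dose_mg / tablet_mg

exact = tablets_per_day(156, 6, 200)            # about 2.13 tablets/day
print(round(exact, 2))
```

So a 156 lb (about 70.9 kg) person needs roughly 425 mg/day, which works out to about two 200 mg tablets per day.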
OpenFOAM® v2.1.0: Numerical Methods
19th December 2011

Multiphase MULES
The multidimensional universal limiter with explicit solution (MULES) now supports multiple phases/fields, while maintaining boundedness of individual phases and their sum using the new limitSum limiter.

Source code:
multiphaseInterFoam solver - $FOAM_SOLVERS/multiphase/multiphaseInterFoam
multiphaseEulerFoam solver - $FOAM_SOLVERS/multiphase/multiphaseEulerFoam
MULES - $FOAM_SRC/finiteVolume/fvMatrices/solvers/MULES

Multivariate independent interpolation scheme
Allows independent limited schemes to be applied to each field in the 'multivariate' set, i.e. reverts the solver to the equivalent without the 'multivariate' treatment.

Source code:
finiteVolume library - $FOAM_SRC/finiteVolume

Linear-upwind stabilised transport
Linear-upwind stabilised transport (LUST) is a new interpolation scheme in which linear-upwind interpolation is blended with linear interpolation to stabilise solutions while maintaining second-order behaviour. The scheme is proving particularly successful for LES/DES in complex geometries with complex unstructured meshes, e.g. external aerodynamics of vehicles.

Source code:
LUST class
Example: LES of external aerodynamics of a motor bike

Field sources
Improvements have been made to the mechanism by which sources can be added to fields in equations. Sources can now generally be applied using a constant/sourcesProperties dictionary with entries like the following:

    type            scalarExplicitSource;
    active          true;
    timeStart       0.2;
    duration        2.0;
    selectionMode   points;
    points
    (
        (2.75 0.5 0)
    );
    volumeMode      absolute;
    rho             1e-4; // kg/s
    H2O             1e-4; // kg/s

The example shows an explicit volumetric source for a scalar equation, given by a ...ExplicitSource entry. Similarly, a constraint can also be applied that sets values in given cells, given by a ...ExplicitSetValue entry. Specialised sources are also available, e.g. actuationDiskSource for wind turbine siting calculations.
The new functionality is incorporated into solvers through an IObasicSourceList object, named sources, that appears in the solution of the momentum equation. The example below shows the implementation in simpleFoam, where volumetric sources are included through sources(U) in the UEqn. The constrain() function then applies constraints prior to solving the matrix equation.

    tmp<fvVectorMatrix> UEqn
    (
        fvm::div(phi, U)
      + turbulence->divDevReff(U)
      ==
        sources(U)
    );

    sources.constrain(UEqn());

    solve(UEqn() == -fvc::grad(p));

The source handling is implemented in the following solvers:
• simpleFoam;
• MRFSimpleFoam;
• SRFSimpleFoam;
• pimpleFoam;
• pimpleDyMFoam;
• SRFPimpleFoam;
• potentialFreeSurfaceFoam;
• LTSReactingParcelFoam;
• coalChemistryFoam;
• porousExplicitSourceReactingParcelFoam.

The windSimpleFoam solver is now deprecated, since its behaviour is replicated by simpleFoam with the actuationDiskSource. For changes, see the updated turbineSiting example.

Source code:
fieldSources classes - $FOAM_SRC/finiteVolume/cfdTools/general/fieldSources
Examples: turbine siting, LTSReactingParcelFoam, coalChemistryFoam, porousExplicitSourceReactingParcelFoam

Further developments to the numerics in OpenFOAM include:
• a new orthogonalSnGrad scheme: an snGrad scheme in which the mesh is treated as if it were orthogonal;
• support for dynamic meshes added to ddtPhiCorr, an important mechanism to avoid certain types of pressure-velocity decoupling for transient running; see the interDyMFoam dynamic mesh refinement/unrefinement example.
Mathematics Education for Elementary Teachers - Bachelor of Arts

Students will possess:
• Technical skill in completing mathematical processes;
• Breadth and depth of knowledge of mathematics;
• An understanding of the relationship of mathematics to other disciplines;
• An ability to communicate mathematics effectively;
• A capability of understanding and interpreting written materials in mathematics;
• An ability to use technology to do mathematics.

The graduating Mathematics Education major will possess each of the following:
• Technical skill in completing mathematical processes.
• Breadth and depth of knowledge of mathematics.
• An understanding of the relationship of mathematics to other disciplines and mathematics education.
• A recognition and understanding of a learning climate in which elementary school students can develop their mathematical knowledge as well as their abilities to communicate mathematically.
• A capability of understanding and interpreting written materials in mathematics and mathematics education.
• An ability to use grade-appropriate technology.
• A rich understanding of the design and implementation of instruction that promotes students' development of mathematical knowledge appropriate for the elementary school.
• An extensive understanding of the nature and purpose of the K-8 mathematics curriculum in the United States.
Alan Turing in the twenty-first century: normal numbers, randomness, and finite automata
Seminar Room 1, Newton Institute
We discuss ways in which Turing's then-unpublished ``A Note on Normal Numbers'' foreshadows and can still guide research in our time. This research was supported in part by NSF Grant 0652569. Part of the work was done while the author was on sabbatical at Caltech and the Isaac Newton Institute for Mathematical Sciences at the University of Cambridge.
Directory tex-archive/info/Free_Math_Font_Survey/vn

Documentation: Free_Math_Font_Survey for Vietnamese
Author: Stephen Hartke, lastname at gmail dot com
Translator: Thai Phu Khanh Hoa, h2vnteam at gmail.com

This is a Vietnamese translation of the Free Math Font Survey by Stephen Hartke. From the original README:

| This document is a survey of the free math fonts that are available
| for use with TeX and LaTeX. Examples are provided for each font,
| links to where to obtain the font, and the commands for loading the
| associated LaTeX package.
| "Free" here means fonts that are free to use (both commercially and
| non-commercially) and free to distribute, but not necessarily free to
| modify. I also am biased towards listing fonts that have outline
| versions in PostScript Type 1 format suitable for embedding in
| Postscript PS or Adobe Acrobat PDF files.
| The survey is available both as a PDF file and as a webpage.

The Vietnamese version of the survey is available as PDF and LaTeX source only -- no web version is made. The main purpose of this translation is to show that (almost) all the font packages mentioned in the survey now support Vietnamese too. In order to compile the survey you need the latest version of vntex (please refer to vntex.org for further details). In case you have any questions, please send them to hanthethanh at gmail dot com

Download the complete contents of this directory in one zip archive (1.9M).

free-math-font-survey-vn – The survey of available free Mathematics fonts, translated to Vietnamese
This is a translation to Vietnamese of Stephen Hartke's survey of free Mathematics fonts.
Documentation: Readme
License: The LaTeX Project Public License
Maintainer: Thái Phú Khánh Hòa
Topics: support for typesetting of Vietnamese; survey of TeX-related facilities
Finance Programs and Objectives
All periods are treated as if they are of equal length, even when they are not. However, the only compound-interest mortgages are those involving negative amortization, in which the payment does not cover the interest. The A, B, and C variables are defined first, to make the later equations simpler to read. You can use the above calculator to confirm this, but the compound interest formula results in periodic interest of $83.33 (for our $10,000, 10.0% example) for both periods. Our payment schedule calculator offers the user a lot of flexibility that we have not found when using other online amortization schedules. Let's recalculate Section 7.3.1, "Example". The formula for the periodic interest rate is very straightforward. This calculator has been designed to calculate both simple and compound interest components, and the two are separated by respective radio buttons. Also, it might be different for these two terms for any compounding frequency shorter than monthly, due to the stub period mentioned above. With daily accrual, the annual rate is divided by 365, and that number is multiplied by the loan balance at the end of the preceding day to get the interest due for the day. Now, click on the Calculate button next to the Periodic Payment area. This means that the monies are lent on one day and the first payment isn't due until one period after the funds are received. Determining loan amortization schedules, periodic payment amounts, total payment value, or interest rates can be somewhat complex.
If you read our explanation of simple interest, you learned that $10,000 invested (or borrowed) at 10.0% for a year will accrue just $1,000 interest. However, there still may be calculations that you want to do that this calculator does not support. The Simple Interest Loan Amortization Calculator is an online personal finance assessment tool which allows a loan borrower to find out the best loan in the finance market. There had been a bug with the "Canadian Amortization Method" that caused the first payment to be calculated differently than subsequent payments. Our extra payment calculator produces a printable payment schedule showing extra payments applied to principal. The "periodic interest rate" is the interest rate for a period; the period is the time between two scheduled payments. The amortization software can generate fixed, variable, or interest-only amortization schedules. In that case, you can set the "Amortization Method" to accommodate those types of loans. However, because our amortization calculator allows the user to set the "loan date" and "first payment date" separately, you can easily calculate the odd days interest. If it were a simple-interest account, the bank would also pay $250 in the second period. I am perplexed at what appears to be a nomenclature problem that the mortgage industry has created with its definition of "simple-interest mortgage." Aren't most monthly payment mortgages simple interest? You should see 49 in the Payment Periods field. The case where PMT != 0 is fairly complex and will not be presented here. This calculator allows irregular length first periods.
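The simple-versus-compound distinction above can be checked directly. This is an illustrative sketch, not the calculator's actual code: one year of simple interest versus monthly compounding on the same $10,000 principal at a 10.0% nominal annual rate.

```python
# Compare simple interest with monthly compound interest for one year.

principal = 10_000.0
annual_rate = 0.10

# Simple interest: principal * rate * time (no interest on interest)
simple = principal * annual_rate * 1.0                       # $1,000.00

# Monthly compounding: each month's interest joins the balance
compound = principal * (1 + annual_rate / 12) ** 12 - principal

print(f"simple:   ${simple:,.2f}")      # $1,000.00
print(f"compound: ${compound:,.2f}")    # $1,047.13
```

The roughly $47 difference is exactly the "interest on interest" that simple-interest calculations prohibit.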
Because a payment amount is not exactly $3,179.54 (but we can't have a payment that includes a fractional cent), the loan amount will not calculate to be exactly $365,000 either. You can calculate a loan's "nominal annual interest rate", which is the rate usually quoted by the lender. The "Amortization Method" should be set to "Normal" (level payments) unless you have a specific reason to set it to another method. Conversely, if the time between the "loan date" and "first payment date" is less than the payment period set, then the first period is said to be a "short initial period". If you need to know the interest for 31 days, then enter 31 for the number of days and don't worry about the dates. If the "Loan Date" is May 15th and the "Payment Frequency" is "Monthly", then the "1st Payment Date" should be set to June 15th, that is, IF you want conventional interest calculations. Calculations based on compound interest require that interest be calculated on the interest earned in prior periods. In this case, the amount of interest will be different for February and March. Prohibitions against charging interest on interest have been enacted at various times in some states. This calculator will solve for any one of five possible unknowns: Payment Periods, Interest Rate, Present Value, Periodic Payment, or Future Value, given that the other four have been defined. Normally you would set the "Payment Method" to "Arrears" for a loan. This contrasts with simple interest calculations, which prohibit interest from being calculated on the interest earned during prior periods. The smart borrower with a daily accrual mortgage consistently pays early. It is the rate institutions must quote in the US for interest-bearing accounts.
This kind ought to be called a daily accrual mortgage, to clearly distinguish it from the standard mortgage. But it isn't; it is called a simple-interest mortgage. Borrowers who don't understand these distinctions may not be able to manage their mortgages properly. And if you are a borrower, then at any given interest rate, compound interest loans are to your disadvantage.

To convert from ieff to i, the following expressions are used. First, regular loan payments may be set to any frequency, not just monthly. What is your monthly payment on a $100,000 30-year loan at a fixed rate of 4% compounded monthly? Suppose a bank pays an annual rate of 3 percent, or 0.25 percent a month.

In some special cases loans will have only the interest paid as the regular payment, or no interest at all. Amortization is the process of paying back a loan with a series of payments. Mortgages can be either simple interest or compound interest.

For discrete compounding, select the compounding frequency from the popup menu, which ranges from yearly to daily. This calculator supports 360, 364 and 365 day years; the year length affects the interest calculation for stub periods as well as for daily compounding. If you enter a negative number of days, the start date will be updated. The interest accrual period is the period over which interest is credited. The "First Payment Date" is the date the first payment is due. Before doing any calculation, select an appropriate radio button. To access the calculator, go to Tools > Loan Repayment Calculator. By using this daily interest calculator you can choose the best loan provider in the lending market. Please let us know of any problems.
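The i-to-ieff conversion referred to above uses the standard relations for discrete compounding: ieff = (1 + i/m)^m − 1, and its inverse. A sketch (my notation; m is the number of compounding periods per year):

```python
def nominal_to_effective(i, m):
    """Effective annual rate for nominal rate i compounded m times a year."""
    return (1 + i / m) ** m - 1

def effective_to_nominal(ieff, m):
    """Inverse conversion, from effective annual rate back to nominal."""
    return m * ((1 + ieff) ** (1 / m) - 1)

# The 3%-a-year, 0.25%-a-month example from the text:
ieff = nominal_to_effective(0.03, 12)
print(round(ieff, 6))                            # 0.030416
print(round(effective_to_nominal(ieff, 12), 6))  # 0.03
```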
The simple case, when PMT == 0, gives the solution directly. From this equation, functions which solve for the individual variables can be derived; the solution for interest is broken into two cases. Thus, we need functions which convert from i to ieff, and from ieff to i; this rate, ieff, is then used to compute the selected variable. For a detailed explanation of the derivation of this equation, see the comments in the file src/calculation/fin.c from the GnuCash source code. For the purpose of this somewhat simplified discussion, a unit period is a consistent unit of time.

Points are common for mortgages in the US only; this calculator can calculate and show the dollar value of the points being charged. Interest and future value are calculated (FV is the initial amount plus the interest). Annual percentage yield is used for comparing investments. The amortization schedule shows the interest saved and the early payoff point.

Mortgages can also accrue interest monthly or daily. If the mortgage is not simple interest, then it is a monthly accrual mortgage on which the grace period is an interest-free period. One month from March 15 makes the ending date April 15th, a period with 31 days. The 0.22 difference is the rounding error; this will produce interest charges that do not match other calculators. This scenario is shown in the example image above.

Some people object to lenders charging interest on interest. But if you are an investor, then all other things being equal, earning compound interest is an advantage. The distinction can be easily understood in connection with savings accounts.

Or, if you prefer a Windows program with all the benefits listed above, then download and try a fully functioning, 21-day evaluation copy of SolveIT! or C-Value! Many of the below features are supported by our online TVM calculator.
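As the text says, the PMT == 0 case is simple: with no payments the time-value-of-money equation collapses to FV = −PV·(1 + i)^n, so each variable has a closed form. A sketch (my illustration, not the fin.c code; sign convention: cash paid out is negative, as in most TVM tools):

```python
import math

def fv_no_payments(pv, i, n):
    """Future value when PMT == 0: FV = -PV * (1 + i)**n."""
    return -pv * (1 + i) ** n

def periods_no_payments(pv, fv, i):
    """Solve the same equation for the number of periods n."""
    return math.log(-fv / pv) / math.log(1 + i)

pv = -1000.0                        # deposit $1,000 today
fv = fv_no_payments(pv, 0.05, 10)   # 10 periods at 5% per period
print(round(fv, 2))                                  # 1628.89
print(round(periods_no_payments(pv, fv, 0.05), 6))   # 10.0
```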
Even a small change in the interest rate produces a larger change in the total interest. The period of time, the principal, the interest rate and the interest type are the key components for comparing different loan options. Thus, compounding for one year results in nearly a 0.5% increase in interest. This mortgage has no special name because most mortgages are of this type. Please see the src/calculation/fin.c source file for a detailed explanation. One accrues interest monthly, and it is simple interest except when it allows negative amortization. Their documentation goes into more detail about this.
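The "nearly a 0.5% increase" claim checks out at a 10% nominal rate (my worked example, not a figure from the text):

```python
rate = 0.10

simple = rate                          # one year of simple interest
daily = (1 + rate / 365) ** 365 - 1    # one year of daily compounding

print(f"simple {simple:.3%}, daily {daily:.3%}, difference {daily - simple:.3%}")
```

Daily compounding at 10% yields about 10.516%, roughly half a percentage point more than simple interest over the same year.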
Fun with electrostatics

I'm teaching our course in Electricity and Magnetism this semester, and even though I've done this plenty of times before, I still learn new things each time. Here are a couple from this semester.

1. Charged conducting disk.

Suppose I take a thin disk of radius R, made out of a conducting material, and put a given amount of charge Q on it. How does the charge distribute itself? (In case it's not clear, the disk lives in three-dimensional space but has negligible thickness. In other words, it's a cylinder of radius R and height h, in the limit h << R.)

This turns out to have a surprisingly simple answer. Take a sphere of radius R, and distribute the charge uniformly over the surface. Now smash the sphere down to a disk by moving each element of surface area straight down parallel to the z axis. The resulting charge density is the answer.

My friend and colleague Ovidiu Lipan showed me a proof of this, and then I verified it numerically using Mathematica, so I'm confident it's right. But I still have the feeling there's more to the story than this. This result is simple enough that it seems like there should be a satisfying, intuitive reason why it's true. Although Ovidiu's proof is quite clever and elegant, it doesn't give me the feeling that I understand why the result came out in this neat way. I'd love to hear any ideas.

Update: These notes by Kirk McDonald have the answer I was looking for. I'll try to write a more detailed explanation at some point.

2. Electric field lines near a conducting sphere.

Take a conducting sphere and place it in a uniform external electric field. Find the resulting potential and electric field everywhere outside the sphere. This is a classic problem in electrostatics. I've assigned it plenty of times before, but I learned a little something new about it, once again from the exceedingly clever Ovidiu Lipan (who apparently got it from an old book by Sommerfeld).
You can calculate the answer using standard techniques (separation of variables in spherical coordinates). The external field induces negative charge on the bottom of the sphere and positive charge on the top, distorting the field lines until they look like this:

This picture looks just as you'd expect. In particular, one of the first things you learn about electrostatics is that field lines must strike the surface of a conductor perpendicular to the surface. Let's zoom in on the region near the sphere's equator:

The field lines either strike the southern hemisphere, emanate from the northern hemisphere, or miss the sphere entirely. All is as it should be. Or is it? Let me put in a couple of additional field lines:

The curves in red are legitimate electric field lines (i.e., the electric field at each point is exactly tangent to the curve), but they don't hit the surface at a right angle as they're supposed to. You can actually write down an exact equation for these lines and verify that they come in at 45-degree angles right up to the edge of the sphere.

We constantly repeat to our students that electric field lines have to hit conductors at right angles. So is this a lie? Ultimately, it's a matter of semantics. You can say if you want that those red field lines don't actually make it to the surface: right at the sphere's equator, the electric field drops to zero, so you could legitimately say that the field line extends all the way along an open interval leading up to the sphere's edge, but doesn't include that end point. This means you have to allow an exception to another familiar rule: electric field lines always start at positive charges and end at negative charges (unless they go to infinity). Here we have field lines that just peter out at a place where the charge density is zero.
Alternatively, you can say that there's an exception to the rule that says electric field lines have to hit conductors at right angles: they have to do this only if the field is nonzero at the point of contact. After all, the "must-hit-perpendicular" rule is really a rephrasing of the rule that says that the tangential component of the electric field must be zero at a conductor. The latter version is still true, but it implies the former version only if the perpendicular component is nonzero.
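The 45-degree behavior is easy to check numerically from the standard separation-of-variables solution, V = −E₀(r − R³/r²)cosθ (the textbook result for a conducting sphere in a uniform field, which is what the post is describing). Walking toward the equatorial null point along the separatrix direction, the field's angle with the radial direction tends to 45 degrees:

```python
import math

E0, R = 1.0, 1.0

def field(r, theta):
    """E = -grad V for V = -E0 * (r - R**3 / r**2) * cos(theta)."""
    Er = E0 * (1 + 2 * R**3 / r**3) * math.cos(theta)
    Et = -E0 * (1 - R**3 / r**3) * math.sin(theta)
    return Er, Et

# Approach the equatorial stagnation point along r = R + d, theta = pi/2 - d/R.
for d in (1e-2, 1e-3, 1e-4):
    Er, Et = field(R + d, math.pi / 2 - d / R)
    angle = math.degrees(math.atan2(-Et, Er))
    print(f"d = {d:.0e}: angle with radial direction = {angle:.3f} degrees")
```

Both components vanish at the equator itself, which is exactly why the red lines can "peter out" there without contradicting the boundary condition.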
Exp Function Word Problem
May 19th 2009, 02:25 PM #1
Senior Member, joined Nov 2008

One day, a controversial video is posted on the Internet that seemingly gives concrete evidence of life on other planets. Suppose that 50 people see the video the first day after it is posted and that this number doubles every day after that.

a) Write an expression to describe the number of people who have seen the video t days after it is posted.

b) One week later, a second video is posted that reveals the first as a hoax. Suppose that 20 people see this video the first day after it is posted and that this number triples every day after that. Write an expression to describe the number of people who have seen the 2nd video t days after it is posted.

c) Set the two expressions from parts a and b equal to each other and then solve for t. What does this solution mean?

Ok can you just check my answers for parts a and b; for part c i dont know what this means.

a) A(t)=50(2)^t
b) A(t)=20(3)^t
c) I set them equal to each other and the variable disappeared on me, i.e. cancelled out. What does this mean?

Quoted in the reply:

One day, a controversial video is posted on the Internet that seemingly gives concrete evidence of life on other planets. Suppose that 50 people see the video the first day after it is posted and that this number doubles every day after that. a) write an expression to describe the number of people who have seen the video t days after it is posted. b) one week later, a second video is posted that reveals the first as a hoax. Suppose that 20 people see this video the first day after it is posted and that this number triples every day after that. Write an expression to describe the number of people who have seen the 2nd video t days after it is posted. c) set the two expressions from parts a and b to equal each other and then solve for t. What does this solution mean? Ok can you just check my answers for parts a and b, except for part c i dont know what this means.
a) A(t)=50(2)^t
b) A(t)=20(3)^t
c) I set them equal to each other and the variable disappeared on me, i.e. cancelled out. What does this mean?

May 19th 2009, 03:36 PM #2

I've noticed you've posted quite a few questions on exponential functions - if you're struggling it would be better to ask your tutor if you don't understand. A and B are fine. C will require logs to find t:

$50 \times 2^t = 20 \times 3^t$

Divide through by 20:

$2.5 \times 2^t = 3^t$

Take logs:

$ln(2.5 \times 2^t) = ln(3^t) = t \times ln(3)$

Use the law $ln(a) + ln(b) = ln(ab)$:

$ln(2.5) + t \times ln(2) = t \times ln(3)$

Can you solve it from here?
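Carrying the reply's last step through (my completion, not the poster's): collecting the t terms gives t = ln(2.5) / (ln 3 − ln 2):

```python
import math

# ln(2.5) + t*ln(2) = t*ln(3)  =>  t = ln(2.5) / (ln(3) - ln(2))
t = math.log(2.5) / (math.log(3) - math.log(2))
print(t)   # about 2.26 days

# Sanity check: both videos have the same audience at that t.
print(50 * 2**t, 20 * 3**t)
```

So the two audiences are equal a bit more than two days after each video's respective posting, which is what setting the expressions equal actually measures.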
hello! so, I've come across a problem I can't solve at all :c. The problem is as it follows:

The constant term of is equal -270, then a=?

I searched how to solve it and found only one solution, but I didn't understand what was done... they turned into and somehow found that k = 2. also, they found "", which gives us a = -3. That's actually the right answer, but I can't figure out what was done to solve this problem... :c Can you guys explain it to me? Thanks c:
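The expressions in the post are missing here, but the quoted steps (finding k = 2, then a = −3) are consistent with a binomial such as (a·x² + 1/x³)⁵, whose general term C(5,k)·a^(5−k)·x^(10−5k) is constant at k = 2, giving 10a³ = −270. Treating that purely as a hypothetical stand-in for the lost expression, sympy reproduces the answer:

```python
from sympy import symbols, expand, solve

x, a = symbols('x a')

# Hypothetical stand-in for the missing expression: (a*x**2 + 1/x**3)**5.
expr = expand((a * x**2 + x**-3) ** 5)
const_term = expr.coeff(x, 0)    # the x**0 term: 10*a**3
print(const_term)

# Set the constant term equal to -270; the real root is a = -3.
print(solve(const_term + 270, a))
```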
Week 10 Number sense and scale sense (March 19-25)

1. My first task about measurement I adapted to food. This "dinner time" activity would involve students working in groups or individually to prepare a meal on time. Given times for how long each food will take to cook and prepare, they will have to figure out at what time each food needs to start preparation in order to be served at the right time. I believe this task lends itself to other math activities as well, such as measuring and converting, depending on the age level and readiness. Students could also extend this activity to actually preparing foods or relating foods to other cultures, therefore connecting to other subject areas.

2. My second task, about scaling, I found on the internet. It involves creating a superhero and then scaling it to a poster board. I think students would have a fun time interacting with each other to create and design this superhero. Again this activity could be collaborative or individual, but would probably take a few classes to complete. As far as lesson planning goes, I believe it is important to connect lessons and subject areas, and that is what makes a lesson strong. Therefore I think this task relates to art, obviously, and could also relate to literature if the class is reading about heroes and what their idea of one is.

3. Growing up in the US I obviously prefer counting. Until I read this post I had no idea that there was such a controversy over counting and scaling, nor that different countries preferred different methods. I always thought that scaling was more of an art task, and it was actually an art project of mine. But through this class I am realizing more and more how related art and math are. In the end though I am a counting person and would probably focus on counting more.

1. There are lots of hands-on measurement tasks that would be good for kids! One that I can think of would be asking the kids how many steps they think it would take to get across the classroom.
Every student can have their own estimate of the fewest number of steps they could use to get across the classroom, and then they could experiment to see how close their estimations were. Then, based on their discoveries, they could try to estimate how many steps it would take to walk other distances. Or, they could estimate how many feet there are in a distance and then measure it, or how many of THEIR feet there are in a distance and then measure it by walking the distance heel to toe. There could be a prize for the person with the closest estimate or it could just be for fun. I don't think this activity would keep the students' attention for TOO long, but it's definitely a good introduction for a measurement lesson and it gets students' brains working!

2. I took a drafting class in middle school and loved it! Its primary focus was scaling, and everything we drew had to be drawn to the scale of our model. Students could be shown different shapes, told the dimensions of each shape, and then given the instruction to draw the same shape to scale. For example, if a student is given a scalene triangle, side A = 5 feet, side B = 7 feet and side C = 3 feet, and it is to be drawn with a scale of 1 foot = 2 inches, then side A would be drawn 10 inches long, side B = 14 inches and side C = 6 inches long.

3. I think both counting and scaling are very important; in fact they often go hand in hand. Take the assignment I described in #2, for example: to draw this triangle to scale, the sides of both the original triangle and the new, scaled triangle have to be measured. This would most likely be done with a ruler, in which case the inches and feet would have to be counted and added together. I think both concepts, counting and scaling, are important in mathematical development, and I'd say their importance is equal.

1. A hands-on activity that I thought of actually gets the students to use their whole bodies to do math. Start off by showing the kids an example.
Call two students to stand up at the front of the class, to the right. Call three more students and have them stand to the left. Make life-size addition, subtraction, and equals signs for other students to hold up. If you were doing addition, have a student hold the addition sign in between the groups of students at the front of the class, and the student holding the equals sign will stand at the far left. Have the students sitting down solve the problem as a group and send the answer (that number of students) up to the front of the room. You would do the same thing for subtraction problems. Let students take turns calling out two different numbers to solve. I like this task because it gets kids to be active within the classroom and work as a team. Everybody gets a chance to participate and solve the problem. It's a creative way of doing math problems.

2. A good hands-on measurement task is making various geometric shapes with Play-Doh. Have the students mold the Play-Doh into the shapes (they can be flat or 3D). Print out different geometric shapes or bring in actual shapes for the students to reference. You can also have pictures of a house, animals, or any other object you think would be fun for kids to make. This will allow them to learn how to make things symmetrical on both sides while making sure all sides are even. I like this activity because it's a great way to have students see what it means for something to be symmetrical while having fun. They're not just looking at an object and talking about it. They get to actually make it and explain the process of doing it. Play-Doh will always be popular amongst kids.

3. I believe both approaches are really good to teach to students. We shouldn't teach one more than the other; they should be taught evenly. Both come up in our everyday lives, so why not teach both in class?

1. One activity using measurement and counting would be baking cookies.
Children would split into groups of four and be given the ingredients needed to make the cookies. They will follow the recipe and measure the proper amount of flour, sugar, etc. in measuring cups and teaspoons. They will have to keep track of how many cups of flour they put in and count aloud with group members. It is a fun way for children to learn about measurement and to see how much math is used while cooking. It is something practical that they will have to learn for everyday use.

2. Students draw a picture with points at certain coordinates and then multiply all the coordinates by two to double the size of the picture. This would teach them scaling. They can then multiply either the x coordinate or the y coordinate only, which teaches them about stretching. This teaches the children how to change pictures mathematically.

3. I also prefer counting over scaling, but that is only because it is what I am used to. However, I think scaling is more applicable to real world situations. It is a faster way to figure things out and simplifies the process. Counting to me is basically scaling in small increments. If children are taught with an emphasis on scaling, they will be better equipped to handle problems that they face in real life.

I think walking the students around the outside of the building to measure and count things would be great for the first measuring task that does involve numbers. It would probably also be a good idea to do an initial walk-around, ahead of time, alone, just to figure out what would be easily accessible, and the right size for measuring. In preparation, you could pre-measure some of the things you will be asking the students to measure (but not in order to limit them). If the students have the urge to measure a plant or something that's not on the itinerary, they should feel completely free to do so. When you return to the classroom everyone could discuss measurements and compare the lengths of what they measured outside to objects found in the classroom.
Is the front step of the building shorter or longer than the teacher's desk, etc.? If it's longer, how much longer? Everyone could estimate the difference in length between the two, and then subtract in order to calculate the actual difference. Since this activity is engaging and kinesthetic, it will be sure to be memorable, and capture the students' interest. It is also what I would consider an open-ended activity which lends itself to many variations and many answers. WARNING: Do not use yardsticks for this activity because the male children will be tempted to engage in battle. Cloth measuring tapes would be preferable.

The second measuring activity I came up with, involving no numbers, is a sort of experiment with car design. Designing and actually building rubberband-powered cars would be a bit too involved for younger children, so I would pre-make several unique cars, or have them made in varying designs. I would bring them to class so that we could race them on a track. The students would then identify how different aspects of design affect speed and distance. There would be no need to actually measure anything in this activity. If the cars were all being raced on a track simultaneously, you would clearly see which car traveled the farthest distance, and which traveled the least, etc. In this activity you could also explore other variables, such as how stretching the rubberband more tightly will affect the distance and the speed of the cars. This activity is hands-on and lends itself to experimentation and discovery. Children learn so much more when they can make a discovery on their own, so this activity would be very relevant.

1. One measurement task could be clock-based math. Students could create a schedule for the classroom and consider how many hours, minutes, seconds, etc. there are between each of the daily activities. They could see that there are two hours between activity X and Y, that there are 5000 seconds provided for activity Q, etc.
This would help the students' knowledge of clocks and how to read them, while providing an opportunity to convert numbers. This would be a strong activity because the students would be connected to what they're learning, and could see the practical applications in the classroom. Plus, the kids could work together to form schedules, making it easy to utilize other areas of the curriculum.

2. Another hands-on activity that involves scaling could be having children select their favorite skyscraper or building and then make a scale model of it. We could decide as a class what a good scale would be (10 feet = 1 inch, etc.) and then use that as the standard for all of the projects. The kids could use a variety of materials to build their models and would learn how to scale things down. There would be a strong emphasis on ratios and figuring out how to convert into smaller units. This activity would be fascinating for the kids, and would be very number-focused.

3. I would agree with Sandy that I prefer using counting more. I think that both have their place in the math world, and that each has core elements that students must learn. Yet, I feel like counting is easier for me to comprehend, making it easier for me to teach. I like things in a very structured manner, so counting is more applicable in my life. All I have to do is look at my Google calendar, filled with classes and work hours and whatnot, to know that I base my life far more on counting!

1. Find or design a good hands-on measurement task that depends on counting, adding or subtracting units. Briefly explain qualities that make it a strong learning task.

An easy hands-on measurement task using counting would be to have the students count the number of tiles on the floor. We could first measure the size of each block, then measure the length of the room. I could then ask the students how many blocks are needed to go from wall to wall. Students could then count the blocks to verify their answer.
This is a strong learning task because it involves a number of steps. There is more than one measurement needed, and the students will need to work out the answer. They will then prove their answer kinesthetically. This exercise can be done individually, with a buddy, or as a team.

2. Find or design a good hands-on measurement task that depends on scaling, folding, splitting, stretching and other actions that are NOT about counting, adding or subtracting units. What operations correspond to your task, in the formal math language? Again, explain why you like the task.

I wasn't sure about this, but I found this exercise that I think works great to teach fractions without counting. Students will be able to see the fractions, which will lead to a better understanding of what a fraction is.

Introduction to fractions

Students are given various lengths of paper strips or pieces of paper streamers. Ask the students to fold their paper strips into halves and ask a question such as: "How do you know you have folded your strip into halves?" Ask students to compare their half strips with those of other students. Students are then shown other students' attempts to show one half of a rectangle (Figure 1). Ask questions such as:

* Which of these students have successfully shaded their rectangles to show one half? (Some students will not recognise that Mike's rectangle is showing one half, as they think the left hand side is one half and the right hand side is two halves.)
* Why is Jackson's half different to Mike's half?
* Why do you think Jen has shaded her rectangle how she has?

Comparison of half of a square

Students are handed two squares of paper and asked to fold each square in half. Once students have folded one square in half, ask them to fold the other in a different way. Questions to ask students include:

* Which half is larger: the triangle or the rectangle, or are they both the same?
* How do you know?
* Prove it. (Show me.)
Students enjoy proving that the triangular half is the same size as the rectangular half.

Folding paper strips

Students are given a paper strip that is 20 cm long and asked to fold it into two equal pieces. Discussion includes questions such as:

* How many parts are there?
* How many folds are there?
* What do we call each part?
* Show me one half of the paper strip. Show me a different half.
* How many halves are there in a whole?

Students are then asked to fold their halves of paper strip in half. Before opening their paper they are asked:

* How many parts will there be?
* How many folds will there be?
* What do we call each part?
* Show me one quarter of the paper strip. Show me a different quarter.
* Show me two quarters. What is another name for two quarters?
* Which is larger: one half or one quarter? How do you know?

Students are then asked to use their paper folding to show: three quarters, four quarters, one half and one whole. After folding their paper streamer in eighths, students will be asked questions that involve equivalence, showing fractions that are larger and smaller than given fractions, and questions such as: Show me a fraction that is larger than one eighth but smaller than one half. If using paper folding for the first time, then just fold halves, quarters and eighths. If students have used paper folding before, another paper strip will be folded into thirds, sixths and ninths, and similar questions asked as for the halves, quarters and eighths. Strips can then be folded into fifths and tenths. Students should be challenged to fold a paper strip into sevenths (Pern, 2011).
What is your take on the two approaches to the number sense? To me, counting seems more easily understood. Perhaps that is simply because it is what I am used to here in the US, but I have trouble with spatial relations, and being able to judge based on my own visual perception would be very difficult for me. I like the concreteness of numbers. For me, it is more exact, and it seems to offer me a more reliable answer.

Pern, C. A. (2011, Winter). Using paper folding, fraction walls, and number lines to develop understanding of fractions for students from years 5-8. Retrieved March 2012, from Resource library: http:
Braingle: 'Traffic Light' Brain Teaser

Traffic Light

Probability puzzles require you to weigh all the possibilities and pick the most likely outcome.

Puzzle ID: #44103
Category: Probability
Submitted By: ronhoward
Corrected By: javaguru

There is a traffic light at the top of a hill. Cars can't see the light until they are 200 feet from the light. The cycle of the traffic light is 30 seconds green, 5 seconds yellow and 20 seconds red. A car is traveling 45 miles per hour up the hill. What is the probability that the light will be yellow when the driver first crests the hill, and that, if the driver continues through the intersection at her present speed, she will run a red light?
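The site hides its answer, but here is one way to reason it out (my sketch, not Braingle's official solution): at 45 mph the car covers the 200 feet in about 3.03 seconds, so the joint event "yellow at the crest AND red on arrival" occupies the last ~3.03 seconds of the 5-second yellow, out of a 55-second cycle:

```python
speed_fps = 45 * 5280 / 3600        # 45 mph = 66 ft/s
travel_time = 200 / speed_fps       # ~3.03 s from crest to intersection

cycle = 30 + 5 + 20                 # full 55 s signal cycle
p_yellow = 5 / cycle                # light is yellow when the driver crests

# She runs the red only if the remaining yellow is shorter than the
# travel time, so the qualifying window is min(travel_time, 5) seconds.
p_yellow_then_red = min(travel_time, 5) / cycle

print(f"P(yellow at crest) = {p_yellow:.4f}")           # 0.0909
print(f"P(yellow then red) = {p_yellow_then_red:.4f}")  # 0.0551
```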
Precalc HW (NO IDEA HOW TO SOLVE)

Number of results: 52,014

Hello, good afternoon. I need help with my precalc hw. we are learning about finding zeros I dont know which method I should use for this problem: if i is a zero of x^3+3x^2+ix+(4+i) what do i do? synthetic divison? or long division? when i did long division i got a werid ...
Tuesday, October 14, 2008 at 4:06pm by Samya

okay i have the same hw too! i have managed to do most of it but i am stuck on the big idea of energy! i really need help. whyu do teachers give hw that people can't do!!! lmao xx
Friday, October 20, 2006 at 2:34pm by laura <3

Precalc, i really don't get my hw
WOAH. how did u know how to do that? like, waht is the rule? is gedunken a person?
Tuesday, October 14, 2008 at 4:16pm by Samya

the big idea of energy
omg!! lol i got da ear defenders hw now!! ive gt no idea ov wat 2 do!! HELP!!
Tuesday, December 5, 2006 at 2:18pm by Coco

Precalc, i really don't get my hw
how do i do this? factor x^4-x^2-20 over real and complex numbers I tried to do synthetic, but it doesnt work.=(
Tuesday, October 14, 2008 at 4:16pm by Samya

Precalc, i really don't get my hw
Gedunken. (x^2-5)(x^2+4) Now each of those can be factored; the first gives real, the second imaginary roots.
Tuesday, October 14, 2008 at 4:16pm by bobpursley

concentrated sulfuric acid has a density of 1.84 g/ml. calculate the mass in grams of 1.00 liter of this acid. how to solve this?
Thursday, September 22, 2011 at 5:42pm by hw help ASAP

Precalc HW (NO IDEA HOW TO SOLVE)
Low Earth orbiting satellites orbit between 200 and 500 miles above Earth. In order to keep the satellites at a constant distance from Earth, they must maintain a speed of 17,000 miles per hour. Assume Earth's radius is 3960 miles. a. Find the angular velocity needed to ...
Monday, November 19, 2007 at 7:19pm by Amie the big idea of energy omg i cant believe i have the iron and sulphur hw 2 and i am sooo stuck on the big idea of energy i dunno what 2 say arrrrgh Tuesday, December 5, 2006 at 2:18pm by confused Social Studies 7R - DBQ Essay Check Thank You! :) Finally! I'm done with all of my hw u kno i had hw for all of my classes 2nite (besides content support, gym and orchestra) i had hw for sci, math, eng, facs, and ss thanks for helping me w/ my hw have a wonderful night see u online 2morrow Monday, February 13, 2012 at 8:57pm by Laruen the big idea of energy i get da big idea of energy but nothin else 2 do wit ma hw! magnesium burning! Tuesday, December 5, 2006 at 2:18pm by Anon do you have an idea how i can find it Tuesday, December 15, 2009 at 7:36pm by john the big idea of energy KS3 what is the big idea of energy? To A. Einstein, it was that Energy and Mass are equivalent, and that the sum total of both, with a factor called entropy, is constant in the universe. what is the big idea of energy i need help on finding out what the big idea of energy is? ... Sunday, October 22, 2006 at 3:29pm by blue babe Science 8R - HW Help! Science 8R HW - Using the steps of the Scientific Method solve a problem that you may have (for example, 1. How can I get my little brother to stop eating my homework) I can't think of one.... please help! any ideas or suggestions??? Wednesday, September 19, 2012 at 8:48pm by Lauren the big idea of energy LOL omf this hw is so hard i hav no clue!! im doin a task were u hav 2 include da big idea of energy in digestion HAHA Tuesday, December 5, 2006 at 2:18pm by tash Precalc HW (NO IDEA HOW TO SOLVE) The key relationship is linear velocity = angular velocity x radius, a) since the satellite is 200 miles above the earth, the radius would be 4160 miles. lin. vel = ang vel. x radius 17000 = av(4160) 4.07 = av so the angular velocity would be 4.07 radians/hour (remember that ... 
Monday, November 19, 2007 at 7:19pm by Reiny the big idea of energy please help me to do ear defender hw Tuesday, December 5, 2006 at 2:18pm by rayan the big idea of energy i dnt gerrit and i need 2 do it on my hw 2-magnesium & oxegen Tuesday, December 5, 2006 at 2:18pm by Anonymous PRECALC HELP!! Actually, I believe they want a different answer for each child. I didn't account for that, so I have no idea. Monday, November 19, 2007 at 6:37pm by Michael the big idea of energy we all have the same hw and none of us know how to do it..aaaagggh Tuesday, December 5, 2006 at 2:18pm by anonymous i need help quick i have no clue how to do these problems and its hw! this is an example of the hw -w-5+5(5w+9) if w =1 Tuesday, October 23, 2007 at 11:02pm by Arely solve x/x^2-4 - 1/x+2=2 Wednesday, December 7, 2011 at 2:53pm by 95 the big idea of energy lol i have da ear defenders hw to i have no clue wat to do it is so hard!!!!!!!!!!!!!!!!!!! Tuesday, December 5, 2006 at 2:18pm by charlotte Plug in N and t and solve for k! ln280=(ln250)*k*(10) Then, plug in the k value and N=2 and solve for t! Monday, December 20, 2010 at 8:29pm by TutorCat PLEASE HELP! this is for tomorrow!!! English hw My hw says this: what is the question and answer flow for an exclamatory sentence? also... article adjective can be called? Tuesday, September 25, 2007 at 8:23pm by Mathilde Solve sinx+cosx=0 Thanks! Thursday, March 6, 2008 at 6:51pm by Andrea 1. Solve |2x + 5|< 9 Tuesday, November 3, 2009 at 1:20pm by kim i was looking for help ... not to solve my hw. -.- Thursday, October 7, 2010 at 2:09pm by natali Oops, I meant "1/R=1/2.3+1/x solve for R" Wednesday, December 1, 2010 at 9:15pm by Schnitzel How do you solve log5 sqrt125? 
Tuesday, April 26, 2011 at 5:55pm by Ashley hw help what is N*.40=20 solve for N Tuesday, September 13, 2011 at 8:57pm by bobpursley How do i solve for the variable 3x= sqrt5x+1 Wednesday, December 12, 2007 at 6:15pm by LT Solve for the variable m^2 + 2m +3 > = 0 Wednesday, December 12, 2007 at 6:15pm by LT how do u solve for this? sqrt(-10i+12) Tuesday, October 14, 2008 at 5:12pm by Samya Solve log subscript4 112 = x. Wednesday, September 1, 2010 at 9:16pm by Anonymous (2^(x))(5)=10^(x) steps too please how to solve Wednesday, February 2, 2011 at 10:06pm by Amy~ Solve the indicated variable: 1. Volume of a cone: solve for h: V=¨ir^2h/3 2. Temperature formula: solve for C: F=9/5C+32 pleaseeeee help me solve these! plzzz! i have nooo idea how to do these. THANKS! =))) Sunday, September 20, 2009 at 4:34pm by kitty kat solve the system of equations: x^(2)+y^(2) = 12 xy = -4 Thursday, April 7, 2011 at 12:53am by Haley solve 5^y=4^y+3.Round your answer to the nearest tenth. Saturday, January 14, 2012 at 6:26pm by 95 Given g(x) = print_coeff(m,1)x print_const(b), find g(print_coeff(n,1)a) I was told that the answer for this is a. However, I have no idea how to solve it. Please help me understand. I have no idea how to interpret what you have written. It is not standard algebraic notation. ... Friday, August 10, 2007 at 11:39pm by Doralis solve the equation on the interval [0,2pi) sin^2x=1 Friday, March 19, 2010 at 12:29am by Anonymous Math- Precalc could someone please help me solve this problem... e^x - 12e^(-x) - 1 Monday, November 15, 2010 at 7:16pm by Alex how to solve for x: 500e^(0.3x) = 600 Steps please. Wednesday, January 26, 2011 at 11:25pm by Amy hw help ASAP 8.00in^3 to ml please show me how to solve this Monday, September 19, 2011 at 12:40pm by bb222 Precalc (complete the squares) (2x+1)^2 +4=0 solve the equation. 
help please :( Friday, November 11, 2011 at 3:17pm by Cindy x^2 - 4 x - 4 = 8 x^2 - 4 x - 12 = 0 solve quadratic then other possible solution x^2 - 4 x -4 = -8 x^2 - 4 x + 4 = 0 (x-2)(x-2) = 0 x = 2 Thursday, December 20, 2012 at 8:18pm by Damon Math- Precalc I assume the complete question is: solve for x: e^x - 12e^(-x) - 1 = 0 Substitute y=e^x, and e^(-x)=1/e^x=1/y. e^x - 12e^(-x) - 1 =0 becomes y-12/y-1 = 0 Since y=e^x, y>0 (i.e. y≠0) y²-y-12=0 Factor and solve the quadratic. Reject the negative root, since e^x cannot... Monday, November 15, 2010 at 7:16pm by MathMate Someone please help me solve this equation for y. (x^2)(y^2)+xy=1 I need to solve for y so that i can graph it! y^2 + y/x - 1/x^2 = 0 Treat 1/x as b and -1/x^2 as c in the quadratic equation ax^2 + bx + c = 0 There are two solutions y = [-1/x + sqrt(1/x^2 + 4/x^2)]/2 and y... Monday, February 26, 2007 at 9:30pm by Jill Chemistry HW HELP Use PV = nRT and solve for V (in L) and convert to mL. Thursday, July 18, 2013 at 12:34am by DrBob222 Solve the following equation for x in terms of ln and e. (2e^3)-6-(16e^-x)=0 Monday, March 3, 2014 at 6:57pm by Aly Type "1/R=1/2.3+1/x solve for x" into WolframAlpha . com and click "Show steps" Wednesday, December 1, 2010 at 9:15pm by Schnitzel There's f(x) = (2x+3)/(x+4) and g(x) = (4x-3)/(2-x) Solve for x in f(g(x)) I tried this but I could not end up with the answer being x Thursday, January 6, 2011 at 9:18pm by Anonymous solve the following logarithmic equation: 2lnx= log base e^3 343 Sunday, June 30, 2013 at 3:53pm by Anonymous the ratio of height to shadow length is the same, so h/3.5 = 8/5 I imagine you can solve that for h. Sunday, November 24, 2013 at 6:06pm by Steve Precalc stuff.. square root of z-1= square root of z +6 are you solving for z? square both sides then solve for z. that's my problem, i'm not sure how to solve for z. i need a process. it's not in my book or anything. 
Wednesday, March 7, 2007 at 5:21pm by Blondie MEGSSS (math) im confused too,but in megsss, there is no mercy! where are you guys from, and how can i connect to you too, for hw help? it would be great for a quick responce to come my way, because my hw is due tomorow. eekk! help please! Tuesday, January 6, 2009 at 8:31pm by adeline Solve the system by the addition method -10x^2 + 10y^2-320=0 30x^2 + 6y^2 - 336 = 0 Tuesday, December 11, 2007 at 10:58pm by LT algebra ll IN my HW it says: Solve each equation.State the number and type of roots. -9x-15=0 Sunday, April 6, 2008 at 4:06pm by Yaili Everybody hw help who go facebook I dont use facbook.but hw essay topic what should people do to get rid out of facebook? What should they do if they r addicted to it? Ples helppp Wednesday, May 1, 2013 at 10:12pm by Nisha I have a couple questions, I need to know how to solve these type of problems. X^2 -4< X + 16 5/n+2 - 5/n = 2/3n 2X^3-x^2-6X + 1/ X^2 + 5x - 8 1) rearragne the terms to get x^2-x-20<0 (x-5)(x+4)<0 which means that one term can be negative, or -4<x<5 check that. ... Thursday, June 14, 2007 at 5:47pm by <3 A 1. Find the exact value 3sin(invertangent (-1)) I got 0. Given 0<x<orequalto 1, determine the value of inversine (x) + inversetan (squareroot(1-x^2)/x) I really have no idea how to do that. I am trying to draw a diagram but cant figure it out. Thanks! Monday, February 11, 2013 at 6:53pm by Rebekah solve 6=e^0.2t Round your answer to the nearest hundredth Someone please help me - I don't understand this problem at all! Thursday, April 19, 2012 at 12:55pm by tabby Science 8R - HW Help! I know the Scientific method it's just that I can't think of a problem I can solve How to get my homework done before 8pm? is that good???? 
Wednesday, September 19, 2012 at 8:48pm by Lauren Social Studies oh never mind i didn't see that on my hw now I found the answer we r doing a DBQ on the native americans, we are done with unit so as a project/essay we have to do a DBQ for hw we just have to finish in documents, we have 6 documents my teacher made the document packet it's ... Thursday, October 27, 2011 at 6:31pm by Laruen Hi. Can someone help me with this? Simplify: (sin^2x)(cos^2x)+(sin^4x) The answer is sin^2x, but I have no idea how to get there! Thanks! Monday, February 18, 2008 at 7:45pm by Andrea precalc: elimating the parameter Solve each equation for T from the first: 3T = x+4 T = (x+4)/3 from the second 2T = -y + 1 T = (-y+1)/2 but T=T (x+4)/3 = (-y+1)/2 cross-multiply and simplify to get 2x+3y = -5 Wednesday, July 9, 2008 at 9:21pm by Reiny Math - PreCalc (12th Grade) As explained , C, D and E cannot be correct. A + B + C = 15000 A = B + C + 3000 3A + 2B + 2C = 30000 Solve these equations to get A, B and C Wednesday, March 19, 2014 at 10:26am by Shawna the big idea of energy Yep... I'm using this stupid textbook and we're studying 8F. ARGH. Nope, I haven't gotten the 'Ear Defenders' hw yet but I guess I'll soon will. I am so dead...what is the 'BIG IDEAD OF ENERGY' anyway? :/ So need help!!! Tuesday, December 5, 2006 at 2:18pm by Homework Heaped I'm given some sort of curve with these points defined (-pi/2 , -1) and (pi/2,1) ok now I have to do this f(x+2) so I have to shift all the x cordinates over two... how do I do this -pi/2 - 2 Monday, November 16, 2009 at 5:58pm by PreCalc Math 8R - HELP! (PLZ!) can someone please help me because I'm really have a hard time with my homework. I understand some it but .... some problems are in different forms... and I looked in my notes (actually we don't have any notes JUST EXAMPLES! :( ) and..... omg it's hard please someone help me ... 
Thursday, January 31, 2013 at 9:23pm by Lauren Science (big idea of energy) errr ive got the sme friggin hw aswell and the worst thing is is tht we av to do it for our assesment enyways the reason they put the cotton wool is to stop the smoke getting out btw im in yr 8 so u no i migh be rong!!! Monday, October 1, 2007 at 11:36am by Annalisia how to solve 5(2^(3x)) = 8 Steps too please answer is ln 1.6 / 3 ln 2 Wednesday, January 26, 2011 at 8:21pm by Amy h=at-.25vtsquared ,solve for a Please help me, I have no idea how to solve! Sunday, October 4, 2009 at 3:17pm by Petronella How would I solve: 1/3(D+3)=5 I know you need to find the recipicile, but I have no idea how to solve this. Sunday, October 17, 2010 at 6:12pm by June Precalc: Logarithms Solve for x? xln(x) + 1 > x + ln(x) e^1/x > 1 Any help would be appreciated. THANK YOU! Sunday, August 3, 2008 at 10:20pm by Joe Solve for x log[1/3](x^(2) + x) - log[1/3] (x^(2) - x) = -1 Steps too please. so far I tried this and I got 3 = (x+1) / (x-1) Thursday, January 27, 2011 at 1:16pm by Amy Solve Absolute Value 2x+5Absolute Value5 is less than 9 |2x + 5|< 9 Tuesday, November 3, 2009 at 1:20pm by kim Precalc with Trig What value(s) of x from 0 to 2pi solve the following equation: cos squared x - cos x - 6 = 0 Thursday, April 11, 2013 at 2:04pm by Natalie Math - PreCalc (12th Grade) Use the inverse matrix to solve this system of equations: 4x+3y=7.5 7x+9z=14 4y-z=8.3 4,3,0 7,0,9 0,4,-1 Friday, March 21, 2014 at 9:20am by Shawna 1. Solve log (4x) = log (2) + log (x-1) the fact that this is x-1 and not + is really messing me up because i keep getting a negative number and that isnt possible 2. Solve: absolut value of(x^2 -4x-4) = 8 thanks so much!!! I have been studying and my final is tomorrow. these ... Thursday, December 20, 2012 at 8:18pm by Rebekah what is the molarity of a solution containing 1 mole of urea per litre of the solution ? 
Hw to solve this Thursday, July 12, 2012 at 3:04pm by star Check my precalc what is the f inverse of f(x)= ln(x-3) I got y= e^x - 4 Solve 8 times 10^(2x-7)=3 I do not know how to do this. Do i get 8 and 10 with the same base? How? Thanks! Sunday, November 18, 2012 at 8:46pm by Rebekah Solve the Logarithmic equation algebraically. Give an Exact Answer and an approximation for x to four decimal places. 20 log(2x+2)=45 Monday, December 5, 2011 at 2:01pm by Alexander chemistry hw please help PV=mass/molmassHe * RT solve for mass. I would start by changing ft^3 to m^3 Tuesday, November 10, 2009 at 3:42pm by bobpursley Solve for x log[1/3](x^(2) + x) - log[1/3] (x^(2) - x) = -1 Steps too please. so far I tried this and I got 3 = (x+1) / (x-1) I asked this question before, but what would x= ? Thursday, January 27, 2011 at 7:04pm by Amy I assume you know enought calc to take the derivative. y'=slope=-4x=4 y=4x+b solve for b a the point, and you have the line Friday, April 1, 2011 at 7:02pm by bobpursley concentrated sulfuric acid has a density of 1.84 g/ml. calculate the mass in grams of 1.00 liter of this acid. how to solve this? Thursday, September 22, 2011 at 5:42pm by hw help ASAP Use the following conditions to create a sketch of the function by hand. a. f(x) is increasing from (-infinity,-4)union(-4, infinity) b. there is a vertical asymptote at x=-4 c. There is a horizontal asymptote at y=2 d. f(0)=-5; f(1)=-3; f(3)=1 I have an idea of what this ... Wednesday, September 30, 2009 at 10:07pm by MUFFY First, you omitted the units on the 9.03 WHAT in a 0.8112 WHAT container. I'm confused because you are confused. There is nothing here that you need an idea about. You have a and b, R, T, and n. Solve for P by van d equation, then use the usual PV = nRT and solve for P. Then ... 
Saturday, January 14, 2012 at 5:46pm by DrBob222 HW 7_2_2 576 -576 281.33 -281.33 HW 7_3_1 -P*x Tuesday, June 18, 2013 at 9:24pm by simonsay MIT 2.01X HW 7_2_2 576 -576 281.33 -281.33 HW 7_3_1 -P*x Sunday, June 23, 2013 at 4:47pm by simonsay HW 7_2_2 576 -576 281.33 -281.33 HW 7_3_1 -p*x Sunday, June 30, 2013 at 5:48pm by Jon MIT 2.01X HW 7_2_2 576 -576 281.33 -281.33 HW 7_3_1 -P*x Tuesday, June 25, 2013 at 9:17am by simonsay Physics Help (A)Ht=Hs•ζ+(1-ζ)Hw, where Ht = total (actual) enthalpy (kJ/kg), Hs = enthalpy of steam (kJ/kg), Hw = enthalpy of saturated water or condensate (kJ/kg) Using steam-pressure table http:// enpub.fulton.asu.edu/ece340/pdf/steam_tables.PDF we can find the magnitudes for p=... Sunday, July 14, 2013 at 5:35pm by Elena A persn carries a 218N suitcse up a stairs.the displcment is 4.20m vertically & 4.60m horizntally. A.hw much work does the person do on the suitcase B. If the prsn carries the same suitcse dwn.hw much work does he do Monday, August 15, 2011 at 2:29am by Luc Algebra 2 How do I answer this f(x) = x - 3 (x + 4)^(1/2) and am asked how to solve for x when y = -5 I have no idea how to do this and know that I could just solve it by guess and checking but I want to know how to do this algebraically when plugging in the -5 and trying to solve for x... Tuesday, May 26, 2009 at 5:39pm by Dylan solve the following pair of simultaneous equation: (x/a)+(y/b)=1 (x/b)+(y/a)=1 can someone help please?i have no idea how to solve it =( x/(ab) + y/b^2 = 1/b x/(ab) + y/a^2 = 1/a Subtract one from the other y(1/b^2 - 1/a^2) = 1/b - 1/a y(1/b + 1/a)(1/b - 1/a) = (1/b - 1/a) ... Wednesday, July 11, 2007 at 4:18am by Emma algebra 2 yes, no idea, or yes, here's my idea? If no idea, look at the last two problems. There must be some first step you could make. Thursday, October 24, 2013 at 5:12pm by Steve Solve 3x +1 < 5 or 3x +1 > 4 By saying "solve," is it asking for me to show it on a line and in interval notation? 
Here's another: f(x) < 3 and f(x) > 4, where f(x) = 1/2x - 7 I have no idea how to solve this! Please walk me through it! Thank you so much! Monday, January 26, 2009 at 4:19pm by Harper I have no idea how to solve this problem. The question says: Determine whether the points A(4,5), B(-3,3), C(-6,-13) and D(6,-2) are the vertices of a kite. Explain your answer. Can you please help me with the steps to solve it and understanding it. Monday, February 18, 2008 at 8:23pm by Avalon
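The satellite question that titles this results page can be checked directly. This is a quick sketch in plain Python (not from the forum itself): with v = ωr, a 200-mile altitude over a 3960-mile Earth radius gives an angular velocity of about 4.09 rad/h and an orbital period of about 1.54 h.

```python
import math

# Low Earth orbit problem from the "Precalc HW" thread above:
# altitude 200 mi, Earth radius 3960 mi, linear speed 17,000 mph.
altitude = 200.0          # miles above the surface
earth_radius = 3960.0     # miles
speed = 17000.0           # miles per hour (linear velocity)

orbit_radius = earth_radius + altitude       # 4160 miles
angular_velocity = speed / orbit_radius      # v = w * r  =>  w = v / r (rad/h)
period = 2 * math.pi / angular_velocity      # hours for one full revolution

print(round(angular_velocity, 2))            # -> 4.09 rad/h
print(round(period, 2))                      # -> 1.54 h per orbit
```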
{"url":"http://www.jiskha.com/search/index.cgi?query=Precalc+HW+(NO+IDEA+HOW+TO+SOLVE)","timestamp":"2014-04-19T13:00:07Z","content_type":null,"content_length":"33859","record_id":"<urn:uuid:d6a5da1e-abe8-44c8-ac07-6601a8801a93>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00184-ip-10-147-4-33.ec2.internal.warc.gz"}
Kenmore Algebra 2 Tutor Find a Kenmore Algebra 2 Tutor ...Whether you are looking for a general overview that will make the PSAT less intimidating, or are looking for high scores to qualify for the National Merit scholarship competition, I can help you out. National Merit hopefuls may be happy to hear that I've helped numerous SAT students achieve scor... 33 Subjects: including algebra 2, English, reading, writing ...I have programmed in various languages, installed and configured hardware and software, and recently have focused on computer network security issues. I was actively certified as a Computer Information Systems Security Professional (CISSP) for about six years (until October 2010). I prepped a 10... 43 Subjects: including algebra 2, chemistry, calculus, physics ...I started teaching prealgebra in 1984, my very first teaching job. I had a split 7/8th grade class. Since that time, I've tutored and taught prealgebra for three years. 39 Subjects: including algebra 2, reading, English, writing ...I've constructed, researched, and presented biological fuel cells. I've presented a fuel-cell powered car at a national AIChE conference. I've researched and presented UREX designs. 62 Subjects: including algebra 2, English, chemistry, reading Hi! My name is Joslynn, and I am currently a student at community college. I plan to transfer to the University of Washington in a year and double major in Bioengineering and mechanical engineering (I plan to go into bioprinting, so that's why there's the weird combination of majors). Also, I plan to minor in math because just through prereq, I'm only 2 classes away and love math, so why not. 12 Subjects: including algebra 2, chemistry, calculus, physics
{"url":"http://www.purplemath.com/Kenmore_algebra_2_tutors.php","timestamp":"2014-04-21T15:26:22Z","content_type":null,"content_length":"23770","record_id":"<urn:uuid:dd4c1d90-322d-4a38-b6b2-0b1a8656d689>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00265-ip-10-147-4-33.ec2.internal.warc.gz"}
United States Patent Application 20120063494
Kind Code A1
Frenne; Mattias; et al.
March 15, 2012

The invention relates to the technical field of multiple-antenna transmission in a wireless communication system. Communication of a feedback representation of, and generation of a codebook suitable for, precoding of multiple-antenna transmission is disclosed. An example matrix representation of precoding of a first number of antenna ports comprises precoding sub-matrices of fewer antenna ports.

Inventors: Frenne; Mattias; (Uppsala, SE); Liu; Jianghua; (Beijing, CN)
Assignee: Huawei Technologies Co., Ltd.
Serial No.: 235120 Series Code: 13
Filed: September 16, 2011
Current U.S. Class: 375/219; 375/296
Class at Publication: 375/219; 375/296
International Class: H04L 1/02 20060101 H04L001/02; H04B 1/38 20060101 H04B001/38
Foreign Application Data: Mar 17, 2009 (CN) PCT/CN2009/070850

1. A method of communicating a representation of precoding feedback of multiple-antenna transmission in a wireless communication system, comprising: communicating a representation of a first precoding matrix a.sub.p of a first codebook A, said first precoding matrix, a.sub.p, having M' rows and R' columns, where M' and R' are natural numbers, and comprising at least a first and second sub-matrix, and wherein said first precoding matrix is based on at least a second precoding matrix, b.sub.i, and a third precoding matrix, b.sub.k, belonging to a second codebook, B, said second and third precoding matrices having M rows and R.sub.1 columns, where M and R.sub.1 are natural numbers and M'>M, M.gtoreq.2 and R.sub.1.gtoreq.1; wherein said first and second sub-matrices are based on said second and third precoding matrices, respectively, and the columns in said first precoding matrix are orthogonal to each other when R'>1.

2.
The method of communicating a representation of precoding feedback of multiple-antenna transmission according to claim 1, wherein the representation is based upon a representation of said second and third precoding matrices, respectively.

3. A method of generating a first codebook, A, of multiple-antenna communication in a wireless communication system, comprising at least a first precoding matrix a.sub.p, said first precoding matrix a.sub.p having M' rows and R' columns, where M' and R' are natural numbers and comprising at least a first and second sub-matrix, comprising: selecting at least a second b.sub.i and third b.sub.k precoding matrix belonging to a second codebook B, said second b.sub.i and third b.sub.k precoding matrices having M rows and R.sub.1 columns, where M and R.sub.1 are natural numbers and M'>M, M.gtoreq.2 and R.sub.1.gtoreq.1; and obtaining said first and second sub-matrices based on said second b.sub.i and third b.sub.k precoding matrices, respectively, so that the columns in said first precoding matrix a.sub.p are orthogonal to each other when R'>1.

4. The method according to claim 3, comprising multiplication of at least one of said second and third precoding matrices in the process of generating the first codebook.

5. The method according to claim 4, wherein at least one of said second and third precoding matrices is multiplied by a complex scalar.

6. The method according to claim 3, comprising permuting the rows or the columns of at least one of said second and third precoding matrices.

7. The method according to claim 3, comprising removing columns from at least two of said second and third precoding matrices so as to obtain R' columns of the resulting at least one matrix.

8. The method according to claim 3, comprising: permuting the columns of said first precoding matrix a.sub.p; and/or permuting the rows of said first precoding matrix a.sub.p.

9.
The method according to claim 3, wherein said first precoding matrix a.sub.p comprises a third and fourth sub-matrix based on a fourth b.sub.j and fifth b.sub.m precoding matrix, respectively, said fourth b.sub.j and fifth b.sub.m precoding matrices belonging to said second codebook B, and having M rows and R.sub.2 columns, where M and R.sub.2 are natural numbers.

10. The method according to claim 9, comprising: permuting the columns of at least one of said second b.sub.i, third b.sub.k, fourth b.sub.j and fifth b.sub.m precoding matrices; and/or permuting the rows of at least one of said second b.sub.i, third b.sub.k, fourth b.sub.j and fifth b.sub.m precoding matrices.

11. The method according to claim 9, comprising multiplication of at least one of said second, b.sub.i, third, b.sub.k, fourth, b.sub.j, and fifth, b.sub.m, precoding matrices.

12. The method according to claim 11, wherein at least one of said second, third, fourth and fifth precoding matrices is multiplied by a complex scalar.

13. The method according to claim 11, wherein R'=R.sub.1+R.sub.2.

14. The method according to claim 13, wherein said first precoding matrix a.sub.p has the structure:

a.sub.p=(b.sub.i b.sub.j; b.sub.k -b.sub.m), ##EQU00016##

i.e. a block matrix whose first block row is (b.sub.i b.sub.j) and whose second block row is (b.sub.k -b.sub.m).

15. The method according to claim 13, wherein i=j and k=m so that said first precoding matrix a.sub.p has the structure:

a.sub.p=(b.sub.i b.sub.i; b.sub.k -b.sub.k). ##EQU00017##

16. The method according to claim 3, wherein R'=R.sub.1.

17. The method according to claim 3, wherein said second codebook B is a codebook used in a Long Term Evolution, LTE, communication system.

18. The method according to claim 3, wherein said second codebook B is a reduced codebook of a codebook used in the Long Term Evolution, LTE, communication system, obtained by selecting a subset of the precoding matrices in the codebook for the Long Term Evolution, LTE, communication system.

19.
The method according to claim 3, wherein said wireless communication system is a Long Term Evolution Advanced, LTE-A, communication system, and said codebook A is used in a base station, mobile station and/or relay station.

20. An apparatus of communicating a representation of precoding feedback of multiple-antenna transmission in a wireless communication system, wherein a representation of a first precoding matrix a.sub.p of a first codebook A is communicated, said first precoding matrix, a.sub.p, having M' rows and R' columns, where M' and R' are natural numbers, and comprising at least a first and second sub-matrix, the apparatus comprising: a processor configured to decide on a precoding matrix from a predefined codebook being the first codebook, A, said first precoding matrix being based on at least a second precoding matrix, b.sub.i, and a third precoding matrix, b.sub.k, belonging to a second codebook, B, said second and third precoding matrices having M rows and R.sub.1 columns, where M and R.sub.1 are natural numbers and M'>M, M.gtoreq.2 and R.sub.1.gtoreq.1; wherein said first and second sub-matrices are based on said second and third precoding matrices, respectively, and the columns in said first precoding matrix are orthogonal to each other when R'>1.

21. The apparatus of communicating a representation of precoding feedback of multiple-antenna transmission according to claim 20, wherein the apparatus comprises channel estimation circuitry and processing means for determining a representation of precoding feedback accordingly.

22. The apparatus of communicating a representation of precoding feedback of multiple-antenna transmission according to claim 20, wherein the apparatus comprises: a receiver that receives the representation of precoding feedback, and a transmitter that transmits precoded data over multiple antennas.

23.
The apparatus of communicating a representation of precoding feedback of multiple-antenna transmission according to claim 20, comprising a processor that precodes multiple-antenna transmission in a wireless communication system according to the first codebook, A.

24. An apparatus for generating a first codebook A comprising at least a first precoding matrix a.sub.p of multiple-antenna transmission in a wireless communication system, said first precoding matrix a.sub.p having M' number of rows and R' number of columns and comprising at least a first and second sub-matrix, said apparatus: configured to select at least a second b.sub.i and third b.sub.k precoding matrix belonging to a second codebook B, said second b.sub.i and third b.sub.k precoding matrices having M number of rows and R number of columns, where M'>M, M.gtoreq.2 and R.gtoreq.1; and configured to obtain said first and second sub-matrices based on said second b.sub.i and third b.sub.k precoding matrices, respectively, so that the columns in said first precoding matrix a.sub.p are orthogonal to each other when R'>1.

25. The apparatus according to claim 24, comprising a computer readable medium having thereon a computer program code, said computer readable medium comprising at least one medium from the group: ROM, Read-Only Memory; PROM, Programmable ROM; EPROM, Erasable PROM; Flash memory; EEPROM, Electrically EPROM; and hard disk.

26.
A device for precoding a multiple-antenna transmission in a wireless communication system, comprising: a generating unit that generates a first codebook, A, of multiple-antenna communication in a wireless communication system, comprising at least a first precoding matrix a.sub.p, said first precoding matrix a.sub.p having M' rows and R' columns, where M' and R' are natural numbers and comprising at least a first and second sub-matrix, said generating unit: selects at least a second b.sub.i and third b.sub.k precoding matrix belonging to a second codebook B, said second b.sub.i and third b.sub.k precoding matrices having M rows and R.sub.1 columns, where M and R.sub.1 are natural numbers and M'>M, M.gtoreq.2 and R.sub.1.gtoreq.1; and obtains said first and second sub-matrices based on said second b.sub.i and third b.sub.k precoding matrices, respectively, so that the columns in said first precoding matrix a.sub.p are orthogonal to each other when R'>1.

27. The device according to claim 26, configured to use the first generated codebook A for reporting precoding feedback.

28. A computer readable medium comprising code that generates a first codebook A of multiple-antenna communication in a wireless communication system, comprising at least a first precoding matrix a.sub.p, said first precoding matrix a.sub.p having M' rows and R' columns, where M' and R' are natural numbers and comprising at least a first and second sub-matrix, the code: selecting at least a second b.sub.i and third b.sub.k precoding matrix belonging to a second codebook B, said second b.sub.i and third b.sub.k precoding matrices having M rows and R.sub.1 columns, where M and R.sub.1 are natural numbers and M'>M, M.gtoreq.2 and R.sub.1.gtoreq.1; and obtaining said first and second sub-matrices based on said second b.sub.i and third b.sub.k precoding matrices, respectively, so that the columns in said first precoding matrix a.sub.p are orthogonal to each other when R'>1.
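The column-orthogonality property recited in claims 14 and 15 is easy to verify numerically. The following is a minimal sketch in plain Python, with hypothetical unit-norm vectors b_i and b_k chosen for illustration (they are not entries of any actual LTE codebook): for the claim-15 structure a_p = (b_i b_i; b_k -b_k), the Hermitian inner product of the two columns is b_i'b_i - b_k'b_k, which vanishes whenever b_i and b_k have equal norm.

```python
# Sketch of the claim-15 construction: a_p = ( b_i  b_i ; b_k  -b_k ).
# b_i and b_k are illustrative M x 1 precoding vectors (M = 2), so a_p is 4 x 2.

def inner(u, v):
    """Hermitian inner product u' v of two complex vectors."""
    return sum(x.conjugate() * y for x, y in zip(u, v))

# Hypothetical unit-norm sub-matrices (column vectors):
b_i = [0.5 + 0j, 0.5 + 0j]
b_k = [0.5 + 0j, 0 + 0.5j]

# Stack into the 4 x 2 block matrix a_p = [[b_i, b_i], [b_k, -b_k]]:
col1 = b_i + b_k                      # first column:  (b_i ; b_k)
col2 = b_i + [-x for x in b_k]        # second column: (b_i ; -b_k)

print(inner(col1, col2))              # 0: the columns are orthogonal
```

The design point this illustrates: the sign flip on b_k cancels the contribution of the lower sub-matrix, so orthogonality of the larger precoder follows from equal norms of the smaller codebook entries, without any constraint between b_i and b_k themselves.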
[0001] This application is a continuation of International Application No. PCT/IB2010/000980, filed on Mar. 17, 2010, which claims priority to International Application No. PCT/CN2009/070850, filed on Mar. 17, 2009, both of which are hereby incorporated by reference in their entireties.

[0002] The disclosure relates to the technical field of multiple-antenna transmission in a wireless communication system. Particularly, it relates to the technical field of precoding and communication of a representation of precoding feedback.

[0003] Precoding for Multiple Input Multiple Output (MIMO) communication systems is used to enhance signal gain and/or to improve receiver signal separation between multiple transmitted streams of information. Precoding is e.g. used in wireless communication standards such as 3GPP LTE, 3GPP2 UMB and IEEE 802.16e. In all these standards OFDM is used as the modulation technique for transmission.

[0004] Precoding is performed by multiplying the streams or stream to be transmitted by a matrix or a vector, respectively. The matrix or vector representing the precoding used (also denoted the precoder in the following) should match the channel for good separation of the different streams, and thereby achieve a high receive Signal to Noise Ratio (SNR) for each stream. The input-output relation in an OFDM system transmitting over a wireless channel can be described in the frequency domain as

y=HWx+n, (B-1)

[0005] where W is the precoding matrix, H the channel matrix, x the input vector containing the symbols to be transmitted, n the vector of noise samples, and y the output signal at the receiver. The relation in equation B-1 holds for every subcarrier in the OFDM system. However, the same precoding matrix W is usually employed for a group of adjacent subcarriers.
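The per-subcarrier relation in equation B-1 can be sketched with toy numbers. This is not taken from the application; it is a minimal illustration with a hypothetical 2x4 channel H, a 4x2 rank-two precoder W (matching the FIG. 1 setup of four transmit and two receive antennas), and the noise set to zero for clarity:

```python
# Toy instance of equation B-1, y = H W x + n, for one OFDM subcarrier.
# Dimensions: 2 receive antennas, 4 transmit antennas, rank-2 precoder W (4 x 2).
# All numbers are illustrative, not a real channel or codebook.

def matvec(A, v):
    """Multiply a matrix (list of rows) by a column vector."""
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def matmul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

H = [[1 + 0j, 0.5j, 0.2 + 0j, 0 + 0j],                   # 2 x 4 channel
     [0 + 0j, 1 + 0j, 0.1j, 0.3 + 0j]]
W = [[0.5, 0.5], [0.5, -0.5], [0.5, 0.5], [0.5, -0.5]]   # 4 x 2 precoder
x = [1 + 0j, -1 + 0j]                                    # one symbol per stream
n = [0j, 0j]                                             # noise omitted for clarity

# y = H W x + n: precode, pass through the channel, add noise.
y = [s + e for s, e in zip(matvec(matmul(H, W), x), n)]
print(y)
```

The same precoded product HW would be reused across a group of adjacent subcarriers, as the paragraph above notes; only x and n change per subcarrier.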
[0006] The model in equation B-1 is applicable in a wireless communication system where a base station, having multiple transmit antennas, communicates with a receiving mobile station, and also in a wireless communication system where multiple base stations, each having multiple transmit antennas, communicate with the same receiving mobile station. In the latter case, the transmission channel H has the same number of columns, and the precoding matrix W the same number of rows, as the total number of transmit antennas for all cooperating base stations. This scenario is called a Cooperative Multipoint Transmission (COMP) system, and a codebook for a large number of transmit antennas is thus required.

[0007] The precoding matrix W is selected based on the transmission channel H, i.e. the precoder is channel-dependent. Also, the transmission rank is selected based on the channel H. The transmission rank is equivalent to the number of transmitted streams and is equal to the number of columns of the precoding matrix W. Hence, if rank one is selected, the precoding matrix W becomes a precoding vector.

[0008] FIG. 1 shows schematically a wireless communication system where one base station is transmitting two streams precoded by a rank two precoding matrix W. In this case, the transmitting base station has four transmit antennas and the receiving mobile station has two receive antennas. At the receiver, a receiver filter is used to separate the two streams.

[0009] FIG. 2 shows schematically a scenario where three transmitting base stations in a wireless communication system, each having four transmit antennas, are transmitting three streams to a receiving mobile station equipped with three receive antennas. This is an example of a COMP transmission where multiple base stations are transmitting a coordinated signal towards a receiving mobile station.
The transmission channel H thus has twelve columns and three rows in this case, since there are in total twelve transmit and three receive antennas. Further, since three streams are transmitted, the precoding matrix W has twelve rows and three columns, and is thus taken from a codebook for twelve transmit antennas. Each base station transmits a sub-matrix of this twelve-by-three matrix W; for instance, the first base station transmits the three streams using a precoding matrix obtained as the top four rows and the three columns of the precoding matrix W.

[0010] The receiver thus receives y, and with knowledge of the combined channel-precoder product G = H W, the receiver can create a receiver filter R that estimates the transmitted symbol vector as

x̂ = R y.    (B-2)

[0011] A commonly used receiver filter is the zero forcing filter,

R = (G'G)^{-1} G',    (B-3)

[0012] or the Linear Minimum Mean Squared Error (LMMSE) receiver filter,

R = (G'G + C_nn)^{-1} G',    (B-4)

[0013] where C_nn is the noise plus interference covariance matrix at the receiver.

[0014] If information about the channel H is available at the transmitter, the corresponding precoder W is selected and used for transmission. A criterion for selecting the precoder W, including its rank, could be to maximize the minimum Signal to Interference plus Noise Ratio (SINR) for the estimated symbols in x̂. Other criteria for selecting W are also known, such as maximizing the total number of transmitted information bits, taking into account all streams. Note that the column dimension of the precoder W, also known as the rank of the transmission, is also part of the selection of the precoder W, so both a rank and a preferred precoder W among all possible precoding matrices with this rank (equivalent to the number of columns) are selected.

[0015] In general, the channel H is unknown at the transmitter.
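The zero-forcing filter of equation B-3 inverts the combined product G = HW exactly, so in the noiseless case the estimate x̂ = Ry of equation B-2 recovers x. A small NumPy check follows (random channel and precoder; white noise is assumed for the LMMSE covariance, i.e. C_nn = σ²I, which is an illustrative simplification):

```python
import numpy as np

rng = np.random.default_rng(1)
Nr, M, R = 3, 4, 2                        # receive antennas, transmit ports, rank

H = (rng.standard_normal((Nr, M)) + 1j * rng.standard_normal((Nr, M))) / np.sqrt(2)
W = np.linalg.qr(rng.standard_normal((M, R))
                 + 1j * rng.standard_normal((M, R)))[0]   # any orthonormal precoder
G = H @ W                                 # combined channel-precoder product

# Zero-forcing filter, equation B-3: R = (G'G)^{-1} G'
R_zf = np.linalg.inv(G.conj().T @ G) @ G.conj().T

# LMMSE filter, equation B-4, assuming white noise so that C_nn = sigma2 * I
sigma2 = 0.1
R_lmmse = np.linalg.inv(G.conj().T @ G + sigma2 * np.eye(R)) @ G.conj().T

x = rng.standard_normal(R) + 1j * rng.standard_normal(R)
x_hat = R_zf @ (G @ x)                    # equation B-2 without noise: exact recovery
```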
If the receiver measures and feeds back the full channel information H to the transmitter, and the transmitter decides the precoding matrix W based on the obtained channel information from the receiver, a vast amount of feedback signalling is needed, which is undesirable.

[0016] In order to reduce signalling overhead, a conventional approach is to construct a limited set of possible precoders W_i, i=1, . . . , N, for a given rank. A collection of these precoding matrices for a given rank is denoted a precoding codebook. The codebook for a certain rank, or equivalently number of spatial streams, thus consists of N unique precoding matrices (or vectors if the rank is one), each of size M times R, where M is the number of transmit antenna ports and R is the number of parallel spatial streams or transmission rank, respectively.

[0017] The codebook is known and stored at both the transmitter and receiver. Since the receiver often has better knowledge about the channel H between the transmitter and receiver, the receiver can select a rank and an optimal precoder W from the codebook of this rank based on knowledge about the channel, and then feed back an index representing the rank and the selected precoder to the transmitter. The transmitter may then use the precoder corresponding to the index fed back by the receiver for a transmission; or the transmitter may have other sources of information and choose a different precoder than the one selected by the receiver. Hence, the feedback from the receiver should only be seen as a recommendation, and it is the transmitter that makes the final decision on which precoding matrix should be used for a particular transmission. For instance, the transmitter may choose to reduce the rank of the precoder, or to interpolate the precoding matrices between successive feedback reports. This operation of using feedback information to indicate the selected precoding matrix is denoted closed loop precoding.
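Closed-loop operation can be sketched as follows: the receiver evaluates a selection metric for every precoder in a shared codebook and feeds back only the winning index. The codebook below is a random stand-in (not the LTE codebook), and sum capacity is used as the metric, one of the selection criteria mentioned above:

```python
import numpy as np

rng = np.random.default_rng(2)
M, R, N = 4, 2, 8                         # transmit ports, rank, codebook size

# A toy codebook of N orthonormal precoders, shared by transmitter and receiver
codebook = [np.linalg.qr(rng.standard_normal((M, R))
                         + 1j * rng.standard_normal((M, R)))[0] for _ in range(N)]

H = (rng.standard_normal((2, M)) + 1j * rng.standard_normal((2, M))) / np.sqrt(2)
sigma2 = 0.1

def sum_capacity(H, W, sigma2):
    """Sum rate over the streams for precoder W (one possible selection metric)."""
    G = H @ W
    return np.log2(np.linalg.det(np.eye(H.shape[0]) + G @ G.conj().T / sigma2).real)

# The receiver feeds back only this index (log2(N) = 3 bits), not the channel H
best = max(range(N), key=lambda i: sum_capacity(H, codebook[i], sigma2))
```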
[0018] Alternatively, the transmitter may pseudo-randomly cycle through a set of precoders if a feedback control link is not available, which is denoted open loop precoding. To support open loop precoding, it is useful if different precoding matrices in a set of precoders only differ by a permutation of their columns or rows. Permutation of columns is equivalent to permuting the mapping of streams to the precoding matrix, and permutation of rows is equivalent to permuting the mapping of the precoding matrix to the physical antennas. This cycling of mappings ensures that each stream encounters a channel with a variation in quality, to avoid the case that one stream always has bad channel quality, which could otherwise occur in an open loop system. Hence, a codebook designed for open loop operation has a subset of precoding matrices which differ only by permutation of columns or rows.

[0019] In FIG. 3, a feedback operation of a wireless communication system is schematically illustrated. The receiver estimates the channels from all transmit antennas to all receive antennas using a channel estimation unit. The estimated channel is then used in a precoding matrix selection unit, where the rank and precoding matrix are selected from a codebook of available precoding matrices for this rank.

[0020] The design of the codebook is a topic which has drawn a lot of attention in recent years. It is a hard problem to find an optimal codebook, since its performance depends on the channel models and performance metric used for the evaluation. However, a standard measure to evaluate the performance of different codebooks is the minimum pairwise distance d_min between all precoders in the codebook. A codebook with a large d_min is considered to have better performance than a codebook with a small d_min.

[0021] Other desirable properties of a codebook relate to implementation complexity requirements.
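The d_min figure of merit can be computed for any candidate codebook. One common choice of pairwise distance between orthonormal precoders is the chordal distance between the subspaces they span; the snippet below is an illustrative sketch using that choice (the text does not fix a particular distance measure):

```python
import numpy as np
from itertools import combinations

def chordal_distance(W1, W2):
    """Chordal distance between the column spaces of two orthonormal precoders."""
    P1 = W1 @ W1.conj().T                 # projector onto span(W1)
    P2 = W2 @ W2.conj().T
    return np.linalg.norm(P1 - P2, 'fro') / np.sqrt(2)

def d_min(codebook):
    """Minimum pairwise distance over all precoder pairs; larger is better."""
    return min(chordal_distance(a, b) for a, b in combinations(codebook, 2))

# Tiny rank-one example: two orthogonal vectors and one at 45 degrees
e1 = np.array([[1.0], [0.0]])
e2 = np.array([[0.0], [1.0]])
v = np.array([[1.0], [1.0]]) / np.sqrt(2)
```

Here d_min([e1, e2, v]) equals 1/√2, set by the closer pairs (e1, v) and (e2, v) rather than by the orthogonal pair (e1, e2).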
Two such properties are the constant modulus and constrained alphabet properties, which mean that all matrix elements of all precoders in the codebook have the same absolute magnitude, and are taken from a finite complex-valued constellation such as {+1, -1, +i, -i} or 8-PSK. The constant modulus property assures that all M power amplifiers at the transmitter are utilized equally and transmit with the same power, and the constrained alphabet property simplifies receiver computations, such as when inverting and multiplying matrices.

[0022] Another desirable property of a codebook is the nested property, which implies that a precoding matrix of rank R is a sub-matrix of a precoding matrix of rank R+1. For instance, a rank one precoding vector is a column in a rank two precoding matrix, and so on. This property simplifies decision making in the receiver regarding which rank and which precoder to select, since results from a lower rank calculation can be stored and re-used when calculating selection measures for other, either higher or lower, ranks.

[0023] Furthermore, codebook restriction is a property which allows the communication system to restrict which of the precoding matrices in the codebook, and for which ranks, the receiver is allowed to select in precoder selection. Thereby, the system can exclude some precoding matrices or ranks, if beneficial. For one codebook, it is difficult to make each precoder suitable for every scenario. For example, in a certain channel scenario, only a subset of the precoders within a codebook may be suitable for use. In this case, if codebook restriction is employed, the receiver has a smaller precoder search space in which to find the best precoder, which can reduce the complexity of the receiver and improve the performance in that particular channel scenario.
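The nested property is mechanically checkable: every rank-R precoder must reappear as a column subset of some rank-(R+1) precoder. A small sketch (the identity-based example is purely illustrative):

```python
import numpy as np
from itertools import combinations

def column_subsets(W, r):
    """All matrices formed by r columns of W, keeping column order."""
    for idx in combinations(range(W.shape[1]), r):
        yield W[:, list(idx)]

def has_nested_property(book_lo, book_hi):
    """True if every precoder in book_lo is a column subset of some
    precoder in book_hi (the nested property between adjacent ranks)."""
    return all(
        any(np.allclose(w_lo, sub)
            for w_hi in book_hi
            for sub in column_subsets(w_hi, w_lo.shape[1]))
        for w_lo in book_lo)

I2 = np.eye(2)
rank1 = [I2[:, [0]], I2[:, [1]]]          # both columns of the rank-two matrix
```

In this toy case has_nested_property(rank1, [I2]) holds, while a vector such as [1, 1]/√2 would fail the check.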
[0024] In the 3GPP LTE specification, with M=4 transmit antenna ports, there are N=16 precoders, each generated by a Householder transformation from its corresponding generating vector, defined for each rank R=1, 2, 3 and 4, respectively. Hence, N*R=64 different precoders are stored in the User Equipments (UEs) and the eNBs (base stations in LTE), respectively. If codebook restriction is employed, then the number of precoders eligible for selection can be reduced, for instance by removing all precoding matrices of the highest rank.

[0025] In the 3GPP LTE-Advanced (LTE-A) communication system, which is supposed to be an extension of the LTE system, up to eight antenna ports will be supported to further increase system performance, such as peak data rate, cell average spectrum efficiency, etc., and therefore higher order MIMO with eight antenna ports will be supported. In the LTE system, the maximum number of antenna ports available for codebook precoding is four.

[0026] How should a precoding codebook be designed for eight antenna ports as supported in LTE-A, or a codebook for multiple-antenna transmissions in general? How should a codebook be designed having one or more of the desirable properties discussed above? Generally, when designing codebooks, it is desirable that they are easy to implement in terms of computational complexity and memory storage.

[0027] In an example 3GPP LTE-A communication system, the mentioned COMP transmissions will be supported. The total number of transmit antennas in a COMP system may vary depending on how many base stations are co-operating in the COMP transmission mode to a mobile station. Hence, codebooks for different numbers of transmit antennas are needed. For example, if two base stations, each having four antennas, are cooperating, then a codebook for eight antennas is needed. If four base stations, each having four antennas, are cooperating in COMP mode, then a codebook for sixteen transmit antennas is needed.
Hence, a problem is to design codebooks for a variety of transmit antennas, and for a variety of different ranks or, equivalently, numbers of transmit streams.

[0028] According to a prior art solution, a method for constructing a codebook for eight antenna ports is described based on mutually unbiased bases from quantum information theory. The method gives the nested property, and the precoding matrices are constant modulus and have a constrained alphabet. However, Kronecker products between complex-valued vectors x and matrices C are used in the codebook generation according to this method. For instance, to obtain a precoding matrix, it is necessary to perform the multiplications x∘C = [x∘C_1 x∘C_2 . . . x∘C_N] to create the precoding matrix, where ∘ denotes element-wise multiplication and C_j the j:th column of C.

[0029] According to another prior art solution, a codebook design for eight antenna ports is given using the complex Hadamard transformation, which consists of Kronecker products between complex-valued matrices. The full rank eight codebook is always constructed first, and to obtain the lower rank codebooks, columns are removed from the full rank codebook. The codebook design involves several design parameters, and to find the final codebook, computer simulations are needed.

[0030] According to an example aspect of the present invention, one or more of the aforementioned problems are solved with a method and product of generating a first codebook A, comprising at least a first precoding matrix a_p, for multiple-antenna transmission in a wireless communication system.
For a first precoding matrix a_p having M' rows and R' columns and comprising at least a first and a second sub-matrix, said generating comprises:

[0031] selecting at least a second b_i and a third b_k precoding matrix belonging to a second codebook B, said second b_i and third b_k precoding matrices having M rows and R_1 columns, where M'>M, M≥2 and R_1≥1; and

[0032] obtaining said first and second sub-matrices based on said second b_i and third b_k precoding matrices, respectively, so that the columns in said first precoding matrix a_p are orthogonal to each other when R'>1.

[0033] According to another example aspect of the present invention, one or more of the aforementioned problems are solved with a method of using a first codebook A as generated for precoding multiple-antenna transmission in a wireless communication system.

[0034] The claims comprise various embodiments of the invention.

[0035] An advantage with a codebook according to the present invention is that the codebook can be generated from precoders with a smaller matrix dimension than the dimension of the codebook to be generated, and that the generated precoders inherit the desired properties, mentioned above, from the precoders of the smaller dimension. For instance, in a UE for an LTE-A communication system only the LTE codebook needs to be stored in memory, since the LTE-A codebook can be based on the LTE codebook according to the invention. This lowers the memory requirements of the UE. Further, no matrix products (element-wise products or Kronecker products) between complex-valued matrices are needed to construct an eight-antenna codebook, whereby computational complexity is reduced.

[0036] Furthermore, example embodiments of the present invention provide large flexibility when designing codebooks for different transmission systems and scenarios, e.g.
a single base station transmission, or a COMP transmission where a plurality of base stations cooperate in downlink transmission. Since a UE is sometimes connected to a single base station and sometimes to multiple base stations through COMP operation, the UE must generate and store corresponding precoding codebooks for a variety of scenarios. It is therefore beneficial if codebook generation and design for the single base station scenario and the COMP scenario (and other scenarios as well) employ the same general method, since codebooks for different numbers of antenna ports can then be obtained with low complexity, such as according to example embodiments of the present invention.

[0037] Other advantages of the present invention will be apparent from the following detailed description of embodiments of the invention.

[0038] The appended drawings are intended to clarify and explain the present invention, in which:

[0039] FIG. 1 schematically shows how two streams are mapped to four antennas using a precoding matrix and how a two antenna receiver adopts receiver filtering to reconstruct the two transmitted streams;

[0040] FIG. 2 schematically shows how three streams are mapped to four antennas in three base stations (transmitter 1, 2 and 3, respectively) when transmitting in COMP mode. The three base stations therefore share one precoding matrix from a codebook, and hence each base station uses a sub-matrix of the one precoding matrix as its respective precoding matrix. The figure also shows a three antenna receiver adopting receiver filtering to reconstruct the three transmitted streams;

[0041] FIG. 3 schematically shows how the receiver estimates the channel between the transmitter and receiver, uses the channel estimates to decide an optimal precoding matrix from a predefined codebook and feeds back an index corresponding to the preferred precoding matrix to the transmitter, which applies the precoding matrix in the subsequent transmission;

[0042] FIG.
4 shows simulation results comparing performance for a rank one codebook according to prior art with performance for a codebook according to the present invention, as the received SINR distribution after an LMMSE receiver;

[0043] FIG. 5 shows simulation results comparing performance for a rank two codebook according to prior art with performance for a codebook according to the present invention, as the received SINR distribution after an LMMSE receiver;

[0044] FIG. 6 shows simulation results comparing performance for a rank five codebook according to prior art with performance for a codebook according to the present invention, as the received SINR distribution after an LMMSE receiver; and

[0045] FIG. 7 shows simulation results comparing performance for a rank eight codebook according to prior art with performance for a codebook according to the present invention, as the received SINR distribution after an LMMSE receiver.

[0046] Define two codebooks, A and B, where codebook A is designed for M' transmit antenna ports and for rank R' transmission, and consists of N' precoding matrices a_p of size M'×R' for each rank R', where 1≤p≤N', 1≤R'≤M', and where the columns of each a_p are mutually orthogonal,

A = {a_1, . . . , a_N'},

[0047] and where codebook B for M<M' transmit antenna ports consists of N unitary precoding matrices b_i of size M×R for rank R transmission, where 1≤i≤N,

B = {b_1, . . . , b_N}.

[0048] Further, assume that b_i^{ω} is an M×S matrix consisting of the S columns in the set ω of matrix b_i. For example, b_i^{13} is an M×2 matrix consisting of columns 1 and 3 of matrix b_i. Also, assume that codebook B has the nested property, i.e. selected columns in a precoding matrix b_i of codebook B of rank R form a precoding matrix b_j from the codebook B of rank R-1.
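The column-subset notation b_i^{ω} maps directly to fancy indexing; a tiny helper (the function name is ours, not the patent's) makes the 1-based convention of the text explicit:

```python
import numpy as np

def column_subset(b, omega):
    """b^{omega}: the M x S matrix of the columns of b listed in omega,
    using the 1-based indices of the text (so (1, 3) means columns 1 and 3)."""
    return b[:, [j - 1 for j in omega]]

b = np.arange(12).reshape(3, 4)           # stand-in 3x4 matrix
sub = column_subset(b, (1, 3))            # b^{13}: an M x 2 matrix
```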
Assume further that codebook B has the constant modulus and the constrained alphabet property, i.e. each element of an arbitrary matrix b_i has the same magnitude and is taken from a finite signal constellation Ω.

[0049] With the notation above, a codebook for an LTE-A communication system is a codebook A with M'=8, and a codebook for an LTE communication system is a codebook B with M=4 and with a maximum number of precoding matrices N=16. The present invention solves at least the mentioned problems by creating the eight antenna port codebook A for a given rank R, consisting of precoding matrices with orthogonal columns, in such a way that each precoding matrix in this codebook A contains at least two sub-matrices based on precoding matrices obtained from the four antenna port codebook B. The precoding matrices belonging to codebook B can be multiplied with the same or different complex matrices or scalars, and the rows or columns of the two precoding matrices can also be permuted. The two sub-matrices can be based on two different precoding matrices in codebook B, but can also be based on a single precoding matrix in codebook B.

[0050] Note that the present invention provides codebooks A for all ranks R between one and the maximum number of transmit antenna ports M' for a given wireless communication system. Also, there may be reasons to select different codebooks for rank one or two, respectively, since these ranks are also used for multi-user transmission, which is a different mode where more than one user receives information simultaneously. In other words, in this mode, the transmitted streams are intended for different users or receivers. Therefore, this mode may have special codebook design requirements, but since only the lower rank codebooks are used in this mode, the higher rank codebooks could still be generated by the method according to the present invention. Hence, this invention considers methods for codebook generation for a given rank R'.
[0051] An embodiment of codebook generation according to the present invention, without permutations or multiplications, is given in equation 1,

a_p = [ b_i
        b_k ].    (1)

[0052] The receiver can select two suitable precoding matrices b_i and b_k, from the set of precoding matrices belonging to codebook B, to be used for the sub-matrices in equation 1, and feed back this information to the transmitter. Since codebook B has maximum rank four in LTE, this construction of an eight antenna port codebook A with two sub-matrices works up to rank four. It should also be noted, as mentioned above, that the selected precoding matrices b_i and b_k may be the same precoding matrix, i.e. i=k.

[0053] The resulting codebook A constructed in this way will inherit the nested, constant modulus and constrained alphabet properties of codebook B for all transmission ranks. Any matrix operation known to a person skilled in the art, such as row permutation, column permutation, inversion, transposition or Hermitian transposition of the matrices of codebook A, is also possible, and part of the present invention.

[0054] Furthermore, the size of codebook A is the size of codebook B squared if all combinations of b_i and b_k are allowed. If a smaller codebook A is desirable, precoding matrices can be removed from codebook B before constructing codebook A, so that codebook A is based on a subset of the precoding matrices belonging to codebook B.

[0055] Another benefit of the present invention is that the receiver can reuse computation algorithms from codebook B, since codebook A consists of an aggregation of at least two precoding matrices from codebook B.

[0056] According to another embodiment of the invention, each precoding matrix from the smaller codebook B can be multiplied with a complex scalar. These scalars can be denoted β_i and β_k, and an example of this embodiment with scalar multiplication is shown in equation 2,

a_p = [ β_i b_i
        β_k b_k ].    (2)

[0057] In another embodiment, a precoding matrix a_p of lower rank is obtained by removing columns from the two sub-matrices in equation 2. For instance, a rank two precoding matrix can be obtained as shown in equation 3,

a_p = [ β_i b_i^{13}
        β_k b_k^{12} ],    (3)

[0058] where columns one and three are used from the upper precoding matrix (having index i) and columns one and two from the lower precoding matrix (having index k). The selection of columns to be removed may depend on the precoding matrix selected, i.e. on the indices i and k in equation 3.

[0059] According to yet another embodiment of the invention, each precoding matrix from the smaller codebook B can be multiplied with a complex matrix. These matrices, which may be diagonal matrices, can be denoted Γ_i and Γ_k, and an example of this embodiment with matrix multiplication is shown in equation 4,

a_p = [ Γ_i b_i
        Γ_k b_k ].    (4)

[0060] And a different embodiment is shown in equation 5,

a_p = [ b_i Γ_i
        b_k Γ_k ].    (5)

[0061] For generation of a codebook A with rank higher than four, we need to extend the precoding matrices of codebook A to more than four columns. According to one embodiment of the invention, at least four precoding matrices from codebook B are used as four sub-matrices in one precoding matrix a_p in codebook A, i.e. the precoding matrix a_p in codebook A includes precoding matrices from codebook B as shown in equation 6,

a_p = [ β_i b_i   β_j b_j
        β_m b_m   β_n b_n ].    (6)

[0062] Since a_p must have orthogonal columns, some restrictions on the choices of the indices i, j, m and n are required. In the following, other embodiments of the present invention for constructing codebooks A with rank higher than 4, based on the general design in equation 6, are presented.
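The simplest construction, equation 1, already preserves column orthogonality: stacking two precoders with orthonormal columns gives a_p with a_p'a_p = b_i'b_i + b_k'b_k = 2I. A quick NumPy check with random stand-in precoders (not LTE matrices):

```python
import numpy as np

def random_orthonormal(M, R, seed):
    """Random M x R matrix with orthonormal columns (stand-in codebook entry)."""
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((M, R)) + 1j * rng.standard_normal((M, R))
    return np.linalg.qr(A)[0]

b_i = random_orthonormal(4, 2, 10)        # two four-port, rank-two precoders
b_k = random_orthonormal(4, 2, 11)

a_p = np.vstack([b_i, b_k])               # equation 1: an eight-port precoder
```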
[0063] Yet other embodiments are shown in equations 7 and 8,

a_p = [ Γ_i b_i   Γ_j b_j
        Γ_m b_m   Γ_n b_n ],    (7)

a_p = [ b_i Γ_i   b_j Γ_j
        b_m Γ_m   b_n Γ_n ],    (8)

[0064] where the four sub-matrices have been multiplied from the left (equation 7) or from the right (equation 8) with matrices Γ_i, Γ_j, Γ_m, Γ_n, respectively. These matrices may be diagonal matrices.

[0065] To save feedback overhead compared to the designs in equations 6-8, and to take into account the fact that the columns of a_p are orthogonal, the precoding matrices for a rank eight codebook A are composed of only two precoding matrices from codebook B for rank four, but where the columns have been made orthogonal by multiplying the bottom right sub-matrix with the scalar -1, i.e. for this sub-matrix the scalar is β=-1, which is shown in equation 9 below,

a_p = [ b_i    b_i
        b_k   -b_k ].    (9)

[0066] With this construction, only two matrices need to be indexed, and the feedback from the receiver is halved compared to the design in equation 6.

[0067] The overhead with the design in equation 9 can be further reduced by selecting a single precoding matrix from the four antenna port codebook B, and applying the scalar β=-1 multiplication on one of the sub-matrices in equation 6. The precoding matrix for rank eight in codebook A will therefore be composed of only one precoding matrix from codebook B. Equation 10 shows this embodiment,

a_p = [ b_i    b_i
        b_i   -b_i ].    (10)

[0068] According to the embodiments above, the full rank eight precoders in codebook A are generated, and to form lower rank precoders, columns need to be removed from the full rank precoder in such a way that the nested property is maintained.
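The -1 scalar in equation 9 is exactly what keeps all eight columns orthogonal: the cross terms b_i'b_i - b_k'b_k cancel when b_i and b_k are unitary. A sketch with random unitary stand-ins (not LTE matrices):

```python
import numpy as np

def stack_rank8(b_i, b_k):
    """Equation 9: build an 8x8 precoder from two 4x4 unitary matrices;
    the sign flip on the bottom-right block cancels the cross terms."""
    return np.vstack([np.hstack([b_i, b_i]),
                      np.hstack([b_k, -b_k])])

rng = np.random.default_rng(4)
def rand_unitary(M):
    A = rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M))
    return np.linalg.qr(A)[0]

b_i, b_k = rand_unitary(4), rand_unitary(4)
a_p = stack_rank8(b_i, b_k)               # a_p' a_p = 2 I, so columns are orthogonal
```

Calling stack_rank8(b_i, b_i) gives the single-matrix construction of equation 10.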
One way to remove columns from the rank eight codebook is to remove columns from the right and keep the left half of the matrix intact, which is shown in equation 11 for the rank six case,

a_p = [ b_i    b_i^{12}
        b_k   -b_k^{12} ].    (11)

[0069] In the embodiment in equation 11, the left four columns are unchanged for all ranks above four. Another alternative is to remove columns from the right in each sub-matrix, as shown in equation 12,

a_p = [ b_i^{123}    b_i^{123}
        b_k^{123}   -b_k^{123} ].    (12)

[0070] A third alternative to reduce the rank and at the same time maintain the nested property is to follow the column removal according to the LTE codebook B. For instance, in LTE, the rank four precoding matrix uses all columns {1234} of the precoding matrix. The rank three precoding matrix for a given precoding matrix index may select columns {124} from the rank four precoding matrix, and may select columns {123} for another index. To obtain a rank seven precoding matrix, the third column could be removed in a first precoding matrix and the fourth column in a second precoding matrix. An example of this case is shown in equation 13,

a_p = [ b_i    b_i^{124}
        b_k   -b_k^{123} ].    (13)

[0071] In yet another embodiment of the present invention, a four bit rank one codebook for LTE-A is obtained from a four bit rank one codebook for LTE. The LTE rank one codebook is given by

B = {b_1^{(1)}, . . . , b_N^{(1)}},    (14)

[0072] and four precoding vectors from the codebook in equation 14 are selected according to a predefined selection rule, for example by selecting vectors 1, 2, 3 and 4 such that

B̃ = {b_1^{(1)}, b_2^{(1)}, b_3^{(1)}, b_4^{(1)}}.    (15)

[0073] The 16-element LTE-A rank one codebook is then obtained by stacking codebook vectors from all possible pairwise combinations of the codebook B̃ (writing [x; y] for x stacked on top of y) as

A = { [b_1^{(1)}; b_1^{(1)}], [b_1^{(1)}; b_2^{(1)}], . . . , [b_4^{(1)}; b_4^{(1)}] }.    (16)

[0074] It should be noted that to obtain the LTE-A codebook according to this embodiment, it is not necessary to first generate the rank eight codebook and then remove columns.

[0075] In yet another embodiment of the invention, a four bit rank two codebook for LTE-A is obtained from a four bit rank two codebook for LTE. The LTE rank two codebook is given by

B = {b_1^{(14)}, . . . , b_N^{(12)}},    (17)

[0076] where the superscripts denote the columns selected for each codebook index, as in paragraph [0070]. Four precoding matrices from the codebook in equation 17 are selected according to some predefined selection rule; in this example, precoding matrices 1, 2, 3 and 4 are selected such that

B̃ = {b_1^{(14)}, b_2^{(12)}, b_3^{(13)}, b_4^{(12)}}.    (18)

[0077] The 16-element LTE-A codebook is then obtained by stacking codebook matrices from all possible pairwise combinations of the codebook B̃ such that

A_1 = { [b_1^{(14)}; b_1^{(14)}], [b_1^{(14)}; b_2^{(12)}], . . . , [b_4^{(12)}; b_4^{(12)}] }.    (19)

[0078] It should also be noted that to obtain an LTE-A codebook according to this embodiment, it is not necessary to first generate the rank 8 codebook and then remove columns.

[0079] In yet another embodiment of the present invention, an eight bit rank sixteen codebook for a system with sixteen transmit antennas is obtained from a two bit rank four codebook for a system with four antennas. A system with sixteen transmit antennas can e.g.
appear in a COMP transmission where four base stations, each having four transmit antennas, are cooperating in a transmission towards the same receiver. The codebook B for four antennas is given by

B = {b_1, . . . , b_4},    (20)

[0080] and the codebook A for sixteen transmit antennas is obtained according to equation 21,

a_p = [ b_a    b_a    b_a    b_a
        b_c   -b_c    b_c   -b_c
        b_d    b_d   -b_d   -b_d
        b_f   -b_f   -b_f    b_f ],    (21)

[0081] where the precoding matrices b_a, b_c, b_d, b_f are selected from codebook B. It is understood by the skilled person that this precoding matrix has orthogonal columns, since the pattern of block signs forms a Hadamard matrix. It should be understood that if a rank lower than sixteen is desired for the generated precoding matrix, columns can be removed. It is also understood that precoders with higher dimensions can be generated with the method according to the present invention, and that the present invention is a general method for generating codebooks within the scope of the appended claims.

[0082] Performance Results

[0083] To evaluate the performance of codebooks generated according to the present invention, simulations have been performed in a 5 MHz OFDM wireless communication system resembling the LTE-Advanced communication system, with eight antenna ports for transmission at the base station and eight receiver antennas at the UE. For each group consisting of 24 subcarriers, a precoder was selected from a codebook of 16 precoders. The selection was performed by maximizing the sum capacity over the streams, where the number of streams equals the rank of the precoding matrix.

[0084] A Typical Urban channel was assumed, and each of the channel elements in the 8×8 MIMO matrix was assumed to be independently fading. The average receive SNR per receive antenna was 10 dB, and an LMMSE receiver was used.
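The sixteen-antenna construction of equation 21 generalizes equation 9: the 4×4 pattern of block signs is a Hadamard matrix, whose orthogonal sign columns guarantee a_p'a_p = 4I whenever the four building blocks are unitary. An illustrative check with random unitary stand-ins (the specific Hadamard row order here is one consistent choice, not necessarily the patent's exact one):

```python
import numpy as np

# Block sign pattern for equation 21: a 4x4 Hadamard matrix (orthogonal sign columns)
H4 = np.array([[1,  1,  1,  1],
               [1, -1,  1, -1],
               [1,  1, -1, -1],
               [1, -1, -1,  1]])

rng = np.random.default_rng(5)
def rand_unitary(M):
    A = rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M))
    return np.linalg.qr(A)[0]

blocks = [rand_unitary(4) for _ in range(4)]   # stand-ins for b_a, b_c, b_d, b_f

# Block row r is the r-th building block scaled by the signs in row r of H4
a_p = np.vstack([np.hstack([H4[r, c] * blocks[r] for c in range(4)])
                 for r in range(4)])
```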
[0085] In the performance analysis, the 16-element codebook according to the present invention was generated by selecting the first four precoding matrices of a given rank from the LTE codebook, and creating the eight antenna port codebook by selecting two precoding matrices and using the stacking in equation 19; its performance was compared to a codebook with 16 precoding matrices/vectors per rank according to a prior art solution. An exception to the mentioned setup was the rank eight case, where a codebook with four precoding matrices was used to enable comparison with the prior art solution (where only four matrices were used for rank eight).

[0086] In FIGS. 4-7, the dashed lines represent the performance for codebooks according to the invention, while the solid lines represent the performance for codebooks according to prior art. It can be seen from these figures that the difference in performance between the two groups of codebooks is negligible. Hence, with a codebook according to the present invention, the same performance can be obtained as for codebooks according to prior art, but the present invention provides, as one of its benefits, a much simpler method for the generation of codebooks, which avoids Kronecker multiplication of complex-valued matrices. It is thus more attractive from an implementation point of view. Other advantages of the present invention have been described in this disclosure.

[0087] Furthermore, as also understood by the person skilled in the art, a method for generating a codebook, and for using a codebook generated according to the present invention, may be implemented in a computer program having code means which, when run in a computer, causes the computer to execute the steps of the method. The computer program is included in a computer readable medium of a computer program product.
The computer-readable medium may consist of essentially any memory, such as a ROM (Read-Only Memory), a PROM (Programmable Read-Only Memory), an EPROM (Erasable PROM), a Flash memory, an EEPROM (Electrically Erasable PROM), or a hard disk drive.
Optimization problem and GeoGebra

Find the maximum area of a parallelogram drawn in the region enclosed by the curves y = 4 - x^2 and y = x^2 + 2x.

We will use GeoGebra! Let's see if we can do this.

1) Type in f(x) = 4 - x^2
2) Type in g(x) = x^2 + 2x
3) Use the intersection tool on the two functions; the points A and B will be created.
4) Relabel B to C.
5) Create a slider called b; set the interval to -2 to 1 with a step size of 0.001. Type (b, f(b)); point B will be created on f(x).
6) Use the line tool to create a line from A to B.
7) Use the parallel line tool to create a line that is parallel to AB and passes through C.
8) Get the intersection of this second line and g(x) using the intersection tool. Points D and E will be created. Hide E.
9) Create a line through BC.
10) Draw a line through D that is parallel to BC.

Notice that we now have a generic parallelogram drawn between the two curves. This is all we need!

11) Hide the lines as best as you can. Create a polygon that uses A, B, C and D as its vertices.
12) Use the slider to get the maximum area. It is not difficult to get 6.75.
13) You should have something close to the drawing shown below.
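The GeoGebra construction above can be checked numerically. The sketch below replays it in plain Python: A = (-2, 0) and C = (1, 3) are the curve intersections, B slides along f, and D is the second intersection of the line through C parallel to AB with g (which works out to x = -(b + 1)). Scanning the slider range reproduces the maximum area of 6.75 at b = -0.5.

```python
# Numeric check of the GeoGebra construction; the slider variable b and
# the scan step are mine, mirroring steps 5 and 12 above.
def f(x): return 4 - x**2
def g(x): return x**2 + 2*x

A, C = (-2.0, 0.0), (1.0, 3.0)   # solutions of f(x) = g(x)

def area(b):
    B = (b, f(b))
    # Solving g(x) = 3 + (2 - b)(x - 1) gives x = 1 (point C itself)
    # or x = -(b + 1), which is point D.
    D = (-(b + 1), g(-(b + 1)))
    # Shoelace formula for the quadrilateral A -> B -> C -> D
    pts = [A, B, C, D]
    s = sum(pts[i][0] * pts[(i + 1) % 4][1] - pts[(i + 1) % 4][0] * pts[i][1]
            for i in range(4))
    return abs(s) / 2

best = max(area(-2 + 3 * k / 3000) for k in range(3001))
print(round(best, 4))  # 6.75
```

Symbolically the area comes out as 3(2 - b - b^2), which peaks at b = -1/2 with value 27/4 = 6.75, matching what the slider gives.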
hi jacks,

hmmm, this looks like it could get complicated. Maybe it will simplify as you go along. I won't get a chance to do this until much later, when I'll post again. Here's the way I'll start:

(i) Find the line of intersection of the planes, call it L.
(ii) Then pick a line perpendicular to L in each of the planes.
(iii) Introduce the 45 degree constraint to find the missing parameter for the second plane.

Write x - 4y + z - 9 = 0 in the form r = a + lambda b + mu c, and write the other plane in the form r = d + eta e, where a, b, c, d and e need to be found. Get the line of intersection. Pick the two lines. Set the angle to 45. Eliminate any remaining unknowns.

Maybe let (f, g, h) be a point on the line of intersection. Get the equation of the target plane using the three points you now have. Get the normal to plane 2. Put the two normals at 45 to determine plane 2.

I think this is even quicker: let plane 2 be Lx + my + nz = constant. Then the vectors (L, m, n) and (1, -4, 1) are the normals to the planes and must be at 45 to each other. So use the scalar product, and the known points, to find L, m, n.

You cannot teach a man anything; you can only help him find it within himself..........Galileo Galilei
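The scalar-product step suggested above is easy to check numerically. In the sketch below, two plane normals n1 and n2 meet at 45 degrees when n1 . n2 = |n1| |n2| cos 45; the trial normal (1, m, 1) is my own illustration for the plane x - 4y + z = 9, not data from the original problem.

```python
import math

# Angle between plane normals via the scalar product; the trial normal
# (1, m, 1) is an illustrative assumption, not the original problem's data.
n1 = (1, -4, 1)

def angle_deg(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return math.degrees(math.acos(dot / (math.hypot(*u) * math.hypot(*v))))

# (n1 . n2)^2 = cos^2(45) |n1|^2 |n2|^2 with n2 = (1, m, 1) expands to
# (2 - 4m)^2 = 9 (2 + m^2), i.e. 7m^2 - 16m - 14 = 0.
m = (16 - math.sqrt(16**2 + 4 * 7 * 14)) / (2 * 7)  # root giving 45, not 135
print(round(angle_deg(n1, (1, m, 1)), 4))  # 45.0
```

Squaring the constraint admits both 45 and 135 degree solutions, so the sign of the dot product picks out which root of the quadratic you want.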
Bake Me A Wish! Cakes - Cookies & Clogs

Bake Me a Wish! is such a great way to celebrate my blogoversary. There were so many delicious cakes to choose from, including the delectable Tiramisu, Carrot Cake, and Pineapple Coconut, but I decided on the Singular Sensation Party-Pack for the purpose of a well-rounded review.

The cakes came quickly, exactly on the date expected. Inside the box was a branded styrofoam box with an ice pack. The decorative box within was a bit difficult to get out as it was sitting pretty tightly. When I opened up the box, the cute little cakes were individually wrapped and a card was inside. It may be a little corny, but I did write out the message myself – that was just so you know what it would look like if you sent it as a gift (I’m really not a narcissist, really). I thought the way it was presented was very nice, don’t you?

Included in the Party-Pack are the following four flavors: Espresso Brownie, Cookies & Cream, Double Fudge Crunch and an Apple Crumb Tart. These cakes are only 4″ in diameter but they are VERY filling. When my family and I sat down to eat, the three of us could only eat 1-1/2 cakes. The rest I brought to a friend’s house (she loves sweets like me) and we ate it there.

Three of the four cakes were brownie variations. The texture was nice and moist. Each ranked differently, so here is a breakdown of each cake:

• Espresso Brownie – By far my favorite. The coffee flavor was not very strong but the flavor was so rich. This cake just melted in my mouth. It would be wonderful accompanied by a large glass of milk or a nice warm cup of coffee. According to my friend, she would definitely buy this cake again.
• Cookies & Cream – This cake was good but nothing particularly stood out. It was very filling and I still have the other half waiting for the next time I get a case of ‘the munchies’.
• Double Fudge Crunch – This one was a bit different. The brownie part was good but not too flavorful.
At first, the crumb topping didn’t seem to mesh well. After eating it a bit more, I think this would appeal more to those who want dessert but are not particularly keen on overpowering sweetness.
• Apple Crumb Tart – The apple filling was delicious. I only wish I could have eaten more of it. Since my husband is not really a cake eater, I let him have dibs on this tart. My daughter and I got a small portion to taste, though. The verdict? He loved it. The outside and topping could have been a bit more moist, but we all enjoyed this very much.

Now that I know the quality is far superior to many other cakes I have tried, I may just have to get another. If you are hoping to give a special gift to a special person or just want a great way to celebrate whatever occasion is at hand, you might want to check here first!

► Giveaway closed, winners announced here!

WIN IT! One reader will receive any cake of their choice. To enter (mandatory): Visit Bake Me A Wish and tell me which cake you’d most like to try. For additional entries, leave a comment for each you do: “It’s a Blogoversary #Giveaway Event @cookiesANDclogs! #Win your choice of cake from Bake Me A Wish, Ends 5/31 http://bit.ly/ip200Y”

Giveaway ends 5/31/11 at 11:59 PST. Open to US residents only. Winner has 48 hours to respond before a new winner is chosen.

Disclosure: I was provided with the above mentioned product(s) at no charge to facilitate this review, which contains 100% my honest opinion.
{"url":"http://www.cookiesandclogs.com/blogoversary-giveaway-7-bake-me-a-wish-cakes/","timestamp":"2014-04-19T02:03:21Z","content_type":null,"content_length":"600573","record_id":"<urn:uuid:bdf83c54-bf7b-46b2-abac-1cf940a6f59d>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00457-ip-10-147-4-33.ec2.internal.warc.gz"}
Analysis of Moving Object Imaging from Compressively Sensed SAR Data in the Presence of Dictionary Mismatch

International Journal of Antennas and Propagation, Volume 2013 (2013), Article ID 142602, 16 pages

Research Article

^1Department of Electrical and Computer Engineering, Ryerson University, Toronto, ON, Canada M5B 2K3
^2Department of Electrical Engineering, COMSATS Institute of IT, Wah Campus, Wah 47040, Pakistan

Received 4 April 2013; Revised 4 October 2013; Accepted 6 October 2013

Academic Editor: Krzysztof Kulpa

Copyright © 2013 Ahmed Shaharyar Khwaja et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

We present compressed sensing (CS) synthetic aperture radar (SAR) moving target imaging in the presence of dictionary mismatch. Unlike existing work on CS SAR moving target imaging, we analyze the sensitivity of the imaging process to the mismatch and present an iterative scheme to cope with it. We analyze and investigate the effects of mismatch in range and azimuth positions, as well as range velocity. The analysis reveals that the reconstruction error increases with the mismatch and that range velocity mismatch is the major cause of error. Instead of the traditional Laplacian prior (LP), we use a Gaussian-Bernoulli prior (GBP) for CS SAR imaging under mismatch. The results show that the performance of the GBP is much better than that of the LP. We also provide Cramer-Rao bounds (CRB) that demonstrate theoretically the lower mean square error between actual and reconstructed results obtained by using the GBP. We show that a combination of an upsampled dictionary and the GBP for reconstruction can deal with position mismatch effectively.
We further present an iterative scheme to deal with the range velocity mismatch. Numerical and simulation examples demonstrate the accuracy of the analysis as well as the effectiveness of the proposed upsampling and iterative scheme.
1. Introduction
According to compressed sensing (CS) theory [1–3], randomly undersampled signals can be reconstructed using linear programming [1], orthogonal matching pursuit (OMP) [4], and Bayesian methods [5–7]. The advantages of CS are hardware simplification [8]; reduction in equipment cost, data size, and acquisition time [9, 10]; and deblurring and resolution enhancement from incomplete measurements [11]. Compressed sensing for synthetic aperture radar (SAR) is an active area of research in remote sensing. CS-based reconstruction can influence the design of high-resolution SAR systems, as these systems face hardware design problems and require significant processing [12]. CS has been applied to imaging of static objects in through-the-wall SAR imaging [13–15], tomographic SAR imaging [16–18], and SAR image formation with reduced data [19], taking advantage of the fact that the observed scenes are sparse. Static scenes, however, may not always be sparse. Scenes containing a few strong-intensity moving scatterers against a weak stationary background are inherently sparse and thus present an opportunity for applying CS. These moving targets suffer from position displacement and defocusing due to motion [20]. CS can help in reducing the acquired data size and enables simultaneous motion parameter estimation and imaging from the reduced data. Sparsity can be further enhanced using clutter cancelation, where the static parts of the observed scene are suppressed [21]. Compressed sensing for SAR moving object imaging has thus become an active area of research.
References [22, 23] apply CS to moving target parameter estimation by defining a dictionary based on the response of moving objects for different motion parameters. Both of these references use clutter cancelation to enhance sparsity. Reference [24] applies distributed CS to along-track interferometric SAR data for moving target imaging and shows that distributed CS can offer better performance with fewer samples than traditional CS. Reference [25] uses CS for moving target parameter estimation with mono- and multistatic SAR configurations and simulated data. These references show that CS can achieve both imaging of moving objects and moving object parameter estimation when SAR data are sampled at a rate below the traditional Nyquist rate. Compressed sensing reconstruction algorithms use a dictionary in which the reconstructed signal is assumed to be sparse. However, the dictionary in which the signal is actually sparse may be different, and the resulting dictionary mismatch causes a performance degradation [26, 27]. In order to apply CS in practice, it is necessary to study the reconstruction performance degradation in the presence of dictionary mismatch. Reference [26] shows that dictionary mismatch can be seen as equivalent to multiplicative noise, and that the reconstruction error increases linearly with the mismatch. Reference [27] considers the effect of dictionary mismatch on CS reconstruction and shows that, in the case of a Fourier dictionary, reconstruction performance degrades considerably when a mismatch exists; for this reason, it recommends examining the effects of mismatch on radar imaging. Reference [15] has shown performance degradation by means of imaging examples for static targets in the presence of mismatch in position and wave propagation velocity.
The authors in [15] also state that they are extending the initial results presented in [28] for dealing with position mismatch in through-the-wall imaging. To the best of our knowledge, dictionary mismatch analysis has not been carried out theoretically for CS moving target SAR imaging in the presence of position and range velocity mismatch. A summary of the main features of the existing references is given in Table 1. It shows that, in the existing literature, a theoretical analysis of the effects of dictionary mismatch for moving target CS SAR imaging has not been carried out; it therefore remains an open problem. It further shows that a prior other than the Laplacian prior (LP), for example, the Gaussian-Bernoulli prior (GBP), has not been used for CS moving target imaging. Similarly, a theoretical analysis showing the advantage of such a prior in dealing with dictionary mismatch is also missing. In [29], we partially studied this problem and its effects for SAR and inverse SAR, and showed that dictionary generation using upsampled parameters is required to deal with errors arising from mismatch in positions and range velocity. The emphasis of this paper is to show the performance degradation in the case of a target moving in the range direction. The dictionary mismatch arising from discretization and dictionary size considerations causes performance degradation in terms of the mean square error (MSE) between actual and reconstructed results, especially when there is a range velocity mismatch. We examine the reasons for this degradation and also show, theoretically and experimentally, that using the GBP for CS reconstruction instead of the traditionally used LP can compensate for some amount of mismatch. The motivation for using a different prior is to exploit extra information to improve reconstructed image quality, as shown in [30]. We propose to deal with CS SAR moving target imaging in the presence of dictionary mismatch due to positions and range velocity.
The main contributions of this paper are as follows.
(1) We analyze dictionary mismatch and its effects theoretically, show the MSE calculated from simulated SAR data for different types of mismatch in range and azimuth pixels as well as range velocity, and give parameter resolution limits for maintaining a reasonable level of reconstruction accuracy. We show that CS SAR moving target imaging is very sensitive to range velocity mismatch.
(2) We analyze the problem by means of Cramer-Rao bounds (CRB) and show theoretically that reconstruction with a Gaussian-Bernoulli prior (CSGBP) instead of the traditional Laplacian prior (CSLP) can deal with some mismatch effectively.
(3) We present simulation results using CSGBP reconstruction and show that its use can lead to lower MSE, especially when the dictionary mismatch is small. This can be used to deal with position mismatch and to reduce the upsampling in positions that is otherwise required to counter mismatch effects.
(4) We also propose to reconstruct in the presence of range velocity mismatch using an iterative scheme, where dictionaries with different range velocities are created efficiently and the contrast of the reconstructed result is maximized.
We would also like to point out that we deal specifically with the case of pulsed SAR. Any extension of the dictionary mismatch effects and parameter resolution calculations to other types of SAR will need to take the difference in imaging mechanism into account; for example, in continuous-wave SAR, range velocity creates a shift in the range direction that is absent in pulsed SAR. Results for mismatch analysis and for resolutions in range position and range velocity will therefore need to account for this additional shift. This paper is organized as follows. Section 2 presents the data model and the formulation of the moving target velocity estimation problem for CS SAR.
Section 3 analyzes the effects of the different kinds of dictionary mismatch, that is, in range and azimuth positions and in range velocity, on CS SAR moving target imaging. Section 4 presents numerical and imaging examples to show the effects of dictionary mismatch in terms of MSE, as well as the accuracy of the analysis and the effectiveness of the proposed method. Conclusions are given in Section 5.
2. System Model and Problem Formulation
In this paper, denotes a scalar, denotes a vector, and denotes a matrix. We use and to denote the conjugate transpose and transpose of , respectively. The same notation is used for Greek characters; that is, denotes a scalar, denotes a vector, and denotes a matrix, and we use and to denote the conjugate transpose and transpose of , respectively. The function converts a vector of size into a diagonal matrix of size , and represents the determinant of the matrix . Synthetic aperture radar consists of an antenna mounted on a moving platform [31]. A pulsed SAR sends an electromagnetic pulse at a carrier frequency and a chirp rate ; the pulse length is denoted by . The signals are reflected from each scatterer in the observed scene. Let be a sparse vector of size that contains the reflectivities for each point in the scene having different motion parameters, and let be an matrix in which the signal is actually sparse, containing the response of moving targets for every point in the scene with each considered motion parameter. Let be a sampling matrix of size , where . This represents the case where the number of measurements is less than the required sampling rate, due to data loss or to intentionally reduced data acquisition to simplify the acquisition hardware [10], such as the analog-to-digital converter. With different sampling configurations, one can get reasonable image reconstruction [13]. In this paper, we use undersampling in the range direction as the measurement operator.
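As a concrete sketch of such a measurement operator (all sizes below are illustrative choices of ours, not the paper's configuration), random range undersampling can be written as a row-selection matrix:

```python
import numpy as np

rng = np.random.default_rng(3)

# Sketch of a range-undersampling measurement operator: keep a random
# subset of the full range samples. Sizes are illustrative only.
n_full, n_keep = 256, 64                    # full and retained sample counts
rows = np.sort(rng.choice(n_full, size=n_keep, replace=False))
Theta = np.eye(n_full)[rows]                # selection (sampling) matrix

x = rng.standard_normal(n_full)             # one full range line
y = Theta @ x                               # undersampled measurements
print(y.shape)                              # (64,)
```

In practice one would apply the selection directly as `x[rows]` instead of forming the dense matrix; the explicit matrix is shown only to match the formulation in the text.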
The raw data signal model can be written in one-dimensional form as in [29], where denotes measurement noise and contains the response of each moving point in 1D form. This response for the th moving point having the th range velocity is given in [29]. The size of is . is generated for an initial velocity and a final velocity , for a total of range velocities. The dictionary element corresponding to a velocity is defined accordingly, and the final dictionary is stored in an matrix. Due to undersampling, the problem of recovering from becomes underdetermined. We can solve this problem by including a priori information in the solution; for example, select a solution such that the number of nonzero coefficients is the smallest. The number of nonzero coefficients is denoted by the ℓ0 norm. However, this minimization problem is nonconvex, which means that finding a global solution is difficult or not guaranteed. In addition, it is computationally difficult to solve, as it requires a search over all possible combinations of the columns of . To deal with these issues, we use ℓ1 norm minimization. This minimization is a convex approximation of the ℓ0 norm minimization if a property known as the restricted isometry property (RIP) is satisfied. This property essentially means that the columns of the matrix are sufficiently decorrelated from one another. In order to obtain a solution based on ℓ1 norm minimization, we use the Laplacian prior (LP) [32]. If the noise is Gaussian with variance , the solution follows; thus, by using the LP, we include the ℓ1 norm minimization in the solution. The parameter weights the a priori sparse information. Equation (15) can be solved using different recovery methods, for example, linear programming and OMP.
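As a toy illustration of such sparse recovery from undersampled data, OMP can be sketched as follows. A generic random matrix with unit-norm columns stands in for the product of the sampling matrix and the SAR dictionary; the sizes, sparsity, and amplitudes are our own illustrative choices, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Random matrix with unit-norm columns as a stand-in for the combined
# sampling matrix and dictionary (illustrative only).
n, m, k = 40, 100, 3                 # measurements, dictionary size, sparsity
A = rng.standard_normal((n, m))
A /= np.linalg.norm(A, axis=0)

x_true = np.zeros(m)
x_true[rng.choice(m, size=k, replace=False)] = [3.0, -2.0, 1.5]
y = A @ x_true                       # noise-free undersampled measurements

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily pick k columns of A."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = coef
    return x_hat

x_hat = omp(A, y, k)
print(np.allclose(x_hat, x_true, atol=1e-8))
```

In the noise-free case, once OMP identifies the correct support the final least-squares step recovers the amplitudes exactly.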
The reconstructed result is of size and can be written as where each entry of shows the reconstructed reflectivity for each point in the scene for one velocity value; for example, represents the reflectivity of a point at position having a velocity . The result can be rearranged into 2D matrices, each of size , to show the estimated reflectivities at different velocities for SAR. The matrices of size may also be summed to give a final focussed reconstructed result , where is a function that rearranges an input into a matrix of size . Dictionary mismatch can occur in the reconstruction process due to discretization of positions as well as range velocity; for example, instead of the actual position of the scatterer and velocity , the basis has elements corresponding to and . Considering as the mismatched dictionary, (3) can be rewritten accordingly, and reconstruction using the mismatched dictionary causes the results to be decorrelated from the true ones. Therefore, the effects of dictionary mismatch are related to the correlation between the mismatched and the original dictionary. In the next section, we examine the effects of this correlation on the reconstruction. Furthermore, we present solutions for the recovery of , where is the actual dictionary: we present solutions for calculation using the GBP that can reduce position mismatch effects, and propose an iterative scheme for recovery in the presence of range velocity mismatch.
3. Analysis of CS Moving Target Imaging in the Presence of Dictionary Mismatch
3.1. Effects of Position Mismatch
We consider a chirp signal, as commonly used in imaging radars, and show the effects of position mismatch on reconstruction. The reconstruction in the presence of mismatch depends upon the correlation between the original and the mismatched dictionaries, as given by (19). Therefore, any form of mismatch will cause erroneous results due to a correlation loss.
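A quick numerical check of this correlation loss can be made by correlating displaced copies of a chirp. The chirp rate, duration, and sampling rate below are illustrative choices, not the paper's radar parameters.

```python
import numpy as np

# Correlation between a chirp and displaced copies of itself: the
# inner product (and hence the reconstructed amplitude) decays as the
# displacement between dictionary element and true signal grows.
fs, T, kr = 1e4, 1.0, 2e3                  # sample rate (Hz), duration (s), chirp rate (Hz/s)
t = np.arange(0.0, T, 1.0 / fs)

def chirp(tau):
    return np.exp(1j * np.pi * kr * (t - tau) ** 2)

def corr(tau):
    a, b = chirp(0.0), chirp(tau)
    return abs(np.vdot(a, b)) / (np.linalg.norm(a) * np.linalg.norm(b))

vals = [corr(tau) for tau in (0.0, 1e-4, 2e-4, 4e-4)]
print([round(v, 3) for v in vals])
```

The normalized correlation follows a sinc-like falloff in the displacement, consistent with the analytic treatment of the inner product given below.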
This can be seen by taking the inner product of two chirp signals with frequencies ranging from to . The signals are displaced with respect to each other by a duration and consist of samples with sampling time ; the correlation is given by (21). As the position mismatch increases, the correlation decreases; consequently, the amplitude of the reconstructed result is reduced. When the displacement is large enough, the correlation falls to zero: if the two chirp signals are displaced by with respect to each other, CS imaging will fail to reconstruct the correct position, an arbitrary element will be selected, and the CS reconstruction model will fail. Therefore, the smaller the distance between the dictionary elements, the smaller the mismatch and the better the reconstruction, at the expense of a larger dictionary size and a higher number of computations. In effect, by using an oversampled dictionary we can improve the reconstruction, and this oversampling should be more than twice the sampling frequency to reduce mismatch errors. The implication of this result is studied in the next section, where a moving target and the effects of mismatch in range and azimuth positions as well as range velocity on its reconstruction are considered.
3.2. Effects of Range Position, Azimuth Position, and Range Velocity Mismatch on Reconstruction of a Moving Target
First, we consider the equivalent static position of a moving point.
A moving point at an initial position of and having a velocity of can be equivalently seen as a static point with coordinates and , rotated by an angle [33]. Assuming that our dictionary is created with resolutions of , , and 1 m/s in range position, azimuth position, and range velocity, respectively, the mismatch effects on a moving target can be divided into three categories as follows.
(i) A subpixel mismatch in range position, represented as . This mismatch leads to an equivalent shift of in the range position and an equivalent shift of in the azimuth position.
(ii) A subpixel mismatch in azimuth position, represented as . This mismatch leads to an equivalent shift of in the range position and an equivalent shift of in the azimuth position.
(iii) A fraction of a m/s mismatch in range velocity, represented as . This mismatch leads to an equivalent shift of in the range position and an equivalent shift of in the azimuth position.
As an example, if a point in the acquired raw data is at position , moving with a velocity , and the dictionary contains elements with velocity , the reconstructed estimate will be a point at position instead of the true position . As is large, the effect on the azimuth position will be evident even when the range velocity mismatch is small. The mismatch effects are summarized in Table 2.
3.3. Effects on Reconstruction for a Single Point in the Presence of Range Position, Azimuth Position, and Range Velocity Mismatch
Based on the above discussion, the effects of mismatch on reflectivity reconstruction for a single element , where and are the pixel positions, can be summarized as follows.
(i) A mismatch of will cause a shift of in range position in the reconstructed result. The shift in azimuth position can be neglected, as it is small due to the presence of in the denominator. The result is a loss of amplitude.
(ii) A mismatch of will cause a shift of in azimuth position in the reconstructed result. The shift in range position can be neglected due to the presence of in the denominator. The result is a loss of amplitude.
(iii) A range velocity mismatch causes a large shift in azimuth from the true position, given as . The shift in the range position can be neglected, but the shift in azimuth position cannot, due to the presence of in the numerator, which is of the order of or higher. It can be further divided into two parts:
(1) an interpixel displacement, , where is the floor operation;
(2) an intrapixel displacement, , where is the modulo operation.
The reconstructed result suffers both a loss of amplitude and an azimuth position shift.
(iv) In order to avoid the loss in amplitude as well as the azimuth mispositioning of the reconstructed result, the dictionary can be created with higher parameter resolution. The dictionary resolutions in range and azimuth positions and in range velocity should be such that no mismatch leads to a misselection of elements. This can be achieved if the dictionary resolutions are less than half the pixel sizes, which ensures that the correct pixel position is selected. In the case of velocity, the shift in azimuth position caused by range velocity mismatch should be less than half the pixel size. As this shift is larger for a larger value of , we choose the farthest slant-range distance to get a conservative estimate. The limit given by (32) is also applicable for compensating intrapixel displacements due to velocity mismatch. Note that, due to the presence of in the denominator, is very small, which means that the dictionary needs to be created with very closely spaced velocity values.
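To get a feel for how fine the velocity grid must be, one can plug representative numbers into the half-pixel criterion, using the classical azimuth displacement of a range-moving target, shift ≈ R·v_r/v_p. All numbers below are hypothetical stand-ins of ours, not the paper's configuration.

```python
# Back-of-the-envelope check of the half-pixel criterion for the range
# velocity grid, assuming the classical azimuth displacement of a
# range-moving target: shift = R * v_r / v_p. Hypothetical values only.
R_max = 3000.0    # farthest slant range (m), for a conservative estimate
v_p = 150.0       # platform velocity (m/s)
d_az = 1.5        # azimuth pixel size (m)

# require R_max * dv / v_p < d_az / 2  =>  dv < v_p * d_az / (2 * R_max)
dv_max = v_p * d_az / (2.0 * R_max)
print(f"velocity grid spacing must be below {dv_max:.4f} m/s")
```

With these stand-in numbers the required spacing is a few hundredths of a m/s, which is consistent with the text's point that the velocity grid must be very closely spaced.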
When a moving scene consists of a number of points, the reconstructed result in the presence of mismatch contains three sinc functions that represent a loss in estimated amplitude due to the mismatch, whereas the second term in the delta function represents a pixel-level shift.
3.4. Using CSGBP to Improve Performance in the Presence of Mismatch
In order to avoid errors due to dictionary mismatch, the dictionary needs to be created with upsampled position and range velocity parameters. This high upsampling may not be feasible due to limited storage and computational complexity. We propose to reduce this upsampling requirement by using a different prior as well as an iterative scheme. The chosen prior is the GBP [6], where is the th element of moving with the th velocity. The main motivation for using this prior is to utilize a priori information about sparsity and signal strength for image reconstruction. can be assumed to be -sparse, represented by the probability of active elements in . The prior assumes that the probability of an active element, that is, of an entry of being nonzero, is given by , and these active elements are represented by a Gaussian distribution with mean and variance . The probability of an inactive element is given by . The solution to recover from under this prior can be obtained by rewriting (3), where . The th entry of is 1 if the corresponding entry in is 1. In this case, can be recovered from in two steps as follows.
(1) The first step is the solution to the following problem: where and is given below. For the sake of convenience, we define and the covariance matrix is given as .
The solution can be further written as follows.
(2) The solution obtained from the first step is used to recover an estimate of by a least squares solution. Furthermore, this model is suitable for man-made moving scatterers, as they may be represented as consisting of a coherent mean part, with the variation of reflectivities represented by an incoherent part, that is, a variance . In addition, the noise can be assumed to be zero-mean Gaussian with variance . This CSGBP model can be solved using the algorithms in [6] or [7]. In [6], the raw data are correlated with each column of the matrix , and the presence or absence of an element is decided by hypothesis testing. This testing is based on the assumption that the signal is distributed according to the GBP and the noise has a Gaussian distribution. In [7], an efficient method is proposed for finding a combination of active and inactive elements.
3.5. Analysis of CSGBP and CSLP Performance in the Presence of Dictionary Mismatch Using Cramer-Rao Bounds
To show theoretically the advantage gained by using the CSGBP reconstruction model given in (17) over the CSLP model in (7), the CRB of the vector estimated from the data vector is calculated as the inverse of the Fisher information matrix (FIM) . We consider to be the identity matrix in (3) for the sake of convenience. The FIM bounds the estimation error in the following form, where is also assumed to be an identity matrix for convenience. is decomposed into two parts [34], where represents the prior information matrix. Making use of the explanation given in [35] and a smooth approximation, the FIM is given by (49) for the case where CSLP is used, and by (50) when CSGBP is used. As (50) contains more information than (49), the FIM in (50) will be larger and hence the estimation error will be lower, which shows the improvement in performance.
In the case of a dictionary mismatch, using (18) and (19), (47) and (48) are modified accordingly, where represents the prior information matrix. When no dictionary mismatch is present, has a maximum value along the diagonal elements. In the case of mismatch, the diagonal elements of decrease; subsequently, decreases, leading to an increase in the estimation error. It can be inferred that, due to the prior information in (54), the increase of the estimation error in the presence of dictionary mismatch is smaller when CSGBP is used. This can be seen in Figure 1, where an identity matrix of size pixels is used as and is a mismatched basis that is decorrelated with in varying proportions, where is the degree of correlation; this measure can be seen as equivalent to the dictionary mismatch proportion. It can be seen that using the model given in (39) lowers the MSE, which can help in countering the effects of decorrelation arising from dictionary mismatch.
3.6. Dealing with Range Velocity Mismatch Using Iterative CSGBP
As outlined in the previous section, CSGBP can compensate for some mismatch, which can help in reducing upsampling requirements. However, it is still not possible to deal with range velocity mismatch using CSGBP alone. In general, CS SAR moving target imaging is very sensitive to range velocity mismatch. To avoid errors due to range velocity mismatch, the dictionary should be created with a very high resolution in range velocity; for typical SAR configurations, this resolution can be of the order of 0.01 m/s. Such a high upsampling requirement is not feasible due to memory limits and very high computation. In this section, we propose to compensate for velocity mismatch by creating the dictionary iteratively, with range velocities varying at each iteration.
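The idea of iteratively varying the assumed range velocity and keeping the most focused result can be mimicked in a toy one-dimensional model. This is our own stand-in, not the paper's SAR geometry: the target's azimuth history is modeled as a chirp whose rate depends (through a hypothetical mapping) on its range velocity, and compressing it with a mismatched velocity hypothesis smears the peak and lowers the contrast.

```python
import numpy as np

# Toy stand-in for the iterative velocity search: the azimuth signal of
# a moving target is a chirp whose rate depends on its range velocity
# (hypothetical mapping); focusing with the wrong velocity hypothesis
# smears the peak and lowers the image contrast.
t = np.linspace(-0.5, 0.5, 512)
rate = lambda v: 100.0 * (1.0 + v)               # hypothetical rate(velocity)
signal = np.exp(1j * np.pi * rate(0.30) * t**2)  # true velocity: 0.30 m/s

def contrast(img):
    p = np.abs(img) ** 2
    return p.std() / p.mean()        # one common contrast measure (std/mean)

def focus(v):
    ref = np.exp(-1j * np.pi * rate(v) * t**2)   # velocity-matched deramp
    return np.fft.fft(signal * ref)

# scan velocity hypotheses in small increments, keep the best contrast
candidates = np.arange(0.0, 0.601, 0.05)
best_v = max(candidates, key=lambda v: contrast(focus(v)))
print(round(float(best_v), 2))
```

The contrast peaks when the hypothesized velocity matches the true one, which is the stopping criterion the iterative scheme relies on.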
In order to reduce the computational time, we make use of the following approximation to (6): We can make use of this approximation to create from as follows: where . This allows us to create dictionaries with varying mismatch iteratively from already computed dictionaries. Using the approximation, we propose the following scheme to reconstruct the SAR image in the presence of dictionary mismatch.
(1) Create a dictionary with range and azimuth positions at a subpixel resolution. This resolution is chosen so as to meet the upsampling requirements given by (31) and (32). We chose an upsampling factor of 4 in position, which means that the maximum mismatch that can occur is 1/8 of the pixel size. This process is carried out only once.
(2) Carry out CSGBP reconstruction using the dictionary created in Step 1. Due to the upsampling chosen in the range and azimuth directions, and due to the fact that range velocity mismatch does not affect the range position, the result contains the correct range position as well as range velocity. There will be azimuth position displacements due to range velocity, which are compensated in the next steps.
(3) For each set of reconstructed points belonging to the same range velocity , regenerate new dictionary elements at the selected range positions using (58) and a velocity increment of .
(4) Repeat Step 3, incrementing the velocity in steps of , until the reconstructed image is judged to be of the best quality for the points. As a quality measure, the contrast of the reconstructed vector is calculated, where is the averaging operator.
(5) Repeat Steps 3 and 4 for each velocity in the dictionary where moving points were detected in Step 2.
4. Numerical and Imaging Results
This section presents numerical and imaging results. We give examples with the MSE calculated for different amounts of mismatch in range, azimuth, and range velocity for SAR data, followed by imaging examples.
4.1.
Numerical Results The simulation parameters for SAR data are given in Table 3. A scene of size 50m × 50m or 12 × 70 pixels in range and azimuth directions is considered. Raw data corresponding to multiple points are simulated and 5% of range data are retained. Positions and amplitudes of these points are chosen randomly, whereas ground-range velocities are chosen randomly from a set of 7 velocities: m/s. Performance in terms of dictionary mismatch is compared. For this purpose, data are generated using a dictionary and CS reconstruction is carried out using a mismatched dictionary . The mismatch has a value of 0.01, followed by values from 0.1 to 0.7 with a step-size of 0.1. For range and azimuth pixels, the mismatch unit is pixel size, whereas, for range velocity, it is m/s. A series of simulation is carried out at a signal-to-clutter ratio (SCR) of 20dB with randomly chosen positions and velocities of the moving targets. Reconstruction is carried out using CSLP and CSGBP and the resulting MSE between the original points and the reconstructed points are shown in Figure 2. MSE is calculated as follows: Four main parameters are used in CSGBP reconstruction: , , , and , which are initially estimated by using a-priori information. The value of is decided according to the ratio of supposed active scatterers to total number of scatterers present in the data, whereas the values of , , and are chosen based on SCR. They are then refined by trial and error to get the best results. In general, higher than required values of , , and help in producing weak scatterers but lead to more side lobes, whereas a higher value of suppresses weak scatterers. From Figure 2, the following observations can be made.(i)In general, reasonable reconstruction is obtained when the effect of basis mismatch is less than 1/3 of a pixel size.(ii)MSE is less in case of no range and azimuth pixel mismatch using CSGBP. 
Similarly, for a small mismatch in the range and azimuth directions, the MSE level of CSGBP-based reconstruction is lower. Specifically, although MSE increases once the velocity mismatch reaches 0.1m/s, in the case of range and azimuth pixel mismatch MSE remains very small as long as the pixel mismatch stays below 0.3 of the pixel size. Thus, CSGBP can be used for better reconstruction and reduction of the dictionary size in practical scenarios, compared to CSLP-based reconstruction, where the MSE is higher even in the case of no dictionary mismatch.
(iii) MSE for range velocity is high using both methods. After a mismatch of 0.1m/s, CSLP seems to give slightly lower MSE. The reason may be that CSGBP gives a higher number of side lobes. Further simulations, for mismatch values ranging from 0.01 to 0.1m/s in steps of 0.01m/s, are shown in Figure 3. It can be seen that MSE using CSGBP is still smaller than that using CSLP. The reason for not reporting any ill effects of velocity mismatch in [25] may be that the amount of mismatch considered is small for the configuration that was studied. Two types of moving targets are considered in [25], a slow one and a fast one. The former has a range velocity of 2.35m/s, whereas the latter has a range velocity of 28.15m/s. The range velocity mismatch for the slow target is 0.85m/s, whereas for the fast target it is 0.45m/s. This amount of mismatch is too small to have any effect on the reconstruction in that particular case. This can be seen from the reconstruction results in Figure 5 of [25], which show focussing assuming no motion. The slow object, despite having a mismatch of 2.35m/s in the range direction, is still focussed at the same position.
Our results show theoretically as well as experimentally that a mismatch in velocity can have a serious impact on reconstruction.
(iv) The error increases gradually for position mismatch but very rapidly for range velocity mismatch. The reason is that, in the case of range velocity mismatch, a large shift arises in the azimuth direction. This is because is of the order of 10^3 m; for example, for a velocity mismatch of 0.05m/s, m, and , there is a single-pixel shift between the original and reconstructed positions. Thus, the reconstruction result contains azimuth pixels shifted according to the mismatch, which leads to a sudden increase in MSE. As there is a total misalignment between the actual and estimated positions, MSE rises and stays at a roughly constant maximum level. This is further demonstrated in Figure 4, where the reconstructed scene contains a single-pixel shift with respect to the actual position, caused by a range velocity mismatch of 0.05m/s. A loss of amplitude and side lobes can be seen.
(v) The error in azimuth is generally larger than that in range position, especially using CSLP. The reason is that there is more than one combination of , , and that leads to closely resembling values of and in (26); for example, for the parameters given in Table 3, and of 2594.5m and −25.7143m with m/s lead to and of 2594m and −75.2693m. The same values of and are obtained with a similar value of and m with m/s, as well as m and m/s. Thus, it is possible that a dictionary mismatch will lead to the selection of dictionary elements, and subsequently side lobes, that are not in the immediate neighborhood. This is demonstrated in Figure 5, where there are 4 points at different azimuth positions having a velocity of 3m/s. The mismatch is 0.1, 0.3, 0.5, and 0.7 of a pixel size. When CSLP is used to carry out the reconstruction, only a single point is identified with a velocity of 3m/s. This is shown in Figure 5(a).
Two of the points are detected at shifted azimuth positions with a velocity of 4m/s, as shown in Figure 5(b). The fourth point is not identified at all. In the case of reconstruction using CSGBP, all four points are identified correctly, as shown in Figure 5(c), albeit with higher side lobes. This also demonstrates the advantage offered by CSGBP in identifying correct positions and velocity even in the presence of pixel mismatch.

4.2. Imaging Results

In this section, we compare the reconstruction performance of CSLP and CSGBP through imaging results and demonstrate the effectiveness of the proposed iterative CSGBP. Eight points are simulated at positions (2, 55), (3, 15), (4, 9), (4, 25), (6, 40), (7, 50), (8, 60), and (9, 65). The scene is shown in Figure 6(a). The velocities of the points are 3, 4, −5, 5, 4, −3, −4, and −5m/s, respectively. Reconstruction is carried out in the presence of a mismatch of 1/8 of a pixel size in range and azimuth. Results using CSLP are shown in Figure 6(b), which shows that the point at (7, 50) is not reconstructed correctly. CSGBP results shown in Figure 6(c) indicate that all the points are correctly reconstructed. This demonstrates the superior performance of CSGBP. Furthermore, side lobes obtained using CSGBP appear in the vicinity of the actual positions, whereas in the case of CSLP the side lobes appear at positions that are not in the vicinity of the actual positions. A further example is shown with a mismatch of 0.4m/s in range velocity. The original scene is shown in Figure 7(a), where there are closely spaced scatterers roughly in the middle of the scene. They have a velocity of 4.4m/s, whereas the closest velocity in the dictionary is 4m/s. Reconstruction using both CSLP and CSGBP shows shifted results due to the mismatch. Furthermore, the CSLP results were obtained at a velocity of 5m/s. The result obtained using iterative CSGBP is shown in Figure 7(d), where the points are located at their correct positions.
The velocity in the dictionary is increased iteratively with a step size of 0.05m/s until the highest contrast is achieved. A plot of contrast against velocity is shown in Figure 8, where it can be seen that the contrast is highest when the velocity in the dictionary matches the actual velocity. This shows that creating dictionary elements iteratively and using contrast as a quality measure are effective for dealing with CS moving target imaging in the presence of range velocity mismatch. Another example is shown with the scene in Figure 9(a). The points are at positions (2, 5), (5, 45), (6, 34), (2, 70), (10, 15), and (7, 65). The point at (2, 5) has a velocity of −4.9m/s, the point at (5, 45) has a velocity of −4m/s, and the point at (6, 34) has a velocity of −3m/s. The remaining points have a velocity of 3.3m/s. Thus, there is a mismatch of 0.1m/s and 0.3m/s. The pixel at position (2, 5) has a 1/2 pixel mismatch in azimuth and a 1/4 pixel mismatch in range. The pixel at position (6, 34) has a 1/4 pixel mismatch in azimuth and a 1/2 pixel mismatch in range. CSLP reconstruction results with the dictionary containing elements at 1/4 pixel spacing are shown in Figure 9(b). CSLP is unable to detect two of the scatterers, at positions (7, 65) and (2, 70); the other scatterers having velocity mismatch are shifted in azimuth. CSGBP reconstruction results are shown in Figure 9(c). All the range positions are correctly identified, but the result is shifted in azimuth. The result using iterative CSGBP, with the velocity varied in steps of 0.05m/s for each velocity in the dictionary, is shown in Figures 9(d)–9(f). The result obtained by maximizing the contrast for the point at (6, 34) is shown in Figure 9(d), where the points moving at 3.3m/s are focussed at their true positions. Some side lobes can be seen. Similarly, the point moving at −4.9m/s is shown correctly focussed in Figure 9(e).
The final result, obtained using the calculated velocities, is shown in Figure 9(f), where all the points are focussed at their true positions. Some side lobes can be observed.

5. Conclusion

In this paper, we studied compressed sensing (CS) synthetic aperture radar (SAR) moving target imaging in the presence of dictionary mismatch. We analyzed the sensitivity of the imaging process to range pixel, azimuth pixel, and range velocity mismatches. The mismatch analysis shows that the reconstruction error increases with mismatch, and especially rapidly in the presence of range velocity mismatch. Unlike existing references, we show that using a Gaussian-Bernoulli prior rather than the traditionally used Laplacian prior offers an advantage in CS SAR imaging for dealing with small mismatch. This advantage is apparent in dealing with position mismatch. We calculated Cramér-Rao bounds that demonstrate theoretically the lowering of the mean square error between the actual and reconstructed results by using the GBP. We show that creating an upsampled dictionary and using the GBP for reconstruction can deal with position mismatch. We also presented an iterative scheme to deal with range velocity mismatch, in which dictionary elements are created efficiently. CS reconstruction is carried out at each iteration until the image contrast is maximized for each velocity. Numerical and imaging examples confirm the analysis and the effectiveness of the proposed upsampling and iterative scheme.
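The contrast-driven velocity sweep summarized above (regenerate dictionary elements, step the velocity, keep the maximum-contrast result) can be illustrated with a toy model. Nothing here is the authors' code: the widening-response stand-in for CS reconstruction and the std-over-mean contrast are assumptions made only so the loop is runnable.

```python
import math

def toy_image(v_dict, v_true, n=64):
    # Toy stand-in for CS reconstruction: the point response widens
    # (defocuses) as the dictionary velocity departs from the true one.
    width = 1.0 + 50.0 * abs(v_dict - v_true)
    return [math.exp(-((i - n // 2) / width) ** 2) for i in range(n)]

def contrast(img):
    # Contrast = standard deviation of the intensity over its mean; the
    # paper's exact definition is elided in this copy, this is one common choice.
    p = [abs(s) ** 2 for s in img]
    m = sum(p) / len(p)
    return math.sqrt(sum((q - m) ** 2 for q in p) / len(p)) / m

v_true = 4.4                 # actual range velocity (m/s)
best_v, best_c = None, -1.0
for k in range(21):          # sweep 4.0 ... 5.0 m/s in 0.05 m/s steps
    v = 4.0 + k * 0.05
    c = contrast(toy_image(v, v_true))
    if c > best_c:
        best_v, best_c = v, c
# best_v ends up at the swept velocity nearest 4.4 m/s, mirroring Figure 8:
# contrast peaks where the dictionary velocity matches the actual velocity.
```

The same maximum-contrast stopping rule is what terminates Step 4 in the scheme above; only the reconstruction step differs.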
Notations and Symbols

: Range time
: Central frequency
: Slant-range positions
: Azimuth positions
: Chirp rate
: Sensor velocity
: Antenna height
: Pulse length
: Ground-range positions
: Speed of light
: Azimuth time
: Incidence angle at range , equal to
: th ground range velocity
: Translational velocity, equal to
: Pixel size in range
: Pixel size in azimuth
: Number of range pixels in the scene
: Number of azimuth pixels in the scene
: Number of range pixels in raw data
: Number of azimuth pixels in raw data
: Number of range velocities
: Raw data from all the points in the scene arranged in 1D form
: Raw data for th point moving with th velocity, arranged in 1D form
: Raw data element for range time and azimuth time
: Radar-target distance for th point moving with th velocity
: Original dictionary
: Mismatched dictionary
: Original reflectivity vector
: Mismatched reflectivity vector
: Reconstructed reflectivity vector
: Reconstructed reflectivity vector in the presence of mismatch
: Reconstructed reflectivity in 2D
: Sampling matrix
: Number of columns of
: Noise vector
: Inner product
: Averaging operation
: Rotation angle with which a moving scatterer can be seen equivalent to a static scatterer
: Subpixel mismatch in range position
: Subpixel mismatch in azimuth position
: Fraction of m/s mismatch in range velocity
: Shift in range position due to range velocity mismatch
: Element of reflectivity vector at th position and moving with th velocity
: Reconstructed element of reflectivity vector at th position and moving with th velocity
: Dictionary resolution for range
: Dictionary resolution for azimuth
: Dictionary resolution for range velocity
: Range position for equivalent static point
: Azimuth position for equivalent static point
: Probability of active coefficients in
: Sparsity of
: Correlation
: Variance for noise
: Variance of reflectivity vector
: Identity matrix
: Contrast of
FIM: Fisher information matrix
CSLP: CS reconstruction with Laplacian prior
CSGBP: CS reconstruction with Gaussian-Bernoulli prior
: FIM for CSLP without mismatch
: FIM for CSGBP without mismatch
: FIM for CSLP with mismatch
: FIM for CSGBP with mismatch.

1. E. J. Candès and T. Tao, "Decoding by linear programming," IEEE Transactions on Information Theory, vol. 51, no. 12, pp. 4203–4215, 2005.
2. E. J. Candès, J. Romberg, and T. Tao, "Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information," IEEE Transactions on Information Theory, vol. 52, no. 2, pp. 489–509, 2006.
3. D. L. Donoho, "Compressed sensing," IEEE Transactions on Information Theory, vol. 52, no. 4, pp. 1289–1306, 2006.
4. J. A. Tropp and A. C. Gilbert, "Signal recovery from random measurements via orthogonal matching pursuit," IEEE Transactions on Information Theory, vol. 53, no. 12, pp. 4655–4666, 2007.
5. S. Ji, Y. Xue, and L. Carin, "Bayesian compressive sensing," IEEE Transactions on Signal Processing, vol. 56, no. 6, pp. 2346–2356, 2008.
6. H. Zayyani, M. Babaie-Zadeh, and C. Jutten, "Bayesian pursuit algorithm for sparse representation," in Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '09), pp. 1549–1552, April 2009.
7. P. Schniter, L. C. Potter, and J. Ziniel, "Fast Bayesian matching pursuit," in Proceedings of the 2008 Information Theory and Applications Workshop (ITA '08), pp. 326–332, San Diego, Calif, USA, February 2008.
8. J. Ma, "Single-pixel remote sensing," IEEE Geoscience and Remote Sensing Letters, vol. 6, no. 2, pp. 199–203, 2009.
9. M. Lustig, D. Donoho, and J. M. Pauly, "Sparse MRI: the application of compressed sensing for rapid MR imaging," Magnetic Resonance in Medicine, vol. 58, no. 6, pp. 1182–1195, 2007.
10. G. Shi, J. Lin, X. Chen, F. Qi, D. Liu, and L. Zhang, "UWB echo signal detection with ultra-low rate sampling based on compressed sensing," IEEE Transactions on Circuits and Systems II, vol. 55, no. 4, pp. 379–383, 2008.
11. J. Ma and F.-X. Le Dimet, "Deblurring from highly incomplete measurements for remote sensing," IEEE Transactions on Geoscience and Remote Sensing, vol. 47, no. 3, pp. 792–802, 2009.
12. X. Nie, D.-Y. Zhu, and Z.-D. Zhu, "Application of synthetic bandwidth approach in SAR polar format algorithm using the deramp technique," Progress in Electromagnetics Research, vol. 80, pp. 447–460, 2008.
13. Q. Huang, L. Qu, B. Wu, et al., "UWB through-wall imaging based on compressive sensing," IEEE Transactions on Geoscience and Remote Sensing, vol. 48, no. 3, pp. 1408–1415, 2010.
14. W. Zhang, M. Amin, F. Ahmad, et al., "Ultrawideband impulse radar through-the-wall imaging with compressive sensing," International Journal of Antennas and Propagation, vol. 2012, Article ID 251497, 11 pages, 2012.
15. M. Duman and A. Gurbuz, "Performance analysis of compressive-sensing-based through-the-wall imaging with effect of unknown parameters," International Journal of Antennas and Propagation, vol. 2012, Article ID 405145, 11 pages, 2012.
16. X. X. Zhu and R. Bamler, "Tomographic SAR inversion by L1-norm regularization—the compressive sensing approach," IEEE Transactions on Geoscience and Remote Sensing, vol. 48, no. 10, pp. 3839–3846, 2010.
17. X. X. Zhu and R. Bamler, "Super-resolution power and robustness of compressive sensing for spectral estimation with application to spaceborne tomographic SAR," IEEE Transactions on Geoscience and Remote Sensing, vol. 50, no. 1, pp. 247–258, 2012.
18. A. Budillon, A. Evangelista, and G. Schirinzi, "Three-dimensional SAR focusing from multipass signals using compressive sampling," IEEE Transactions on Geoscience and Remote Sensing, vol. 49, no. 1, pp. 488–499, 2011.
19. M. T. Alonso, P. López-Dekker, and J. J. Mallorquí, "A novel strategy for radar imaging based on compressive sensing," IEEE Transactions on Geoscience and Remote Sensing, vol. 48, no. 12, pp. 4285–4295, 2010.
20. R. K. Raney, "Synthetic aperture imaging radar and moving targets," IEEE Transactions on Aerospace and Electronic Systems, vol. 7, no. 3, pp. 499–505, 1971.
21. J. H. G. Ender, "On compressive sensing applied to radar," Signal Processing, vol. 90, no. 5, pp. 1402–1414, 2010.
22. Q. Wu, M. Xing, C. Qiu, B. Liu, Z. Bao, and T.-S. Yeo, "Motion parameter estimation in the SAR system with low PRF sampling," IEEE Geoscience and Remote Sensing Letters, vol. 7, no. 3, pp. 450–454, 2010.
23. A. S. Khwaja and J. Ma, "Applications of compressed sensing for SAR moving-target velocity estimation and image compression," IEEE Transactions on Instrumentation and Measurement, vol. 60, no. 8, pp. 2848–2860, 2011.
24. Y. G. Lin, B. C. Zhang, W. Hong, and Y. R. Wu, "Along-track interferometric SAR imaging based on distributed compressed sensing," Electronics Letters, vol. 46, no. 12, pp. 858–860, 2010.
25. I. Stojanovic and W. C. Karl, "Imaging of moving targets with multi-static SAR using an overcomplete dictionary," IEEE Journal on Selected Topics in Signal Processing, vol. 4, no. 1, pp. 164–176, 2010.
26. M. A. Herman and T. Strohmer, "General deviants: an analysis of perturbations in compressed sensing," IEEE Journal on Selected Topics in Signal Processing, vol. 4, no. 2, pp. 342–349, 2010.
27. Y. Chi, L. L. Scharf, A. Pezeshki, and A. R. Calderbank, "Sensitivity to basis mismatch in compressed sensing," IEEE Transactions on Signal Processing, vol. 59, no. 5, pp. 2182–2195, 2011.
28. O. Teke, A. C. Gurbuz, and O. Arikan, "A new OMP technique for sparse recovery," in Proceedings of the 20th Signal Processing and Communications Applications Conference (SIU '12), Fethiye, Turkey, April 2012.
29. A. S. Khwaja and X.-P. Zhang, "Compressed sensing based image formation of SAR/ISAR data in presence of basis mismatch," in Proceedings of the 2012 IEEE International Conference on Image Processing (ICIP '12), Orlando, Fla, USA, 2012.
30. S. Yu, A. Shaharyar Khwaja, and J. Ma, "Compressed sensing of complex-valued data," Signal Processing, vol. 92, no. 2, pp. 357–362, 2012.
31. G. Franceschetti and R. Lanari, Synthetic Aperture Radar Processing, CRC Press, Oxford, UK, 1999.
32. E. G. Larsson and Y. Selén, "Linear regression with a sparse parameter vector," IEEE Transactions on Signal Processing, vol. 55, no. 2, pp. 451–460, 2007.
33. M. Soumekh, Synthetic Aperture Radar Signal Processing, John Wiley and Sons, 1999.
34. P. Tichavsky, C. H. Muravchik, and A. Nehorai, "Posterior Cramér-Rao bounds for discrete-time nonlinear filtering," IEEE Transactions on Signal Processing, vol. 46, no. 5, pp. 1386–1396, 1998.
35. H. Zayyani, M. Babaie-Zadeh, and C. Jutten, "Bayesian Cramer-Rao bound for noisy non-blind and blind compressed sensing," http://arxiv.org/abs/1005.4316.
Bit array in CUDA

I am implementing the Sieve of Eratosthenes in CUDA and am getting very weird output. I am using unsigned char* as the data structure and the following macros to manipulate the bits:

    #define ISBITSET(x,i) ((x[i>>3] & (1<<(i&7)))!=0)
    #define SETBIT(x,i) x[i>>3]|=(1<<(i&7));
    #define CLEARBIT(x,i) x[i>>3]&=(1<<(i&7))^0xFF;

I set a bit to denote a prime number; otherwise it is 0. Here is where I call my kernel:

    size_t p = 3;
    size_t primeTill = 30;
    if(ISBITSET(h_a, p) == 1){
        int dimA = 30;
        int numBlocks = 1;
        int numThreadsPerBlock = dimA;
        dim3 dimGrid(numBlocks);
        dim3 dimBlock(numThreadsPerBlock);
        cudaMemcpy( d_a, h_a, memSize, cudaMemcpyHostToDevice );
        reverseArrayBlock<<< dimGrid, dimBlock >>>( d_a, primeTill, p );
        cudaMemcpy( h_a, d_a, memSize, cudaMemcpyDeviceToHost );
        printf("This is after removing multiples of %d\n", p);
        for(size_t i = 0; i < primeTill + 1; i++)
            printf("Bit %d is %d\n", i, ISBITSET(h_a, i));
    }

Here is my kernel:

    __global__ void reverseArrayBlock(unsigned char *d_out, int size, size_t p)
    {
        int id = blockIdx.x*blockDim.x + threadIdx.x;
        int r = id*p;
        if(id >= p && r <= size)
            while(ISBITSET(d_out, r) == 1){
                CLEARBIT(d_out, r);
                // if(r == 9)
                // {
                //     CLEARBIT(d_out, 9);
                // }
            }
    }

The output should be: 2, 3, 5, 7, 11, 13, 17, 19, 23, 29, while my output is: 2, 3, 5, 9, 7, 11, 13, 17, 19, 23, 29. If you take a look at the kernel code, if I uncomment those lines I get the correct answer, which means that there is nothing wrong with my loops or my checking!

c cuda

2 Answers

Accepted answer: Multiple threads are accessing the same word (char) in global memory simultaneously, so the written result gets corrupted. You could use atomic operations to prevent this, but the better solution would be to alter your algorithm: instead of letting every thread sieve out multiples of 2, 3, 4, 5, ..., let every thread check a range like [0..7], [8..15], ..., so that every range's length is a multiple of 8 bits and no collisions occur.

Comment: Also I would use bigger words like shorts or ints to store the bit array, because "memory accesses to global memory are fastest if reads and writes of the threads in a half-warp can be coalesced to 32, 64 or 128 bytes". – Dave O. Mar 17 '11 at 14:31

Second answer: I would suggest replacing the macros with methods to start with. You can use methods preceded by __host__ and __device__ to generate cpp- and cu-specific versions where necessary. That will eradicate the possibility of the pre-processor doing something unexpected. Now just debug the particular code branch that is causing the wrong output, checking that each stage is correct in turn, and you'll find the problem.

Comment: I will check that, but after some thought I think the problem is happening due to race conditions between threads trying to change the value of the same char. I will get back with my findings. Thanks. – Salem Sayed Dec 22 '10 at 19:24
Post #1, March 11th 2009, 06:26 PM:
Yet again I am stuck on this one. I am studying for a test next week. I am not sure where to even start on this. Can someone work this one completely?

Post #2, March 11th 2009, 06:43 PM (Senior Member, joined Dec 2008):
Here's a hint: by the Product Rule, $\frac{dy}{dx}=2x\cdot\tan^{-1}(2x)+(1+x^2)\cdot\frac{d}{dx}\tan^{-1}(2x)\cdot 2,$ and the derivative of $\tan^{-1}x$ is $\frac{1}{1+x^2}$.

Post #3, March 11th 2009, 06:46 PM (joined Mar 2009):
I take it the problem is: $y= [1+x^{2}] * [ \arctan 2x]$? Use the product rule. Hint: the derivative of $\arctan(u)$ is $\frac {u'}{1+u^{2}}$.

Post #4, March 11th 2009, 06:51 PM:
Yes, it is arctan. Thanks guys for all of your help. I am just not understanding derivatives at all, and you guys have been very helpful in me trying to learn them!

Post #5, March 11th 2009, 06:54 PM (joined Mar 2009):
Like anything, they get easier with practice.

Post #6, March 11th 2009, 07:06 PM:
I scanned this particular problem, since I didn't know how to type it in. Any help on it?

Post #7, March 11th 2009, 07:28 PM (joined Mar 2009):
Since $y= \ln (x^{n}) = n \ln (x)$, then $\ln \left[\frac {3x+2}{3x-2}\right]^{ \frac {1}{3}} = \frac {1}{3} \ln \frac {3x+2}{3x-2}$. Since you are dividing, break it up into two natural logarithms: $\ln \frac {a}{b} = \ln a - \ln b$. Does that help? Just use the rules of logarithms to simplify so that you can find the derivative. If $y= \ln u$, then $y'= \left(\frac {1}{u}\right) \frac {du}{dx}$.
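Post #2's product-rule answer can be sanity-checked numerically. This is only a verification sketch added here, not part of the original thread:

```python
import math

def y(x):
    # the thread's function: y = (1 + x^2) * arctan(2x)
    return (1 + x * x) * math.atan(2 * x)

def dy(x):
    # product rule, using d/dx arctan(2x) = 2 / (1 + 4x^2)
    return 2 * x * math.atan(2 * x) + 2 * (1 + x * x) / (1 + 4 * x * x)

# spot-check the formula against a central finite difference
h = 1e-6
for x in (-1.3, 0.2, 2.7):
    numeric = (y(x + h) - y(x - h)) / (2 * h)
    assert abs(numeric - dy(x)) < 1e-6
```

The finite-difference agreement at several points is strong evidence the symbolic derivative was applied correctly.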
[SciPy-User] Generalized least square on large dataset Nathaniel Smith njs@pobox.... Thu Mar 8 10:31:52 CST 2012 On Thu, Mar 8, 2012 at 4:09 PM, Peter Cimermančič <peter.cimermancic@gmail.com> wrote: >> I agree with Josef, the first thing that comes to mind is controlling >> for spatial effects (which happens in various fields; ecology folks >> worry about this a lot too). >> In this case, though, I think you may need to think more carefully >> about whether your similarity measure is really appropriate. If your >> matrix is uninvertible, then IIUC that means you think that you >> effectively have less than 1000 distinct genomes -- some of your >> genomes are "so similar" to other ones that they can be predicted >> *exactly*. > That's exactly true - some of the bacteria are almost identical (I'll try > filtering those and see if it changes anything). You aren't just telling the computer that they're almost identical -- that would be fine, the model would just mostly-but-not-entirely ignore the near-duplicates. You're telling the computer that they are exactly identical and you had no reason to even collect the data because you knew ahead of time exactly what it would be. This is the sort of thing that really confuses statistical programs :-). >> In terms of the underlying probabilistic model: you have some >> population of bacteria genomes, and you picked 1000 of them to study. >> Each genome you picked has some length, and it also has a number of >> genes. The number of genes is determined probabilistically by taking >> some linear function of the length, and then adding some Gaussian >> noise. Your goal is to figure out what that linear function looks >> like. >> In OLS, we assume that each of those Gaussian noise terms is IID. In >> GLS, we assume that they're correlated. 
>> The way to think about this is that we take 1000 uncorrelated IID
>> gaussian samples, let's call this vector "g", and then we mix them
>> together by multiplying by a matrix chol(V), chol(V)*g. (chol(V) is
>> the cholesky decomposition; it's triangular, and chol(V)*chol(V)' = V.)
>> So the noise added to each measurement is a mixture of these underlying
>> IID gaussian terms, and bacteria that are more similar have noise terms
>> that overlap more.

> I'm also unable to calculate chol of my V matrix, because it doesn't appear
> to be a positive definite. Any suggestion here?

Singular matrices can't be positive definite, by definition. They can be positive semi-definite. (The analogy is numbers -- a number that is zero cannot be greater than zero, by definition. But it can be >= zero.) Any well-defined covariance matrix is necessarily positive semi-definite. If your covariance matrix isn't positive semi-definite, then that's like claiming that you have three random variables where A and B have a correlation of 0.99, and B and C have a correlation of 0.99, but A and C are uncorrelated. That's impossible. ([[1, 0.99, 0], [0.99, 1, 0.99], [0, 0.99, 1]] is not a positive-definite matrix.) Singular, positive semi-definite matrices do *have* Cholesky decompositions, but your average off-the-shelf Cholesky routine can't compute them. (Again by analogy -- in theory you can compute the square root of zero, but in practice you can't reliably with floating point, because your "zero" may turn out to actually be represented as "-2.2e-16" or something, and an off-the-shelf square root routine will blow up on this because it looks negative.) You can look around for a "rank-revealing Cholesky", perhaps. Anyway, the question is whether your matrix is positive semi-definite. If it is, then this is all expected, and your problem is just that you need to fix your covariances to be more realistic, as discussed.
If it isn't, then you don't even have a covariance matrix, and again you need to figure out how to get one :-). You can check for positive (semi-)definiteness by looking at the eigenvalues -- they should be all >= 0 for semi-definite, > 0 for definite. The easiest way to manufacture a positive-definite matrix on command is to take a non-singular matrix A and compute A'A. - N
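The eigenvalue check and the A'A construction from the reply can be written out with NumPy. The 3×3 matrix is the "impossible" correlation example from the post; the 5×5 A is an arbitrary stand-in for any non-singular matrix:

```python
import numpy as np

# A-B and B-C highly correlated but A-C uncorrelated: the smallest
# eigenvalue comes out negative, so this is not a valid covariance matrix.
V_bad = np.array([[1.0, 0.99, 0.0],
                  [0.99, 1.0, 0.99],
                  [0.0, 0.99, 1.0]])
eigs = np.linalg.eigvalsh(V_bad)
is_psd = eigs.min() >= -1e-10          # positive semi-definite test

# Manufacturing a positive-definite matrix on command: A'A for
# non-singular A (the diagonal shift keeps A comfortably non-singular).
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5)) + 10 * np.eye(5)
V_good = A.T @ A
L = np.linalg.cholesky(V_good)         # succeeds only for positive-definite V
```

For V_bad, np.linalg.cholesky would raise LinAlgError, which is exactly the failure the original poster hit.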
Finding the distance between two points

Post #1, November 7th 2009, 04:47 PM (Junior Member, joined Nov 2009):
Let Q = (0,5) and R = (10,6) be given points in the plane. We want to find the point P = (x,0) on the x axis such that the sum of distances PQ+PR is as small as possible. To solve this problem, we need to minimize the following function of x: f(x) = ? over the closed interval [A,B] where A = ? and B = ?

Post #2, November 7th 2009, 05:02 PM:
$PQ = d_1$, where $d_1 = \sqrt{(x-0)^2 + (0-5)^2}$, and $PR = d_2$, where $d_2 = \sqrt{(x-10)^2 + (0-6)^2}$. Let $d_1 + d_2 = S$; find $\frac{dS}{dx}$ and minimize.

Post #3, November 7th 2009, 05:12 PM (Junior Member, joined Nov 2009):
Still not getting it... Thank you for your help, I really appreciate it... But I am still not getting what the technique is here to answer the question. I understand finding distance $d_1$ and distance $d_2$, but the derivative part has me thrown off. Also, I don't get what function is supposed to be minimized... How do I go step by step through this?

Post #4, November 7th 2009, 05:43 PM:
Minimize the function $S = \sqrt{x^2+25} + \sqrt{x^2-20x+136}$: start by finding $\frac{dS}{dx}$, set the result equal to 0, and solve for the value of x that minimizes S.
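Post #4's function can be minimized numerically as a check (a natural choice of closed interval is [A, B] = [0, 10], between the feet of Q and R). Setting dS/dx = 0 by hand gives x = 50/11; the sketch below recovers the same value by ternary search, which is valid because S is convex:

```python
import math

# P = (x, 0), Q = (0, 5), R = (10, 6): total distance PQ + PR
def f(x):
    return math.sqrt(x**2 + 25) + math.sqrt((x - 10)**2 + 36)

# ternary search on [0, 10]; f is convex, so the bracket always shrinks
# around the minimizer
lo, hi = 0.0, 10.0
for _ in range(200):
    m1 = lo + (hi - lo) / 3
    m2 = hi - (hi - lo) / 3
    if f(m1) < f(m2):
        hi = m2
    else:
        lo = m1
x_min = (lo + hi) / 2   # converges to 50/11, about 4.5455
```

The reflection trick gives the same answer without calculus: reflect Q to Q' = (0,-5), and the minimum total distance is the straight-line distance |Q'R| = sqrt(10^2 + 11^2) = sqrt(221).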
Calculus: Early Transcendentals 10th Edition Chapter 5.MC Solutions | Chegg.com

Denote the Riemann sum S. To evaluate S, the interval is divided into n intervals. Assume that the intervals are . If is the k^th interval, then . The Riemann sum can be written in an expanded form as . Substituting the required values, the Riemann sum can be re-written as . Thus, it can be seen that the given Riemann sum is a telescoping sum, as alternate terms cancel out and the sum consists of the last and the first terms only.
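The telescoping cancellation described above can be checked numerically with a generic sequence; the book's actual sum is elided in this copy, so a_k = k^2 is just an arbitrary stand-in:

```python
# generic telescoping check: a sum of consecutive differences collapses
a = [k * k for k in range(11)]                  # any sequence a_0 .. a_10
s = sum(a[k] - a[k - 1] for k in range(1, 11))  # telescoping sum
# every intermediate a_k cancels in pairs, so s == a[10] - a[0] == 100
```

Only the last and first terms survive, exactly as the solution states.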
Avondale, AZ Algebra 1 Tutor Find an Avondale, AZ Algebra 1 Tutor ...Spanish is my second language, and I understand the frustrations that come with learning a new language. My enthusiastic and interactive teaching style engages students and encourages them to actively participate in learning. I am patient, mindful, and make learning fun. 21 Subjects: including algebra 1, English, Spanish, reading ...In addition to the undergraduate tutoring, I was also involved with scientific outreach programs for elementary schools in the Flagstaff area. After my undergraduate degrees, I pursued a PhD in Physics at West Virginia University. While there, I continued tutoring introductory students with an emphasis on both engineering and bio/medical applications. 20 Subjects: including algebra 1, chemistry, physics, calculus ...Whether a student is struggling in English literature or composition, I am able to work with the student to maximize their potential and improve academic performance. I earned a Chinese Language and Literature degree from Smith College that grounded me in the principles of literary analysis and ... 33 Subjects: including algebra 1, Spanish, reading, English ...My educational background has given me excellent language and grammar skills, as well as a strong foundation in math and science. I love being able to help a student understand a subject with which they have previously struggled. I have tutored students in most math and science courses, and I am comfortable with all grade levels, from elementary school to college. 28 Subjects: including algebra 1, English, ASVAB, algebra 2 ...I bring patience and enthusiasm to tutoring. I look forward to learning with you. I have written Pascal programs professionally since the early 1970s, when I worked on a large shipping model for petroleum imports to the US west coast ports. 
20 Subjects: including algebra 1, English, calculus, reading
Rayleigh-Jeans Law Radiated Energy as a Function of Wavelength If we consider energy radiated perpendicular to a small increment of area, then it must be noted that half of the energy density in the waves is going toward the walls and half is coming out if the system is in thermal equilibrium. Evaluating the power seen at a given observation point requires a consideration of the geometry: For energy radiated not perpendicular but at an angle θ, the effective area will be A cos θ and the effective speed will be c cos θ, so the radiated energy will be reduced by a factor of cos²θ. For a given observation point near a radiating surface, the power will be the average power from all directions, and the average gives another factor of 1/2. Having averaged over all angles, the calculated radiated power per unit area per unit wavelength is finally P = (c/4) u(λ,T) = 2πckT/λ⁴. This is the Rayleigh-Jeans formula. The fact that it failed to predict the spectral distribution from hot objects was one of the major unresolved issues in physics at the beginning of the 20th century. To express this in terms of frequency, an application of the chain rule as was done above with the energy density yields a radiated power per unit frequency: P = 2πν²kT/c².
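The chain-rule relation between the wavelength and frequency forms can be checked numerically. This sketch uses the standard Rayleigh-Jeans expressions P_λ = 2πckT/λ⁴ and P_ν = 2πν²kT/c² (radiated power per unit area per unit wavelength and per unit frequency); the function names are my own:

```python
import math

c = 2.998e8    # speed of light, m/s
k = 1.381e-23  # Boltzmann constant, J/K

def P_lambda(lam, T):
    """Rayleigh-Jeans radiated power per unit area per unit wavelength."""
    return 2 * math.pi * c * k * T / lam**4

def P_nu(nu, T):
    """The same quantity expressed per unit frequency."""
    return 2 * math.pi * nu**2 * k * T / c**2

T, lam = 300.0, 10e-6          # 300 K surface, 10 micron wavelength
nu = c / lam
jacobian = c / lam**2          # |d nu / d lambda| from the chain rule
print(P_lambda(lam, T), P_nu(nu, T) * jacobian)  # agree
```

The agreement is exact algebraically: P_ν(c/λ) · c/λ² = 2π(c/λ)²kT/c² · c/λ² = 2πckT/λ⁴.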
Only connect! Euclid famously said, “There is no royal road to geometry.” Among the non-royal roads, the computational pathway is notably muddy, rutty and potholed. Over the weekend I needed to write a program for a simple geometric task—finding the intersection of two line segments in the plane. It’s Wednesday now, and I finally have my program. Let me tell you some of my adventures along the way. First steps. This is a low-key, off-the-cuff, back-of-the-envelope project. The program doesn’t have to be blazingly fast; it isn’t going to be doing real-time animation in a video game. Nor is it going to trace the trajectories of aircraft in the holding pattern at JFK; no lives will be lost if I make a goof. Still, I would like to get correct answers, and not waste too many cpu cycles. In Euclid’s way of doing geometry—with straightedge and compass—there’s not much to be said about algorithms for finding intersections. You construct the lines and see where they cross. Most computers, however, are not adept with straightedge and compass; the problem has to be encoded somehow in a more-algebraic language. Descartes knew how to do this. The cartesian equivalent of the Euclidean procedure goes something like this: Given two segments specified by the x and y coordinates of their end points, write the equation y = mx + b for each of the infinite lines that pass through these pairs of points. Then try to solve the two simultaneous equations to find a point (x,y) that lies on both lines; this point, if it exists, must be the intersection of the lines. Now go back to check whether the intersection lies within each of the segments. Seems straightforward—but watch out for those potholes. For starters, in the equation y = mx + b the slope m is defined as Δy/Δx. What if the segment is vertical? Then Δx = 0, and the slope is undefined. More insidiously, what if the two end points of a segment are actually the same point, so that the segment has zero length?
Again the slope is undefined; now it’s 0/0. Then there’s the matter of parallel line segments. In Euclid’s world, parallel lines just don’t intersect; that’s pretty much how “parallel” is defined. But in dealing with segments, it does make sense to ask about the intersection of two segments lying along the same line. In this case the intersection could be empty, or it could be a single point of overlap, or it could be a segment. The method of slopes and intercepts and simultaneous equations won’t work in this situation. Down the garden path. I like a good puzzle, and I try not to cheat, but sometimes it’s just crazy to pretend you’re the first person on Earth who ever faced a problem. So I did some scouting around on the Net, as well as in filing cabinets and on bookshelves. If you do a Web search for “segment intersection algorithm,” you’ll find lots of references to works by eminent computer scientists: Jon Louis Bentley, Bernard Chazelle, Herbert Edelsbrunner, and others. Following those leads, you’ll find some eminently clever algorithms, the result of a long series of inventions and refinements in a sort of algorithmic arms race that has gone on for more than 20 years. Unfortunately, these powerful tools solve the wrong problem—or at least they don’t solve my problem. They deal with the task of efficiently identifying all the intersections among a large set of line segments; the subtask of finding the specific point of intersection between two given segments is a minor detail generally left as an exercise for the reader. Furthermore, most of the papers describing these algorithms take a fairly cavalier attitude to the various singularities and degeneracies mentioned above. I quote from Chazelle and Edelsbrunner 1992 (An optimal algorithm for intersecting line segments in the plane, Journal of the Association for Computing Machinery 39:1–54): For the ease of exposition, we shall assume that no two endpoints have the same x or y coordinates.
This, in particular, applies to the two endpoints of the same segment, and thus rules out the presence of vertical or horizontal segments…. Our rationale is that the key ideas of the algorithm are best explained without having to worry about special cases every step of the way. Relaxing the assumptions is very easy (no new ideas are required) but tedious. That’s for the theory. Implementing the algorithm so that the program works in all cases, however, is a daunting task. There are also numerical problems that alone would justify writing another paper. Following a venerable tradition, however, we shall try not to worry too much about it. Thanks, guys. Out on the Web I did find a few odd bits of code that address the specifics of the two-segments problem. One author finesses the inconvenience of vertically oriented segments by setting Δy/0 equal to 10^10, with the comment, “close enough to infinity.” I wasn’t seriously tempted to follow this example. It’s not that I want a better approximation to infinity; I’d agree that 10^10 is probably close enough. But this is one of those cases where infinity itself is not a very good approximation to the result of dividing by zero. Even if you believe that parallel lines meet at infinity, single isolated points don’t get there even in the limit. Also, despite venerable tradition, I do worry that a finite infinity may invite numerical trouble somewhere down the road. The road not taken. What’s most annoying about the whole vertical-line mess is that it’s not a fundamental geometric constraint but just an artifact of how lines are represented. It’s only by convention that we measure slope with respect to the y axis; we could choose another orientation. Which suggests a way around the problem. If one of the segments turns out to be vertical, rotate the whole coordinate frame before attempting to test for intersections, and then rotate it back afterwards. 
The rotation is just a matrix multiplication, so it’s not a big computational burden, and you can do it only when needed. I seriously considered this strategy, and even now I wonder if it isn’t the right way to go about it. I finally decided against it because it doesn’t solve the problem of a segment that’s a single point: That kind of degenerate line has no well-defined slope in any coordinate frame. Perhaps this fact is an argument for outlawing single-point segments altogether. I considered that too, but it seemed a bit pusillanimous, legislating a problem out of existence just because I found it inconvenient to solve. Mathematically, a one-point line segment may be pathological, but computationally it’s just an instance of aliasing—a very common occurrence, where two things turn out to be the same thing. Besides, it would be handy to have an intersection routine that can also test whether a given point is on a line. On the road again. Lacking a brilliant Gordian-knot insight that would solve the problem with a single stroke, I had no choice but to push on into the tangled maze of if-then-else case analysis. Is either segment vertical? Is either segment a single point? Are the segments parallel (i.e., identical slopes)? If they are parallel, are they also collinear (identical y intercepts)? But be careful: If they’re parallel, they could both be vertical, with undefined y intercepts. Each of these cases could require a separate calculation of the intersection point. I’m not even sure how many cases there are—at least a dozen, but it depends on how you count. The procedure I eventually wrote (after exploring several more blind alleys) is not quite as ugly as I feared it would be, but I still have the firm sense that there’s a better way. If not a royal road, then at least a trail that doesn’t require four-wheel drive. I was able to get the number of cases down to five, though only by exploiting a little Lisp hocus-pocus. 
For convenience of reference in what follows I label the segments red and blue. • Case 1. Both slopes exist, and they are not equal. This is the general case of two lines that are not vertical and not parallel. We can solve the simultaneous equations. • Case 2. The red slope is well-defined but the blue slope does not exist. The blue segment could be either a vertical segment or a single point. In either case, if the intersection exists, its x coordinate must be that of the blue segment. • Case 3. The blue slope is well-defined but the red slope does not exist. Ibid, mutatis mutandis. • Case 4. Both segments have identical slopes and also identical y intercepts; hence they are parallel and collinear. This is where the hocus-pocus comes in: The Lisp predicate (equalp (slope red-segment) (slope blue-segment)) returns true if both slopes are numeric and are equal, and it also yields true if both slopes are nil, the Lisp idiom for things that don’t exist. By this trickery, with a similar test on the y intercepts, I pack several cases into a single cond clause: two collinear vertical segments, two collinear non-vertical segments, and various combinations of vertical segments and single points. • Case 5. None of the above. The only possibility remaining—unless I am mistaken—is equal slopes and different y intercepts: The lines are parallel but not collinear, and so there is no intersection. All that analysis merely determines whether or not the lines intersect; we still have to check whether or not the intersection lies within the segments. That involves still more case analysis, since the process is a little different for parallel segments than for others. I found a way of doing it that’s not grotesquely ugly. The road back. Belatedly, after I had a working procedure, I did some more scouting in the literature and discovered a few paths worth exploring. Robert Sedgewick’s Algorithms textbook suggests a quite different and ingenious method of detecting intersections of segments.
Suppose you walk along the red segment, and when you come to the end, you have to turn counterclockwise to reach one end point of the blue segment, and clockwise to reach the other blue end point. Now try the same experiment walking on the blue segment; if again you must turn counterclockwise in one case and clockwise in the other to see the red end points, then the two segments intersect. Conversely, if in either case both end points are reached by turning in the same direction, there can be no intersection. (To be comprehensive, we also have to consider the case where an end point is straight ahead or directly behind.) Nifty! Unfortunately, the procedure only detects the existence of an intersection; I see no easy way to make it yield the coordinates. Joseph O’Rourke’s Computational Geometry text (I looked at the C version) also has a discussion of intersection-finding. O’Rourke suggests working with parametric equations for the two segments—defining x and y as functions of distance along the segment. This avoids the problem with undefined slopes but encounters another singularity with parallel lines. Overall, the computation does not appear to be notably simpler. Stuck in the mud. The animus behind this entire rant is a feeling that I must be missing something obvious, that a problem like finding the intersection of two line segments shouldn’t be this hard. The difficulty I’m talking about is not computational but conceptual. There are lots of hard computational problems—graph coloring, say, or factoring integers—for which one can write a very tidy and perspicuous program. True, the program may have a running time that exceeds your lifespan, but it’s easy to describe what needs to be done. Programs for geometric problems, in contrast, seem often to be efficient but hideous, with tangled logic, an abundance of special cases, and hidden numerical perils. Why is that? Nature seems to have no trouble at all detecting intersections or collisions. 
If two wires cross on a circuit board, you can count on blowing a fuse no matter what the slopes of the conductors. Why can’t we compute the same result so effortlessly? Or maybe I have indeed missed something obvious—or even something subtle. If anyone has a better intersection algorithm, please pass it on. Update 2006-09-15. My thanks to all of the readers who so promptly weighed in with good ideas. Over the weekend I’m going to try turning a couple of those suggestions into working code, and I’ll report back. 11 Responses to Only connect! 1. It’s much easier in projective geometry. Cartesian coordinates of points to projective points: (x,y) => (x,y,1) The projective line ax+by+cz=0 through two projective points: (y0z1-y1z0, z0x1-z1x0, x0y1-x1y0) The projective point where two lines intersect: same equation as the previous one, with the variable names changed Projective back to Cartesian: (x,y,z) => (x/z,y/z) If you get a divide by zero in the last step, the lines are parallel. 2. I would find the intersection point of the two entire lines (not segments), then check the distance from the midpoint of each segment to the intersection point. If the distance in both cases is less than 1/2 the segment length, then the segments intersect. (I think that would work. Didn’t test it though.) 3. Also, if you leave the cartesian coordinate system and use polar coordinates, you can specify a line in its normal form, as is done with the Hough transform: In normal form, a line is defined as: x cos(theta) + y sin(theta) = rho. That will eliminate the problem of infinite slopes. 4. Just a thought, since you have two line segments, can you figure out whether the endpoints of one are both in front of or behind the other? If they aren’t, then you know they don’t cross. I guess what I’m suggesting is that maybe first you can figure out whether one line segment crosses the line made by extending the other, as an easier problem?
My other thought would be to project the one line onto the other, and see if the projected… Uh, I’m not sure where I’m going with that. Wait, if the projection doesn’t overlap (which is easy to check), then the line segments don’t cross? I don’t really know if that gets you any closer, but those would be the first two things I would try. 5. I sympathize with your plight: I’m surprised though that you didn’t encounter yet another problem: precision !! After all, how do you really know that the two endpoints of a segment coincide. If they are floats, and came from some other application upstream, it’s not clear that x == y suffices as a check. 6. Well, projective geometry is easier than Euclidean geometry in part because it has no good way to define “between”, so you can’t really even talk about line segments. (This is similar to the reason that algebra is easier over the complex numbers than over the real numbers; in fact, maybe it’s the exact same reason.) (Also, even the projective geometry is not so easy as all that; the functions from two points to a line, and from two lines to a point, only work if the points/lines are distinct.) I’m going to try for an algorithm without many case splits. This is inspired by your description of the algorithm in Sedgewick, but it does give you the intersection point. I’ll use some special notation: “.” is the dot-product operator; if A is a point then Ax is the x coordinate; and if A,B,C are points then Area(ABC) is the signed parallelogram area: (Ax-Bx)* (Cy-By) – (Ay-By)*(Cx-Bx) (then |Area(ABC)| is the area of a parallelogram where 3 of the vertices are A,B,C; and Area(ABC) = -Area(CBA)). We’re testing whether the line segments AB and CD intersect. First, if A=B and C=D, then we actually have two points; check whether A=C. (first case split) Now, assume without loss of generality that A!=B. (second case split) Next, compute Area(CAB) and Area(BAD). 
If the product of these numbers is negative, then C and D are on the same side of AB; report no intersection, and exit. (third case split) If Area(CAB) and Area(BAD) are both 0, then A,B,C,D are all collinear; jump to the CollinearIntersection(AB,CD) routine below. (fourth case split) Now we know that the lines AB and CD are neither identical nor parallel, so we can find a unique intersection point E. (Note that I said “lines”, not “line segments”; we still don’t know if the line segments intersect.) This intersection point is: E = C + (D-C)*Area(CAB)/(Area(CAB) + Area(BAD)) (Due to the above case splits, we know that we are not dividing by 0 here.) Now, A,B,E are collinear; jump to the CollinearIntersection(AB,EE) routine immediately below. Now we are testing CollinearIntersection(AB,EF) (where EF is either CD or EE, depending on how we got here); we know that A!=B, and that A,B,E,F are all collinear. Define UNIT=(B-A).(B-A). Compute dE = (E-A).(B-A)/UNIT, and dF = (F-A).(B-A)/UNIT. If dE and dF are both less than 0, or both greater than 1, then report no intersection, and exit. (fifth case split) Now, sort the 4 numbers 0,1,dE,dF; remove the least and the greatest number; and call the remaining middle two numbers dG and dH. (several more case splits, depending on how you count) Compute G = A + dG*(B-A) and H = A + dH*(B-A). The intersection is the line segment GH (which may be a single point). Finished! After 5 case splits (into 6 cases), plus a 4-element sort (which you can decide to count as several case splits, or not, as you please). Of course, in the real world, there’s more to worry about geometric algorithms than just this. Are you computing over the rationals (in which case this algorithm should work perfectly), or over floating-point numbers? 
If you are computing over floating-point numbers, and you pass in coordinates which are collinear, is it acceptable to sometimes say that there is no intersection, or that there is a single-point intersection, rather than that there is a line-segment intersection? (If you want an exact answer using floating-point arithmetic, then you’ve got a lot more work ahead of you.) 7. For one thing: what Eppstein said. For another: Why not go parametric? You represent your lines as a pair of functions R->R. To make it work with being segments, let them be represented as functions [0,1]->R. Thus, if a line goes from (x1,y1) to (x2,y2), then this is represented by the pair x: t -> x1+t*(x2-x1) y: t -> y1+t*(y2-y1) Now, the vertical line has t -> x1, a constant function. The point has both functions constant. And finding the intersection of the lines described by (x1(t),y1(t)), (x2(t),y2(t)), respectively, is the matter of finding t1 and t2 such that x1(t1)=x2(t2) and y1(t1)=y2(t2). This yields a system of linear equations, easily solved, and with a final check for whether the solution stays within the constraints given. The method is not hard to adapt to rays and lines – just adjust permissible solutions to be positive or all of them, respectively. 8. To elaborate a little on my answer, so that it applies to line segment intersection and not just line intersection (and yes, line segments are perfectly well defined in projective geometry, only there are two of them between every pair of points so you have to be clear which one you mean): - If the calculation for the line between one of the pairs of endpoints comes up (0,0,0), the two points are equal. In this case, there is only an intersection if one of these points lies on the line through the other two points, which can be tested by taking the dot product of the point’s coordinates with the line’s. - If the calculation for the point where the two lines meet comes up (0,0,0), the two lines are coincident.
In this case you can test for an intersection by comparing the order of the points’ coordinates. - As I said in my original comment, if you get a divide by zero converting back to Cartesian coordinates, the lines are parallel. In this case there can be no intersection. - Finally, once you have the intersection point in Cartesian coordinates, you can test the ordering of its coordinates against the original points to determine whether it lies between them. But, if you want to perform exact comparisons such as this (do you really have (0,0,0) or just a vector very close to it?) you probably need to be using multiprecision integers, and avoiding floating point, or your results will not be reliable. Or, to put it another way: if you think approximate calculation with floating point is good enough, why are you worrying about cases like two coincident lines that only make sense for exactly defined numbers? 9. (Oops, my last reply got corrupted by less-than-signs reinterpreted as HTML. Here’s a repost.) I wrote some lecture notes that include the line intersection formula, as well as a bunch of other geometric constructors. You can download it from (in gzipped PostScript form; see Section 3.6). I believe this is the simplest possible formula. The notes also contain some tips on getting a numerically stable answer despite floating-point roundoff. The notes don’t address how to tell whether the segments intersect, so I’ll address that here (and I should add it to the notes next week). If your segments are ab and cd, and they’re not parallel (see the notes for how to tell that), then they intersect if and only if Orient2D(c,d,a) x Orient2D(c,d,b) <= 0 AND Orient2D(a,b,c) x Orient2D(a,b,d) <= 0. The function Orient2D() is described in the notes, and I have C code for computing it exactly from floating-point input at If the lines are identical, then you determine the segment intersection by comparing x-coordinates (y-coordinates if the lines are vertical). 10.
You might also be interested in the final chapter of the recent O’Reilly book Beautiful Code: Leading Programmers Explain How They Think, which focuses on the related problem of determining whether three points are collinear. The description of both the problem and the development of the eventual solution is lucid, engaging, and insightful. Still, I doubt there’s much you could learn from the author. 11. David: Thanks for the suggestion. I’ll see if I can learn anything from that author. (I’m an autodidact, but I had a very good teacher.) This entry was posted in computing, mathematics.
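Sedgewick's orientation test and the signed-area construction sketched in comment 6 fit together naturally. Here is a minimal Python rendering (my own; it handles proper crossings only — endpoint touches and collinear overlaps would need the extra case analysis discussed in the post):

```python
def area(a, b, c):
    """Signed parallelogram area (Ax-Bx)*(Cy-By) - (Ay-By)*(Cx-Bx),
    as defined in comment 6; its sign encodes the turn direction."""
    return (a[0] - b[0]) * (c[1] - b[1]) - (a[1] - b[1]) * (c[0] - b[0])

def properly_cross(a, b, c, d):
    """Sedgewick-style test: C and D lie strictly on opposite sides of
    line AB, and A and B lie strictly on opposite sides of line CD."""
    return (area(c, a, b) * area(d, a, b) < 0 and
            area(a, c, d) * area(b, c, d) < 0)

def crossing_point(a, b, c, d):
    """Intersection of properly crossing segments, via comment 6's
    construction: E = C + (D - C) * Area(CAB) / (Area(CAB) + Area(BAD))."""
    t = area(c, a, b) / (area(c, a, b) + area(b, a, d))
    return (c[0] + t * (d[0] - c[0]), c[1] + t * (d[1] - c[1]))

p, q = (0, 0), (2, 2)
r, s = (0, 2), (2, 0)
print(properly_cross(p, q, r, s), crossing_point(p, q, r, s))  # True (1.0, 1.0)
```

Note that the only division happens after the proper-crossing test has ruled out parallel and degenerate configurations, which is exactly the property that makes the signed-area approach less pothole-prone than slopes and intercepts.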
Complex Analysis (Residue Theorem). This is what I did... I'm not sure if I'm in the right direction about this problem. I am French, so excuse me if my translation into English isn't accurate. I used the residue theorem. Poles of order 1 at $z = \pm ia$. Only the pole at $z = ia$ is in the semi-circle drawn above. Residue at $z = ia$: What does cos(ia) represent? Does that even exist? If this is totally wrong, then how do I solve this problem? That's not Cauchy's Theorem. You'd need to show that as the radius goes to infinity, the integral over the half-circle arc goes to zero. But it's easier if you use: $\mathop\oint\limits_{H_u}\frac{e^{iz}}{z^2+a^2}dz= 2\pi i\mathop\text{Res}\limits_{z=ia}\left\{\frac{e^{iz}}{z^2+a^2}\right\}=f+gi$ where $H_u$ is the upper half circle contour. I think it's not too hard to show: $\lim_{R\to\infty}\mathop\int\limits_C \frac{e^{iz}}{z^2+a^2}dz\to 0$ where $C$ is the half-arc contour and on the straight-line segment along the real axis $R$ we have: $\mathop\int\limits_{R} \frac{e^{iz}}{z^2+a^2}dz=\int_{-\infty}^{\infty} \frac{\cos(x)}{x^2+a^2}dx+i\int_{-\infty}^{\infty} \frac{\sin(x)}{x^2+a^2}dx=f+gi$ (since the part over C is zero). $\int_{-\infty}^{\infty} \frac{\cos(x)}{x^2+a^2}dx=\textbf{Re}\left[2\pi i\mathop\text{Res}\limits_{z=ia}\left\{\frac{e^{iz}}{z^2+a^2}\right\}\right]$ [edit] I made a silly mistake on this initially but corrected it here. Hope that didn't cause problems for you Illusion.
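The contour-integral answer can be checked directly. This sketch (my own, with the hypothetical choice a = 1) computes the residue with complex arithmetic and compares $2\pi i \cdot \text{Res}$ against a crude numerical integration of the real integral:

```python
import cmath
import math

a = 1.0

# Residue of e^{iz}/(z^2 + a^2) at the simple pole z = ia:
#   (z - ia) e^{iz}/(z^2 + a^2)  ->  e^{i(ia)}/(2ia) = e^{-a}/(2ia)
res = cmath.exp(1j * (1j * a)) / (2j * a)
closed = 2j * math.pi * res        # = pi e^{-a}/a, purely real

# Midpoint-rule estimate of the real integral of cos(x)/(x^2 + a^2)
# over a large symmetric interval (the neglected tail is O(1/L^2)):
N, L = 200_000, 100.0
h = 2 * L / N
approx = sum(math.cos(-L + (i + 0.5) * h) / ((-L + (i + 0.5) * h)**2 + a * a)
             for i in range(N)) * h
print(closed.real, approx)         # both about pi/e = 1.1557...
```

This also answers the poster's question about cos(ia): by Euler's formula, cos(ia) = cosh(a), a perfectly well-defined real number, which is why working with $e^{iz}$ instead of $\cos z$ keeps the arc estimate under control.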
Scientists' greatest pleasure comes from theories that derive the solution to some deep puzzle from a small set of simple principles in a surprising way. These explanations are called "beautiful" or "elegant". Historical examples are Kepler's explanation of complex planetary motions as simple ellipses, Bohr's explanation of the periodic table of the elements in terms of electron shells, and Watson and Crick's double helix. Einstein famously said that he did not need experimental confirmation of his general theory of relativity because it "was so beautiful it had to be true." See the 2012 question: WHAT IS YOUR FAVORITE DEEP, ELEGANT, OR BEAUTIFUL EXPLANATION? See which comments resonate with you. Some of my picks as I go through: My Favorite Annoying Elegant Explanation: Quantum Theory .......General Relativity, in turn, is only a classical theory. It rests on a demonstrably false premise: that position and momentum can be known simultaneously. This may be a good approximation for apples, planets, and galaxies: large objects, for which gravitational interactions tend to be much more important than for the tiny particles of the quantum world. But as a matter of principle, the theory is wrong. The seed is there. General Relativity cannot be the final word; it can only be an approximation to a more general Quantum Theory of Gravity. But what about Quantum Mechanics itself? Where is its seed of destruction? Amazingly, it is not obvious that there is one. The very name of the great quest of theoretical physics—"quantizing General Relativity"—betrays an expectation that quantum theory will remain untouched by the unification we seek. String theory—in my view, by far the most successful, if incomplete, result of this quest—is strictly quantum mechanical, with no modifications whatsoever to the framework that was completed by Heisenberg, Schrödinger, and Dirac.
In fact, the mathematical rigidity of Quantum Mechanics makes it difficult to conceive of any modifications, whether or not they are called for by observation. Yet, there are subtle hints that Quantum Mechanics, too, will suffer the fate of its predecessors. The most intriguing, in my mind, is the role of time. In Quantum Mechanics, time is an essential evolution parameter. But in General Relativity, time is just one aspect of spacetime, a concept that we know breaks down at singularities deep inside black holes. Where time no longer makes sense, it is hard to see how Quantum Mechanics could still reign. As Quantum Mechanics surely spells trouble for General Relativity, the existence of singularities suggests that General Relativity may also spell trouble for Quantum Mechanics. It will be fascinating to watch this battle play out. President, The Royal Society; Professor of Cosmology & Astrophysics; Master, Trinity... Physical Reality Could Be Hugely More Extensive Than the Patch of Space and Time Traditionally Called 'The Universe' .....As an analogy (which I owe to Paul Davies) consider the form of snowflakes. Their ubiquitous six-fold symmetry is a direct consequence of the properties and shape of water molecules. But snowflakes display an immense variety of patterns because each is molded by its distinctive history and micro-environment: how each flake grows is sensitive to the fortuitous temperature and humidity changes during its growth. If physicists achieved a fundamental theory, it would tell us which aspects of nature were direct consequences of the bedrock theory (just as the symmetrical template of snowflakes is due to the basic structure of a water molecule) and which cosmic numbers are (like the distinctive pattern of a particular snowflake) the outcome of environmental contingencies. . Theoretical physicist An Explanation of Fundamental Particle Physics That Doesn't Exist Yet.....What is tetrahedral symmetry doing in the masses of neutrinos?! 
Nobody knows. But you can bet there will be a good explanation. It is likely that this explanation will come from mathematicians and physicists working closely with Lie groups. The most important lesson from the great success of Einstein's theory of General Relativity is that our universe is fundamentally geometric, and this idea has extended to the geometric description of known forces and particles using group theory. It seems natural that a complete explanation of the Standard Model, including why there are three generations of fermions and why they have the masses they do, will come from the geometry of group theory. This explanation does not yet exist, but when it does it will be deep, elegant, and beautiful—and it will be my favorite.

Mathematician, Harvard; Co-author, The Shape of Inner Space

A Sphere....Most scientific facts are based on things that we cannot see with the naked eye, hear with our ears, or feel with our hands. Many of them are described and guided by mathematical theory. In the end, it becomes difficult to distinguish a mathematical object from objects in nature. One example is the concept of a sphere. Is the sphere part of nature or is it a mathematical artifact? That is difficult for a mathematician to say. Perhaps the abstract mathematical concept is actually a part of nature. And it is not surprising that this abstract concept actually describes nature quite accurately.

Theoretical physicist; Professor, Department of Physics, University of California,...

Gravity Is Curvature Of Spacetime … Or Is It?......We do not yet know the full shape of the quantum theory providing a complete accounting for gravity. We do have many clues, from studying the early quantum phase of cosmology, and ultrahigh energy collisions that produce black holes and their subsequent disintegrations into more elementary particles. We have hints that the theory draws on powerful principles of quantum information theory.
And, we expect that in the end it has a simple beauty, mirroring the explanation of gravity-as-curvature, from an even more profound depth.

Albert Einstein Professor in Science, Departments of Physics and Astrophysical...

Quasi-elegance....When I first read Weyl's book as a young student, crystallography seemed like the "ideal" of what one should be aiming for in science: elegant mathematics that provides a complete understanding of all physical possibilities. Ironically, many years later, I played a role in showing that my "ideal" was seriously flawed.

In 1984, Dan Shechtman, Ilan Blech, Denis Gratias and John Cahn reported the discovery of a puzzling manmade alloy of aluminum and manganese with icosahedral symmetry. Icosahedral symmetry, with its six five-fold symmetry axes, is the most famous forbidden crystal symmetry. As luck would have it, Dov Levine (Technion) and I had been developing a hypothetical idea of a new form of solid that we dubbed quasicrystals, short for quasiperiodic crystals. (A quasiperiodic atomic arrangement means the atomic positions can be described by a sum of oscillatory functions whose frequencies have an irrational ratio.) We were inspired by a two-dimensional tiling invented by Sir Roger Penrose known as the Penrose tiling, composed of two tiles arranged in a five-fold symmetric pattern. We showed that quasicrystals could exist in three dimensions and were not subject to the rules of crystallography. In fact, they could have any of the symmetries forbidden to crystals. Furthermore, we showed that the diffraction patterns predicted for icosahedral quasicrystals matched the Shechtman et al. observations. Since 1984, quasicrystals with other forbidden symmetries have been synthesized in the laboratory. The 2011 Nobel Prize in Chemistry was awarded to Dan Shechtman for his experimental breakthrough that changed our thinking about possible forms of matter.
More recently, colleagues and I have found evidence that quasicrystals may have been among the first minerals to have formed in the solar system. The crystallography I first encountered in Weyl's book, thought to be complete and immutable, turned out to be woefully incomplete, missing literally an uncountable number of possible symmetries for matter. Perhaps there is a lesson to be learned: while elegance and simplicity are often useful criteria for judging theories, they can sometimes mislead us into thinking we are right, when we are actually infinitely wrong.

Physicist, Harvard University; Author, Warped Passages; Knocking On Heaven's Door

The Higgs Mechanism......Fortunately that time has now come for the Higgs mechanism, or at least the simplest implementation, which involves a particle called the Higgs boson. The Large Hadron Collider at CERN near Geneva should have a definitive result on whether this particle exists within this coming year. The Higgs boson is one possible (and many think the most likely) consequence of the Higgs mechanism. Evidence last December pointed to a possible discovery, though more data is needed to know for sure. If confirmed, it will demonstrate that the Higgs mechanism is correct and furthermore tell us what underlying structure is responsible for spontaneous symmetry breaking and for spreading "charge" throughout the vacuum. The Higgs boson would furthermore be a new type of particle (a fundamental boson, for those versed in physics terminology) and would be in some sense a new type of force. Admittedly, this is all pretty subtle and esoteric. Yet I (and much of the theoretical physics community) find it beautiful, deep, and elegant.

Symmetry is great. But so is symmetry breaking. Over the years many aspects of particle physics were first considered ugly and then considered elegant. Subjectivity in science goes beyond communities to individual scientists. And even those scientists change their minds over time.
That's why experiments are critical. As difficult as they are, results are much easier to pin down than the nature of beauty. A discovery of the Higgs boson will tell us how that is done when particles acquire their masses.

Professor of Quantum Mechanical Engineering, MIT; Author, Programming the Universe

The True Rotational Symmetry of Space.....Although this exercise might seem no more than some fancy and painful basketball move, the fact that the true symmetry of space is rotation not once but twice has profound consequences for the nature of the physical world at its most microscopic level. It implies that 'balls' such as electrons, attached to a distant point by a flexible and deformable 'string,' such as magnetic field lines, must be rotated around twice to return to their original configuration. Digging deeper, the two-fold rotational nature of spherical symmetry implies that two electrons, both spinning in the same direction, cannot be placed in the same place at the same time. This exclusion principle in turn underlies the stability of matter. If the true symmetry of space were rotating around only once, then all the atoms of your body would collapse into nothingness in a tiny fraction of a second. Fortunately, however, the true symmetry of space consists of rotating around twice, and your atoms are stable, a fact that should console you as you ice your shoulder.

Remember, even though I pick some of these explanations, it does not mean I discount all the others. It's just that some are picked for what they are saying in highlighted quotations. Lisi's statement on string theory is of course, in my opinion, far from the truth; yet he captures a geometrical truth that I feel exists. :) You sort of get the gist of where I am coming from in the summation of Paul

Xianfeng David Gu and Shing-Tung Yau

To a topologist, a rabbit is the same as a sphere. Neither has a hole.
Longitude and latitude lines on the rabbit allow mathematicians to map it onto different forms while preserving information.

William Thurston of Cornell, the author of a deeper conjecture that includes Poincaré's and that is now apparently proved, said, "Math is really about the human mind, about how people can think effectively, and why curiosity is quite a good guide," explaining that curiosity is tied in some way with intuition. "You don't see what you're seeing until you see it," Dr. Thurston said, "but when you do see it, it lets you see many other things."

Elusive Proof, Elusive Prover: A New Mathematical Mystery

Some of us are of course interested in how we can assign to perceptions the deeper recognition of the processes of nature: how we get there and where we believe they come from. As a layman I am always interested in this process, and of course, life's mysteries can indeed be a motivating factor, motivating my interest in the nature of things that go unanswered and how we get there.

William Paul Thurston (born October 30, 1946) is an American mathematician. He is a pioneer in the field of low-dimensional topology. In 1982, he was awarded the Fields Medal for the depth and originality of his contributions to mathematics. He is currently a professor of mathematics and computer science at Cornell University (since 2003).

There are reasons for which I present this biography, as I did in relation to Poincaré and Klein. The basis of the question remains a philosophical one for me: I question the basis of proof and intuition while considering the mathematics.

Mathematical Induction

Mathematical induction is a method of proof used to establish that a given statement is true of all natural numbers. It is done by proving that the first statement in the infinite sequence of statements is true, and then proving that if any one statement in the infinite sequence of statements is true, then so is the next one.
The method can be extended to prove statements about more general well-founded structures, such as trees; this generalization, known as structural induction, is used in mathematical logic and computer science. Mathematical induction should not be misconstrued as a form of inductive reasoning, which is considered non-rigorous in mathematics (see Problem of induction for more information). In fact, mathematical induction is a form of deductive reasoning and is fully rigorous.

Deductive reasoning

Deductive reasoning is reasoning which uses deductive arguments to move from given statements (premises), which are assumed to be true, to conclusions, which must be true if the premises are true.

The classic example of deductive reasoning, given by Aristotle, is:

* All men are mortal. (major premise)
* Socrates is a man. (minor premise)
* Socrates is mortal. (conclusion)

For a detailed treatment of deduction as it is understood in philosophy, see Logic. For a technical treatment of deduction as it is understood in mathematics, see mathematical logic.

Deductive reasoning is often contrasted with inductive reasoning, which reasons from a large number of particular examples to a general rule. The basic difference between the two can be summarized in the deductive dynamic of logically progressing from general evidence to a particular truth or conclusion; with induction, the logical dynamic is precisely the reverse. Inductive reasoning starts with a particular observation that is believed to be a demonstrative model for a truth or principle that is assumed to apply generally. Deductive reasoning applies general principles to reach specific conclusions, whereas inductive reasoning examines specific information, perhaps many pieces of specific information, to impute a general principle. By thinking about phenomena such as how apples fall and how the planets move, Isaac Newton induced his theory of gravity.
In the 19th century, Adams and LeVerrier applied Newton's theory (general principle) to deduce the existence, mass, position, and orbit of Neptune (specific conclusions) from perturbations in the observed orbit of Uranus (specific data).

Deduction and Induction

Our attempt to justify our beliefs logically by giving reasons results in the "regress of reasons." Since any reason can be further challenged, the regress of reasons threatens to be an infinite regress. However, since this is impossible, there must be reasons for which there do not need to be further reasons: reasons which do not need to be proven. By definition, these are "first principles." The "Problem of First Principles" arises when we ask why such reasons would not need to be proven. Aristotle's answer was that first principles do not need to be proven because they are self-evident, i.e. they are known to be true simply by understanding them.

Back to the lumping in of theology alongside Atlantis. Rebel dreams: it is hard to remove one's colour once one works from a certain premise, atheistic or not. Seeking such clarity would be the attempt, for me, to approach a point of limitation in our knowledge as we try to explain the process of the current state of the universe and its shape. Such warnings are indeed appropriate to me about what we are offering for views from a theoretical standpoint.

The basis presented here is from a layman's standpoint; in the context of Plato's work, it brings some perspective to Raphael's painting, "The School of Athens." It is a central theme for me about what the basis of inductive and deductive processes reveals about the "infinite regress of mathematics to the point of proof." Such clarity-seeking would in my mind contrast a theoretical technician with a philosopher who had such a background. It raises the philosophical question of where such information is derived from. If that is so, then from a Platonic standpoint all knowledge already exists.
We just have to become aware of this knowledge? How so?

Lawrence Crowell: The ball on the Mexican hat peak will under the smallest perturbation or fluctuation begin to fall off the peak, roll into the trough, and the universe tunnels out of the vacuum or nothing to become a "something."

Whether I attach an indication of God to this knowledge does not in any way relegate the process to such a contention of theological significance. The question remains: is it an inductive or a deductive process? I would think philosophers should weigh in on the point of inductive/deductive processes as it relates to the search for new mathematics.

Allegory of the Cave

For me this was a difficult task: to decipher the greater contextual meaning of where such mathematics arose from. That I should employ such methods would seem to be, to me, in keeping with the problems and ultimate searches for meaning about our place in the universe. Whether I believe in the "God nature of that light" should hold no atheistic interpretation of my quest for explanations in the talk on the origins of the universe.
Thermodynamics help please. Compression, heat, cool, decompression

I agree with rap - that looks good. You can use the same technique for the second part of your question as well. That temperature is low enough that dry air should pretty much act ideally (so you don't need to worry about deviations from ideal-gas behavior), so unless you've got extremely high humidity, that should be a pretty accurate result.

I was writing when you posted and missed your reply earlier. Thank you for the verification and input. So, I can use the same equation to find the temperature upon decompression? Let's try it out!

k = 1.4, so the exponent (k - 1)/k is 0.286
P2 = 29.4 psi absolute (14.7 psig)
P1 = 73.5 psi absolute
T1 = 294 K

So, (29.4 / 73.5) to the power of 0.286 = 0.7695, and 0.7695 x 294 = 226 kelvin = -53°F.

If all the above is sound and correct, that's pretty awesome! Thank you cjl!
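For reference, a minimal sketch of the isentropic relation being used in the thread, T2 = T1 × (P2/P1)^((k-1)/k), assuming ideal-gas behavior; the function name is mine, not from the thread:

```python
def adiabatic_temperature(t1_kelvin, p1_abs, p2_abs, k=1.4):
    """Final temperature after a reversible adiabatic (isentropic) process
    for an ideal gas: T2 = T1 * (P2/P1)**((k-1)/k).
    Pressures must be absolute and in the same units; k = 1.4 for dry air."""
    return t1_kelvin * (p2_abs / p1_abs) ** ((k - 1.0) / k)

# Reproducing the thread's decompression example (psia, dry air):
t2 = adiabatic_temperature(294.0, 73.5, 29.4)
print(round(t2))  # 226 (kelvin), about -53 °F
```

The same function handles the compression direction simply by swapping which pressure is initial and which is final.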
Multichannel DDS enables efficient FSK/PSK modulation | EE Times Design How-To

Frequency-shift keying (FSK) and phase-shift keying (PSK) modulation schemes are used in digital communications, radar, RFID, and numerous other applications. The simplest form of FSK uses two discrete frequencies to transmit binary information, with Logic 1 representing the mark frequency and Logic 0 representing the space frequency. The simplest form of PSK is binary (BPSK), which uses two phases separated by 180°. Figure 1 illustrates the two types of modulation.

Figure 1. Binary FSK (a) and PSK (b) modulation.

The modulated output of a direct digital synthesizer (DDS) can switch frequency and/or phase in a phase-continuous or phase-coherent manner, as shown in Figure 1 (also see Ref. 1), making DDS technology well suited for both FSK and PSK modulation. This article describes how two synchronized DDS channels can implement a zero-crossing FSK or PSK modulator. Here, the AD9958 two-channel, 500-MSPS DDS is used to switch frequencies or phases at the zero-crossing point, but any two-channel synchronized solution should be capable of accomplishing this function.

In phase-coherent radar systems, zero-crossing switching reduces the amount of post-processing needed for signature recognition of the target, and implementing PSK at the zero crossing reduces spectral splatter. Although both of the DDS-channel outputs are independent, they share an internal system clock and reside on a single piece of silicon, so they should provide more reliable channel-to-channel tracking over temperature and power-supply deviations than the outputs of multiple single-channel devices synchronized together.
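As a rough software analogy to the phase-continuous FSK switching of Figure 1 (an illustrative sketch only, not the AD9958's internal implementation; the function and parameter names are mine), a phase accumulator whose phase carries over across frequency switches produces no discontinuities at bit boundaries:

```python
import math

def fsk_samples(bits, f_mark, f_space, fs, samples_per_bit):
    """Phase-continuous binary FSK: switching frequency changes only the
    per-sample phase increment, while the accumulated phase carries over,
    so the waveform has no jumps at bit boundaries."""
    phase = 0.0
    out = []
    for bit in bits:
        freq = f_mark if bit else f_space  # Logic 1 -> mark, Logic 0 -> space
        step = 2.0 * math.pi * freq / fs   # phase increment per sample
        for _ in range(samples_per_bit):
            out.append(math.sin(phase))
            phase = (phase + step) % (2.0 * math.pi)
    return out

wave = fsk_samples([1, 0, 1], f_mark=1200.0, f_space=2200.0,
                   fs=48000.0, samples_per_bit=40)
```

Because the phase is continuous, consecutive samples never differ by more than the largest per-sample phase increment (about 0.29 rad with these illustrative frequencies), which is easy to verify numerically.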
The process variability that may exist between distinct devices is also larger than any process variability you might see between two channels fabricated in a single piece of silicon, making a multichannel DDS preferable for use as a zero-crossing FSK or PSK modulator.

Figure 2. Setup for zero-crossing FSK or PSK modulator.

A critical element of any DDS is the phase accumulator, which, in this implementation, is 32 bits wide. When the accumulator overflows, it retains any excess value. When the accumulator overflows with no remainder (see Figure 3), the output is precisely at phase 0, and the DDS engine starts over from where it was at time 0. The rate at which the zero-overflow is experienced is referred to as the grand-repetition rate (GRR) of the DDS.

Figure 3. Basic DDS with overflowing accumulator.

The GRR is determined by the rightmost nonzero bit of the DDS frequency tuning word (FTW), as established by the following equation: GRR = FS/2^n, where FS is the sampling frequency of the DDS, and n is the rightmost nonzero bit of the FTW. For example, suppose a DDS with a 1-GHz sampling frequency employs 32-bit mark and space FTWs with the binary values shown. In this case, the rightmost nonzero bit of either FTW is the 19th bit, so GRR = 1 GHz/2^19, or approximately 1907 Hz.
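The GRR arithmetic can be sketched as follows, assuming (consistently with the 19th-bit example above) that bit positions are counted from the MSB of the 32-bit tuning word; the helper names are mine:

```python
from math import gcd

def dds_period_samples(ftw, width=32):
    """Clock cycles until the phase accumulator overflows back to exactly
    zero (the grand-repetition period): 2**width / gcd(FTW, 2**width)."""
    return (1 << width) // gcd(ftw, 1 << width)

def grand_repetition_rate(ftw, fs_hz, width=32):
    """GRR = FS / 2**n, where n is the position of the FTW's rightmost
    nonzero bit counted from the MSB; equivalently, FS divided by the
    accumulator's grand-repetition period in samples."""
    return fs_hz / dds_period_samples(ftw, width)

# An illustrative FTW whose rightmost set bit is the 19th from the MSB
# (i.e. 13 trailing zeros), matching the article's example:
ftw = 1 << 13
print(grand_repetition_rate(ftw, 1e9))  # ~1907.35 Hz, i.e. 1 GHz / 2**19
```

The gcd form and the rightmost-bit form are equivalent: an FTW with z trailing zeros has gcd(FTW, 2^32) = 2^z, so the period is 2^(32-z) samples and n = 32 - z.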