3-14 CHARACTERISTIC IMPEDANCE OF A TRANSMISSION LINE

You learned earlier that the maximum (and most efficient) transfer of electrical energy takes place
when the source impedance is matched to the load impedance. This fact is very important in the study of
transmission lines and antennas. If the characteristic impedance of the transmission line and the load
impedance are equal, energy from the transmitter will travel down the transmission line to the antenna with no power loss caused by reflection.

Definition and Symbols
Every transmission line possesses a certain CHARACTERISTIC IMPEDANCE, usually designated
as Z[0]. Z[0] is the ratio of E to I at every point along the line. If a load equal to the characteristic impedance
is placed at the output end of any length of line, the same impedance will appear at the input terminals of
the line. The characteristic impedance is the only value of impedance for any given type and size of line
that acts in this way. The characteristic impedance determines the amount of current that can flow when a
given voltage is applied to an infinitely long line. Characteristic impedance is comparable to the resistance that determines the amount of current that flows in a dc circuit.
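The relation between a line's per-unit-length constants and its characteristic impedance can be sketched numerically. Below is an illustrative Python snippet (not part of this manual) using the standard formula Z0 = sqrt((R + jwL)/(G + jwC)); the nanohenry/picofarad values are made-up example constants.

```python
import cmath
import math

def char_impedance(R, L, G, C, f):
    """Z0 = sqrt((R + jwL) / (G + jwC)) from the per-unit-length
    series resistance R, inductance L, shunt conductance G, and
    capacitance C at frequency f."""
    w = 2 * math.pi * f
    return cmath.sqrt((R + 1j * w * L) / (G + 1j * w * C))

# For a lossless line (R = G = 0) this reduces to sqrt(L/C):
z0 = char_impedance(0.0, 250e-9, 0.0, 100e-12, 10e6)
print(abs(z0))  # ~50 ohms for L = 250 nH/m, C = 100 pF/m
```

Note that for the lossless case the result is purely resistive and independent of frequency, consistent with the text's statement that Z0 is a property of the line's type and size alone.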
In a previous discussion, lumped and distributed constants were explained. Figure 3-15, view A,
shows the properties of resistance, inductance, capacitance, and conductance combined in a short section
of two-wire transmission line. The illustration shows the evenly distributed capacitance as a single
lumped capacitor and the distributed conductance as a lumped leakage path. Lumped values may be used
for transmission line calculations if the physical length of the line is very short compared to the
wavelength of energy being transmitted. Figure 3-15, view B, shows all four properties lumped together and represented by their conventional symbols.
Figure 3-15. Short section of two-wire transmission line and equivalent circuit.

Q19. Describe the leakage current in a transmission line and in what unit it is expressed.
A plane curve consisting of two infinite branches symmetrically placed with reference to the diameter of a circle, so that at one of its extremities they form a cusp, while the tangent to the circle
at the other extremity is their common asymptote.
Given a fixed point A and two curves C and D, the cissoid of the two curves with respect to A is constructed as follows: pick a point P on C, and draw a line l through P and A. This cuts D at Q. Let
R be the point on l such that AP = QR. The locus of R as P moves on C is the cissoid. The name "cissoid," which means "ivy-like," first appears in the work of Geminus in the first century BC. The
reason for the name is that when the asymptote is vertical, the circular cissoid seems to grow toward it as ivy does to a wall.
A special case of this curve, now known as the cissoid of Diocles, was first explored by Diocles in his attempt to solve the classical problem of duplicating the cube. Later investigators of the
same curve include Pierre de Fermat, Christiaan Huygens, John Wallis, and Isaac Newton. Newton first showed how to describe the curve by continuous motion.
The cissoid of Diocles is traced out by the vertex of a parabola as it rolls, without slipping, on a second parabola of the same size. It has the Cartesian equation
y^2 = x^3/(2a - x).
The area included between the two branches of a cissoid and the asymptote is exactly equal to three times the area of the generating circle.
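That area claim can be checked numerically. Assuming the Cartesian form y^2 = x^3/(2a - x), with asymptote x = 2a and a generating circle of radius a, the substitution x = 2a(1 - v^2) removes the endpoint singularity and reduces the area to a smooth integral (a Python sketch, not part of the original article):

```python
import math

def cissoid_area(a, n=10_000):
    # Area between both branches of y^2 = x^3/(2a - x) and the
    # asymptote x = 2a.  Substituting x = 2a(1 - v^2) gives
    #   Area = 16 * a^2 * integral_0^1 (1 - v^2)^(3/2) dv,
    # which composite Simpson's rule handles easily.
    f = lambda v: (1.0 - v * v) ** 1.5
    h = 1.0 / n
    s = sum((f(i * h) + 4 * f((i + 0.5) * h) + f((i + 1) * h)) * h / 6
            for i in range(n))
    return 16 * a * a * s

print(cissoid_area(1.0))  # approaches 3*pi, i.e. three times pi*a^2
```

For a = 1 the generating circle has area pi, and the computed value converges to 3*pi, matching the stated result.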
Interestingly, Diocles investigated the properties of the focal point of a parabola in On Burning Mirrors (a similar title appears in the works of Archimedes). The problem, then as now, is to find a
mirror surface such that when it is placed facing the Sun, it focuses the maximum amount of heat.
Related category
PLANE CURVES
Version: 4.0.2.6
1 Why Computer Science Doesn’t Matter
(with Shriram Krishnamurthi)
1.1 Grade School, High School, and the Three Rs
The three Rs – reading, ’riting, ’rithmetic – symbolize what matters in American primary and secondary education. Teaching these three essential skills dominates the schools’ agenda in the minds of
parents, educators and legislators. Any new material competes with these core elements; if it isn’t competitive, it plays a marginal role, often relegated to nerd status or aligned with vocational education.
Computer science plays such a marginal role. While participation in Advanced Placement courses on mathematical subjects soars, the enrollment in AP computer science courses barely registers. Even
during the dot-com boom in the 1990s, it never managed to compete with mainstream subjects, motivating massive industry outreach efforts such as the Academies of Information Technology project.
Worse, evidence shows many schools are likely to eliminate courses on computing as soon as the enrollment drops off from its heights during boom times.
A large part of the problem is due to how computing is marketed to schools, to parents, to the people who allocate the education budgets, and even to students. Our community insists on teaching the
fashionable programming language and the popular programming paradigm. Some can’t even imagine missing out on the latest software engineering ideas du jour: structured and top-down programming,
objects and encapsulation, objects-first, objects-last, event-triggered, and all with uml diagrams. Of course, most of the languages and programming paradigms won’t be around by the time students
graduate but never mind that.
Not all of these are necessarily bad, but they are tangents not essentials.
When enrollments drop off, the leaders of the computer science education community routinely look for saviors in programming content: graphics, animation, robotics, and games have all been cast in
this role. Others act embarrassed about programming, our field’s single most valuable skill, and propose to deemphasize it in favor of “appealing” aspects. And a third group believes that lobbying
and advertising differently may just be the key to success.
What our community should really aim for is the development of a curriculum that turns our subject into the Fourth R – as in ’rogramming – of our education systems. One way of doing so is to align
programming with one of the Three Rs and to make it indispensable. An alignment with mathematics is obvious, promising, and may even help solve some problems with mathematics education.
1.2 Mathematics and Programming
All students must enroll in mathematics for most of their school years. Many students have difficulties imagining how mathematics works and seeing why it matters. Good programming has the potential
to partially solve this problem with properly used graphics and animations, something that almost every student likes.
Naturally, we realized that doing so cannot be achieved with the dominant approaches to programming instruction. Writing down “public static void main ( String argv [ ]) {” just to define a
mathematical function makes little sense.
Even automating this one annoying step of Java programming – plus eliminating “class Example” – won’t help because the results still aren’t aligned with mathematics. For a true alignment:
• the nature of this form of programming must be so mathematical that it is nearly indistinguishable from mathematics;
• the mathematics must be expanded to the manipulation of non-numeric forms of data, including images and animations; and
• programming must become so lightweight that it imposes no more than a few minutes of teaching overhead.
If we don’t develop and accept this notion of programming, little skill transfer will take place, and no educational administrator will ever admit that Rogramming belongs right up there with Reading,
Riting, Rithmetic. If we do succeed, we will solve two problems. First, we will provide students with an interactive, engaging medium for studying mathematics. Second, we will align programming with
a subject that is considered indispensable to every student’s education. If we do it right, there will also be a smooth (continuous) path from this lightweight notion of programming to full-fledged
software engineering.
Let’s make this vision a bit more concrete. Imagine a typical algebra text book from your younger days. As you may know from watching your children, things haven’t changed that much. These books
still contain exercises that ask questions such as what is the next entry in the following table
1 2 3 4 5 ... x
1 4 9 16 ?? ... ??
or create a general “variable expression” that computes any arbitrary entry of the table. Students are of course expected to say that 5 comes with 25 and x comes with x * x. A student who has been
told about functions may even write down y = x * x.
Why not train the students to embrace f(x) = x * x? This notation explicitly states what varies and what depends on variations.
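In a mainstream language the same move is a one-liner. Here is a Python sketch for illustration only (the project's own teaching languages, described later in the essay, are not Python):

```python
def f(x):
    """The textbook table's rule, written as an explicit function of x."""
    return x * x

# The table's filled-in row:
print([f(x) for x in [1, 2, 3, 4, 5]])  # [1, 4, 9, 16, 25]
```

The definition mirrors the mathematical notation f(x) = x * x almost symbol for symbol, which is exactly the alignment the essay argues for.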
To make such problems look “relevant,” math textbooks may reformulate this kind of question involving real-life things. Well-worn examples demand that trains meet between two cities and that garden
tiles be arranged in some rectangular pattern that happens to correspond to a quadratic formula.
Why not tell students that animations are series of images and that displaying them at a rate of 28 images per second makes a movie?
We could also tell them modern arithmetic and algebra doesn’t have to be about numbers. It may involve images, strings, symbols, Cartesian points, and other forms of “objects.” For example,
is an arithmetic expression involving images in addition to numbers, and the value of such an expression is just another image:
placeImage(,25,0,) =
Making movies is all about using the “arithmetic of images” (and its algebra).
Now imagine asking students to determine how quickly a rocket reaches a certain height. We could start with a table and the simplifying assumption that rockets lift off at constant speeds:
0 1 2 3 4 ... t
0 10 20 30 40 ... height(t) = ?
And, since students know about images, we could express this exercise as a problem involving series of images, asking questions such as what is the next entry in the following table
1 2 3 4 5 ... t
?? ... rocketImage(t) = ?
or create a general “variable expression” that computes any arbitrary entry of the table.
You would hope to get an answer like this one:
rocketImage(t) = placeImage(,25,10*t,)
With one more step, students can display this mathematical series of images and get the idea that constructing mathematical series can become something visually pleasing:
This expression demands that rocketImage be applied to 0, 1, 2, etc and that the result be displayed at a rate of 28 images per second.
A sophisticated teacher may even point out here the possibility of re-using the results of one mathematical exercise in another:
rocketImage(t) = placeImage(,25,height(t),)
Students thus see and visualize the composition of functions and expressions, all while using mostly plain mathematics as a programming language.
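The reuse of height(t) inside rocketImage(t) can be sketched outside the teaching languages too. Below is a hypothetical Python rendering in which a string stands in for the image that the original page displayed (the placeImage arguments were images lost in this copy):

```python
def height(t):
    # The earlier table's answer: constant-speed lift-off.
    return 10 * t

def rocket_image(t):
    # Reuse one exercise's function inside another -- the composition
    # the essay points out.  A string stands in for the real image.
    return f"rocket at altitude {height(t)}"

# A "movie" is just this function applied to 0, 1, 2, ...:
for t in range(5):
    print(rocket_image(t))
```

Changing height() (say, to model acceleration) changes every frame of the movie automatically, which is the payoff of composing functions rather than repeating formulas.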
By this time, a reader shouldn’t be surprised to find out that the above isn’t just imagination or a simple software application for scripting scenes. It is a full-fledged programming language and,
in this language, even the design and implementation of interactive video games doesn’t take much more than algebra and geometry. We have prototyped and field-tested this idea, and it works.
1.3 TeachScheme!, Reach Java
When we founded the TeachScheme! (pronounced: teach Scheme, not) project in 1995, we explicitly set the goal of integrating an appealing form of programming into the core curriculum. We wanted
• to exploit what students find attractive about programming, graphics and animations;
• to introduce such concepts as quickly as possible and as mathematically as possible;
• to make programming enriching and indispensable for all students; and
• to create a smooth path from such humble beginnings to full-fledged programming.
We wanted to place computing where it belongs: in the hearts and minds of every single student.
At the technical level, we succeeded, though it took us several years to get there. We now use the above “rocket launching” program as our “hello world” program. Students are introduced to
programming just as we described it above. Nobody should be surprised to find out that students love this introduction and find it appealing; we never tell them that they are “doing mathematics.” It
is only a matter of four weeks to go from here to interactive graphical games.
While the project does not use the Scheme programming language per se, it relies on a series of four loosely related dialects that stay as close as possible to mathematics. Every operation is a
mathematical operation; translating any program into plain mathematics is straightforward.
Last but not least, the project team has also extended the curriculum with units that lead to plain Java programming. More precisely, the project supports a series of teaching languages that take
students from this mathematically oriented introduction to classes, interfaces and methods on a step-by-step basis. At no point do students encounter a step too steep to conquer. Everything is just one
more link in a seemingly natural chain. In the end, students find computing with objects and message sending completely natural.
Over 12 years, we have trained a few hundred teachers and college faculty in extremely intensive one-week workshops. Independent evaluators have confirmed that 95% of the attendees found these
workshops to be mind-opening. “They changed the way we thought about computing” was the most common response. Due to preconceptions and prejudices, only some of these trained teachers and faculty
have had a chance to translate their workshop training into a couple of years of classroom teaching. They found it appealing and they also found highly positive effects on the performance of their
students on the AP/CS exam. Even more important, they found that their students improved their mathematical problem-solving skills. In short, the curriculum works for all students and brings students
naturally into the fold of our discipline.
1.4 Crossroads
Our community is at a crossroads. We can continue to search for more saviors and hope that somehow, somewhere computing will receive the respect that it deserves. Or we can try to help ourselves,
exploit what students find appealing and align ourselves with (and help) the core school curriculum. TeachScheme! is just one way of moving in this direction. There could be others. What we do know
is that we tried the old ways for 40 years and that they didn’t succeed. The choice is ours.
1.4.1 Acknowledgment
Kathi Fisler helped hone our thoughts in this essay.
Nonlinear transformations in Lagrangians and a graphical calculus
Seminar Room 1, Newton Institute
We describe first a general graphical method to represent nonlinear transformations, and apply it to the case of the invariant transformations of tensors of a certain kind. We interpret then the Lie algebra as well as the Hopf algebra of functions of such a group of nonlinear transformations. Finally we show how to connect these constructions with the Hopf algebra introduced by Connes and Kreimer in renormalization theory. As a bonus, we settle a vexing question of normalization connected with the number of symmetries of a Feynman diagram, and we recover a theorem of Connes and Kreimer about the renormalization group.
AMS/IP Studies in Advanced Mathematics
1997; 604 pp; hardcover
Volume: 2
ISBN-10: 0-8218-0654-8
ISBN-13: 978-0-8218-0654-8
List Price: US$84
Member Price: US$67.20
Order Code: AMSIP/2.1
This is Part 1 of a two-part volume reflecting the proceedings of the 1993 Georgia International Topology Conference held at the University of Georgia during the month of August. The texts include
research and expository articles and problem sets. The conference covered a wide variety of topics in geometric topology.
• Kirby's problem list, which contains a thorough description of the progress made on each of the problems and includes a very complete bibliography, makes the work useful for specialists and
non-specialists who want to learn about the progress made in many areas of topology. This list may serve as a reference work for decades to come.
• Gabai's problem list, which focuses on foliations and laminations of 3-manifolds, collects for the first time in one paper definitions, results, and problems that may serve as a defining source
in the subject area.
Titles in this series are copublished with International Press, Cambridge, MA.
Graduate students, research mathematicians, and physicists interested in manifolds and cell complexes.
• J. H. Rubinstein -- Polyhedral minimal surfaces, Heegaard splittings, and decision problems for 3-dimensional manifolds
• D. R. Auckly -- Surgery numbers of 3-manifolds: A hyperbolic example
• M. Eudave-Munoz -- Non-hyperbolic manifolds obtained by Dehn surgery on hyperbolic knots
• S. P. Boyer and X. Zhang -- Cyclic surgery and boundary slopes
• J. E. Luecke and Y.-Q. Wu -- Relative Euler number and finite covers of graph manifolds
• M. Ouyang -- Geometric invariants for 3-manifolds
• N. Habegger -- Link homotopy in simply connected 3-manifolds
• D. Gabai and W. H. Kazez -- Homotopy, isotopy, and genuine laminations of 3-manifolds
• D. Bar-Natan -- Non-associative tangles
• X.-S. Lin -- Power series expansions and invariants of links
• G. Masbaum -- Introduction to spin TQFT
• J. D. Roberts -- Refined state-sum invariants of 3- and 4-manifolds
• D. R. Auckly and L. A. Sadun -- A family of Möbius invariant 2-knot energies
• H. U. Boden -- Invariants of fibred knots from moduli
• L. C. Jeffrey -- Symplectic forms on moduli spaces of flat connections on 2-manifolds
• A. L. Edmonds -- Automorphisms of the \(E_8\) four-manifold
• P. Teichner -- On the star-construction for topological 4-manifolds
• Y. Eliashberg and L. Polterovich -- The problem of Lagrangian knots in four-manifolds
• F. Lalonde -- Energy and capacities in symplectic topology
• W. B. R. Lickorish -- Piecewise linear manifolds and Cerf theory
• G. A. Venema -- Local homotopy properties of topological embeddings in codimension two
• S. A. Weinberger -- \(SL(n,{\bf Z})\) cannot act on small tori
• K. Fukaya -- Morse homotopy and its quantization
• F. D. Ancel, M. W. Davis, and C. R. Guilbault -- CAT(0) reflection manifolds
• A. D. Randall -- On oriented bundle theories and applications
• J.-C. Hausmann -- On minimal coverings in the sense of Lusternik-Schnirelmann for spaces and manifolds
• K. Yoshikawa -- The centers of fibered two-knot groups
• M. Yan -- Equivariant periodicity in surgery for actions of some nonabelian groups
• S. A. Weinberger -- Microsurgery on stratified spaces
• J. L. Block and S. A. Weinberger -- Large scale homology theories and geometry
• R. B. Kusner and J. M. Sullivan -- Möbius energies for knots and links, surfaces, and submanifolds
Math Anxiety with Adult Learners
Colorado Technical University Online
Mathematics anxious individuals have a tendency to avoid mathematics, which can damage their mathematics competence and can have detrimental effects on their career. This paper looks at several
studies that may point to the cause of mathematics anxiety and then examines some of the successful treatment methods available to adult learners. The research points to past experiences and
instructional errors as possible causes of mathematics anxiety; however, the exact cause or causes are still unknown. There do exist known treatment methods for adult learners to take advantage of in
order to correct the harm that may be done by mathematics anxiety.
Math Anxiety with Adult Learners
First introduced by Dreger and Aiken in 1957 (Baloglu and Zelhart, 2007), researchers have been studying mathematics anxiety as a cause of low-self confidence, a pessimistic attitude towards
mathematics, poor achievement in mathematics such as low test scores, and a fear of failure (Bessant, 1995). Mathematics anxiety, according to Ashcraft (2002), is “commonly defined as a feeling of
tension, apprehension, or fear that interferes with math performance” (p. 181). McAuliffe and Trueblood (cited in Bessant, 1995) believe that mathematics anxiety may come from the more general
construct of anxiety. Some of the other causes of mathematics anxiety may be a student’s overall mathematical experience and disposition, learning preferences, and mathematics competence. This paper
will focus on adult learners with respect to the causes of mathematics anxiety and some possible solutions and methods to relieve the effects of mathematics anxiety.
Hembree (1990) sought to relate the research on mathematics anxiety with respect to its nature, effects, and relief. His focus was on limiting the effects of the theoretical issues surrounding the
idea. He strove to answer three questions:
1. Is there a causal direction in the relationship between mathematics anxiety and mathematics performance?
2. Does test anxiety subsume mathematics anxiety?
3. Are behaviors related to mathematics anxiety more pronounced in females than males? (p. 35)
Hembree’s research yielded 151 articles, with ages ranging from K-12 to postsecondary, and using meta-analysis he reached conclusions for each of his questions. For the relationship between
mathematics anxiety and mathematics performance he found that when a student’s performance in mathematics increases, their level of mathematics anxiety recedes. Also, performance levels in
mathematics can be reduced to levels associated with low mathematics anxiety via treatment on those students who displayed high anxiety. Despite finding a direct relation between increased
performance and lowered levels of anxiety, there was no evidence of the inverse relationship. To answer his second question, he found that there are similarities; however, mathematics anxiety is not
limited to taking tests but a “general fear of contact with mathematics, including classes, homework, and tests” (Hembree, 1990, p. 45). With respect to his last question regarding gender and
mathematics anxiety, despite a higher level of mathematics anxiety in females, it did not relate to a decrease in performance or an increase in mathematics avoidance. He hypothesized two
explanations: “1) Females may be more willing than males to admit their anxiety, in which case their higher levels are no more than a reflection of societal mores; 2) females may cope with anxiety
better” (p. 45).
A study conducted by Jackson and Leffingwell (1999), asked the question of what types of instructor behaviors created or exacerbated anxiety and also at what point in their education did mathematics
anxiety first occur. This study was conducted over three semesters using a writing prompt to answer the question, “’Describe your worst or most challenging mathematics classroom experience from
kindergarten through college’” (Jackson and Leffingwell, 1999, p. 583). They found that only 7% of the students, from 157 responses, had a positive experience from kindergarten through college. Of
the remaining responses, Jackson and Leffingwell identified three groups of grade levels where the anxiety-producing problem occurred. These three groups were: Elementary level, High school level and
the College level (Jackson and Leffingwell, 1999, p. 583). This paper is focused on the adult learner, thus, the continued review of this study will remain aimed at the college level group. The
respondents identified several factors from the instructor which led to their mathematics anxiety: communication and language barriers, uncaring attitude of the instructor, quality of instruction,
evaluation of instruction, instructor’s dislike for the level of class, gender bias, and age discrimination (Jackson and Leffingwell, 1999). The authors intended for instructors to use this
information to alter their behavior in the classroom to limit the mathematics anxiety of their students. The authors hoped that instructors could use these findings to provide a better learning environment.
Schacht and Stewart (1990) point out that researchers may argue that mathematics and statistics anxiety are not the same but the literature shows that the two anxieties involve many of the same
feelings. Schacht and Stewart (1990) also note that there exist several means to overcome mathematics anxiety such as “desensitization, group therapy, and math immersion” (p. 53). However, they
wanted to find something that could be implemented quickly and frugally, as most academic departments do not have the resources, funding, and personnel to carry out these treatment methods. Due to the
volatility of using humor in the classroom, Schacht and Stewart chose to use a cartoon technique to control the medium. Bogart states, “The comics act to reduce tension in their readers mainly by
offering variety and a recurrent focus of interest. Their name implies that they also reduce tension through laughter” (cited in Schacht and Stewart, 1990, p. 54). The authors studied two statistics
courses at the university level and evaluated the effect the cartoons had on mathematics/statistics anxiety using a five-question evaluation for the first class. This proved to be inadequate for the
authors’ findings; thus, they chose to use the Mathematics Anxiety Rating Scale (MARS) to evaluate student mathematics anxiety levels for the second class. The MARS data did show a reduction in the
student’s mathematics anxiety levels; however, it did not measure if the cartoons were effective. The authors then asked the second class the original five questions and received similar results,
stating that the cartoons were effective. This led Schacht and Stewart (1990) to conclude that humor does help in anxiety reduction and creates a positive learning environment.
With many coping strategies available, Peskoff (2000) chose to “evaluate the relationship between college students’ level of mathematics anxiety and the strategies they employ to cope with it” (p.
34). Peskoff (2000) compared the students’ assessment of the coping methods and faculty’s assessment of the coping strategies that the students evaluated. From a community college, 279 students, from
either remedial algebra or non-remedial pre-calculus classes participated in this study. They completed the Composite Math Anxiety Scale (CMAS) to evaluate their mathematics anxiety. After the exam,
the students were then asked to complete a survey rating the ten mathematics anxiety coping strategies. The ten coping strategies available were tutoring, relaxation or exercise, discussing
mathematics anxiety with other students, discussing mathematics anxiety with a school counselor, using additional textbooks or review books, asking the instructor questions in class, completing
homework on time, reminding oneself they are a good student, setting aside extra study time, and letting one’s instructor know one is having problems (Peskoff, 2000). When analyzing the data,
Peskoff (2000) took into account three independent variables: “Mathematics Anxiety level…, Gender…, and Course Enrollment…” (p. 34), each of the ten strategies being the dependent variables. His
analysis found that students with low mathematics anxiety used the most coping methods and valued a wider variety of the coping methods. This was attributed to the fact that lower mathematics anxiety
students have a good foot hold on coping methods and are more open to use different strategies. High mathematics anxiety students tended to utilize the coping strategies of tutoring services and
discussions with their counselors more often than low mathematics anxiety students. However, both groups found these two coping strategies to be the least effective. Between the sexes, the males used
the relaxation or exercise coping strategy more than their female counterparts despite both groups and the faculty rating this as the least helpful. Females tended to use the coping strategies of
doing their homework and letting the instructor know they did not understand, which both students and faculty rated as the most helpful. The students and faculty found that the best
coping strategies were the ones in which the student met the problem head-on and was pro-active: doing homework, extra study time, asking questions, and letting their instructor know. The avoidance
methods simply remove the stressful situation from the picture to reduce mathematics anxiety.
There are many speculated causes of mathematics anxiety, ranging from bad experiences while learning mathematics (Fiore, 1999) to student avoidance (Ashcraft, 2002) to
instructional method (Clute, 1984). Nonetheless, "there has been no thorough empirical work on the origins or causes of math anxiety" (Ashcraft, 2002, p. 184). That does not mean it is not real:
Faust stated that "math anxiety is a bona fide anxiety reaction, a phobia" (as cited in Ashcraft, 2002, p. 184). Despite not knowing the cause, as in medicine, we can treat the symptoms. Several
techniques are available, such as changing instructional methods (Clute, 1984) and using coping strategies such as humor (Peskoff, 2000). There is a great body of research on the causes and
treatment of mathematics anxiety, but until the root cause is found, a cure may be hard to find. Until then we can still move forward using successful coping strategies, but more research needs to be
done to find the main cause of mathematics anxiety, as it can affect a person's livelihood.
Author Bio
Mr. Edward Marchewka
Colorado Technical University Online
Edward Marchewka has taught at the college level in the mathematics department at Colorado Technical University-Online and at Elgin Community College, and he has also taught with MicroTrain. He is a
Microsoft Certified Trainer and a CompTIA subject matter expert for CTT+, Certified Technical Trainer, Network+, and A+. He is a member of NCTM, ICTM, IMACC, ANN, and MENSA, and is a CompTIA IT
Professional Member. He completed a BA in Liberal Studies and BS in Nuclear Engineering Technologies from Thomas Edison State College. He has completed an MS in Mathematics at Northern Illinois
University, where he is also currently enrolled in the MBA program. He will be starting a doctoral program at NIU in the spring of 2011, pursuing an Ed.D. in Adult and Higher Education with a
specialization in Human Resource Development. Edward Marchewka can be reached at EMarchewka@faculty.ctuonline.edu
References
Ashcraft, M. H. (2002). Math anxiety: Personal, educational, and cognitive consequences. Current Directions in Psychological Science, 11(5), 181-185.
Baloglu, M., & Zelhart, P. F. (2007). Psychometric properties of the revised mathematics anxiety rating scale. The Psychological Record, 57, 593-611.
Bessant, K. C. (1995). Factors associated with types of mathematics anxiety in college students. Journal for Research in Mathematics Education, 26(4), 327-345.
Clute, P. S. (1984). Mathematics anxiety, instructional method, and achievement in a survey course in college mathematics. Journal for Research in Mathematics Education, 15(1), 50-58.
Fiore, G. (1999). Math abused students: Are we prepared to teach them? Mathematics Teacher, 92(5), 403-406.
Hembree, R. (1990). The nature, effects, and relief of mathematics anxiety. Journal for Research in Mathematics Education, 21(1), 33-46.
Jackson, C. D., & Leffingwell, R. J. (1999). The role of instructors in creating math anxiety in students from kindergarten through college. Mathematics Teacher, 92(7), 583-586.
Peskoff, F. (2000, July). Mathematics anxiety and the adult student: An analysis of successful coping strategies. Proceedings of the International Conference on Adults Learning, 7, 34-38.
Schacht, S. & Stewart, B. J. (1990). What’s funny about statistics? A technique for reducing student anxiety. Teaching Sociology, 18(1), 52-56. | {"url":"http://www.imagine-america.org/online-journal/math-anxiety-adult-learners","timestamp":"2014-04-19T22:08:23Z","content_type":null,"content_length":"53442","record_id":"<urn:uuid:cf8a22bb-1b31-4e38-85d9-13b3958fb113>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00371-ip-10-147-4-33.ec2.internal.warc.gz"} |
Integration by Substitution
June 12th 2009, 06:21 AM #1
$\int 5\cos(5x)\, dx$
$g'(x) = 5, \quad f(g(x)) = \cos 5x$
$\int (\cos 5x)(5)\, dx$
$(1/5)\sin 5x + C$
Those are my steps above, but the book says the answer is $\sin 5x$. It seems like it got $(1/5)\sin 5x \cdot 5$ after integrating and then canceled the 5 and 1/5, but I thought that $\int f(g(x))g'(x)\, dx = F(g(x)) + C$, which to me means that you drop the $g'(x)$ and find the antiderivative of $f(g(x))$. My other thought is that I just find the antiderivative of $f(u)$, which would be $\cos u$ in this case, which yields $\sin u + C$. Am I correct?
$\int (\cos 5x)(5)\, dx$
$= (1/5)\sin 5x \cdot 5 + C$
$= \sin 5x + C$
You can take constants outside the integration sign.
$5 \int \cos(5x)\, dx = 5 \bigg( \sin(5x) \cdot \frac{1}{5} \bigg) + C$
(note that you don't have to multiply the constant by 5 because the result will still be a constant.)
Then cancel the 5 and 1/5
Just noticed the mistake in your answer: you only integrate f(x), which is cos(x).
Then you 'sub' in x -> g(x). So you get sin(x) -> sin(5x).
I think the proper way to write it is...
$\int f(g(t))\,g'(t)\, dt = \int f(x)\, dx = F(x) + c$
where x = g(t)
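The substitution result can also be checked numerically; here is a quick sketch in Python (my own code, not from the thread). If sin(5x) really is an antiderivative of 5cos(5x), a numeric integral over [0, b] must agree with F(b) - F(0):

```python
import math

def f(x):
    """Integrand from the thread: 5*cos(5x) = f(g(x)) * g'(x)."""
    return 5 * math.cos(5 * x)

def F(x):
    """Claimed antiderivative from the book: sin(5x)."""
    return math.sin(5 * x)

def trapezoid(g, a, b, n=100_000):
    """Simple trapezoid-rule approximation of the integral of g over [a, b]."""
    h = (b - a) / n
    total = 0.5 * (g(a) + g(b))
    for i in range(1, n):
        total += g(a + i * h)
    return total * h

# Fundamental theorem check: numeric integral vs. F(b) - F(0).
b = 1.3
print(abs(trapezoid(f, 0.0, b) - (F(b) - F(0.0))) < 1e-6)  # True
```

The same check fails if you use (1/5)sin(5x) as the antiderivative, which is how you can tell the extra factor of 5 really does cancel the 1/5.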
June 12th 2009, 06:27 AM #2
June 12th 2009, 06:27 AM #3
June 12th 2009, 06:32 AM #4 | {"url":"http://mathhelpforum.com/calculus/92625-integration-substitution.html","timestamp":"2014-04-17T06:50:18Z","content_type":null,"content_length":"41315","record_id":"<urn:uuid:8413ea7e-8918-434e-a96c-c2c00097ece1>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00078-ip-10-147-4-33.ec2.internal.warc.gz"} |
How many square feet are there in a room that measures 12x12
How many square feet are there in a room that measures 12x12?
Top Q&A For: How many square feet are there in a room that m...
How many square feet are there in a room that measures 18 feet 1 inch by 13 feet 7 inches?
How many square feet are there in a room that measures 7 feet by 9 feet?
How many square feet are there in a room that measures 15 feet by 19 feet?
How many square meters in a room that measures 10 feet by 9 feet? | {"url":"http://www.qacollections.com/How-many-square-feet-are-there-in-a-room-that-measures-12x12","timestamp":"2014-04-20T03:19:50Z","content_type":null,"content_length":"23378","record_id":"<urn:uuid:f3172c65-0e58-43c0-ae6d-819cc218f113>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00013-ip-10-147-4-33.ec2.internal.warc.gz"} |
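All of these questions reduce to area = length × width in feet, converting inches to feet first and, where asked, square feet to square meters. A small sketch in Python (function names are my own):

```python
def to_feet(feet, inches=0):
    """Convert a feet-and-inches measurement to decimal feet."""
    return feet + inches / 12

def room_area_sqft(length_ft, width_ft):
    """Area of a rectangular room in square feet."""
    return length_ft * width_ft

SQM_PER_SQFT = 0.09290304  # exact conversion factor

print(room_area_sqft(12, 12))                                     # 144
print(round(room_area_sqft(to_feet(18, 1), to_feet(13, 7)), 1))   # 245.6
print(round(room_area_sqft(10, 9) * SQM_PER_SQFT, 2))             # 8.36
```

So a 12x12 room is 144 square feet, an 18'1" by 13'7" room is about 245.6 square feet, and a 10 ft by 9 ft room is about 8.36 square meters.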
Thanks guys - I just want the events to be the best they can be. :)
[mailto:rcse@googlegroups.com] On Behalf Of Rob
Sent: Thursday, September 29, 2011 6:51 PM
Subject: Re: [RCSE] Re: F3J Flight Matrix Calculations
Bubba says Amen!
Rob Glover
On Sep 29, 2011, at 16:35, <mike@themapsmith.com> wrote:
I've told you this before Jim, but another "You Rock" couldn't hurt.
You Rock!
-------- Original Message --------
Subject: RE: [RCSE] Re: F3J Flight Matrix Calculations
From: "Jim Monaco" < <mailto:jimsoars@earthlink.net>
Date: Thu, September 29, 2011 1:26 pm
To: < <mailto:rcse@googlegroups.com>
Thanks for this pointer. I downloaded the program and I can probably make
it work. I generated a matrix from the test data that he has in there and
the STDDEV is much smaller than the ones I was seeing. Now the magic is to
export the registration and team data from my DB, reformat it to import into
his DB, run the matrix and then export his data and reformat it to import
back into my program. Then I'll run my stats and see if it still looks as
Piece of cake. :)
From: rcse@googlegroups.com [mailto:rcse@googlegroups.com] On Behalf Of
Sent: Thursday, September 29, 2011 12:05 PM
To: <rcse@googlegroups.com>
Subject: RE: [RCSE] Re: F3J Flight Matrix Calculations
And F3J comps as well. It will work for you Jim.
-------- Original Message --------
Subject: [RCSE] Re: F3J Flight Matrix Calculations
From: Don Harban <misterharban@cox.net>
Date: Thu, September 29, 2011 11:00 am
To: RCSE <rcse@googlegroups.com>
Go to gliderscore.com. I just shook the synapses loose and remembered
loose and remembered
the site. The program is already set up to generate the matrices you
described and to specifically score F3K comps.
I've been evaluating it for other uses and will publish an article
about it soon.
Happy Landings,
On Sep 29, 12:40 pm, Don Harban <misterhar...@cox.net> wrote:
> I've been looking at a glider scoring program developed by an Aussie
> and widely used for nearly every kind of comp there and in Europe. I
> am on the road right now, and cannot remember the source. But it has
> a very sophisticated matrix routine which appears to be effective for
> nearly any number of contestants and flight groups. It is a complete
> scoring program, facilitating such things as throw outs, reflies and,
> of course frequency conflicts.
> It will generate flight matrices, score cards and round-by-round
> results. It will score teams and resolve team conflicts within
> rounds. It is a stand alone, not requiring excel or anything like
> that.
> In dealing with matrices it will not only create the matrices, but it
> will furnish a ton of statistical information on the matrices that it
> forms and allow unlimited retries (with statistical breakdowns) until
> you are satisfied.
> If you are interested I will furnish more information when I get home.
> Happy Landings,
> Don
> On Sep 29, 12:22 pm, <m...@themapsmith.com>
> > Jim, try gliderscore here...
> >
> > Joe Wurts says it does a pretty good job of getting a non-random, evenly
distributed flight matrix developed. He says that it uses a program similar
to one he designed to do just what you are attempting to do. The goal should be
to get each pilot to fly against other pilots an even number of times. So
the software has to check for previous pilot matches and throw out
> >
> > Hope this helps. You could contact Joe directly, but I only have his
RCGroups contact.
> >
> > Cheers,
> >
> > Mike
> >
> > -------- Original Message --------
> > Subject: [RCSE] F3J Flight Matrix Calculations
> > From: "Jim Monaco" < <mailto:jimso...@earthlink.net>
> > Date: Thu, September 29, 2011 10:15 am
> > To: < <mailto:rcse@googlegroups.com>
> As many of
you know I regularly run F3J events and handle all the matrix and scoring
tasks. In the past I have always generated the flight matrix using a random
assignment and rarely have any complaints. However, I am now looking at
generating the fairest possible matrix for the US F3J Team Selections and
have run a number of tests and often see anomalies in the matrix where a
pilot flies against another an inordinate number of times or very few times.
I ain't the sharpest pencil in the box, so I could use some suggestions on
how to improve the matrix from some of the smart math people in the group.
I am a programmer so if I understand the algorithm I can program it.. :)
> >
> > Here are the constraints:
> > 1. There are 10 teams
> > 2. Each team consists of 4 pilots (if there are less than 4 pilots
a BYE will fill the empty slots).
> > 3. Each pilot will fly once in a round (so there are 4 groups in a
> > 4. Pilots on the same team are protected and will never fly against
another team member.
> > 5. We will schedule 24 total rounds in the matrix
> >
> > Here is what I have attempted so far:
> > 1. Pure random - for each round I process each team. On each team
every pilot has a fixed pilot number between 1 and 4 (including BYES). I
then compute a random sequence of the position numbers 1-4 and assign the
flight group based on that sequence. Repeat for each team. When all teams
have been computed for a round I go on to compute the next round.
> > 2. Random with statistical preference - I do the above, but I
calculate a factor to use in determining the variance in the matrix and run
1000 trials and select the one with the least variance. To do this I
compute and record how many time each pilot flies against the other pilots.
For each pilot I compute the standard deviation of these numbers. I then
add all the pilots standard deviations together and compute the average
standard deviation for the entire matrix. I pick the matrix that generates
the lowest standard deviation in 1000 trials. Generally I see a minimum
standard deviation of about 1.28. Some individual matrix SDs are 1.9 or so.
> > ...
> > read more >
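For readers following along, the "random with statistical preference" procedure described above can be sketched compactly. This is my own illustrative reconstruction (BYEs ignored), not Jim's actual code: generate candidate matrices that keep teammates in separate groups, score each matrix by the average per-pilot standard deviation of head-to-head counts, and keep the best of many trials.

```python
import random
from statistics import pstdev

TEAMS, PILOTS_PER_TEAM, ROUNDS = 10, 4, 24   # constraints from the post

def random_matrix(rng):
    """One candidate matrix: per round, send each team's pilots to the
    four flight groups in a random order, so teammates never meet."""
    matrix = []
    for _ in range(ROUNDS):
        groups = [[] for _ in range(PILOTS_PER_TEAM)]
        for team in range(TEAMS):
            order = rng.sample(range(PILOTS_PER_TEAM), PILOTS_PER_TEAM)
            for pilot, grp in enumerate(order):
                groups[grp].append((team, pilot))
        matrix.append(groups)
    return matrix

def fairness(matrix):
    """Average, over pilots, of the standard deviation of how often that
    pilot flies against each non-teammate (lower = more even matrix)."""
    pilots = [(t, p) for t in range(TEAMS) for p in range(PILOTS_PER_TEAM)]
    meets = {a: {b: 0 for b in pilots if b[0] != a[0]} for a in pilots}
    for groups in matrix:
        for grp in groups:
            for a in grp:
                for b in grp:
                    if a[0] != b[0]:
                        meets[a][b] += 1
    return sum(pstdev(counts.values()) for counts in meets.values()) / len(pilots)

# Keep the best of many random trials, as in step 2 of the post.
rng = random.Random(1)
best = min((random_matrix(rng) for _ in range(200)), key=fairness)
print(round(fairness(best), 2))
```

A true matrix optimizer (like the one GliderScore apparently uses) would go further than pure best-of-N sampling, e.g. by swapping assignments that reduce the variance, but this captures the scoring metric described above.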
You received this message because you are subscribed to the Google Groups
"RCSE" group.
To post to this group, send email to rcse@googlegroups.com
To unsubscribe from this group, send email to
For more options, visit this group at
| {"url":"http://www.rcgroups.com/forums/showthread.php?t=1514177","timestamp":"2014-04-17T18:50:40Z","content_type":null,"content_length":"221424","record_id":"<urn:uuid:b8ad3d9a-0f78-4632-b1da-5144e420077b>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00649-ip-10-147-4-33.ec2.internal.warc.gz"}
Wolfram Demonstrations Project
Numerical Range for Some Complex Upper Triangular Matrices
This Demonstration gives a portrait of the neighborhood of the spectrum of a matrix, including an approximation of the numerical range, for matrices of low dimension.
We restrict our attention to upper triangular matrices, since neither the numerical range nor the spectrum is altered by a unitary similarity transformation. For every matrix there exists a unitary
similarity that transforms it to upper triangular form (a Schur decomposition). Note that the diagonal elements of an upper triangular matrix are exactly the eigenvalues, and an upper triangular matrix that is normal must be diagonal.
Choose a dimension! Then some of the entries of a default matrix are depicted as locators: all diagonal entries (with a red inner point), some strict upper diagonal entries (with a blue inner
point), and no others. Moreover, for your convenience, you can see the convex hulls of the entries of (in blue) and of the diagonal entries of (in red), which, at the same time, are the eigenvalues
of .
In addition, the whole matrix is shown below the graphic and updated as you drag any locator.
A yellow point with a red boundary marks the mean of the diagonal entries of , which is also the mean of the eigenvalues of . In the literature this is usually expressed as . We call this point the
hub of , because it is the natural pivot point for all matrices similar to . Indeed, one could claim that the whole question of similarity hinges on this point! For contrast, a light blue point with
a dark blue boundary marks the mean of all the entries of .
Most importantly, a family of yellow rectangles of different orientations is drawn with a green frame around each. Their intersection constitutes an outer approximation of the field of values. The
number of those rectangles can be influenced by pushing the slider for the minimal rotational angle .
To be able to go back to the original matrices, click "reset ".
To be able to compare results for different dimensions more easily, you can fix the plot range.
Now, you can experiment by dragging some of the locators. You will probably be surprised!
You can neither create nor delete locators; you can only drag the ones given.
Almost every item in the graphics is annotated, so mouseover them to see explanations.
When you try to locate the eigenvalues of complex matrices in the complex plane or to separate characteristic features of nonnormal matrices from normal ones, you inevitably will find terms like
field of values, numerical range, or Rayleigh quotient, and maybe also the important new notion of the pseudospectrum. These ideas are in part more than a hundred years old, but up to and including
Mathematica 7, there are no built-in functions for them. When starting to fill this gap it seems appropriate to give more explanations than usual; more details can be found in any of the books mentioned in the references below.
Normal matrices are usually characterized as the ones that commute with their adjoints , but this description appears anemic. Some of the really attractive properties that make them interesting are:
they are not deficient, that is, the algebraic and geometric multiplicities of each eigenvalue agree, and the eigenspaces of different eigenvalues are orthogonal. So, within each eigenspace, you can
find an orthonormal basis of eigenvectors spanning that particular eigenspace. Since different eigenspaces are orthogonal, too, for normal matrices , there is an orthonormal set of eigenvectors that
spans the whole space, that is, for normal matrices there exists an orthonormal basis consisting purely of eigenvectors of . For a discussion of normal matrices, see, for instance, Teil 1, §16 of
[1], and chapter 2 of [2].
Nonnormal matrices are either deficient, in which case there are not even enough linearly independent eigenvectors to build a basis (never mind an orthonormal basis), or, if they are diagonalizable,
lack the property that eigenspaces belonging to different eigenvalues are orthogonal. In the latter case there certainly exists a basis consisting purely of eigenvectors, even normalized ones, but
not orthonormal ones. For if they were orthonormalized, some of them would not be eigenvectors anymore, because a linear combination of eigenvectors belonging to different eigenvalues can never be
an eigenvector!
For a normal matrix , all that matters is the knowledge of its spectrum . Geometrically, the spectrum of a matrix is a point set in the complex plane consisting of at least one to at most points,
where is the dimension (over ) of the space in which acts.
Within an orthonormal basis of eigenvectors of a normal matrix A, the action of A reduces to a simple multiplication of the eigenvectors with their corresponding eigenvalues; that is, for an
eigenvector x with corresponding eigenvalue λ, the equation Ax = λx holds, which says that the image of an eigenvector under the mapping is just the λ-fold of itself. Taking the Fourier coefficient with x on both
sides, one gets ⟨Ax, x⟩ = λ⟨x, x⟩, or λ = ⟨Ax, x⟩/⟨x, x⟩, where ⟨·, ·⟩ denotes the inner product.
But the ratio ⟨Ax, x⟩/⟨x, x⟩ is well defined for any nonzero vector x and any matrix A; the ratio is called the Rayleigh quotient of A with respect to x. Its relevance lies in showing how strongly any vector is magnified in its own direction under the action of the mapping, regardless of whether it is an eigenvector or not! Note that, as long as x is not an eigenvector, this is not the same as comparing the norm of an image to the norm of its preimage.
Since the Rayleigh quotient is constant on any one-dimensional subspace, it suffices to restrict its domain to the unit ball , from where it acts as a continuous mapping into the complex numbers.
The whole set of self-magnifiers of the matrix, the set of images under this mapping, is called the field of values of the matrix, the numerical range, or, in German, der Wertebereich.
It is clear that the spectrum is part of the field of values, but more can be said (see chapter 1, pp. 5–13 of [4]): the field of values is a compact, connected, and convex subset of the complex
numbers that includes the closure of the convex hull of the spectrum. (The convex hull of a set is the intersection of all convex sets that include it.)
Because the diagonal elements of the matrix can be found as the Rayleigh quotients of the standard orthonormal basis vectors, the whole diagonal is also part of the field of values.
For a normal matrix, the field of values is exactly the closure of the convex hull of the spectrum, so these two sets actually coincide! In this case, the strongest magnification that can occur must take place in the eigenspace belonging to the eigenvalue with the biggest modulus; for all other vectors the magnification will be smaller!
Not so for a nonnormal matrix! In the nonnormal case, the eigenvalue with the biggest modulus must by no means necessarily represent the strongest amplification factor!
Consider, for example, the 2×2 upper triangular matrix with both diagonal entries equal to 1 and a nonzero entry K above the diagonal. This matrix is deficient; it has one two-fold eigenvalue 1, but only one corresponding eigenvector (up to scaling), which happens to be just the first standard unit vector. Let x be any vector with positive coefficients. Then the Rayleigh quotient of x equals 1 plus a positive multiple of K, so its modulus exceeds 1, the only eigenvalue! And the bigger K is, the bigger the Rayleigh quotient is!
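This claim can be checked numerically. The sketch below (my own code) uses the 2×2 upper triangular matrix with ones on the diagonal and K above it as a concrete stand-in for the matrix discussed above: its only eigenvalue is 1, yet the Rayleigh quotient of a vector with positive coefficients exceeds 1 in modulus, and it grows with K.

```python
import numpy as np

def rayleigh(A, x):
    """Rayleigh quotient <x, Ax> / <x, x>."""
    x = np.asarray(x, dtype=complex)
    return np.vdot(x, A @ x) / np.vdot(x, x)

K = 10.0
A = np.array([[1.0, K], [0.0, 1.0]])   # double eigenvalue 1, single eigenvector e1

x = np.array([1.0, 1.0])               # positive coefficients, not an eigenvector
print(abs(rayleigh(A, x)))             # 6.0 -- well beyond the lone eigenvalue 1
```

For x = (a, b) with a, b > 0, the quotient works out to 1 + K·ab/(a² + b²), which makes the growth with K explicit.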
So, for a nonnormal matrix , the knowledge of its spectrum alone might lead to premature conclusions! To be able to understand the action of the underlying mapping, additional information must be
taken into account. An important item in that direction will be the determination of the numerical range.
But even for normal matrices , finding more or less crude supersets of the numerical range is a big gain if one is to enclose the spectrum ; see [3], for instance.
Manipulate Graphic: Get familiar with what is shown! Mouseover the graphics to see explanations, push buttons one after the other; drag any of the locators. After any change, mouseover the graphic
again. Almost every item is explained, but some annotations only show up when the respective items are not shadowed by other ones.
[1] R. Zurmühl and S. Falk, Matrizen und ihre Anwendungen, Berlin/Heidelberg/New York: Springer-Verlag, 1984.
[2] R. A. Horn and C. R. Johnson, Matrix Analysis, New York: Cambridge University Press, 1985.
[3] R. S. Varga, Gershgorin and His Circles, Berlin/Heidelberg/New York: Springer-Verlag, 2004.
[4] R. A. Horn and C. R. Johnson, Topics in Matrix Analysis, New York: Cambridge University Press, 1991.
[5] L. N. Trefethen and M. Embree, Spectra and Pseudospectra: The Behavior of Nonnormal Matrices and Operators, New Jersey: Princeton University Press, 2005. | {"url":"http://demonstrations.wolfram.com/NumericalRangeForSomeComplexUpperTriangularMatrices/","timestamp":"2014-04-20T08:17:34Z","content_type":null,"content_length":"62397","record_id":"<urn:uuid:ecbd0f70-f0e7-4cf7-aa54-7d10e0a2d4fb>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00357-ip-10-147-4-33.ec2.internal.warc.gz"}
WHAT IS The range of 3 √ x !?!?! PLEASE PLEASE PLAESE PLAESE I BEGGG I BEGGGGGGG! LOL
What can x be? What numbers can you not take the square root of?
\[\sqrt{0} = 0\]
You can take the square root of 0, since 0*0=0. You can't divide by 0, don't forget that when determining range and domain.
YES BUT LET ME SHOW U GUYS THE OPTIONS
Try turning off your caps lock, it doesn't really help you.
\[\sqrt{-1} \notin \mathbb{R}\]
o sorry lol :)
what types of numbers can we not square root?
Haha it's fine. So yes, eigenschmeigen is right, if you can guess what that means. Remember that a domain is all the numbers of x you can choose to satisfy an equation while the range is all the
numbers you can have as a result of your domain.
i attached a picture of the options can u guys check it out
Numbers in these brackets [x,y] mean that they are included in the range or domain. Numbers in parenthesis (x,y) mean that they're not included, but every number between them is.
can u guys help because he didnt help :(
Well, what do you think the answer is?
@Kainui i personally think you were explaining very well
C ?
what happens if you put a negative number for x?
@eigenschmeigen im just very slow in math like i dont get anything im sorry and i need to get this half a credit to graduate with my class by june 3rd and its a whole bunch of quizzes this included
and im just stressing! :(((
what happens if we put a negative number in for x?
what do we get, for example, if i put in -16
i believe the answer is A.
yes you would be correct
thanks so much.
I hope you're not just guessing and you truly, genuinely understand the concept behind your answer.
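The exchange above can be sanity-checked numerically. Assuming the expression in the question means 3·√x, the domain is x ≥ 0 (no real square roots of negatives), the smallest output is 3·√0 = 0, and the outputs grow without bound, so the range is [0, ∞). A quick check in Python:

```python
import math

def f(x):
    """The function from the question, read as 3 times the square root of x."""
    return 3 * math.sqrt(x)

print(f(0))      # 0.0 -- smallest value, so 0 is included in the range
print(f(16))     # 12.0
print(f(1e8))    # 30000.0 -- values keep growing, no upper bound

try:
    f(-1)        # not in the domain
except ValueError as e:
    print("negative inputs rejected:", e)
```

The closed bracket at 0 in interval notation, [0, ∞), matches the earlier point in the thread that 0 itself is an allowed input and output.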
| {"url":"http://openstudy.com/updates/4fbfba7fe4b0964abc82401d","timestamp":"2014-04-20T18:53:37Z","content_type":null,"content_length":"88308","record_id":"<urn:uuid:4bf3f8fe-ff1d-4e9c-bef2-0b3c6774b2f1>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00488-ip-10-147-4-33.ec2.internal.warc.gz"}
MathGroup Archive: July 2011 [00123]
Re: How to write a "proper" math document
• To: mathgroup at smc.vnet.net
• Subject: [mg120044] Re: How to write a "proper" math document
• From: Richard Fateman <fateman at cs.berkeley.edu>
• Date: Thu, 7 Jul 2011 07:28:35 -0400 (EDT)
• References: <201107041044.GAA02461@smc.vnet.net> <iuukk8$epi$1@smc.vnet.net> <iv1aho$smk$1@smc.vnet.net>
On 7/6/2011 2:40 AM, AES wrote:
> In article<iuukk8$epi$1 at smc.vnet.net>,
> "McHale, Paul"<Paul.McHale at excelitas.com> wrote:
>> Interactive documents seem to be a limited by corporate security more than
>> technology. I.e. If Mathematica were open source/free, this would be closer
>> to a non-issue. This is not practical.
> This the second and equally difficult problem with Mathematica. No
> question whatsoever: Mathematica is a marvelous intellectual
> development.
> But if Mathematica had emerged from,
Its predecessor, SMP, emerged from work done at Caltech by several
people including SW, based in part on various prior technology.
For various reasons, which I suspect he may have indicated in print, SW
decided quite the opposite from your assertion that software is best
produced in an academic environment.
> and was continually further
> developed in, an open, academic, competitive environment, with
> publications, technical meetings, peer review, student involvement, and
> all the other advantages that this environment can provide, rather than
> a closed, quite secretive, and commercially driven enterprise, it would
> be even richer than it is today.
Past experience for the example of a computer algebra system, which is a
rather more esoteric beast than (say) a text editor or formatter, or
even an operating system, does not necessarily abide by your guidelines.
Support (e.g. in the US) by the National Science Foundation or perhaps
some part of the defense establishment, has been key in academic
research, but it tends to be inadequate in amount and/or duration for
the funding of an essentially interminable project. Support by
contributions of users is iffy, to say the least. See how many moribund
projects there are on sourceforge etc.
> Ask yourselves where many of the most important software tools of today
> emerged from?
Some of the most widely used pieces of software are, of course,
commercially supported. Some are free but not open source. Think of
Microsoft, Oracle, Apple, Google, Adobe, even (gack) Facebook.
Do you think those apps for your telephone would be written if the
programmers thought they would not make any money (by sales,
advertisements, whatever?) Do you think that the 200 people (or however
many WRI now claims) could be supported by an academic enterprise?
(Actually, my guess is that 80% of them work in marketing, sales,
packaging -- non-technical; so maybe we would need to support fewer
people in a non-commercial setting. Unlikely unless computer algebra
became as important as building fusion machines or nuclear weapons.. or
Bill Gates decides to cure math in addition to polio.)
(and whether they are free and open tools, or closed and
> excessively expensive tools?) Just to cite two of these:
> * Unix: The essentially open world of the original Bell Labs.
The original Bell Labs Unix was not free, but sold commercially. It was
available to educational institutions free. The distribution of Berkeley
Unix for the DEC/VAX computer required a paid-up commercial license
(paid to Bell Labs, not Berkeley) for non-academic installations.
This changed, eventually. Commercial support of free unix is plausible
because of factors that are probably NOT in play for computer algebra
But there are gobs of free and open operating system projects that just
disappeared from the scene. My guess is that a pretty good recipe for
working hard for a few years and having no impact on the world is to
write an open-source operating system.
> * TeX and LaTeX: Donald Knuth and his academic students and disciples.
I think that adoption by the AMS was a key.
> And then all the immense swarm of freeware, shareware, and low-cost
> software tools that we all enjoy today.
You have a somewhat romanticized view of this swarm. 300,000 iphone
apps? How many free computer algebra systems that (apparently) you
don't use? Free and low-cost malware?
While I agree that Mathematica could be improved, I think it is pretty
speculative to say that it would be better if Mathematica were free and
open source. I'm in favor of paying programmers and mathematicians. I
doubt that you get the best results from students who have to deliver
pizzas in the evenings to pay their rent.
Comparative analysis of methods for detecting interacting loci
Interactions among genetic loci are believed to play an important role in disease risk. While many methods have been proposed for detecting such interactions, their relative performance remains
largely unclear, mainly because different data sources, detection performance criteria, and experimental protocols were used in the papers introducing these methods and in subsequent studies.
Moreover, there have been very few studies strictly focused on comparison of existing methods. Given the importance of detecting gene-gene and gene-environment interactions, a rigorous, comprehensive
comparison of performance and limitations of available interaction detection methods is warranted.
We report a comparison of eight representative methods, of which seven were specifically designed to detect interactions among single nucleotide polymorphisms (SNPs), with the last a popular
main-effect testing method used as a baseline for performance evaluation. The selected methods, multifactor dimensionality reduction (MDR), full interaction model (FIM), information gain (IG),
Bayesian epistasis association mapping (BEAM), SNP harvester (SH), maximum entropy conditional probability modeling (MECPM), logistic regression with an interaction term (LRIT), and logistic
regression (LR) were compared on a large number of simulated data sets, each of which, consistent with complex disease models, embedded multiple sets of interacting SNPs under different interaction models.
The assessment criteria included several relevant detection power measures, family-wise type I error rate, and computational complexity. There are several important results from this study. First,
while some SNPs in interactions with strong effects are successfully detected, most of the methods miss many interacting SNPs when operating at an acceptable false-positive rate. In this study, the
best-performing method was MECPM. Second, the statistical significance assessment criteria, used by some of the methods to control the type I error rate, are quite conservative, thereby limiting
their power and making it difficult to fairly compare them. Third, as expected, power varies for different models and as a function of penetrance, minor allele frequency, linkage disequilibrium and
marginal effects. Fourth, the analytical relationships between power and these factors are derived, aiding in the interpretation of the study results. Fifth, for these methods the magnitude of the
main effect influences the power of the tests. Sixth, most methods can detect some ground-truth SNPs but have modest power to detect the whole set of interacting SNPs.
This comparison study provides new insights into the strengths and limitations of current methods for detecting interacting loci. This study, along with freely available simulation tools we provide,
should help support development of improved methods. The simulation tools are available at: http://code.google.com/p/simulation-tool-bmc-ms9169818735220977/downloads/list.
Genome-wide association studies (GWAS) have been widely applied recently to identify SNPs associated with common human diseases [1-9], including cardiovascular diseases [6,10-13], diabetes [6,14-18],
lupus [19-21], autoimmune diseases [22], autism [23], and cancer [24-27]. However, with few exceptions [13,15,17,24], the discovered genetic variants with significant main effects account for only a
small fraction of clinically important phenotypic variations for many traits [5,28]. While there are multiple causes for missing some well-known genetic risk factors or disease heritability
(including e.g., rare variants not genotyped in a GWAS study), a frequently cited reason is that most common diseases have complex mechanisms, involving multi-locus gene-gene and gene-environment
interactions [5,28-31]. For detecting interacting loci in high dimensional GWAS data with sufficient power and computational feasibility, some pioneering work, with promising results, has been
reported, encompassing: i) real GWAS study papers, as cited above; ii) interaction detection methodology [32-44]; iii) theoretical papers that characterize the principle problem (interaction
detection) and its challenges [30,45-47]; iv) review and methods comparison papers [29,31,47-51].
Novel Methods for Detecting Interacting SNPs
A variety of SNP interaction detection methods have been recently proposed. In particular, multifactor dimensionality reduction (MDR) [33] measures the association between SNPs and disease risk using
prediction accuracy of selected multifactor models. Full interaction model (FIM) [41] applies logistic regression using 3^d - 1 binary variables constructed based on a d-SNP subset. Information gain
(IG) [34,52] measures mutual information to assess multi-locus joint effects. Bayesian epistasis association mapping (BEAM) [32] treats the disease-associated markers and their interactions via a
Bayesian partitioning model and computes, via Markov chain Monte Carlo (MCMC), the posterior probability that each SNP set is associated with the disease. SNP harvester (SH) [39] proposes a heuristic
search to reduce computational complexity and detect SNP interactions with weak marginal effects. Random forest (RF) [44] is an ensemble classifier consisting of many decision trees, each tree using
only a subset of the available features for class decision making. Thus, the detected features (SNPs) are the ones most frequently used by trees in the ensemble. Logic regression (LOR) [36]
identifies interactions as Boolean (logical) combinations of SNPs. In [42], an extension of logic regression was also proposed to identify SNP interactions explanatory for the disease status, with
two measures devised for quantifying the importance of these interactions for the accuracy of disease prediction. Treating SNPs and their interaction terms as predictors, penalized logistic
regression (PLR) [37] maximizes the model log-likelihood subject to an L2-norm constraint on the coefficients. Related to FIM and PLR, adaptive group lasso (AGL) [43] adds all possible interaction
effects at first and second order to a group lasso model, with L1-norm penalized logistic regression used to identify a sparse set of marginal and interaction terms. Maximum entropy conditional
probability modeling (MECPM) [40], applying a novel, deterministic model structure search, builds multiple, variable-order interactions into a phenotype-posterior model, and is coupled with the
Bayesian Information Criterion (BIC) to estimate the number of interaction models present. Logistic regression with an interaction term (LRIT) has been widely applied to detect interactions [35]. It
treats the multiplicative term between SNPs, along with individual SNP terms, as predictors in the logistic regression model.
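To make the LRIT encoding concrete, the following sketch (our own illustrative helper, not code from any of the cited packages) builds the logistic-regression design matrix for a pair of SNPs coded additively as minor-allele counts (0, 1, 2), with the multiplicative term as the interaction predictor:

```python
def lrit_design_matrix(snp_a, snp_b):
    """Build the LRIT predictor rows [1, g_a, g_b, g_a * g_b] for each
    subject, with genotypes coded as minor-allele counts (0, 1, 2).
    The intercept column is included so the rows can be passed directly
    to any logistic regression fitter."""
    rows = []
    for g_a, g_b in zip(snp_a, snp_b):
        rows.append([1.0, float(g_a), float(g_b), float(g_a * g_b)])
    return rows

# Four subjects; the last column is the interaction predictor.
X = lrit_design_matrix([0, 1, 2, 1], [2, 0, 1, 1])
```

A significant coefficient on the fourth column is then interpreted as evidence of a pairwise interaction between the two loci.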
Evaluation of Methods to Detect Interacting SNPs
Despite strong current interest in this area and a number of recent review articles [29,31,47-51], no commonly accepted performance standards for evaluating methods to detect multi-locus interactions
have been established. For example, one might choose to evaluate power to detect individual SNPs involved in interactions, or power to precisely detect whole (multi-SNP) interactions. Moreover, the
relationship between the power to detect interacting loci and the factors on which it depends (penetrance, minor allele frequency (MAF), main effects, and LD), while considered in some previous
studies [32,41,43,45,53], has not been fully investigated, either experimentally or analytically. Most importantly, although some assessment and performance comparison was undertaken both in the
original papers proposing new methods [32-34,39,41,43] and in the comparison papers [49,50], it is difficult to draw definitive conclusions about the absolute and relative performance of these
methods from this body of studies due to the following: (1) each study was based on a different simulation data set and a different set of experimental protocols (including the detection power
definition used, the sample size, the number of evaluated SNPs, and the computational allowance of methods). While use of different data sets and protocols may be well-warranted, as it may allow a
study to focus on unique scenarios/application contexts not considered previously, it also makes it difficult to compare the performance of methods, excepting those head-to-head evaluated in the same
study. Some methods were found to perform quite favorably in one study but poorly in others. For example, MDR [33] performed well in the original simulation study and the comparison paper [50], but
poorly in subsequent studies [32,40,43]; (2) often, only simple cases were tested, which may not reflect the realistic application of a method. For example, a common practice is to include only a
single interaction model in the data [32-34,39,41,50], whereas common diseases are usually complex, with multiple genetic causes [28], suggesting that multiple interaction models should be present.
Our previous papers [40,54] considered multiple interaction models, but with an insufficient number of data set replications to draw definitive conclusions on the relative performance of methods. The study in [50] also
evaluated multiple interaction models, but only compared three methods, evaluated only one interaction power definition, and did not comprehensively evaluate the effects of penetrance, MAF, main
effects, and linkage disequilibrium (LD) on power; (3) only limited interaction patterns were considered, e.g. 2-way interactions but no higher-order interactions in [43,49]. This is an important
limitation, especially considering that data sets with 1000 or fewer SNPs were evaluated in these studies - in such cases, exhaustive evaluation of candidate pairwise interactions is computationally
feasible, whereas heuristic search, which will affect detection power in practice, is necessitated if either higher order interactions or much larger SNP sets are considered. Thus, to more
realistically assess detection power, either higher order interactions and/or more SNPs should be considered; (4) Perhaps most critically, methods providing P-value assessments [32,39,41] evaluated
power for a given significance threshold, but did not rigorously evaluate the accuracy of the P-value assessment, i.e. whether the Bonferroni-corrected P-value truly reflects the family-wise type I
error rate [55]. This evaluation is of great importance for methods that use asymptotic statistics [32,39,41], since it reveals whether or not the asymptotic P-value is a reliable detection
criterion. Specifically, the P-value could be too liberal (in which case, more family-wise errors than expected will occur in practice and the estimated detection power is too optimistic) or too
conservative (in which case the detection power estimate is too pessimistic). By not performing such assessment, it is unclear even whether use of P-values is providing a fair comparison of detection
power between methods (i.e., for the same family-wise error rate) in [32,39,41]. We further note that although there were efforts to measure the type I error rate in [32,43,50], the evaluations were
not based on the commonly used family-wise error rate, but rather on another definition of type I error [32] that does not directly reflect the Bonferroni-corrected P-value; (5) In most past studies
[32-34,39,41,43,50], only a single definition of an interaction detection event (and, thus, a single measure of detection power) was used. However, this does not capture the full range of relevant
detection events for some applications of GWAS. In particular, in some works an exact joint detection event is defined, i.e. detection is successful only if all SNPs involved in the interaction (and
only these SNPs) are jointly detected [43,50]. This is a stringent definition that gives no credit to a method that detects a subset of the interacting SNPs (e.g. 3 of the SNPs in a 5-way
interaction), even though such partial detection is clearly helpful if e.g. one is seeking to identify a gene pathway, or if the remaining SNPs in the interaction can be subsequently detected by
applying more sensitive (and computationally heavy) methods. Exact detection is especially stringent when there are multiple interactions present, with the disease risk effectively divided between
the multiple models. Finally, we note that individual methods have their own inductive biases and, thus, may perform better under different detection criteria - one method may find more ground-truth
SNPs, while another may be more successful at finding whole interactions. Use of multiple power definitions can reveal these differences between methods; (6) Most of the proposed methods (e.g. MDR,
FIM, BEAM, MECPM, SH) are designed to detect both main effects and interaction effects, while to date they have only been evaluated on data sets containing interactions. It is thus also meaningful to
measure how effective they are at detecting SNPs with only main effects, and how many false positive interactions they detect involving main effect SNPs.
Finally, we note that there are very few true (strict) comparison papers - most studies have focused on developing new methods, with experimental evaluation not the central paper focus. Two
exceptions are [50] and [49]. However, they both embedded only a single interaction model in the data and considered data sets with only 100 SNPs. Moreover, [50] evaluated only 2-way and 3-way
interaction detection, while [49] evaluated only two-way interaction detection.
The aforementioned limitations of previous studies are not surprising because of the following challenges associated with comparison studies: (1) it is impractical to evaluate methods on all of the
(numerous possible) interaction models; (2) multiple aforementioned factors (MAF, penetrance, LD) jointly decide interaction effects, which thus entails extensive study design, experimentation, and
computational efforts; (3) many replicated data sets are required to accurately estimate power and family-wise type I error rate, further increasing computational burden; (4) computational costs of
some methods are inherently high; thus a thorough evaluation of these methods is a difficult hurdle; and (5) fair evaluation criteria are not easily designed because distinct methods have different
inductive biases and produce different forms of output (e.g., some give P-value assessments while others only provide SNP rankings); (6) there is no consensus definition of power when seeking to
identify multiple sets of predictors that are jointly associated with outcomes of interest.
Addressing the above challenges, a ground-truth based comparative study is reported in this paper. The goals are three-fold: (1) to describe and make publicly available simulation tools for
evaluating performance of any technique designed to detect interactions among genetic variants in case-control studies; (2) to use these tools to compare performance of eight popular SNP detection
methods; (3) to develop analytical relationships between power to detect interacting SNPs and the factors on which it depends (penetrance, MAF, main effects, LD), which support and help explain the
experimental results.
Our simulation tools allow users to vary the parameters that impact performance, including interaction pattern, MAF, penetrance (which together determine the strength of the association) and the
sporadic disease rate, while maintaining the normally occurring linkage disequilibrium structure. Also, the simulation tools allow users to embed multiple interaction models within each data set.
These tools can be used to produce any number of test sets composed of user specified numbers of subjects and SNPs.
Our comparison study, based on these simulation tools, involves thousands of data sets and consists of three steps, as graphically illustrated in Figure 1. Step 1 (with no ground-truth SNPs present)
measures the empirical family-wise type I error rate, which has not been evaluated in many previous studies, and yet is critically important if the (e.g. P-value based) significance threshold is used
as the criterion for detecting interacting SNPs.
Figure 1. A flowchart for the performance evaluation of interaction detection methods.
In particular, foreshadowing our Step 1 results, we will find that most methods (except LR) in this study that produce P-values in fact produce conservative ones, with the degree of conservativeness
method-dependent. Thus, using the same P-value threshold for all methods will not ensure the methods are being fairly compared, at a common family-wise error rate. Both for this reason, and because
some of the methods do not even produce P-values, in Step 2 we evaluate detection power as a function of the number of top-ranked SNPs, rather than for a specified P-value threshold. Accordingly,
note the logical structure in Figure 1, with the Step 1 results helping us to determine how to evaluate detection power in Step 2.
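Evaluating power as a function of the number of top-ranked SNPs can be sketched as follows (our own illustrative code; the function and SNP names are hypothetical, not from any of the evaluated packages):

```python
def detection_power_at_k(ranked_snps, ground_truth, k):
    """Ranking-based power definition: the fraction of ground-truth SNPs
    that appear among the top-k SNPs ranked by a method's association
    score, avoiding any dependence on (possibly miscalibrated) P-value
    thresholds."""
    top_k = set(ranked_snps[:k])
    hits = sum(1 for s in ground_truth if s in top_k)
    return hits / len(ground_truth)

# 15 ground-truth SNPs; a method that ranks 9 of them in its top 20
# achieves power 9/15 = 0.6 at k = 20.
truth = [f"gt{i}" for i in range(15)]
ranking = truth[:9] + [f"null{i}" for i in range(11)] + truth[9:]
power = detection_power_at_k(ranking, truth, k=20)
```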
As aforementioned, Step 2 (with a variety of ground-truth interaction models present) investigates power. We formulate a more challenging, yet more realistic situation than most previous studies by
including multiple ground-truth interaction models in each simulated data set. These simulations are motivated in part by our experience with complex genetic diseases such as autoimmune diseases,
diabetes and end-stage renal disease [18,19,56,57]. In total, ninety different interaction models are investigated in this study, jointly determined by 5 underlying interaction types and 3
parameters, controlling penetrance, MAF, and LD. Step 3 investigates the power to detect main effect SNPs, i.e. we investigate how the methods (many of which are designed to detect both interactions
and main effect SNPs) perform when only main effects are present in the data.
The main contributions and novelty of our comparison study are: (1) comprehensive comparison of state-of-the-art techniques on realistic simulated data sets, each of which includes multiple
interaction models; (2) new proposed power criteria, well-matched to distinct GWAS applications (e.g., detection of "at least one SNP in an interaction"); (3) evaluation of the accuracy of (P-value
based) significance assessments made by the detection methods; (4) investigation of detection of models with variable order interactions (up to 5th order) in SNP data sets; (5) new analytical results
on the relationship between interaction parameters and statistical power; (6) investigation of the flexibility of interaction-detection methods, i.e. whether (and with what accuracy) they can detect
both interactions and main effects; (7) discoveries concerning relative performance of methods (e.g., comparative evaluation of the promising recent method, MECPM). Since we are presenting a
diversity of results, both experimental and analytical, to assist the reader in navigating our work, Figure 1 gives a graphical summary of our experimental steps, the results produced therefrom, and
the connections between the different results, both experimental and analytical.
Experimental Design and Protocol
We selected eight representative methods for evaluation, based on their reported effectiveness and computational efficiency. Seven of them (MDR, FIM, IG, BEAM, SH, MECPM and LRIT) are designed to
detect interacting loci, with the remaining one based on the widely-used logistic regression model (LR). LR, using only main effect terms, serves as a baseline method to compare against all the
interaction-detection methods, i.e., to see whether they give any advantage over pure "main effect" methods when the goal is simply to detect the subset of SNPs that either individually, or via
interactions, are predictive of the phenotype. The description of the eight methods is given in the Methods section.
Simulation Data Sets
Each data set contains individuals simulated from the control subjects genotyped by the 317K-SNP Illumina HumanHap300 BeadChip as part of the New York City Cancer Control Project (NYCCCP). To
facilitate this investigation [40], a flexible simulation program was written that generates a user-defined sample size and number of SNPs, either no missing data or missing-data patterns consistent with the
observed missing data in the original genome scan, and affected or unaffected disease status under the null hypothesis (i.e., no associations in the genome) or under the alternative hypothesis (i.e.,
hard-coded penetrance functions). Missing data is filled in completely at random and proportional to the allele frequencies in the original data. The data sets were produced as follows. Consider a
matrix with 223 rows corresponding to NYCCCP individuals and 317,503 columns corresponding to the 317,503 SNPs. The elements of this matrix are the individual genotypes. The columns were partitioned
into blocks of 500 SNPs, i.e. 636 blocks, with the last block containing only 3 SNPs. The simulated genome scan data for each individual was obtained by random draws (with replacement) from a real
data matrix of 223 individuals and 636 blocks of 500 SNPs. Specifically, the simulated data for an individual was generated by randomly selecting the first block from the 223 individuals (rows),
randomly selecting with replacement the second block from the 223 individuals, randomly selecting with replacement the third block from the 223 individuals, and so on. Thus the data retains the basic
patterns of linkage disequilibrium (broken by strong recombination hotspots), missing data, and allele frequencies observed in the original genome scan data. The exception to this is only at the 635
breaks in the genome corresponding to the block boundaries. Figure 2 visually illustrates this simulation approach for randomly resampling genome scan data starting from the real NYCCCP scans. The
simulations presented here correspond to approximately 2000 subjects simulated under the alternative hypothesis described below and no missing data. Only autosomal loci are considered in the data.
Figure 2. A visual illustration of SNP "blocking" and random sampling, used for generating simulated individuals. "Ind i" denotes the ith real individual, and "Sim Ind" denotes the simulated
individual. First, genomes of the real individuals are segmented into a number of blocks; second, for each block, a genome segment is randomly drawn from the set of real individuals; finally, the
randomly drawn genome segments, for all blocks, are stitched together to form a simulated individual.
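A minimal sketch of this block-resampling step (our own illustrative code with toy dimensions, not the NYCCCP pipeline itself):

```python
import random

def simulate_individual(genotype_matrix, block_size, rng=random):
    """Stitch together one simulated individual: for each block of
    `block_size` consecutive SNP columns, copy that block from a real
    individual drawn uniformly at random (with replacement).  LD within
    a block is preserved; it is broken only at block boundaries."""
    n_individuals = len(genotype_matrix)
    n_snps = len(genotype_matrix[0])
    simulated = []
    for start in range(0, n_snps, block_size):
        donor = rng.randrange(n_individuals)
        simulated.extend(genotype_matrix[donor][start:start + block_size])
    return simulated

# Toy example: 3 real individuals, 7 SNPs, blocks of 3 (last block has 1 SNP).
real = [[0] * 7, [1] * 7, [2] * 7]
sim = simulate_individual(real, block_size=3)
```

In the actual study the matrix has 223 rows, 317,503 columns, and a block size of 500, giving the 636 blocks described above.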
The eight methods were applied to sets of 1,000 to 10,000 SNPs selected at random from the autosomal loci. This number of SNPs is consistent with a GWAS study following an initial SNP screening stage and
also with pathway-based association studies. When selecting SNPs, we first removed those with genotypes that significantly deviate from Hardy-Weinberg equilibrium, and then selected the desired
number of ground-truth and "null" SNPs. For each replication data set, ground-truth SNPs were randomly selected, according to the requirements of MAF (within a narrow window of tolerance), and "null"
SNPs were chosen completely at random. The simulations reported assume that the disease risk is explained by several ground-truth interaction models and the sporadic disease rate S, which accounts
for the missing heritability and other disease-related factors. Let P_r(d_i), r = 1, 2, ..., R, be the disease probability generated by the rth of the R interaction models for the ith subject. Assuming all disease factors act independently, the disease risk of this subject is then

P(d_i) = 1 - (1 - S) * prod_{r=1}^{R} (1 - P_r(d_i)).     (1)
The simulation data sets have different ground-truth interaction models P_r(d_i), r = 1, 2, ..., R, and sporadic disease rates S for the different steps. For Step 1, we did not embed any ground truth
SNPs in the data sets; for Step 2, we embedded five interaction models in each data set; and for Step 3, we embedded five main-effect-only SNPs in each data set. In all three steps, we adjusted the
sporadic rate S so that each data set has approximately 1,000 cases and 1,000 controls, the typical situation (balanced cases and controls) in GWAS studies, e.g. in Step 1, S = 0.5. The ground truth
interaction models in Step 2 and the ground truth main-effect-only SNPs will be described later.
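Under the independence assumption (a subject stays unaffected only if no interaction model and no sporadic mechanism causes disease), combining the per-model probabilities with the sporadic rate S can be sketched as follows (our own illustrative code):

```python
def combined_disease_risk(model_probs, sporadic_rate):
    """Combine per-model disease probabilities P_r(d_i) with the
    sporadic rate S, assuming all disease factors act independently:
    the probability of remaining unaffected is the product of the
    per-factor probabilities of not causing disease."""
    no_disease = 1.0 - sporadic_rate
    for p in model_probs:
        no_disease *= 1.0 - p
    return 1.0 - no_disease

# With no interaction models, the risk reduces to the sporadic rate.
baseline = combined_disease_risk([], 0.5)        # 0.5
risky = combined_disease_risk([0.2, 0.5], 0.5)   # 1 - 0.5*0.8*0.5 = 0.8
```

Disease status for each simulated subject is then a Bernoulli draw from this combined risk.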
In Figure 3, we provide a flowchart detailing all of the steps (as described above) used in producing our simulated GWAS data sets.
Figure 3. A flowchart detailing all of the steps used in producing the simulated GWAS data sets.
The simulation approach used in this comparison study is the same as that used in [40]. Our simulation approach has one commonality with, but two main differences from the simulation approaches used
in the previous methods and comparison study papers evaluating MDR, IG, FIM, SH, and BEAM [32-34,39,41,50]. Both in these papers and in our current study, all SNPs are consistent with Hardy-Weinberg
Equilibrium. However, in these previous papers, the simulated data were purely synthetic, generated according to user-specified allele frequencies [29-31,36,38,47]. By contrast, our simulated data is
obtained by resampling from real genome scan data and is thus more realistic, preserving the allele frequencies and LD structure manifested by the original genome scan data. Another resampling
simulation method was proposed in [58,59], but this approach has not been used for evaluating the MDR, IG, FIM, SH, and BEAM methods. Another important distinction between our simulation method and
other simulation methods lies in the phenotype generation. In our simulation, multiple interactions simultaneously exist in each data set (which is reasonable considering complex disease mechanisms)
and jointly decide the phenotype; by contrast, other simulation methods usually embed only one SNP interaction (i.e., single interaction model) in each data set [32-34,39,41,50]. Also, we consider
interactions with interaction order from 2 to 5, while most other simulations [33,34,39,41,50] only consider interactions with interaction order up to 3.
As mentioned previously, our simulation study consists of three main experimental steps, which we next more fully describe.
Step 1: assess family-wise type I error rate
An accurate family-wise type I error rate is crucial for methods that select candidate SNPs based on their P-values and for reliably comparing methods. If the family-wise type I error rate is either
conservative or liberal, the P-value loses its intended meaning and does not reflect the actual false positive rate. That is, we will not be able to control how many false positives are detected by
setting a (e.g. P-value based) threshold. For example, a method with a lower family-wise type I error rate than expected (based on the estimated P-value) sets a threshold that overestimates the
empirical false positive rate; thus, fewer false positives (than the target) will be selected, likely also leading to fewer true associations being identified.
BEAM, SH and FIM detect significant SNPs based on P-values calculated from asymptotic distributions and heuristic searches. Thus, based on the preceding discussion, evaluating the accuracy of their
P-value assessments is not only of theoretical importance (how well their proposed asymptotic distributions approximate the real distribution), but also of great practical necessity in applying these methods.
To evaluate the accuracy of P-value assessment, we generated 1,000 replicate data sets, each formed by randomly selecting 1,000 null SNPs from the SNP pool; i.e., to easily assess family-wise type I error rate,
no ground-truth SNPs were embedded in these data sets.
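Given the minimum P-value observed in each null replicate, the empirical family-wise type I error rate at a Bonferroni-corrected threshold is simply the fraction of replicates producing at least one detection. A sketch (our own illustrative code, not from any of the evaluated packages):

```python
def familywise_error_rate(min_pvalues_per_replicate, alpha, n_tests):
    """Empirical family-wise type I error rate: the fraction of null
    replicates in which at least one of the n_tests P-values falls
    below the Bonferroni-corrected threshold alpha / n_tests.  An
    accurate P-value assessment should yield a rate close to alpha;
    a smaller rate indicates conservativeness, a larger one liberality."""
    threshold = alpha / n_tests
    false_families = sum(1 for p in min_pvalues_per_replicate
                         if p < threshold)
    return false_families / len(min_pvalues_per_replicate)

# 1,000 null replicates, 1,000 SNPs each; here 60 replicates have a
# minimum P-value below 0.05/1000, giving an empirical rate of 0.06.
mins = [1e-6] * 60 + [0.01] * 940
rate = familywise_error_rate(mins, alpha=0.05, n_tests=1000)
```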
Step 2: assess power
In step 2, each data set has N SNPs, with 15 ground-truth SNPs and N-15 null SNPs, selected via the procedure described in the "Simulation Data Sets" subsection. N is chosen to be either 1000 or
10,000 for different experiments. There are several points to make regarding the number of SNPs we consider. First, assuming approximately 1,000 to 10,000 SNPs is realistic for candidate gene and
biological pathway studies where interaction detection is needed. Second, considering GWAS studies, a 0.15% to 1.5% proportion of ground-truth SNPs realistically models the output of first-stage SNP
screening/filtering (which greatly reduces the number of candidate SNPs) in the widely applied two-stage GWAS detection process. Finally, the 1,000 to 10,000 SNPs considered here is much larger than the
100 SNPs in the previous comparison study [49,50] and comparable to that considered in several other recent papers.
The 15 ground-truth SNPs each participate in one of 5 ground-truth SNP interactions, which contribute independently to the disease, as described by equation (1). There are three standard factors that
determine interactions: penetrance, MAF and LD [3,7]. Penetrance is the proportion of individuals with a specific genotype who manifest the phenotype. For example, if all individuals with a specific
disease genotype show the disease phenotype, then the penetrance value is 1 and the genotype is said to be "completely penetrant"; otherwise, it is "incompletely penetrant" [3]. LD is the non-random
association of alleles of different linked polymorphisms in a population [7]. MAF is the frequency of the least common allele of a polymorphic locus. It has a value that lies between 0 and 0.5, and
can vary between populations [7]. The 5 ground-truth SNP interactions are jointly determined by 5 basic model types and 3 (discrete-valued) parameters, controlling the MAF, penetrance, and LD, which
will be specified later. Based on the choices for these 3 parameters, there are 3 × 3 × 2 = 18 possible parameter configurations (so the aforementioned ninety models are generated by the 5 basic
model types, each with 18 different parameter settings). Each configuration is applied simultaneously to the 5 basic models, thus yielding 5 fully specified interaction models for a given data set.
With some allowable randomness in the 5 new interaction models, we generated 100 replication data sets for each configuration with N = 1000, and 10 replication data sets for one typical configuration
with N = 10,000; thus we have in total 18 × 100 + 10 = 1,810 data sets in Step 2, involving 18 × 5 = 90 interaction models.
The 5 basic models vary in interaction order, genetic models (dominant, recessive, or additive), incomplete/complete penetrance, MAF, and marginal effects. To indicate the strength of interaction
effects and main effects for each basic model, we calculated the odds ratio by dichotomizing the genotypes of each interaction into a group with the lowest penetrance value (usually with "0"
penetrance) and another group with higher penetrance values (the specific calculation can be found in section S4 of the Additional file 1).
Additional file 1. Supplementary information: comparative analysis of methods for detecting interactive SNPs. This supplementary information consists of 6 sections: S1. Section S1 presents our
theoretical analysis of the relationship between association strength, joint effect, main effect, penetrance function, and MAF. This section also provides some theoretical explanations about our
experimental results. S2. Section S2 presents comprehensive power evaluation results of the methods for different interaction models and parameter settings, related to power definition 1. The
reproducibility of the methods is also shown by the standard deviation of power. As an extension of the main text, we also summarize our findings and analytical explanations for these results. S3.
Section S3 provides ROC curves of the methods based on the whole ground-truth SNP set. These ROC curves illustrate the sensitivity and specificity for the methods. The reproducibility of the methods
is also shown by the standard deviation of sensitivity. S4. Section S4 describes in detail how the effect size (odds ratio) is calculated for each interaction model. S5. Section S5 analyzes the
conservativeness of χ^2 statistics applied by SH and FIM. This analysis partly explains why SH and FIM are conservative. S6. Section S6 gives the empirical relationship between power and the false
positive SNP count under a given significance threshold.
The 5 basic models are defined by the penetrance tables and MAFs below. The penetrance function is the probability of disease given the individual's genotype. Thus, the penetrance tables show the
probability of developing disease given the genotypes [3,60], with each table entry being the disease probability conditional on the specific single or multi-locus genotypes. The interaction models
are motivated by our experience studying complex genetic traits where there are multiple loci contributing to disease risk. Specifically, the simulation study is motivated by our experience in
autoimmune diseases, diabetes and renal diseases where there are some larger effects (e.g., human leukocyte antigen region in autoimmune diseases such as systemic lupus erythematosus, neonatal lupus,
and juvenile arthritis [19]; and gene APOL1 in end-stage renal disease in African Americans [18]), and multiple modest to smaller effects with 1.1 < odds ratios < 1.3. To date, there are few robustly
established (i.e., with convincing discovery evidence on multiple replications in independent cohorts) gene-gene interactions in the human disease literature. Thus, we attempted to be consistent with
the complex genetic disease paradigm and assumed multiple loci, several interacting, contribute to the risk of disease. We examined combinations of SNPs in the lupus genome-wide scan (Harley et al,
2008) to estimate some examples of potential two-locus interactions as well as constructed other higher-order interactions consistent with traditional interpretations of Mendelian inheritance (i.e.,
dominant, additive or recessive genetic model) but spanning multiple loci. Some interactions are based on a two-locus, common allele with a low penetrance model as might be hypothesized in diabetes
from the "thrifty gene hypothesis" [56] and other multi-locus models are modest penetrance models for the low frequency alleles. Additional motivation comes from studies of epistasis [57]. The five
locus interaction is a conjectural one that should challenge these analytic methods.
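To make the simulation mechanics concrete, here is a minimal sketch of drawing case/control status from a penetrance table under Hardy-Weinberg equilibrium; the two-locus table and MAFs below are hypothetical placeholders, not one of the five models:

```python
import random

random.seed(1)

def draw_genotype(maf):
    """Genotype (minor-allele count 0/1/2) under Hardy-Weinberg equilibrium."""
    return sum(random.random() < maf for _ in range(2))

# Hypothetical two-locus penetrance table, indexed [gA][gB]:
# disease risk only when the minor allele is present at both loci.
pen = [[0.0, 0.0, 0.0],
       [0.0, 0.1, 0.1],
       [0.0, 0.1, 0.1]]
maf_a, maf_b = 0.3, 0.3

n = 200_000
cases = 0
for _ in range(n):
    ga, gb = draw_genotype(maf_a), draw_genotype(maf_b)
    cases += random.random() < pen[ga][gb]   # disease drawn with table probability

carrier = 1 - (1 - 0.3) ** 2    # P(at least one minor allele) = 0.51
expected = 0.1 * carrier ** 2   # theoretical prevalence ~0.026
print(cases / n, expected)
```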
Basic model 1- two-locus interaction under a dominant model for the major allele. The model is for two very common but low penetrant alleles. The MAFs at these two loci are both 0.25. This model is
expected to generate 62 cases per 1000 subjects. The odds ratio is 1.16 for the joint interaction effect between A and B, and 1.15 for main effects of both A and B. This model simulates the situation
of common disease where the major allele is disease-related but with weak interaction effects. "M1" denotes model 1.
Basic model 2- two-locus interaction for common alleles under a dominant genetic model at each locus. The minor allele frequencies are 0.20 for locus A and 0.30 for locus B. This model is expected to
generate 102 cases per 1000 subjects. The odds ratio is 3.79 for the joint interaction effect between A and B, 1.89 for the main effect of A and 1.56 for the main effect of B. This model simulates
the situation that the minor allele is disease-related, and both interaction effects and main effects are strong.
Basic model 3- three-locus interaction, common alleles, incomplete penetrance. The MAFs at the three loci are 0.40 for A, 0.25 for B, and 0.25 for C. This model is expected to generate 46 cases per
1000 subjects. The odds ratio is 2.28 for the joint interaction effect among A, B and C, 1.16 for the main effect of A, 1.25 for the main effect of B, and 1.25 for the main effect of C.
Basic model 4- three-locus interaction among common alleles. The minor allele frequencies are 0.25 for A, 0.20 for B, and 0.20 for C. This model is expected to generate 26 cases per 1000 subjects.
The odds ratio is 5.79 for the joint interaction effect among A, B and C, 2.45 for the main effect of A, 1.06 for the main effect of B, and 1.06 for the main effect of C. This model has strong
interaction effects and a strong main effect at A, but weak main effects at B and C. Two-SNP subsets of the three-locus interaction, {A, B} and {A, C}, also have strong effects.
Basic model 5- five-locus interaction among common alleles. It assumes a MAF of 0.30 at each locus and has a penetrance value of 0.63 if the minor allele is present at each locus; and 0 otherwise. In
equation form, the penetrance function is:

P(D | g[1], ..., g[5]) = 0.63 if the minor allele is present in each of the genotypes g[1], ..., g[5], and 0 otherwise,

where D means the subject gets disease. This model is expected to generate 22 cases per 1000 subjects. The odds ratio is 4.48 for the joint interaction effect among the five loci, and 1.09 for the
main effect at all five loci. This model simulates the situation of significant high-order interaction effects but weak main effects.
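The stated prevalence of model 5 follows directly from its MAF and penetrance value, assuming Hardy-Weinberg equilibrium and independent loci:

```python
maf = 0.30
penetrance = 0.63

# Under HWE, P(minor allele present at one locus) = 1 - (1 - maf)^2
carrier = 1 - (1 - maf) ** 2    # 0.51

# Disease requires a minor allele at all five (independent) loci
prevalence = penetrance * carrier ** 5

print(round(prevalence * 1000))  # ~22 cases per 1000 subjects
```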
Three parameters are used to assess the robustness of the various methods to variations in penetrance, MAF, and LD, because i) as aforementioned, penetrance, MAF, and LD jointly define the disease
model, and thus determine the disease status; ii) it is of interest, in the field of SNP interaction detection, to explore how detection power varies with these parameters [32,34,41,43,50]; iii) we have derived the analytical relationships between interaction effects and these parameters in the Additional file 1, so a simulation study using these parameters provides an opportunity to
validate the analytical relationships in an empirical way. For each basic model, we control its penetrance by multiplying every value in the penetrance table by a penetrance factor (multiplier) θ ∈ {1, 1.3, 1.4} (the larger θ, the higher the disease risk); we discount the MAF by multiplying the MAF of each SNP by a MAF factor β ∈ {1, 0.9, 0.7} (the larger β, the higher the minor allele frequency); and to control the LD level, we replace each ground-truth SNP by an "LD SNP", which has a given correlation coefficient l ∈ {0.8, null} with the ground-truth SNP (l = null means we do not replace the ground-truth SNP). The "LD SNP" simulates the realistic case where the ground-truth SNP is not directly genotyped; in this case we may detect a SNP in LD with the
ground-truth SNP. For example, for basic model 2, under parameters θ, β, l, the MAFs are 0.2 * β for locus A and 0.3 * β for locus B, θ determines a new penetrance function shown below, and if l =
0.8, we replace A/B by a SNP correlated to A/B with correlation coefficient 0.8.
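One standard way to construct such an "LD SNP" (a sketch under the assumption of Hardy-Weinberg equilibrium; the paper's exact construction may differ) is to copy each ground-truth allele with probability l and otherwise redraw it from the population frequency, which yields Pearson correlation l at both the allele and the genotype level:

```python
import random

random.seed(2)

def ld_allele(truth_allele, maf, l):
    """Copy the ground-truth allele with probability l; otherwise redraw
    from the population frequency. This gives allele correlation exactly l."""
    if random.random() < l:
        return truth_allele
    return int(random.random() < maf)

maf, l, n = 0.3, 0.8, 100_000
gx, gy = [], []
for _ in range(n):
    a1 = int(random.random() < maf)
    a2 = int(random.random() < maf)
    gx.append(a1 + a2)                                        # ground-truth genotype
    gy.append(ld_allele(a1, maf, l) + ld_allele(a2, maf, l))  # "LD SNP" genotype

# Empirical Pearson correlation of the two genotype vectors
mx, my = sum(gx) / n, sum(gy) / n
cov = sum((x - mx) * (y - my) for x, y in zip(gx, gy)) / n
vx = sum((x - mx) ** 2 for x in gx) / n
vy = sum((y - my) ** 2 for y in gy) / n
corr = cov / (vx * vy) ** 0.5
print(corr)  # close to 0.8
```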
The theoretical, analytical relationship among penetrance, MAF, and statistical significance of an interaction model is investigated in the Additional file 1, with these results also summarized in
the "Experimental Results" section.
Step 3: assess the power to detect SNPs with only main effects
Most of the interaction-detection methods are designed to find either interactions or main effects (e.g. MDR, FIM, BEAM, MECPM and SH). Thus, it is meaningful to see how these methods fare in
detecting main effects and also whether they detect false positive interactions (which may involve null and/or main-effect SNPs) when only main effects are present.
In Step 3, we simulated 100 replication data sets, following a similar approach
as in Step 2. Each data set includes five main-effect ground truth SNPs and 995 null SNPs. The penetrances and MAFs for the five ground truth SNPs are:
SNP 1. Dominant model for the major allele, low penetrance, MAF = 0.25.
SNP 2. Additive model for the minor allele, MAF = 0.3.
SNP 3. Additive model for the minor allele, MAF = 0.4.
SNP 4. Recessive model for the minor allele, high penetrance, MAF = 0.25.
SNP 5. Dominant model for the minor allele, low penetrance, MAF = 0.3.
Although SNP 1 and SNP 5 have relatively weaker effects, we still included them because (1) they also affect many subjects' disease status, since a large proportion of subjects carry the disease genotype of SNP 1 and SNP 5 (which simulates common-disease markers); and (2) our experimental results will show that these weak-effect SNPs differentiate the performance of the methods.
Note that we configured the methods to detect both main effects and interaction effects since, in practice, it will not be known whether interactions are present or not.
Design of Performance Measures
The performance of the methods is evaluated by the accuracy of P-value assessment, various definitions of power, reproducibility, and computational complexity.
A. Family-wise type I error rate (the accuracy of P-value assessment)
There are 1,000 SNPs in each data set. Thus there are multiple comparison effects, and the P-values obtained by the methods are accordingly adjusted by Bonferroni correction. In this way, the
accuracy of P-value assessment is represented by the family-wise type I error rate: an error event occurs on a data set with no ground truth SNPs if there are any (necessarily false) positive
detections. Since SH, MDR and FIM use Bonferroni correction, we measure the accuracy of their P-value assessments by how well the significance threshold (P-value) agrees with the family-wise type I
error rate.
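The intuition behind this check can be sketched on purely null data: under the null hypothesis each P-value is uniform, so with an accurate assessment the empirical family-wise error rate should sit just below the nominal threshold after Bonferroni correction (a minimal simulation, not the paper's test statistics):

```python
import random

random.seed(3)

alpha, m, reps = 0.1, 100, 20_000   # threshold, SNPs per data set, null data sets
errors = 0
for _ in range(reps):
    # A family-wise error occurs if ANY of the m null P-values
    # falls below the Bonferroni-corrected threshold alpha / m.
    if any(random.random() < alpha / m for _ in range(m)):
        errors += 1

empirical_fwer = errors / reps
theoretical = 1 - (1 - alpha / m) ** m   # ~0.095, just under alpha
print(empirical_fwer, theoretical)
```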
B. Various power definitions and the ROC curve
Power can be defined in several ways, depending on what we desire to measure. We next give several power definitions experimentally evaluated in the sequel.
Power to progressively detect interactions (Power definition 1)
the frequency with which a model's ground-truth SNPs are ranked within the top K positions. Several comments are in order here. First, it is important to note that the significance threshold is not
being applied to define power because (1) the methods' P-value assessments are conservative (as shown in the sequel), and (2) not all methods provide significance assessments (e.g., IG and MECPM). Second, in our experiments, the ranking of a SNP is decided by the strength of effect of the most significant interaction that includes this SNP. Third, note that each data set
contains multiple interaction models, with the detection power measured separately for each model. In measuring the power to detect SNPs in a given interaction amongst the top K SNPs, we are only
interested in whether the ground-truth SNPs in the interaction are ranked higher than null SNPs, not whether they are ranked higher than ground-truth SNPs from other interactions that are present.
Accordingly, when measuring the power to detect SNPs in a given interaction, we do not rank ground-truth SNPs from other interactions, but only rank SNPs from the given interaction and all null SNPs.
For an M-way interaction, let {x[K](i), i = 1, 2, ..., 100} be the number of its ground-truth SNPs reported within the top K SNPs in the ith of the 100 replicated data sets. The power for this interaction model is then given by:

Power[1](K) = (1/(100M)) × Σ[i=1..100] x[K](i)
We can also define power over the entire ground-truth SNP set by setting M = 15 and considering all ground-truth SNPs in the ranking.
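A plausible implementation of this count-based power (using a hypothetical 2-way interaction and toy rankings; the actual rankings come from the methods under test) is:

```python
def power_def1(rankings, truth, k):
    """Power definition 1: average fraction of the interaction's ground-truth
    SNPs ranked within the top k, over replicated data sets.

    rankings: list of per-data-set SNP rankings (best first), already
    restricted to this interaction's SNPs plus null SNPs.
    truth: the interaction's ground-truth SNP identifiers.
    """
    m = len(truth)
    hits = sum(len(set(r[:k]) & set(truth)) for r in rankings)
    return hits / (len(rankings) * m)

# Hypothetical example: a 2-way interaction {"A", "B"}, three data sets
rankings = [["A", "B", "n1", "n2"],
            ["n1", "A", "n2", "B"],
            ["n1", "n2", "A", "B"]]
print(power_def1(rankings, ["A", "B"], k=2))  # (2 + 1 + 0) / (3 * 2) = 0.5
```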
Power to precisely detect interactions (power definition 2: exact interaction power)
for an M-way ground-truth interaction, how likely it is to be detected amongst the top K M-way candidates produced by a method. This power definition evaluates the sensitivity to detect the interaction as
a whole, rather than as individual SNPs. Again, similar to power definition 1, in evaluating the top K M-way candidates, we only consider M-way combinations that include ground-truth SNPs from the
interaction of interest and null SNPs, i.e. we exclude M-way SNP combinations involving any SNPs that participate in other ground-truth interactions. Mathematically, for an M-way interaction {s[1], ..., s[M]}, in the ith data set, if {s[1], ..., s[M]} is detected within the top K M-way candidates, x[2, i](K) = 1; otherwise, x[2, i](K) = 0. Power definition 2 is then given by:

Power[2](K) = (1/100) × Σ[i=1..100] x[2, i](K)
Power to detect at least 1 SNP in the ground-truth interaction (power definition 3: partial interaction power)
As revealed by the definitions of the interaction models, a subset of the interacting SNPs may have strong association to disease risk. Detecting an interaction subset should be acceptable since this
gives a good "clue" to help further identify the complete interaction. We thus give power definition 3 as follows: for an M-way interaction model {s[1],...,s[M]}, if any SNP from {s[1],...,s[M]} is
within the top K SNPs reported by the methods (excluding other ground-truth SNPs that do not participate in this interaction model), x[3,i](K) = 1; otherwise, x[3,i](K) = 0. Power definition 3 is
then given by:

Power[3](K) = (1/100) × Σ[i=1..100] x[3, i](K)
Power to detect individual SNPs (power definition 4: single SNP power)
The power definitions above ignore differences between SNPs within the same interaction, e.g., differences in MAF, asymmetric penetrance table and thus different main effects, which may largely
affect their potential for being detected. So it is also necessary to see how well individual ground-truth SNPs with different MAFs, penetrances, and main effects, are detected by the 5 methods.
Accordingly, we give power definition 4 as follows. For a ground-truth SNP s[j], j = 1, 2, ..., 15, if s[j] is within the top K SNPs reported (excluding the other ground-truth SNPs), x[i](K) = 1; otherwise, x[i](K) = 0. The single SNP power for s[j] is then given by:

Power[4](K) = (1/100) × Σ[i=1..100] x[i](K)
ROC curve
We also evaluate the methods via the ROC curve, which shows how many ground-truth SNPs are detected for a given false positive SNP count.
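Each ROC point can be read directly off a method's SNP ranking: traversing from most to least significant, count true and false positives (a sketch with a hypothetical ranking):

```python
def roc_points(ranking, truth):
    """Yield (false positive count, true positive count) pairs as the
    ranked SNP list is traversed from most to least significant."""
    truth = set(truth)
    tp = fp = 0
    points = []
    for snp in ranking:
        if snp in truth:
            tp += 1
        else:
            fp += 1
        points.append((fp, tp))
    return points

# Hypothetical ranking over 2 ground-truth and 3 null SNPs
print(roc_points(["t1", "n1", "t2", "n2", "n3"], ["t1", "t2"]))
# [(0, 1), (1, 1), (1, 2), (2, 2), (3, 2)]
```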
C. Reproducibility
The estimated power, even if high, could deviate significantly across different data set replications, due to the inherent randomness in our simulation approach. Thus, we also want to see how
reproducible the detection power is over the data set replications. To evaluate this, we measure the standard deviation of the estimated power across the replicated data sets.
D. Computational complexity
Computational complexity was measured by the execution time and memory occupancy of the methods for the same platform.
Experimental Results
In Step 1, we evaluated the three methods with asymptotic statistics (FIM, BEAM and SH). In Step 2, we evaluated all eight methods (as described in the "Method" section) on the 1000-SNP data sets,
and six methods (FIM, IG, BEAM, MECPM, SH and LR) on the 10,000-SNP data sets - we do not evaluate MDR for the 10,000-SNP data sets because the high memory occupancy of the MDR software prevents this
evaluation. We also evaluated six methods (MDR, FIM, BEAM, MECPM, SH and LR) in Step 3 - we do not evaluate IG and LRIT, because, by design, they only output multi-locus interaction candidates, and
thus are inappropriate to be assessed in Step 3's main effect evaluation. Specifically, IG and LRIT will necessarily have 0 true positives, no matter how well they detect interactions involving the
main-effect-only SNPs, since in Step 3 only "singlet" main effects are considered to be true positives. MDR, BEAM, SH and MECPM were all implemented using the authors' freely available software. LR,
LRIT, FIM and IG were implemented using C++, with the software freely available. The eight methods were tested on the same platform: OS: Windows, CPU: 3G, RAM: 2G. The parameters used by the
respective methods follow their default settings wherever possible. We only modified one parameter when testing MDR: we used its heuristic search (1 hour execution time limit) instead of exhaustive
search when testing MDR on the 1000-SNP data sets in step 2, because exhaustive search by MDR required huge memory and impractically high computational cost - when running MDR with exhaustive search, our machine crashed from running out of memory; moreover, the estimated exhaustive-search MDR execution time for a 1000-SNP, 2000-sample data set is 1.4 × 10^6 seconds (roughly 16 days) on our platform. Here we compare the eight peer methods along several performance fronts. The results are then further evaluated and summarized in the "Discussion" section.
Accuracy of P-value assessment in step 1
Based on the definition in the subsection "Design of Performance Measures", we tested the accuracy of P-value assessment for BEAM, SH, and FIM on the 1,000 data sets in step 1. Regarding the other
methods, IG and MECPM do not give significance assessments, while the significance assessment of MDR is (necessarily) accurate since it uses random permutation testing (however, note that MDR only evaluates the significance of the top-ranking interaction; thus, in practice, MDR does not use a P-value to set an interaction detection threshold). The
average family-wise type I error rates at different significance thresholds were calculated. Since each interaction order has a different Bonferroni penalty, we separately list the results for 1st,
2nd, and 3rd orders, shown in Table 1. BEAM, SH and FIM all have accurate family-wise type I error rates at 1st order, but give conservative results (empirical family-wise type I error rate is less
than expected) at 2nd and 3rd order. BEAM is the most conservative and FIM the least. Thus, the P-values generated by these methods are conservative, and not to the same degree. Consequently, the estimation of power (at the targeted type I error rate value) is likewise both conservative and not truly comparable across the methods. There are multiple causes for this conservativeness, which we subsequently discuss.
Table 1. The average family-wise type I error rates (step 1) for BEAM, SH and FIM under the significance threshold of 0.1 (after Bonferroni correction). More results can be found in the Additional
file 1.
Power (definition 1) and ROC curve in step 2
We measured power (definition 1) for each interaction model and also for the entire 15-SNP ground-truth set. Figures 4 and 5 show some of our results for 1000 SNPs and 10,000 SNPs in each data set,
respectively. Many more results, under different parameter configurations, are given in the Additional file 1.
Figure 4. Power evaluation (definition 1) of the eight methods on 100 replication data sets with parameter setting: θ = 1.4, β = 1, l = null. (a) evaluates the power on the whole ground-truth SNP
set, and (b) (c) (d) (e) (f) evaluate the power individually on the 5 interaction models. Blue curve - SH, magenta curve - FIM, green curve - MDR, black curve - IG, cyan curve - MECPM, grey curve -
LRIT, yellow curve - LR.
Figure 5. Power evaluation (definition 1) of six methods on 10 replication data sets with parameter setting: θ = 1.4, β = 1, l = null. (a) evaluates the power on the whole ground-truth SNP set, and
(b) (c) (d) (e) (f) evaluate the power individually on the 5 interaction models. In (c), all the methods have overlapping power curves at the top of the figure. Magenta curve - FIM, black curve
- IG, red curve - BEAM, blue curve - SH, cyan curve - MECPM, grey curve - LRIT, yellow curve - LR.
For the 1000-SNP case (Figure 4), although the methods can detect some SNPs with strong interacting effects (Figure 4(c), model 2), most of the methods (MDR, BEAM, IG, FIM, SH, and LR) miss many
other ground-truth interacting SNPs at a low false positive SNP count (i.e., for small K) (Figure 4(b), (d), (e) and 4(f)); further increases in power are modest and are only attained by accepting
many more false positive SNPs. Comparatively, MECPM performs quite well on most interaction models (including the difficult five-way (Figure 4(f)) and the three-way interactions (Figure 4(d) and 4(e)
), except for model 1 (Figure 4(b)). Only a partial curve is shown for MECPM because MECPM uses the BIC criterion to choose its model order (and, thus, the number of interactions) [40]. Few true SNPs
are added as K is increased beyond the BIC stopping point -- MECPM has high specificity at the BIC stopping point [40] (MECPM specificity = 0.99 for the whole ground truth SNP set at the BIC stopping
point). Accordingly, MECPM execution was terminated shortly after the BIC stopping point. From Figure 4(a), MECPM is overall the best-performing method, with SH second, BEAM third, FIM fourth, the
baseline LR fifth, LRIT sixth, IG seventh and MDR eighth. Individual methods perform more favorably for certain models, e.g. IG performs well for a 3-way model (Figure 4(e)). Also, all methods tend
to detect more interacting SNPs with strong main effects than those with weak main effects (power of all the methods on models 2 and 3 is generally higher than on models 1, 4, and 5). We give some
explanation for these results in the "Discussion" section.
For the 10,000-SNP case (Figure 5), we have similar observations as in the 1000-SNP case, except that the general performance of all methods is degraded. It is worth noting that all the methods
perform comparably to their 1000-SNP detection power for model 2 (Figure 5(c)), and MECPM also performs comparably to its 1000-SNP detection power for models 3 and 4 (Figure 5(d) and 5(e)). MECPM is
the overall best-performing method, with SH second, BEAM third, LR fourth, LRIT fifth, FIM sixth and IG seventh.
Impact of penetrance, MAF, and LD on power (definition 1)
Figure 6 shows the power for different penetrance, MAF, and LD factors. The power is calculated based on the whole ground-truth SNP set. More detailed results are given in the Additional file 1. From
Figure 6, a smaller penetrance value or MAF significantly degrades the power curves of the methods. Among the methods, SH is most robust to changes in penetrance and MAF, and IG is most sensitive to
these changes.
Figure 6. The impact of penetrance value (θ), MAF (β), and LD factor (l) on power for the whole ground-truth SNP set. Blue curve - SH, magenta curve - FIM, green curve - MDR, black curve - IG, cyan
curve - MECPM, yellow curve - LR.
Reproducibility of power (definition 1)
We measured the reproducibility by the standard deviation of power across the 100 replication data sets. These results are given in the Additional file 1.
Power (definition 1) to detect interacting SNPs for a fixed significance threshold
Although the statistical significance level is unreliable for measuring performance of the methods (as illustrated in Table 1), we want to give readers an empirical sense of how the methods perform
when using the statistical significance level to select candidate SNPs in the step 2 experiment. These results, given in the Additional file 1, show that using the same significance threshold, the
methods detect very different numbers of both true positive and false positive SNPs. Moreover, considering the balance of true positives and false positives achieved by each of the methods, none of
them performs strongly.
Power to detect entire interactions (definition 2)
Based on power definition 2, we did experiments to evaluate all the methods on the 1000-SNP data sets of Step 2. Considering the high computational complexity and the applicability of the methods, we
compare the power of IG, LRIT, FIM, SH and MDR on 2-way interactions, the power of FIM, SH, and MDR on 3-way interactions, and the power of MDR on 5-way interactions. Figure 7 shows the results. Due
to the limited number of total interactions output by BEAM and MECPM, we do not evaluate BEAM here, and list the power of MECPM only at its stopping point: model 1 - 0, model 2 - 0.96, model 3 -
0.94, model 4 - 0, model 5 - 0.46.
Figure 7. Power evaluation (definition 2) of the methods on 100 replication data sets with parameter setting: θ = 1.4, β = 1, l = null. In (a), FIM, IG, MDR and LRIT have power constantly equal to 0;
in (b) FIM and IG and LRIT have power constantly equal to 1; in (d) SH, FIM and MDR have power constantly equal to 0. Blue curve - SH, magenta curve - FIM, green curve - MDR, black curve - IG, grey
curve - LRIT, yellow curve - LR.
We can observe that all the methods have poor performance for models 1 and 4. For models 3 and 5, all the methods fare poorly except for MECPM. For model 2, IG, LRIT and FIM have very good
performance (power = 1); MECPM also performs well (power = 0.96); while the other methods still perform poorly.
Power to detect at least 1 SNP in an interaction - partial interaction detection (definition 3)
Based on power definition 3, we evaluated SH, BEAM, IG, FIM, MDR, LRIT and MECPM. The major results are shown in Figure 8. Due to the limited number of total interactions output by MECPM at its
stopping criteria, we give a text description, instead of drawing a curve, to show the power at its stopping point: model 1 - 0.17, model 2 - 1, model 3 - 0.98, model 4 - 0.97, model 5 - 0.46. From
Figure 8, BEAM, SH, FIM, LRIT and MECPM obtain good results for models 2, 3, 4, 5. We believe that these good results are partly due to the relatively strong main effects of SNPs involved in these
interaction models. Note that there is a substantial increase in power compared to Figure 7. Also, by comparing with the results for power definition 1 (Figure 4), we can see that there is largely
increased power for most models, indicating most interaction models can be partly detected by the methods.
Figure 8. Power evaluation (definition 3) of the eight methods on 100 replication data sets with parameter setting: θ = 1.4, β = 1, l = null. Blue curve - SH, magenta curve - FIM, green curve - MDR,
black curve - IG, grey curve - LRIT, yellow curve - LR.
Power to detect individual SNP main effects (definition 4)
Based on Figure 9, we can confirm our previous statement that main effects play an important role in determining whether or not a SNP can be detected. For example, the two SNPs in model 2 (odds
ratio: 1.89 in the basic model) and SNP A (odds ratio: 2.45 in the basic model) in model 4 have strong main effects, and all the methods detect them well.
Figure 9. The power to detect individual SNPs, for parameter θ = 1.4, β = 1, l = null. Blue curve - SH, magenta curve - FIM, green curve - MDR, black curve - IG, cyan curve -MECPM, grey curve - LRIT,
yellow curve - LR.
Also, we observe similar power for SNPs participating in interactions with symmetric penetrance tables and the same MAFs. For example, all the SNPs in model 1 and model 5 have similar power; likewise
for SNPs B and C in models 3 and 4. This observation is reasonable since these SNPs not only have the same main effects, but also have the same interaction effects.
For SNPs participating in interactions with a symmetric penetrance table but different MAFs, an interesting (and perhaps unexpected) finding is that for model 2, the power to detect SNP A (MAF =
0.2), is greater than the power to detect SNP B, which has a larger MAF (MAF = 0.3). We give theoretical justification for this result in section 1 of the Additional file 1.
Performance for step 3, the main-effect-only case
We used power definition 4 to evaluate performance of the methods on the main-effect-only data sets in Step 3. We did not include the IG and LRIT methods in this Step because IG and LRIT only detect
multilocus interactions, not single (main effect) SNPs; thus, for Step 3, involving only main effects, detected interactions, even ones involving the main effect SNPs, are necessarily false positive
interactions. Figure 10 shows the power curves, from which we observe that, except for MDR, most methods (FIM, BEAM, SH, MECPM, LR) achieve similar, good power at the beginning, with SH becoming a
bit better as K increases.
Figure 10. Power evaluation of 6 methods (using power definition 1) on main-effects-only data (step 3). Blue curve - SH, magenta curve - FIM, green curve - MDR, cyan curve - MECPM, yellow curve - LR.
We also evaluated whether the methods detect false positive interactions when there are only main effects. Here we evaluated the 3 methods that give P-value assessments, looking at the number of
false positive interactions detected under the P-value of 0.1 after Bonferroni correction. Table 2 lists the results, from which we can see that BEAM and SH are quite good at inhibiting false
positive interactions caused by marginal effects, but FIM produces many false positive interactions.
Table 2. The average number of false positive interactions (step 3) for BEAM, SH and FIM under the significance threshold of 0.1 (after Bonferroni correction)
Computational complexity and memory occupancy
Computational complexity for the eight methods was evaluated for the same platform: OS: Windows, CPU: 3G, RAM: 2G. SH, IG, FIM, LR, LRIT, MECPM and BEAM do not require much memory, but the exhaustive
search used by MDR requires an impractical amount of memory for a large number of SNPs. Thus, as noted earlier, we applied the heuristic search option in the MDR software, with a 1 hour time limit to
avoid memory overflow. Figure 11(a) shows that, as expected, most methods' execution times increase linearly with sample size. The exception is BEAM execution, which grows more quickly. Figure 11(b)
shows execution times for different numbers of SNPs. SH obtains the highest efficiency (~ linearly increasing execution time); IG and BEAM are more time consuming (~ quadratically increasing); and
FIM is most time-consuming (~ cubic in the number of SNPs). Besides Figure 11(b), we also list execution time of LR, LRIT and MECPM (at MECPM's stopping point): the execution time of LR on 1000-SNP
data and 10000-SNP data is 1 second and 10 seconds, respectively; the execution time of LRIT on 1000-SNP data and 10000-SNP data sets is 24 seconds and 576 seconds, respectively; the execution time
for MECPM on the 1000-SNP data and 10,000-SNP data was 7033 seconds and 25944 seconds, respectively. Compared with Figure 11(b), we can see that MECPM's computational complexity is, relatively, quite
high for 1000 SNPs, but is in fact lower than that of several of the other methods for 10,000 SNPs.
Figure 11. Execution time (sec) of 4 methods for: (a) number of SNPs = 1,000; (b) number of subjects = 2,000. Due to limited space in (b), we list hereby the execution time of the methods on
2000-subject 10,000-SNP data: SH - 962 seconds, IG - 18291 seconds, BEAM - 36423 seconds, FIM - 91251 seconds.
General Summary of the Study and Its Results
We report a comparison of eight representative methods, multifactor dimensionality reduction (MDR), full interaction model (FIM), information gain (IG), Bayesian epistasis association mapping (BEAM),
SNP harvester (SH), maximum entropy conditional probability modeling (MECPM), logistic regression with an interaction term (LRIT), and logistic regression (LR). The first seven were specifically
designed to detect interactions among SNPs, and the last is a popular main-effect testing method serving as a baseline for performance evaluation. The selected methods were compared on a large number
of simulated data sets, each, consistent with complex disease models, embedded with multiple sets of interacting SNPs, under different interaction models. The assessment criteria included several
relevant detection power measures, family-wise type I error rate, and computational complexity. The principal experimental results are as follows: i) while some SNPs in interactions with strong
effects are successfully detected, most of the methods miss many interacting SNPs at an acceptable rate of false positives; in this study, the best-performing method was MECPM; ii) the statistical
significance assessment criteria, used by some of these methods to control the type I error rate, are quite conservative, which further limits their power and makes it difficult to fairly compare
them; iii) the power varies for different models as a function of penetrance, minor allele frequency, linkage disequilibrium and marginal effects; iv) analytical relationships between power and these
factors are derived, which support and help explain the experimental results; v) for these methods the magnitude of the main effects plays an important role in whether an interacting SNP is detected;
vi) most methods can detect some ground-truth SNPs, but fare modestly at detecting the whole set of interacting SNPs.
Based on the simulation data sets used in this study, which include multiple interaction models present in each data set in Step 2, most of the methods miss some interacting SNPs, leading to only
moderate power at low false positive SNP counts (Figures 4, 5)
Compared to the promising powers achieved for the simulation studies reported in the methods' respective papers, the degraded performance seen in this comparative study for most methods is attributed
to the more difficult yet likely more realistic simulation data that we used. The methods (excepting LR and MECPM) were previously reported as powerful on simulation data sets including only a
single, strong ground-truth interaction, but our study included 5 interactions present in each data set to simulate multiple genetic causes for complex diseases. The disease risk is thus effectively
divided among the 5 interaction models, giving each a weaker (less easily detected) effect.
Main effects play an important role in whether a ground-truth SNP is detected at low false positive SNP counts
Another notable finding is that the main effects of the interacting SNPs affect their likelihood of being detected at low false positive SNP counts by most methods. For interaction models with very
weak marginal effects (models 1 and 5), all the methods have low power (see Figure 4(b) and 4(f)). Although some methods (e.g. SH) emphasize the detection of interactions with weak marginal effects,
their results on these models are very modest. Heuristic search strategies used by the methods count on at least one SNP in the interaction having a relatively strong effect; this explains why model
1, without strong main effects, is difficult to detect. Moreover, the huge search space for 5-way interactions makes it easy for heuristic search strategies to miss model 5.
For the same interaction model, different levels of power are achieved by the eight methods
For each interaction model, the power varies across methods because of the quite different detection principles applied by the methods. For example, IG and LRIT, which are based on pairwise SNP
statistics, can detect 2-way interaction effects well (see models 2 and 4, where model 4 can be considered as two overlapped 2-way interactions), but IG and LRIT get poorer results for higher-order
models. For the difficult 5-way interaction, only MECPM gave promising results.
Power on the whole ground-truth SNP set - MECPM performs the best, while MDR performs the worst
From Figure 4(a), MECPM achieves the best performance; BEAM, SH, FIM, LR, LRIT and IG have similar and moderate performance; MDR performs the worst, among the eight methods we tested. From Figure 6,
SH outperforms BEAM and FIM for weaker effects (i.e., for discounted penetrance values and MAF). Here we briefly discuss how these performance differences are a product of the different methodologies.
Power may be degraded by an insufficiently sensitive ranking criterion, by the heuristic search strategy used, or by a suboptimal output design of a method. The high computational complexity of MDR
necessitates using its heuristic search option to keep the running time/memory usage in a reasonable range. This heuristic search forces a significantly reduced search space, and hence the
performance of MDR is expected to be degraded.
The ranking criterion of IG detects pure interaction effects (see equation (4) and the definition of mutual information). However, what really affects disease risk is a combination of both pure
interaction effects and main effects. Additionally, IG is only explicitly designed to detect 2-way interactions, and thus may have difficulty detecting higher order ones.
Comparatively, MECPM, BEAM, FIM and SH have less critical limitations, with these mainly in the sensitivity of their ranking criteria and their use of heuristic search -- e.g., the difficulty for heuristic search to pick up interactions with weak marginal effects and high-order interactions due to the large search space (consider a contingency table with 3^5 = 243 cells for a 5-way interaction).
The performance of the methods is sensitive to changes in penetrance value, MAF, and LD
From Figure 6, the seven methods all have clearly decreased power when we reduce penetrance values and the MAF, or replace ground-truth SNPs by surrogates in LD with them. Among the methods, SH is
the most robust while IG is the most sensitive to these factors. Besides our empirical results, a theoretical analysis of how power changes with penetrance or MAF is given in the Additional file 1.
The analytical results, which are consistent with (and thus support and explain) our experimental results are as follows: 1) increasing the penetrance of an interaction model results in both a
stronger (more easily detected) joint interaction effect and in stronger marginal effects of the participating SNPs; 2) increasing the frequency of a disease-related genotype results in a stronger
joint effect, under certain conditions; 3) the impact of genotype frequency on main effects is more complicated -- when the marginal frequency, a, of a disease-related genotype is small, the
strengths of the marginal effects increase when a increases, and when a is large, the strengths of the marginal effects decrease as a increases.
Most methods can partially but not exactly detect the interactions
The results for power definition 2 (see Figure 7) are quite different from those for power definition 1 (see Figure 4), indicating that most methods detect the interacting ground truth SNPs as
singlets or subsets of the ground truth interactions. There are multiple reasons for this, in some cases method-specific. For example, the large degrees of freedom of FIM render a high false positive
rate, making ground-truth interactions easily buried amongst many false positives; due to the use of heuristic search strategies, the methods may not even evaluate the ground-truth interactions as
candidates; also, for some methods, e.g. FIM, successful detection of an interaction relies on first detecting main effects for (at least some) SNPs involved in the interaction, thus this type of
heuristic search strategy will miss ground-truth interactions that possess only weak main effects; moreover, SH excludes SNPs with strong main effects from higher-order search, so SH in particular
will miss interactions that possess strong main effects (see Figure 7(b)).
The P-value assessments of BEAM, SH and FIM vary across methods, and all are overly conservative
From the subsection "Power for a fixed significance threshold" and results given in the Additional file 1, we observe that for the same significance threshold, BEAM, SH and FIM have quite different
power and false positive SNP counts. Also, in the subsection "Accuracy of P-value assessment in step 1", we showed that their P-value assessments are conservative for 2nd and 3rd order interactions.
From further experiments, we conclude that this phenomenon originates from three factors: the heuristic search strategies, dependencies between SNP combinations, and the summary statistics used by
the methods.
For BEAM, SH, and FIM, the heuristic search strategies evaluate fewer SNP combination candidates than the number actually penalized in the Bonferroni correction. Moreover, SH and BEAM exclude SNPs with strong marginal effects from high-order interactions, which further decreases the number of searched SNP combinations. The Bonferroni-corrected significance threshold is therefore smaller (stricter) than it needs to be. Also,
some SNP combinations have dependencies with others, either because they share a common SNP subset and/or because SNPs in different subsets are in LD. Such dependencies make the Bonferroni correction
inherently conservative.
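This dependence effect is easy to see numerically. The following Monte Carlo sketch (our illustration, not from the paper; assuming NumPy is available, and using equicorrelated null statistics as a simple stand-in for dependent SNP tests) estimates the family-wise error rate a Bonferroni correction actually achieves:

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(0)
m, rho, alpha, trials = 50, 0.9, 0.05, 20_000

# Equicorrelated null statistics: Z_i = sqrt(rho)*W + sqrt(1-rho)*e_i,
# mimicking dependent tests (e.g. SNP subsets sharing SNPs, or SNPs in LD).
W = rng.standard_normal((trials, 1))
E = rng.standard_normal((trials, m))
Z = np.sqrt(rho) * W + np.sqrt(1.0 - rho) * E

# Bonferroni rejects test i when its two-sided p-value is below alpha/m,
# i.e. when |Z_i| exceeds the corresponding normal quantile.
thresh = NormalDist().inv_cdf(1.0 - alpha / (2.0 * m))
fwer = float(np.mean((np.abs(Z) > thresh).any(axis=1)))
print(f"nominal FWER {alpha}, observed FWER {fwer:.4f}")
```

With independent tests the observed rate would sit just below the nominal 0.05; under strong dependence it falls far below it, which is exactly the conservativeness described above.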
Besides heuristic search and dependencies, the conservativeness also derives from the summary statistics themselves. The authors of BEAM evaluated the B statistic's conservativeness with exhaustive
search. In the Additional file 1, we likewise evaluate conservativeness of the χ^2 statistics applied by SH and FIM. We considered the case where there is neither multiple testing nor heuristic
search. The χ^2 statistics turn out to be conservative, becoming more so as the significance threshold is decreased (see Tables 1, 2 in the Additional file 1). Theoretically, such conservativeness
may come from the discreteness of the SNP data. Since the χ^2 statistics in SH and FIM are calculated from the discrete-valued SNP data, the χ^2 statistics are also discrete. At the tail part of the
χ^2 distribution, two consecutive discrete χ^2 values may correspond to very different significance levels. For example, let the P-values of two consecutive χ^2 values be p_1 and p_2 (p_1 >> p_2); when the significance threshold p_0 satisfies p_1 > p_0 > p_2, the type I error rate actually achieved corresponds to p_2, which is much less than p_0, making the results quite conservative.
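This discreteness effect can be reproduced in a few lines. The sketch below (our illustration, standard library only) enumerates the achievable Pearson χ^2 values for a simple two-cell goodness-of-fit test on n subjects and shows that the type I error rate actually realized at a nominal threshold of 0.05 is strictly smaller:

```python
import math

n, alpha = 30, 0.05

def chi2_sf_df1(x):
    # Survival function of the chi-squared distribution with 1 d.o.f.:
    # P(chi2 > x) = P(|Z| > sqrt(x)) = erfc(sqrt(x / 2))
    return math.erfc(math.sqrt(x / 2.0))

actual_type1 = 0.0
for k in range(n + 1):
    # Pearson statistic for k successes out of n under a fair null (p = 1/2);
    # only n + 1 distinct values are achievable, so p-values are discrete too.
    stat = 2.0 * (k - n / 2.0) ** 2 / (n / 2.0)
    if chi2_sf_df1(stat) <= alpha:
        actual_type1 += math.comb(n, k) * 0.5 ** n

print(f"nominal alpha = {alpha}, realized type I rate = {actual_type1:.4f}")
```

For n = 30 the realized rate is about 0.043, noticeably below the nominal 0.05, because no achievable p-value falls in the gap just under the threshold.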
Limitations of the Current Study and Future Work
There are a number of possible extensions of this simulation study that we intend to consider in our future work. First, our current simulation software only handles categorical traits and
categorical (ternary-valued, SNP) covariates. Environmental covariates and admixture-adjusting variables could be either quantitative or ordinal-valued. Likewise, traits (phenotype) could be
quantitative or ordinal. There are natural ways of extending our current simulation approach to allow for these more general covariate and trait types, which we will consider in future work. Second,
we have not investigated missing SNP-values and their effect on detection power. Third, while we have chosen five plausible penetrance function models, another possibility would be to use
"data-driven" penetrance functions, i.e. penetrance functions estimated based on real GWAS data sets with known ground-truth and known (i.e., previously detected) interactions.
The methods explored in this study are useful tools in the exploration of potential interacting loci. Each of the methods studied here has its strengths and weaknesses. Our comparative examination of
these methods suggests that continued research into methods that test for interacting loci is necessary to expand the tools available to researchers and to achieve improved power for detecting
complex interactions, along with accurate assessment of statistical significance.
Methods Tested in the Comparison Study
The eight methods [32-35,39-41] originate from different underlying techniques and principles, and thus can be categorized in different ways, as shown in Table 3. FIM, BEAM, SH, LRIT, and LR
asymptotically approximate the null distribution to assess statistical significance; MECPM models SNP interactions under a maximum-entropy principle, and uses the Bayesian information criterion (BIC)
as the model selection strategy; MDR and IG only provide a ranking of candidate interactions. These methods employ three main search strategies: exhaustive search (IG, LRIT and LR), stochastic search
(BEAM and MDR), and deterministic heuristic search (SH, FIM and MECPM). Each method uses a different detection principle: SH applies χ^2 or B statistics [32,39]; BEAM uses Bayesian inference or B
statistics; FIM, LRIT and LR are based on the logistic regression model; IG ranks SNPs by mutual information; MDR selects SNPs via prediction error; MECPM uses BIC to rank interactions and to assess
statistical significance.
Table 3. Properties of methods tested in this paper.
A brief summary of these eight methods follows.
(1) Multifactor dimensionality reduction (MDR) [33]
For a set of SNPs, MDR labels a genotype as "high-risk" if the ratio between the number of cases and the number of controls exceeds some threshold (e.g., 1.0). A binary variable is thus formed,
pooling high-risk genotypes into one group and low-risk ones into another. If the subject has a high-risk genotype it is predicted as a case; otherwise as a control. The prediction error of each
model is estimated by 10-fold cross validation and serves as the measure of association between the set of SNPs and the disease.
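A minimal sketch of this pooling step follows (our simplification, assuming NumPy; real MDR wraps this in 10-fold cross-validation and a search over SNP subsets, which are omitted here):

```python
import numpy as np

def mdr_risk_pool(genos, case, threshold=1.0):
    """Label each multilocus genotype high-risk when #cases/#controls > threshold.
    genos: (n_subjects, d) genotype codes; case: 0/1 phenotype."""
    high_risk = set()
    for g in {tuple(row) for row in genos.tolist()}:
        mask = (genos == np.array(g)).all(axis=1)
        n_case = int(case[mask].sum())
        n_ctrl = int(mask.sum()) - n_case
        if n_ctrl == 0 or n_case / n_ctrl > threshold:
            high_risk.add(g)
    return high_risk

def mdr_predict(genos, high_risk):
    # high-risk genotype -> predicted case; otherwise predicted control
    return np.array([tuple(r) in high_risk for r in genos.tolist()], dtype=int)

rng = np.random.default_rng(0)
genos = rng.integers(0, 3, size=(400, 2))
# toy model: elevated risk only for the double homozygous-minor combination
p = np.where((genos == 2).all(axis=1), 0.9, 0.3)
case = (rng.random(400) < p).astype(int)
pool = mdr_risk_pool(genos, case)
train_err = float(np.mean(mdr_predict(genos, pool) != case))
```

On this toy data the (2, 2) genotype is pooled as high-risk and the training prediction error serves as the association score, exactly as in the description above.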
(2) Full Interaction Model (FIM) [41]
In FIM, 3^d − 1 binary variables x_j, j = 1, 2, ..., 3^d − 1, are introduced for a subset of d SNPs, and a logistic regression model with 3^d parameters is estimated from the data. x_j(i) corresponds to the jth genotype combination (or interaction term) of the SNP subset for the ith subject: x_j(i) = 1 if the jth genotype combination is present for the ith subject, and 0 otherwise. For the row vector x(i) = [x_1(i), ..., x_{3^d − 1}(i)], let π(x(i)) be the disease risk. The logistic regression is parameterized as

log(π(x(i)) / (1 − π(x(i)))) = β_0 + Σ_{j=1}^{3^d − 1} β_j x_j(i),

where the coefficients are learned via maximum likelihood and statistical significance is assessed with a likelihood ratio test whose statistic asymptotically follows a χ^2 distribution.
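The indicator encoding behind FIM is easy to construct explicitly. The sketch below (our illustration, assuming NumPy, and assuming the all-major-allele combination (0, ..., 0) is the dropped reference category, which the description above leaves unspecified) builds the 3^d − 1 binary variables for d SNPs:

```python
import numpy as np
from itertools import product

def fim_design(genos):
    """Expand d SNPs (codes 0/1/2) into the 3^d - 1 binary indicator
    variables of the full interaction model; the all-zero genotype
    combination is dropped as the reference category (an assumption)."""
    d = genos.shape[1]
    combos = list(product(range(3), repeat=d))[1:]   # drop (0, ..., 0)
    X = np.zeros((genos.shape[0], len(combos)))
    for j, combo in enumerate(combos):
        X[:, j] = (genos == np.array(combo)).all(axis=1)
    return X

genos = np.array([[0, 0], [1, 2], [2, 2]])
X = fim_design(genos)   # shape (3, 3**2 - 1): one indicator per combination
```

Each subject activates at most one indicator, which is why the model has so many degrees of freedom for even moderate d.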
(3) Information Gain (IG) [34]
Let C denote the disease status random variable. The information gain of {A, B} is defined as IG(A;B;C) = I(A;B|C) − I(A;B), where the mutual information I(A;B) is a non-negative measure of the reduction in uncertainty about the value of (SNP locus) random variable A, given knowledge of random variable B [61]. Equivalently,
it is a measure of the statistical dependence between A and B. The conditional mutual information I(A;B |C) likewise gives a measure of the statistical dependence between A and B given that the
phenotype random variable, C, is known. The magnitude of IG thus indicates the increased statistical dependence between A and B given knowledge of C, i.e. the strength of an interaction between loci
A and B.
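This quantity can be evaluated directly from a three-way contingency table. The following sketch (our illustration, assuming NumPy, with counts indexed as counts[a, b, c]) computes the information gain as the difference between conditional and marginal mutual information:

```python
import numpy as np

def mutual_info(joint):
    """I(X;Y) in bits from a 2-D joint probability table."""
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / (px @ py)[nz])).sum())

def information_gain(counts):
    """IG(A;B;C) = I(A;B|C) - I(A;B) from a count array counts[a, b, c]."""
    p = counts / counts.sum()
    i_ab = mutual_info(p.sum(axis=2))
    i_ab_given_c = 0.0
    for c in range(p.shape[2]):
        pc = p[:, :, c].sum()
        if pc > 0:
            i_ab_given_c += pc * mutual_info(p[:, :, c] / pc)
    return i_ab_given_c - i_ab

# purely epistatic (XOR-like) pair: large IG despite weak per-locus effects
xor = np.zeros((2, 2, 2))
xor[:, :, 0] = 25.0                   # controls spread evenly over genotypes
xor[0, 1, 1] = xor[1, 0, 1] = 50.0    # cases concentrate on discordant pairs
ig_xor = information_gain(xor)        # about 0.31 bits
```

A table where A and B are independent (given C and marginally) yields an information gain of zero, matching the interpretation of IG as the C-induced dependence between the two loci.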
(4) Bayesian Epistasis Association Mapping (BEAM) [32]
Suppose N samples (N_d cases and N_u controls) are genotyped at L SNPs. BEAM partitions the L SNPs into 3 groups: markers with no association with disease, markers with only main effects, and markers with interaction effects. Let the genotypes of the cases be D = (d_1, ..., d_L), where d_j collects the genotypes of the jth SNP over all cases. According to the abovementioned partitioning, D can be divided into three subsets, D_0, D_1, and D_2, where D_0 is the subset consisting of SNPs (SNP genotype vectors) with no association, D_1 is the subset consisting of SNPs with only main effects, and D_2 is the subset consisting of SNPs with interaction effects. Likewise, let the genotypes of the controls be U = (u_1, ..., u_L), where u_j collects the genotypes of the jth SNP over all controls. Let I = [I_1, I_2, ..., I_L] record the group membership of each SNP: I_j = 0 means that the jth SNP has no association with the disease, I_j = 1 means that the jth SNP has only main effects, and I_j = 2 means that the jth SNP has interaction effects. Let P(·) denote probability. Following some assumptions [32], the posterior distribution of I given D and U is inferred from

P(I | D, U) ∝ P(D, U | I) P(I).     (5)
Based on equation (5), BEAM draws I using the Metropolis-Hastings algorithm. The output is the posterior probability of main-effect markers and interactions associated with the disease. A "B"
statistic is also applied to measure statistical significance of SNPs and interactions.
(5) SNP Harvester (SH) [39]
This method aims to detect interactions with weak marginal effects. It includes the following steps:
5a. Remove SNPs with significant main effects;
5b. For a fixed M, run the "PathSeeker" heuristic search to identify M-way SNP interactions. First, randomly select M SNPs to form an M-way set A = {x_1, x_2, ..., x_M}. Second, swap one of the remaining SNPs with each member of A to see whether a statistical score s(A) (e.g. a χ^2 statistic or the B statistic) increases. Then iteratively repeat this second step until convergence; record s(A) if it is statistically significant. Then go back to the first step, with the optimal A removed as a candidate for the next run.
5c. Use L2-norm penalized logistic regression [37] as a post processing step to further select interactions from those identified in 5b.
Although SH removes SNPs with strong main effects, for the purpose of fair comparison we still give it credit for identifying these main-effect SNPs when calculating its power.
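The swap search in step 5b can be sketched as follows (our simplified reconstruction, assuming NumPy and a Pearson χ^2 score; the real SNPHarvester adds significance testing, multiple random restarts, and the main-effect filtering of steps 5a and 5c):

```python
import numpy as np

def chi2_stat(subset, genos, case):
    """Pearson chi-squared between the joint genotype of `subset` and status."""
    cells = {}
    for row, y in zip(genos[:, list(subset)].tolist(), case.tolist()):
        cells.setdefault(tuple(row), [0, 0])[y] += 1
    total, n1 = len(case), int(case.sum())
    stat = 0.0
    for counts in cells.values():
        row_total = sum(counts)
        for y, col_total in ((0, total - n1), (1, n1)):
            expected = row_total * col_total / total
            if expected > 0:
                stat += (counts[y] - expected) ** 2 / expected
    return stat

def path_seeker(genos, case, M, rng):
    """Start from a random M-SNP set, then greedily swap in outside SNPs
    for as long as the chi-squared score keeps improving."""
    L = genos.shape[1]
    A = [int(s) for s in rng.choice(L, size=M, replace=False)]
    best = chi2_stat(A, genos, case)
    improved = True
    while improved:
        improved = False
        for pos in range(M):
            for snp in range(L):
                if snp in A:
                    continue
                cand = A.copy()
                cand[pos] = snp
                score = chi2_stat(cand, genos, case)
                if score > best:
                    A, best, improved = cand, score, True
    return A, best

# toy data: SNPs 0 and 1 jointly determine status (AND model); SNPs 2, 3 inert
pair = np.repeat(np.array([[0, 0], [0, 1], [1, 0], [1, 1]]), 25, axis=0)
genos = np.column_stack([pair, np.zeros((100, 2), dtype=int)])
case = (pair[:, 0] & pair[:, 1]).astype(int)
found, score = path_seeker(genos, case, M=2, rng=np.random.default_rng(0))
```

On this toy data every random start converges to the ground-truth pair {0, 1}, because each true SNP also carries a marginal signal; as discussed earlier, such greedy swaps can stall on models with purely epistatic (marginal-free) effects.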
(6) Maximum entropy conditional probability modeling (MECPM) [40]
MECPM builds the phenotype posterior under a maximum entropy principle, encodes constraints into the model that correspond 1-to-1 to interactions, flexibly allows dominant or recessive coding for
each locus in a candidate interaction, searches interactions via a greedy interaction growing search strategy that evaluates candidates up to fifth order, and uses the Bayesian information criterion
(BIC) as the model selection strategy.
(7) Logistic regression (LR) [35]
LR is a generalized linear model used for binomial regression. Let x(i) denote the genotype of a SNP for the ith subject: x(i) = 0 denotes homozygous major alleles; x(i) = 1 denotes heterozygous genotypes; and x(i) = 2 denotes homozygous minor alleles. Let π(x(i)) be the disease risk. The logistic regression is parameterized as

log(π(x(i)) / (1 − π(x(i)))) = β_0 + β_1 x(i),

where β_0 and β_1 are the regression coefficients, learned via maximum likelihood. By a likelihood ratio test, logistic regression evaluates statistical significance for each SNP.
(8) Logistic regression with interaction term (LRIT) [35]
LRIT aims to detect interaction effects based on the logistic regression model. Let x_m(i) and x_n(i) denote the genotypes of the mth and nth SNPs for the ith subject, respectively: x_m(i) = 0 or x_n(i) = 0 denotes homozygous major alleles; x_m(i) = 1 or x_n(i) = 1 denotes heterozygous genotypes; and x_m(i) = 2 or x_n(i) = 2 denotes homozygous minor alleles. Let π(x_m(i), x_n(i)) be the disease risk. The logistic regression is parameterized as

log(π / (1 − π)) = β_0 + β_1 x_m(i) + β_2 x_n(i) + β_3 x_m(i) x_n(i),

where β_0, β_1, β_2, and β_3 are the regression coefficients, learned via maximum likelihood. By a likelihood ratio test, logistic regression evaluates the statistical significance for this pair of SNPs (the statistical significance reflects the joint effects of the two individual terms and the multiplicative term).
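Both LR and LRIT amount to fitting nested logistic models and comparing maximized log-likelihoods. A self-contained sketch of the LRIT likelihood ratio test for one SNP pair follows (our illustration, assuming NumPy; the genotypes and effect sizes are simulated, not from the paper's data):

```python
import numpy as np

def fit_logistic(X, y, iters=25):
    """Maximum-likelihood logistic regression via Newton-Raphson (IRLS)."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        eta = np.clip(X @ beta, -30.0, 30.0)
        p = 1.0 / (1.0 + np.exp(-eta))
        w = p * (1.0 - p)
        # Newton step; a tiny ridge keeps the Hessian invertible
        H = X.T @ (X * w[:, None]) + 1e-9 * np.eye(X.shape[1])
        beta = beta + np.linalg.solve(H, X.T @ (y - p))
    p = 1.0 / (1.0 + np.exp(-np.clip(X @ beta, -30.0, 30.0)))
    ll = float(np.sum(y * np.log(p) + (1.0 - y) * np.log(1.0 - p)))
    return beta, ll

rng = np.random.default_rng(0)
n = 2000
xm = rng.integers(0, 3, n).astype(float)          # genotype codes 0/1/2
xn = rng.integers(0, 3, n).astype(float)
true_logit = -1.0 + 0.1 * xm + 0.1 * xn + 0.6 * xm * xn
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-true_logit))).astype(float)

ones = np.ones(n)
X_full = np.column_stack([ones, xm, xn, xm * xn])  # beta_0..beta_3 (LRIT)
X_red = np.column_stack([ones, xm, xn])            # interaction term removed
_, ll_full = fit_logistic(X_full, y)
_, ll_red = fit_logistic(X_red, y)
lrt = 2.0 * (ll_full - ll_red)  # ~ chi-squared(1) under "no interaction"
```

With a strong simulated interaction the statistic vastly exceeds the 1-d.o.f. χ^2 critical values; dropping β_3 from the design recovers the plain LR test of the previous subsection.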
Authors' contributions
LC and YW designed the experiment protocols and evaluation measures, conducted the experiments, participated in implementation of the methods and design of simulation tools, and drafted the
manuscript. GY implemented the conventional (also some advanced) interaction-detection methods, and participated in design of the experiments. CL designed the simulation tools. CL, RG and XY carried
out the development of simulation software. DM helped to draft and extensively edited the manuscript. DM and JR implemented MECPM. YW and DH conceived of the study, participated in its design and
coordination, and helped draft the paper. All authors read and approved the final manuscript.
This work was supported in part by the National Institutes of Health (HL090567 to D.M.H. and GM085665 to Y.W.).
1. Brookes A: Review: the essence of SNPs.
Gene 1999, 234:177-186. PubMed Abstract | Publisher Full Text
2. Couzin J, Kaiser J: Genome-wide association. Closing the net on common disease genes.
Science 2007, 316:820-2. PubMed Abstract | Publisher Full Text
3. Hirschhorn J: Genome-wide association studies for common diseases and complex traits.
Nature reviews Genetics 2005, 6:95-108. PubMed Abstract | Publisher Full Text
4. Donnelly P: Progress and challenges in genome-wide association studies in humans.
Nature 2008, 456:728-31. PubMed Abstract | Publisher Full Text
5. Manolio TA, et al.: Finding the missing heritability of complex diseases.
Nature 2009, 461:747-53. PubMed Abstract | Publisher Full Text | PubMed Central Full Text
6. The Wellcome Trust Case Control Consortium: Genome-wide association study of 14,000 cases of seven common diseases and 3,000 shared controls.
Nature 2007, 447:661-78. PubMed Abstract | Publisher Full Text | PubMed Central Full Text
7. Wang WY, et al.: Genome-wide association studies: theoretical and practical concerns.
Nat Rev Genet 2005, 6:109-18. PubMed Abstract | Publisher Full Text
8. Hardy J, Singleton A: Genomewide association studies and human disease.
N Engl J Med 2009, 360:1759-68. PubMed Abstract | Publisher Full Text
9. Ku CS, et al.: The pursuit of genome-wide association studies: where are we now?
Journal of Human Genetics 2010, 55:195-206. PubMed Abstract | Publisher Full Text
10. Mohlke KL, et al.: Metabolic and cardiovascular traits: an abundance of recently identified common genetic variants.
Hum Mol Genet 2008, 17:R102-8. PubMed Abstract | Publisher Full Text | PubMed Central Full Text
11. Kathiresan S, et al.: Polymorphisms associated with cholesterol and risk of cardiovascular events.
N Engl J Med 2008, 358:1240-9. PubMed Abstract | Publisher Full Text
12. Samani NJ, et al.: Genomewide association analysis of coronary artery disease.
N Engl J Med 2007, 357:443-53. PubMed Abstract | Publisher Full Text | PubMed Central Full Text
13. McPherson R, et al.: A common allele on chromosome 9 associated with coronary heart disease.
Science 2007, 316:1488-91. PubMed Abstract | Publisher Full Text | PubMed Central Full Text
14. Tsai FJ, et al.: A genome-wide association study identifies susceptibility variants for type 2 diabetes in Han Chinese.
15. Scott LJ, et al.: A genome-wide association study of type 2 diabetes in Finns detects multiple susceptibility variants.
Science 2007, 316:1341-5. PubMed Abstract | Publisher Full Text
16. Paterson AD, et al.: A genome-wide association study identifies a novel major locus for glycemic control in type 1 diabetes, as measured by both A1C and glucose.
Diabetes 2010, 59:539-49. PubMed Abstract | Publisher Full Text | PubMed Central Full Text
17. Saxena R, et al.: Genome-wide association analysis identifies loci for type 2 diabetes and triglyceride levels.
Science 2007, 316:1331-6. PubMed Abstract | Publisher Full Text
18. Freedman BI, et al.: Differential effects of MYH9 and APOL1 risk variants on FRMD3 association with diabetic ESRD in African Americans.
19. Harley JB, et al.: Genome-wide association scan in women with systemic lupus erythematosus identifies susceptibility variants in ITGAM, PXK, KIAA1542 and other loci.
Nat Genet 2008, 40:204-10. PubMed Abstract | Publisher Full Text
20. Harley IT, et al.: Genetic susceptibility to SLE: new insights from fine mapping and genome-wide association studies.
Nat Rev Genet 2009, 10:285-90. PubMed Abstract | Publisher Full Text | PubMed Central Full Text
21. Crow MK: Collaboration, genetic associations, and lupus erythematosus.
N Engl J Med 2008, 358:956-61. PubMed Abstract | Publisher Full Text
22. Lettre G, Rioux JD: Autoimmune diseases: insights from genome-wide association studies.
Hum Mol Genet 2008, 17:R116-21. PubMed Abstract | Publisher Full Text | PubMed Central Full Text
23. Hussman JP, et al.: A noise-reduction GWAS analysis implicates altered regulation of neurite outgrowth and guidance in autism.
Mol Autism 2011, 2:1. PubMed Abstract | BioMed Central Full Text | PubMed Central Full Text
24. Easton DF, et al.: Genome-wide association study identifies novel breast cancer susceptibility loci.
Nature 2007, 447:1087-93. PubMed Abstract | Publisher Full Text | PubMed Central Full Text
25. Easton DF, Eeles RA: Genome-wide association studies in cancer.
Human Molecular Genetics 2008, 17:R109-R115. PubMed Abstract | Publisher Full Text
26. Hunter DJ, et al.: A genome-wide association study identifies alleles in FGFR2 associated with risk of sporadic postmenopausal breast cancer.
Nat Genet 2007, 39:870-4. PubMed Abstract | Publisher Full Text
27. Amundadottir L, et al.: Genome-wide association study identifies variants in the ABO locus associated with susceptibility to pancreatic cancer.
Nat Genet 2009, 41:986-90. PubMed Abstract | Publisher Full Text | PubMed Central Full Text
28. Maher B: Personal genomes: The case of the missing heritability.
Nature 2008, 456:18-21. PubMed Abstract | Publisher Full Text
29. Cordell H: Detecting gene-gene interactions that underlie human diseases.
Nature reviews Genetics 2009, 10:392-404. PubMed Abstract | Publisher Full Text | PubMed Central Full Text
30. Moore JH, et al.: Bioinformatics challenges for genome-wide association studies.
Bioinformatics 2010, 26:445-55. PubMed Abstract | Publisher Full Text | PubMed Central Full Text
31. Musani SK, et al.: Detection of gene × gene interactions in genome-wide association studies of human population data.
Hum Hered 2007, 63:67-84. PubMed Abstract | Publisher Full Text
32. Zhang Y, Liu JS: Bayesian inference of epistatic interactions in case-control studies.
Nat Genet 2007, 39:1167-73. PubMed Abstract | Publisher Full Text
33. Ritchie MD, et al.: Multifactor-dimensionality reduction reveals high-order interactions among estrogen-metabolism genes in sporadic breast cancer.
Am J Hum Genet 2001, 69:138-47. PubMed Abstract | Publisher Full Text | PubMed Central Full Text
34. Moore JH, et al.: A flexible computational framework for detecting, characterizing, and interpreting statistical patterns of epistasis in genetic studies of human disease susceptibility.
J Theor Biol 2006, 241:252-61. PubMed Abstract | Publisher Full Text
35. Kooperberg C, Ruczinski I: Identifying interacting SNPs using Monte Carlo logic regression.
Genet Epidemiol 2005, 28:157-70. PubMed Abstract | Publisher Full Text
36. Park MY, Hastie T: Penalized logistic regression for detecting gene interactions.
Biostatistics 2008, 9:30-50. PubMed Abstract | Publisher Full Text
37. Yu G, et al.: Detection of complex interactions of multi-locus SNPs. Presented at IEEE Machine Learning for Signal Processing, Cancun, Mexico; 2008.
38. Yang C, et al.: SNPHarvester: a filtering-based approach for detecting epistatic interactions in genome-wide association studies.
Bioinformatics 2009, 25:504-11. PubMed Abstract | Publisher Full Text
39. Miller DJ, et al.: An Algorithm for Learning Maximum Entropy Probability Models of Disease Risk That Efficiently Searches and Sparingly Encodes Multilocus Genomic Interactions.
Bioinformatics 2009, 25:2478-2485. PubMed Abstract | Publisher Full Text | PubMed Central Full Text
40. Marchini J, et al.: Genome-wide strategies for detecting multiple loci that influence complex diseases.
Nature Genetics 2005, 37:413-417. PubMed Abstract | Publisher Full Text
41. Schwender H, Ickstadt K: Identification of SNP interactions using logic regression.
Biostatistics 2008, 9:187-198. PubMed Abstract | Publisher Full Text
42. Yang C, et al.: Identifying main effects and epistatic interactions from large-scale SNP data via adaptive group Lasso.
BMC Bioinformatics 2010, 11(Suppl 1):S18. PubMed Abstract | BioMed Central Full Text
43. Breiman L: Random forests.
Machine Learning 2001, 45:5-32. Publisher Full Text
44. Wang X, et al.: The meaning of interaction.
Human Heredity 2010, 70:269-277. PubMed Abstract | Publisher Full Text | PubMed Central Full Text
45. Li W, Reich J: A complete enumeration and classification of two-locus disease models.
Hum Hered 2000, 50:334-49. PubMed Abstract | Publisher Full Text
46. Szymczak S, et al.: Machine learning in genome-wide association studies.
Genet Epidemiol 2009, 33(Suppl 1):S51-7. PubMed Abstract | Publisher Full Text
47. Garcia-Magarinos M, et al.: Evaluating the ability of tree-based methods and logistic regression for the detection of SNP-SNP interaction.
Ann Hum Genet 2009, 73:360-9. PubMed Abstract | Publisher Full Text
48. Motsinger-Reif AA, et al.: A comparison of analytical methods for genetic association studies.
Genet Epidemiol 2008, 32:767-78. PubMed Abstract | Publisher Full Text
49. Carlborg O, Haley C: Epistasis: too often neglected in complex trait studies?
Nature Reviews Genetics 2004, 5:618-625. PubMed Abstract | Publisher Full Text
50. Jakulin A, Bratko I: Testing the Significance of Attribute Interactions. Presented at the 21st International Conference on Machine Learning (ICML-2004), Banff, Canada; 2004.
51. Jung HY, et al.: New methods for imputation of missing genotype using linkage disequilibrium and haplotype information.
Information Sciences 2007, 177:804-814. Publisher Full Text
52. Chen L, et al.: A Ground Truth Based Comparative Study on Detecting Epistatic SNPs.
Presented at Proc. IEEE Int'l Conf. on Bioinformatics & Biomedicine, Washington, D.C., USA; 2009.
53. Neel J: Diabetes mellitus: a "thrifty" genotype rendered detrimental by "progress".
Am J Hum Genet 1962, 14:353-362. PubMed Abstract | PubMed Central Full Text
54. Wolf J, et al.: Epistasis and the Evolutionary Process. New York: Oxford University Press Inc.; 2000.
55. Wright FA, et al.: Simulating association studies: a data-based resampling method for candidate regions or whole genome scans.
Bioinformatics 2007, 23:2581-8. PubMed Abstract | Publisher Full Text
56. Yuan X, et al.: Simulating linkage disequilibrium structures in a human population for SNP association studies.
Biochem Genet 2011, 49:395-409. PubMed Abstract | Publisher Full Text
57. Cordell H: Epistasis: what it means, what it doesn't mean, and statistical methods to detect it in humans.
Human Molecular Genetics 2002, 11:2463-2468. PubMed Abstract | Publisher Full Text
Expectation of product of Gaussian random vectors
Say we have two multivariate Gaussian random vectors $p(x_1) = N(0,\Sigma_1), p(x_2) = N(0,\Sigma_2)$, is there a well known result for the expectation of their product $E[x_1x_2^T]$ (matrix result)
without assuming independence?
What is known about these rv's? Do they have a joint density function? Are they jointly normal? Also, in view of the notation and the tag, are these rv's multivariate normal? – Shai Covo Dec 12 '10 at 13:12
Indeed, I neglected to mention they are both jointly normal and multivariate normal. – asd123 Dec 12 '10 at 14:34
Isn't this just more or less the matrix of covariances between the random variables in $x_1$ and the random variables in $x_2$? – Deane Yang Mar 26 '11 at 0:54
1 Answer
Without assuming anything about the random vectors you can only get upper and lower bounds. For a complete description of the matrix $$ \mathbb{E}[x_1x_2^{T}] $$ you need to know the correlation between the $i$-th entry of the vector $x_1$ and the $j$-th entry of the vector $x_2$ for all possible indexes $i$ and $j$. I hope it helps!
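As a quick numerical illustration of the point raised in the comments (ours, assuming NumPy): for jointly normal, zero-mean vectors, $E[x_1 x_2^T]$ is exactly the matrix of cross-covariances, i.e. the off-diagonal block of the joint covariance of the stacked vector.

```python
import numpy as np

rng = np.random.default_rng(0)
# Joint covariance of the stacked zero-mean vector (x1, x2): x1 has 2
# coordinates, x2 has 1, so the cross-covariance is the top-right 2x1 block.
Sigma = np.array([[2.0, 0.5, 0.3],
                  [0.5, 1.0, 0.2],
                  [0.3, 0.2, 1.5]])
z = rng.multivariate_normal(np.zeros(3), Sigma, size=200_000)
x1, x2 = z[:, :2], z[:, 2:]
emp = x1.T @ x2 / x1.shape[0]   # Monte Carlo estimate of E[x1 x2^T]
cross_cov = Sigma[:2, 2:]       # theoretical value [[0.3], [0.2]]
```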
"If it can be written, or thought, it can be filmed..." STANLEY KUBRIC
YIQ
YIQ is a color space, formerly used in the NTSC television standard. I stands for in-phase, while Q stands for quadrature, referring to the components used in quadrature amplitude modulation. NTSC
now uses the YUV color space, which is also used by other systems such as PAL.
The Y component represents the luma information, and is the only component used by black-and-white television receivers. I and Q represent the chrominance information. In YUV, the U and V components
can be thought of as X and Y coordinates within the colorspace. I and Q can be thought of as a second pair of axes on the same graph, rotated 33°; therefore IQ and UV represent different coordinate
systems on the same plane.
The YIQ system is intended to take advantage of human color-response characteristics. The eye is more sensitive to changes in the orange-blue (I) range than in the purple-green range (Q) — therefore
less bandwidth is required for Q than for I. Broadcast NTSC limits I to 1.3 MHz and Q to 0.4 MHz. I and Q are frequency interleaved into the 4 MHz Y signal, which keeps the bandwidth of the overall
signal down to 4.2 MHz. In YUV systems, since U and V both contain information in the orange-blue range, both components must be given the same amount of bandwidth as I to achieve similar color fidelity.
Very few television sets perform true I and Q decoding, due to the high costs of such an implementation. The Rockwell Modular Digital Radio (MDR) was one, which in 1997 could operate in
frame-at-a-time mode with a PC or in realtime with the Fast IQ Processor (FIQP).
Image Processing
The YIQ representation is sometimes employed in color image processing transformations. For example, applying a histogram equalization directly to the channels in an RGB image would alter the colors
in relation to one another, resulting in an image with colors that no longer make sense. Instead, the histogram equalization is applied to the Y channel of the YIQ representation of the image, which
only normalizes the brightness levels of the image.
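A minimal sketch of that idea using the standard library's YIQ conversion (note that colorsys uses slightly rounded FCC coefficients, not exactly the matrices given below). A real histogram equalization would remap Y by its cumulative histogram; the simple rescale here only illustrates touching luma while leaving chrominance alone:

```python
import colorsys

# Toy "brightness-only" adjustment: convert RGB pixels to YIQ, rescale
# the Y (luma) channel alone, and convert back, leaving I and Q -- the
# chrominance -- untouched. Pixel values are made up.
pixels = [(0.2, 0.1, 0.05), (0.4, 0.3, 0.2), (0.6, 0.5, 0.4)]

yiq = [colorsys.rgb_to_yiq(*p) for p in pixels]
y_max = max(y for y, i, q in yiq)
stretched = [colorsys.yiq_to_rgb(y / y_max, i, q) for y, i, q in yiq]
```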
These formulae approximate the conversion between the RGB color space and YIQ.
$R, G, B, Y \in \left[ 0, 1 \right]$
$I \in \left[-0.595716 , 0.595716 \right]$
$Q \in \left[ -0.522591, 0.522591 \right]$
From RGB to YIQ:
Y = 0.299R + 0.587G + 0.114B
I = 0.595716R - 0.274453G - 0.321263B
Q = 0.211456R - 0.522591G + 0.311135B
From YIQ to RGB:
R = Y + 0.956295719758948I + 0.621024416465261Q
G = Y - 0.272122099318510I - 0.647380596825695Q
B = Y - 1.106989016736491I + 1.704614998364648Q
Or, using a matrix representation:
$\begin{bmatrix} Y \\ I \\ Q \end{bmatrix} = \begin{bmatrix} 0.299 & 0.587 & 0.114 \\ 0.595716 & -0.274453 & -0.321263 \\ 0.211456 & -0.522591 & 0.311135 \end{bmatrix} \begin{bmatrix} R \\ G \\ B \end{bmatrix}$
$\begin{bmatrix} R \\ G \\ B \end{bmatrix} = \begin{bmatrix} 1 & 0.956295719758948 & 0.621024416465261 \\ 1 & -0.272122099318510 & -0.647380596825695 \\ 1 & -1.106989016736491 & 1.704614998364648 \end{bmatrix} \begin{bmatrix} Y \\ I \\ Q \end{bmatrix}$
Two things to note regarding the RGB transformation matrix:
• The top row is identical to that of the YUV color space
• If $\begin{bmatrix} R & G & B \end{bmatrix}^{T} = \begin{bmatrix} 1 & 1 & 1 \end{bmatrix}$ then $\begin{bmatrix} Y & I & Q \end{bmatrix}^{T} = \begin{bmatrix} 1 & 0 & 0 \end{bmatrix}$. In other
words, the top row coefficients sum to unity and the last two rows sum to zero.
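Both observations are easy to verify numerically; a small sketch in NumPy (the function names are mine, not from the article):

```python
import numpy as np

# Forward RGB -> YIQ matrix from the article (rows: Y, I, Q).
RGB_TO_YIQ = np.array([
    [0.299,     0.587,     0.114],
    [0.595716, -0.274453, -0.321263],
    [0.211456, -0.522591,  0.311135],
])

def rgb_to_yiq(rgb):
    return RGB_TO_YIQ @ np.asarray(rgb, dtype=float)

def yiq_to_rgb(yiq):
    # Numerically inverting the forward matrix reproduces the article's
    # YIQ -> RGB coefficients up to rounding.
    return np.linalg.solve(RGB_TO_YIQ, np.asarray(yiq, dtype=float))

white = rgb_to_yiq([1.0, 1.0, 1.0])  # -> approximately (1, 0, 0)
```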
See also | {"url":"http://www.encyclopediapro.com/mw/YIQ","timestamp":"2014-04-21T04:31:53Z","content_type":null,"content_length":"19930","record_id":"<urn:uuid:64694ddc-6393-48b3-a6fb-6c1a84ec8830>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00107-ip-10-147-4-33.ec2.internal.warc.gz"} |
[SciPy-User] Problems with 2D interpolation of data on polar grid
Kyle Parfrey kyle@astro.columbia....
Mon Aug 30 11:11:31 CDT 2010
Thanks everyone for all your replies, especially to Denis for those
detailed notes on stackoverflow.
I've managed to get it to work well with
interpolate.RectBivariateSpline (as suggested above) on a rectangular
(r, theta) grid, with quintic splines in theta and cubic in r, and no
smoothing (s=0). I don't know why that particular combination works so
well (quintic in both directions requires some smoothing, and the
amount needed depends on the dataset, not just the grid, which is
really bad for what I'm trying to do), I think I was being careful
enough to always convert to Cartesian coordinates (both X, Y and Xnew,
Ynew) etc, but the routines just seem to do a much better job on
rectangular grids.
If I run into problems with this later I'll try griddata() as described.
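Concretely, the working combination described above looks roughly like this (synthetic data; the grid sizes and evaluation point are arbitrary):

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

# Made-up data on a rectangular (r, theta) grid; real data would come
# from the polar-grid solver discussed in the thread.
r = np.linspace(1.0, 2.0, 30)
theta = np.linspace(0.0, np.pi, 40)
R, T = np.meshgrid(r, theta, indexing="ij")
Z = np.sin(T) / R  # sample field values, shape (len(r), len(theta))

# Cubic in r (kx=3), quintic in theta (ky=5), no smoothing (s=0),
# matching the combination reported to work well.
spline = RectBivariateSpline(r, theta, Z, kx=3, ky=5, s=0)

# Evaluate at a new (r, theta) point; convert to Cartesian only at the end.
r_new, t_new = 1.5, np.pi / 3
z_new = spline(r_new, t_new)[0, 0]
x_new, y_new = r_new * np.cos(t_new), r_new * np.sin(t_new)
```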
Thanks again,
On 30 August 2010 10:48, denis <denis-bz-gg@t-online.de> wrote:
> Kyle,
> it looks as though there's a mixup here between Cartesion X,Y and
> polar Xnew,Ynew.
> griddata() will catch this to some extent with masked array Znew
> but the the Fitpack routines will extrapolate *without warning*.
> See the too-long notes under
> http://stackoverflow.com/questions/3526514/problem-with-2d-interpolation-in-scipy-non-rectangular-grid
> cheers
> -- denis
> _______________________________________________
> SciPy-User mailing list
> SciPy-User@scipy.org
> http://mail.scipy.org/mailman/listinfo/scipy-user
More information about the SciPy-User mailing list | {"url":"http://mail.scipy.org/pipermail/scipy-user/2010-August/026546.html","timestamp":"2014-04-19T17:28:45Z","content_type":null,"content_length":"4633","record_id":"<urn:uuid:75fcd2e9-031a-4f45-b75a-f4f3aceff69e>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00474-ip-10-147-4-33.ec2.internal.warc.gz"} |
Instruments for Measuring Angles
Date: 09/28/2001 at 00:01:17
From: Gentry Wesner
Subject: I need the name of 3 to 5 devices used to measure angles
I need the name, picture, or description of 5 devices used to measure
angles for a geometry project I am doing.
Thank you very much.
Gentry Weser
Date: 09/28/2001 at 17:05:37
From: Doctor Rick
Subject: Re: I need the name of 3 to 5 devices used to measure angles
Hi, Gentry.
I have some words for you. Look them up in a dictionary for starters,
then in an encyclopedia or on the Web. Find what kind of angles each
instrument measures, and how it works.
magnetic compass
- Doctor Rick, The Math Forum | {"url":"http://mathforum.org/library/drmath/view/55359.html","timestamp":"2014-04-17T05:40:08Z","content_type":null,"content_length":"5939","record_id":"<urn:uuid:a79f5190-fdf1-4157-b101-ea82542825db>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00648-ip-10-147-4-33.ec2.internal.warc.gz"} |
Zentralblatt MATH
Publications of (and about) Paul Erdös
Zbl.No: 774.60036
Autor: Erdös, Paul; Révész, P.
Title: Three problems on the random walk in Z^d. (In English)
Source: Stud. Sci. Math. Hung. 26, No.2/3, 309-320 (1991).
Review: A simple symmetric random walk in Z^d is considered. The following three functionals are studied and their asymptotic behaviour is analysed:
R[d](n): Largest integer for which there exists a random variable u such that all the points in the ball of radius R[d](n) centered at u are visited by time n (d \geq 3).
\nu[d](n): Time needed, after time n, to visit a point not previously visited.
f[n]: Cardinality of the set of ``favourite values'', i.e. sites most often visited by time n.
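As a toy illustration of the last functional (a one-dimensional sketch only, whereas the paper concerns walks on Z^d):

```python
import random

random.seed(1)  # reproducible sample path

# "Favourite" sites of a simple symmetric random walk: the sites visited
# the maximal number of times up to step n; f_n is their cardinality.
def favourite_sites(n):
    pos, visits = 0, {0: 1}
    for _ in range(n):
        pos += random.choice((-1, 1))
        visits[pos] = visits.get(pos, 0) + 1
    top = max(visits.values())
    return sorted(site for site, count in visits.items() if count == top)

f_n = len(favourite_sites(10_000))  # cardinality of the favourite-site set
```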
Reviewer: B.Bassan (Milano)
Classif.: * 60F15 Strong limit theorems
60G50 Sums of independent random variables
00A07 Problem books
60J15 Random walk
Keywords: symmetric random walk
© European Mathematical Society & FIZ Karlsruhe & Springer-Verlag
| {"url":"http://www.emis.de/classics/Erdos/cit/77460036.htm","timestamp":"2014-04-18T13:53:57Z","content_type":null,"content_length":"3938","record_id":"<urn:uuid:453038ec-d924-49c9-88db-719b50da9eb7>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00265-ip-10-147-4-33.ec2.internal.warc.gz"}
Subsequence of a Cauchy Sequence is Cauchy
September 26th 2010, 08:23 PM
Show that a subsequence of a Cauchy sequence is a Cauchy sequence. The catch is that you can't use the fact that every Cauchy subsequence is convergent.
I really don't know where to begin without assuming what the problem says I can't assume.
Any direction would be appreciated.
September 26th 2010, 09:30 PM
Show that a subsequence of a Cauchy sequence is a Cauchy sequence. The catch is that you can't use the fact that every Cauchy subsequence is convergent.
I really don't know where to begin without assuming what the problem says I can't assume.
Any direction would be appreciated.
Let $\{x_n\}\,,\,\,\{x_{n_k}\}$ be the Cauchy seq. and one of its subsequences, and let $\epsilon>0$
1) There exists $M_\epsilon\in\mathbb{N}\,\,s.t.\,\,n,m>M_\epsilon\Longrightarrow |x_n-x_m|<\epsilon$
2) By the very definition of "infinite subsequence", there exist $K\in\mathbb{N}\,\,s.t.\,\,\forall k>K\,,\,n_k>M_\epsilon$
Now end the proof.
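A hedged sketch of the remaining step, in the same notation:

$$k,l>K \Longrightarrow n_k,n_l>M_\epsilon \Longrightarrow |x_{n_k}-x_{n_l}|<\epsilon,$$

so $\{x_{n_k}\}$ satisfies the Cauchy condition for the given $\epsilon$ with threshold $K$.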
September 27th 2010, 10:04 AM
Thanks. Doesn't that nearly end the proof? Not that I'm mad at you or anything for taking me that far :P
I can't think of anything after that but then going right to $|x_{n_k}-x_m|\leq \epsilon$ which completes the proof.
Is this a valid step or do I have more work to do?
September 27th 2010, 10:33 AM
The ‘trick’ comes from the nature of subsequences and the definition of Cauchy Sequences.
If $\left(x_{n_k}\right)$ is a subsequence of $\left(x_n\right)$ then $\left(n_k\right)\ge k$.
So if $k~\&~j~\ge N$ then $n_k~\&~n_j~\ge N$.
September 27th 2010, 01:42 PM
Two questions though. Doesn't that statement near complete the proof and also can $k~\&~j~\ge N$ be the same $N$ as in $n_k~\&~n_j~\ge N$?
It would seem to me that the subsub scripts would need to be greater then some other number. Maybe I'm being too nitpicky but just to be sure I'm getting what you two are saying I'm going to
write it all out.
We start with the known that ${s_n}$ is convergent and therefore a Cauchy sequence. Then we know from that that there exists some $m,n \ge N$ such that $|s_n-s_m|\le \epsilon$.
We then choose some $j,k\le M$ such that the subsequences $s_{n_j}$ and $s_{m_k}$ of $\{s_n\}$ satisfy $n_j, n_k\ge N$
We're then want to prove that $s_{n_j}-s_{m_k} \le \epsilon$
since ${s_n}$ is convergent it is bounded and therefore so is $s_{n_j}$ and $s_{m_k}$.
If we choose a large enough $N$ the sequences will be bounded into smaller and smaller intervals because they are bound by the sequences bounds. The process of tightening this window is by
choosing larger values of N and therefore forcing larger values of M. This is exactly the Cauchy criterion therefore the subsequence of a Cauchy sequence is a Cauchy sequence.
Does that fly?
September 27th 2010, 01:58 PM
The point is: that for Cauchy Sequences it is true that for each $\varepsilon > 0$ there is one $N$ that works for $k\ge N~\&~j\ge N$ then $\left| {x_k - x_j } \right| < \varepsilon$.
I am puzzled by this question.
If the sequence $\left( {x_n } \right)$ converges then each subsequence $\left( {x_{n_k} } \right)$ converges to the same limit.
Moreover, any convergent sequence is a Cauchy Sequence.
So any subsequence of a Cauchy Sequence must be a Cauchy Sequence.
September 27th 2010, 02:24 PM
September 27th 2010, 02:51 PM
Nevertheless, do you understand why for each $\varepsilon > 0$ one $N$ works?
September 27th 2010, 04:50 PM
Yes I do now. Thank you all for all your help.
September 30th 2010, 01:29 AM | {"url":"http://mathhelpforum.com/differential-geometry/157539-subsequence-cauchy-sequence-cauchy-print.html","timestamp":"2014-04-17T21:56:51Z","content_type":null,"content_length":"17980","record_id":"<urn:uuid:dcb86cdd-cccb-474c-a52a-62c28eeab16a>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00575-ip-10-147-4-33.ec2.internal.warc.gz"} |
Cutler Bay, FL SAT Math Tutor
Find a Cutler Bay, FL SAT Math Tutor
...I look forward to meeting you soon. ElenaI have been running competitively since I was 8 years old and have always loved the feeling of the race. After being high school team captain and MVP
of both the Cross Country and Track and Field teams, I went on to compete for Juniata College as an NCAA DIII athlete where I became named MVP, All-Conference, and All-Region.
40 Subjects: including SAT math, reading, writing, biology
...I think this was my favorite math class, because it was a good challenge. Also, I helped out a lot of my classmates that had difficulties in this subject. In 11th grade, I took AP Calculus and
Honors Precalculus, because my teacher thought I could do it.
11 Subjects: including SAT math, chemistry, physics, calculus
...Painting is not only using brushes to create beautiful images, but another way of showing beauty of colors and your inside thoughts. I've been helped with the Year Book of University of Miami
for a while. Mostly what I did were page designing (spreads) and photoshopping some pictures.
9 Subjects: including SAT math, Chinese, painting, Adobe Photoshop
...We will learn terms like circumference and area of a circle; also, area of a triangle, volume of a cylinder, sphere, and a pyramid. Trigonometric functions and angle derivation will be
explained and applied. Geometric proofs are an important aspect of geometry and so these will be extensively explained.
46 Subjects: including SAT math, chemistry, reading, English
...I scored 5s on both the Classical Mechanics and Electromagnetism AP Physics examinations. The Mechanical and Aerospace Engineering branches are at bottom forms of applied science, and so the
basics of physics theory becomes second nature to a working professional. I generally have an intuitive feel for physics problems and can explain the fundamentals very well.
13 Subjects: including SAT math, calculus, physics, geometry
| {"url":"http://www.purplemath.com/Cutler_Bay_FL_SAT_Math_tutors.php","timestamp":"2014-04-19T09:37:38Z","content_type":null,"content_length":"24292","record_id":"<urn:uuid:7238a075-9ddc-4bfc-ad23-bfe3496e2817>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00481-ip-10-147-4-33.ec2.internal.warc.gz"}
[plt-scheme] Re: to define, or to let
From: Bill Richter (richter at math.northwestern.edu)
Date: Fri Apr 23 22:23:47 EDT 2004
Anton van Straaten <anton at appsolutions.com> responds to me:
> But HtDP doesn't have enough mutation recipes?
It's primarily teaching pure functional programming, without
mutation, for most of its chapters. Mutation is an advanced topic,
left for the end of the book. So no, in-depth treatment of
mutation isn't there. Remember, this is a book that's used at the
introductory level - high school, or college CS intro. It's a
foundation, a beginning - a good one, but far from all there is to
Thanks, Anton. Well, that's a big purpose of PLT, I think, to teach
their book, so maybe the points you're raising are too advanced.
> If you're talking about LC_v reduction, then we've gone over this:
> argument evaluation order is arbitrary in LC_v.
> I'm trying to get to learn some Math, or at least appreciate it.
Don't worry Bill, I'm sure if you apply yourself diligently, you'll
eventually learn some math. Appreciating it is even easier. :-)
Hah, that's great!!!
> You're talking about a theorem that I believe is true but don't
> know how to prove: a version of mono's Thm 3.1.4 using the
> "right-most" standard reduction function. So in particular, if
> an LC_v expression reduces to a value, then the algorithm
> which reduces the right-most beta_v redex will always
> terminate in a value. Do you know a reference for this?
Take a look at the evaluation contexts in definition 3.1.1. What
I'm talking about involves switching (o V ... V E M ... M) around
so that it reads (o M ... M E V ... V). Then in the proof of Lemma
3.1.2, for example, you can replace "leftmost argument expression"
with "rightmost argument expression", and the same proof still
applies - follow it and see. If you see a problem with doing this
here or anywhere else, let me know and I'll address it. It's
really pretty straightforward.
Wait, if you're telling me that in the 10+ page proof of Thm 3.1.4
that you can switch left-most to right-most... then you must have
read the proof. If so, congratulations! You're actually learning
some math, and no doubt appreciating it! that's not an easy proof.
Here's something I wrote MF on 31 Dec 1998, and he agreed it was true:
here's a simple proof that the Standard Reduction Thm 3.1.4 follows
from Lemmas 3.1.11 & 3.1.12, avoiding Plotkin's Theorem 3.1.8
altogether. [...]
I'll see if I can find some references over the weekend. There are
certainly papers out there that deal with order of evaluation
issues in LC, with and without mutation.
Cool. What I really want is papers that seriously use some LC or LC_v
for some computer-ological purpose, such as defining the language.
Just because somebody (say Steele's Rabbit) does some beta-reductions
doesn't mean there's any heavy LC.
You can take any reduction that reduces the leftmost redex in an
application, and switch it so you reduce the rightmost redex in
that application instead, and nothing changes, and you can prove
I can prove it? Maybe you're not saying you read the proofs above,
but you're advising me how to right-most 'em. Good advice, thanks!
> he's recommending applicative order, which is left->right?
Applicative order is where arguments are evaluated before applying
a function to them.
Ah, so in LC/LC_v, with only one argument, that's left-most.
> Remind what normal order is.
What LC books have you been reading? Normal order is the
leftmost-outermost reduction of standard LC. It's the order that's
guaranteed to reach a normal form if there is one.
You caught me! Here's the 1st sentence of my paper on my web page:
Barendregt~\cite[Thm.~13.2.2]{Barendregt} proves a {\em Normalization
Theorem} for the $\l$ Calculus: if a $\L$ expression has a $\b$-nf,
then the algorithm of reducing the leftmost redex eventually yields
the $\b$-nf.
Why do you keep saying leftmost-outermost when I say leftmost?
> So maybe you're right about the LC/LC_v focus of Scheme.
Some other good papers on the subject are Sussman & Steele's
"Lambda Papers".
Thanks, I didn't get much out of them the last time I looked.
Bill> Now Anton, I speculated above about folks becoming "thoroughly
Bill> ingrained in Mzscheme's l->r order." This is possible, right?
The implications of what you speculated, relative to what Joe was
saying, are not possible, no.
no no no no no. I mean this: You're "thoroughly ingrained" in various
C++ techniques, right? I don't mean that you could never make a
mistake using them, but you're fluent in them.
Someone "thoroughly ingrained in l->r order" cannot infer where
those annotations are *not* necessary,
> I've talked about those, but I didn't know they were called `order of
> eval bugs'. I'd call them `unintended side effect interaction' bugs.
OK, but the latter is the category of bugs we're concerned with,
when you talk about eliminating constructs intended for
communicating order independence.
I'll repeat my point. You can have an algorithm for the compiler. You
can use left->right evaluation order. That doesn't prevent you from
having constructs intended to communicate evaluation order independence,
regardless of what their evaluation order actually is.
I'm not convinced that it's good for a language to have such
non-enforced constructs. I mean, it's good to be able to convey
information to yourself and the other coders. But if this information
turns out to be false (perhaps because of modifications made by
someone else), then we have serious trouble.
It also doesn't prevent you from using tools, like BrBackwards which I
posted earlier, to test that your programs don't misbehave when evaluation
order is switched. Empirical experience shows that such bugs show up
fairly quickly, even though they're tough to detect statically.
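A toy illustration, in Python (which does fix left-to-right argument evaluation), of the kind of order-dependent side-effect bug being discussed:

```python
# Two side-effecting argument expressions: the resulting pair depends on
# which call runs first. Python's spec makes this deterministic; a
# language with unspecified order would permit either result.
tokens = iter(["a", "b"])

def next_token():
    return next(tokens)

pair = (next_token(), next_token())  # left-to-right: ("a", "b")
```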
Me, I don't know, but I really doubt MF would be satisfied by this.
Posted on the users mailing list. | {"url":"http://lists.racket-lang.org/users/archive/2004-April/005330.html","timestamp":"2014-04-20T10:55:24Z","content_type":null,"content_length":"11456","record_id":"<urn:uuid:5a670062-6c9f-46de-a914-455b38d8a600>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00492-ip-10-147-4-33.ec2.internal.warc.gz"} |
A 16 μF Capacitor Is Connected To The Terminals ... | Chegg.com
A 16 μF capacitor is connected to the terminals of a 27.8 Hz generator whose rms voltage is 193 V. A.) Find the capacitive reactance. Answer in units of Ω. B.) Find the rms current in the circuit,
Answer in units of A. | {"url":"http://www.chegg.com/homework-help/questions-and-answers/16-f-capacitor-connected-theterminals-278-hz-generator-whose-rms-volatge-193-v--find-capac-q209045","timestamp":"2014-04-20T04:04:03Z","content_type":null,"content_length":"20582","record_id":"<urn:uuid:7e54657b-bad8-4e59-b4e1-55206f68f2a5>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00030-ip-10-147-4-33.ec2.internal.warc.gz"} |
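A numeric sketch of both parts, using the standard relations X_C = 1/(2*pi*f*C) and I_rms = V_rms/X_C:

```python
import math

# Given values from the problem statement.
C = 16e-6      # capacitance, farads
f = 27.8       # frequency, hertz
V_rms = 193.0  # rms voltage, volts

# A.) Capacitive reactance X_C = 1 / (2 * pi * f * C).
X_C = 1.0 / (2.0 * math.pi * f * C)

# B.) rms current I_rms = V_rms / X_C.
I_rms = V_rms / X_C
```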
Function assigning each subset of R sum of its elements
January 11th 2013, 01:58 AM #1
Junior Member
Nov 2012
Function assigning each subset of R sum of its elements
Could you give me a hint how to solve this problem?
Let $D:= \left\{E \subset \mathbb{R} | 0< card(E)< + \infty \right\}$.
$\phi : D \ni E \rightarrow \sum_{x \in E} x \in \mathbb{R}$
Check if $\phi$ is injective or surjective.
Last edited by wilhelm; January 11th 2013 at 02:36 AM.
Re: Function assigning each subset of R sum of its elements
There are two mistakes here. First, D is the collection of finite nonempty subsets of $\mathbb{R}$, so $\mathrm{card}(D)=\mathrm{card}(\mathbb{R})$. Second, pigeonhole principle is not valid for
infinite sets. You may have an injection from a infinite set into its proper subset; in fact, this is one of the definitions of an infinite set. The proof of the pigeonhole principle proceeds by
induction on the cardinality of the domain, so it only applies when this cardinality is a natural number. You need to use transfinite induction to prove properties of infinite numbers (ordinals).
It is probably instructive to see where the proof of the principle breaks down when one tries to use transfinite induction instead of regular one.
Speaking about the problem, you need to have an intuition about D. Can you give examples of sets in D? Both questions are trivial once you understand what D is.
Re: Function assigning each subset of R sum of its elements
Thank you, I've already deleted that comment. Could we just say that for example $\phi (\left\{ x \right\})= \phi (\left\{ x, 0 \right\})= x$ and $\forall x \in \mathbb{R} \ \ \exists E \subset \
mathbb{R} : \phi(E)=x$ for example $E = \left\{ x \right\}$?
Or is it oversimplified?
Re: Function assigning each subset of R sum of its elements
Yes. Concerning injection, you need $x \ne 0$, but since to disprove injection you need to find just one pair of arguments mapped to the same image, such x obviously exists.
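A concrete check of the counterexample and the surjectivity witness from the posts above:

```python
# phi maps a finite set of reals to the sum of its elements.
def phi(E):
    return sum(E)

# Not injective: {x} and {x, 0} are different sets with equal image
# whenever x != 0.
assert phi({3.5}) == phi({3.5, 0.0}) == 3.5

# Surjective: every real x is hit by the singleton {x}.
assert phi({-2.0}) == -2.0
```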
Re: Function assigning each subset of R sum of its elements
Thank you. Could you maybe help me with the linear algebra problem about detA=1995, too?
Prove there exists a matrix
Last edited by wilhelm; January 11th 2013 at 03:42 AM.
Posts #2 through #5 in this thread were made on January 11th 2013 at 02:17 AM, 03:05 AM, 03:15 AM, and 03:35 AM. | {"url":"http://mathhelpforum.com/discrete-math/211151-function-assigning-each-subset-r-sum-its-elements.html","timestamp":"2014-04-16T20:23:36Z","content_type":null,"content_length":"48982","record_id":"<urn:uuid:535b36a8-450f-43da-8676-18e115fedddd>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00572-ip-10-147-4-33.ec2.internal.warc.gz"}
Level curves for Re(ζ(s)) are shown above with the solid lines; the red curve is Re(ζ(s))=0, the black curves represent values other than zero. Level curves for Im(ζ(s)) are shown above with dotted
lines; the green curve is Im(ζ(s))=0, the black curves represent values other than zero. The function ζ(s) is real on the real axis, thus Im(ζ(s))=0 there. Zeros of ζ(s) are points in the plane where
both Re(ζ(s))=0 and Im(ζ(s))=0; these are points where the red and green curves cross. You can see the trivial zeros at the negative even integers, and the first nontrivial zero at s=1/2+i*14.135...,
this is the point in the plane (1/2, 14.135...). You can also see the pole at s=1.
Argument in Color
Another possibility is to view the complex number w=ζ(s), itself a point in the plane, as a vector in polar coordinates. The angle, or argument of ζ(s) is a number between 0 and 2π for each complex
s. We can interpret this angle as a color on the color wheel, and plot a pixel with that color at the corresponding point in the domain, that is, in the s plane. In this representation, a zero of ζ
(s) is a point where all the colors come together; you can see both trivial zeros and the first three non-trivial zeros. Each of these is a simple zero (as the Riemann Hypothesis predicts); going
around the point once we see each of the colors exactly once. Observe the colors also come together at the pole s=1; but with a difference. At the pole one circles the color wheel in the opposite
The Mathematica code which produced this is
Show[Graphics[RasterArray[Table[Hue[Mod[3Pi/2 +
Arg[Zeta[sigma +I t]], 2Pi]/(2Pi)],
{t, -4.5, 30, .1}, {sigma, -11, 12, .1}]]],
AspectRatio -> Automatic]
Riemann Hypothesis Movie
If we restrict the s variable to, say, a vertical line in the s plane, the values of w=ζ(s) will also be one dimensional, a curve in the w plane which we can plot. Changing the position of the
vertical line makes the curve move. Each frame of the movie below shows a single such curve in the w plane, as the vertical line in the s plane moves from Re(s)=.05 to Re(s)=.95. The value of σ=Re(s)
is shown at the top. Observe that the curves pass through the origin only in a single frame of the movie, when Re(s)=.5. This happens because all the low lying zeros of ζ(s) are known to satisfy the
Riemann Hypothesis; they have real part equal to 1/2. In this movie we have colored each pixel again with the argument as above, but now we are looking at the range, not the domain, so colors are
constant along rays emanating from the origin.
Alternately, we can fix σ=Re(s)=1/2, and make a movie where we trace out ζ(1/2+It) for increasing values of t. The value of t=Im(s) is shown at the top. The curve passes through the origin for each
value of t such that ζ(1/2+It)=0, namely t = 14.135..., 21.022..., 25.011..., 30.425..., 32.935..., 37.586...
The reason for making the movies in color (aside from the fact that it is pretty) is that you see the color changes continuously, except when the curve passes through the origin, that is, when ζ(s)=
0. The angle at which the curve exits the origin differs by a factor of π from the angle at which it entered the origin. We can use this to get a quick-and-dirty way to see that the prime numbers p
determine the location of the zeros ρ of ζ(s). The argument of ζ(s) is the imaginary part of the logarithm log(ζ(s)). Borrowing an idea from the physicist Michael Berry, we take the logarithm of the
Euler product over primes for ζ(s) at s=1/2+It, even though the product does not converge there. This gives a sum over primes p of terms -log(1-p^{1/2+It}), each of which can in turn be expanded
using the series for log(1-z). Taking imaginary parts to get the argument, we have
arg(ζ(1/2+It)) "=" -Σ_p Σ_m sin(t m log(p))/(m p^{m/2}).
The notation "=" is to remind you that the equality is only formal, the series does not converge. None the less, the first few terms are enough to indicate the rough location of the first few zeros.
Below is shown the graph (in red) of arg(ζ(1/2+It)) on the vertical axis for t between 0 and 30 on the horizontal axis. The discontinuities at t = 14.135..., 21.022..., 25.011... show the location of
the first three zeros. The movie shows (in black) the contribution of all powers of primes below 200 in the divergent series above, with one term added in each frame. Since the finite sum is a
continuous function, there are no actual discontinuities, but it does the best it can.
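A rough numerical version of that truncated sum (primes below 200; the cap on prime powers here is my choice for the sketch, since high powers contribute negligibly):

```python
import math

def primes_up_to(n):
    sieve = [True] * (n + 1)
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    return [p for p in range(2, n + 1) if sieve[p]]

def arg_zeta_partial(t, pmax=200, mmax=8):
    """Truncation of the divergent prime sum for arg zeta(1/2 + it)."""
    total = 0.0
    for p in primes_up_to(pmax):
        for m in range(1, mmax + 1):
            total -= math.sin(t * m * math.log(p)) / (m * p ** (m / 2))
    return total

# Sampling near the first zero (t ~ 14.135) shows the sharp swing that
# imitates the discontinuity in the exact argument.
values = {t: arg_zeta_partial(t) for t in (13.5, 14.135, 15.0)}
```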
(There is a famous quotation of the mathematician George F. Carrier, who said "Divergent series converge faster than convergent series, because they don't have to converge." This movie is an illustration of that.)
Next: Chapter 10 Explicit Formula | {"url":"http://www.math.ucsb.edu/~stopple/zeta.html","timestamp":"2014-04-16T13:10:48Z","content_type":null,"content_length":"7305","record_id":"<urn:uuid:929f029a-935c-4ecd-9dad-e4ec3e22a8df>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00351-ip-10-147-4-33.ec2.internal.warc.gz"} |
Please help with calculation involve solid angle.
Seems to me the model is to suppose the surface is uniform temperature and is radiating on that basis. I believe a sphere at uniform intrinsic brightness would appear appear as a uniformly bright
disc whatever position it is viewed from, but I'm prepared to be proved wrong about that.
Yes, but what I'm suggesting is that when an antenna is quoted as having a diameter of such-and-such, that is to be interpreted as the square root of the 'effective' area, whatever shape and response
pattern the actual area is. I've no idea whether this is the case - just trying to make sense of the equation
Thanks for the answer. In the whole, they use round shape like a cone to represent the main lobe. I don't believe the book mean square. Besides, using square don't explain π/4 either.
The question is whether I am doing it right with my approach assuming the Mars has uniform temperature on the surface. I really believe the book is wrong to put the π/4 in. The square of the degree
will be good already as the ratio is the same using Sr or degree.
I just try taking my result (538) divide by π/4, the answer is 685 which is the same as the book!!! | {"url":"http://www.physicsforums.com/showthread.php?p=4225102","timestamp":"2014-04-21T04:49:03Z","content_type":null,"content_length":"58739","record_id":"<urn:uuid:50312397-4c1c-4557-83e0-e0e59085683e>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00538-ip-10-147-4-33.ec2.internal.warc.gz"}
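One possible reading of that factor (a guess on my part, not confirmed in the thread): π/4 is the ratio of an ellipse's area to that of its bounding rectangle, so it would convert a beam solid angle formed as the plain product of two half-power widths into an elliptical-beam value:

```python
import math

# Hypothesis (not stated in the thread): the book's pi/4 models the main
# lobe's cross-section as elliptical rather than "rectangular" in the
# two half-power beamwidths, since an ellipse fills pi/4 of its
# bounding rectangle.
square_beam_result = 538.0               # value computed in the thread
elliptical_beam_result = square_beam_result / (math.pi / 4)
print(round(elliptical_beam_result))     # 685, the book's value
```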
Example of a manifold which is not a homogeneous space of any Lie group
Every manifold that I ever met in a differential geometry class was a homogeneous space: spheres, tori, Grassmannians, flag manifolds, Stiefel manifolds, etc. What is an example of a connected smooth
manifold which is not a homogeneous space of any Lie group?
The only candidates for examples I can come up with are two-dimensional compact surfaces of genus at least two. They don't seem to be obviously homogeneous, but I don't know how to prove that they
are not. And if there are two-dimensional examples then there should be tons of higher-dimensional ones.
The question can be trivially rephrased by asking for a manifold which does not carry a transitive action of a Lie group. Of course, the diffeomorphism group of a connected manifold acts
transitively, but this is an infinite-dimensional group and so doesn't count as a Lie group for my purposes.
Orientable examples would be nice, but nonorientable would be ok too.
dg.differential-geometry lie-groups counterexamples
3 Well, a compact surface of genus at least two is the quotient of $\mathbb{H}$ by a group action, and $\mathbb{H}$ is a homogeneous space for $\text{SL}_2(\mathbb{R})$, so in some sense these
examples still come from Lie groups. – Qiaochu Yuan Feb 24 '12 at 0:48
10 Well a compact orientable surface of genus two or more has no transitive action of a compact Lie group because such an action would necessarily preserve a Riemann metric and therefore a conformal
structure. The group of conformal automorphisms of such a surface is finite. – Tom Goodwillie Feb 24 '12 at 0:52
8 I must say I am surprised that this question has garnered two votes to close as "not a real question." Would anybody care to explain? The answers below seem to imply (to me, at least) that the
question isn't so trivial. – MTS Feb 24 '12 at 5:04
4 @MTS: Sometimes people underestimate a question, oversimplifying it. I suspect in this case it's just a mistake on the part of the people who cast votes to close. But if they don't say anything
it's hard to tell. I don't think you have to worry about this thread being closed. – Ryan Budney Feb 24 '12 at 5:11
1 As you mention, locally symmetric spaces give a huge class. The example you describe with the genus at least two is actually $M = \Gamma \backslash PSL_2( \mathbb{R}) /PSO(2)$, where $\Gamma \cong \pi_1(M)$. – plusepsilon.de Feb 24 '12 at 13:03
7 Answers
$\pi_2$ of a Lie group is trivial, so $\pi_2(G/H)$ is isomorphic to a subgroup of $\pi_1(H)$, which is finitely generated (isomorphic to $\pi_1$ of a maximal compact subgroup of the
identity component of $H$). But $\pi_2$ of a closed manifold is often not finitely generated. For example, the connected sum of two copies of $S^1\times S^2$ has as a retract a
punctured $S^1\times S^2$, which is homotopy equivalent to $S^1\vee S^2$ and so has universal cover homotopy equivalent to an infinite wedge of copies of $S^2$.
EDIT: This ad hoc answer can be extended as follows. All I really used was that $\pi_2(G)$ and $\pi_1(G)$ are finitely generated. But $\pi_n(G)$ is finitely generated for all $n\ge 1$ (reduce to the simply connected case and use homology), so $\pi_n(G/H)$ is finitely generated for $n\ge 2$. That leads to a lot more higher-dimensional non simply connected examples.
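For reference, the exact-sequence step in the answer above can be written out explicitly (a sketch, using only the long exact homotopy sequence of the fibration $H \to G \to G/H$ together with $\pi_2(G)=0$):

$$\pi_2(G) \longrightarrow \pi_2(G/H) \longrightarrow \pi_1(H) \longrightarrow \pi_1(G)$$

Since $\pi_2(G)=0$, exactness makes the map $\pi_2(G/H) \to \pi_1(H)$ injective, so $\pi_2(G/H)$ embeds in the finitely generated abelian group $\pi_1(H)$.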
2 Something puzzles me. You seem to be using that a subgroup of a finitely generated group is finitely generated. But this is false (e.g. commutator subgroup of $F_2$). What am I
missing? (I upvoted your answer back in the day, so hopefully I understood it then...) – Mark Grant Jul 18 '13 at 15:46
1 @MarkGrant, $\pi_1(H)$ is abelian. – Mariano Suárez-Alvarez♦ Jul 18 '13 at 21:14
@Mariano: Ah yes, thanks. – Mark Grant Jul 19 '13 at 6:17
Apart from already mentioned non simply connected examples most simply connected manifolds are also not homogeneous. One easy criterion is that simply connected homogeneous spaces are
rationally elliptic, i.e. they have finite-dimensional total rational homotopy. That is because any connected Lie group is rationally homotopy equivalent to a product of odd-dimensional
spheres, so a homogeneous space is elliptic by the long exact homotopy sequence.
Most simply connected manifolds are not rationally elliptic. For example the connected sum of more than two $CP^2$'s or $S^2\times S^2$'s. This is easily seen by looking at their minimal
models. But even without computing minimal models it's known that an elliptic manifold $M^n$ has nonnegative Euler characteristic and has the total sum of its Betti numbers $\le 2^n$. So anything that violates either of these conditions, such as the connected sum of several $S^3\times S^3$'s, is definitely not rationally elliptic and hence cannot be a homogeneous space or even a biquotient.
As for higher genus surfaces it should not be hard to show that they can not be homogeneous spaces $G/H$ even if you don't assume that $G$ acts by isometries. If $G/H=S^2_g$ and the $G$
action is effective then for any proper normal $K\unlhd G$ which does not act transitively on $S^2_g$ we must have $K/(K\cap H)=S^1$. But then $G/K$ is also 1-dimensional and hence also a
circle which is obviously impossible. This reduces the situation to the case of $G$ being simple which can also be easily ruled out for topological reasons.
In the last paragraph why does $K/(K\cap H)$ have to be compact? – Tom Goodwillie Feb 24 '12 at 13:01
sorry, I meant to say that that we should look at a closed proper normal subgroup $K$. – Vitali Kapovitch Feb 24 '12 at 13:21
1 Even so, how do you rule out the possibility that $K/(K\cap H)$ is a noncompact $1$-manifold? – Tom Goodwillie Feb 24 '12 at 13:38
1 hmm, I thought this was obvious but it does require some argument. how about this then. in such situation all orbits of $K$ must be 1-dimensional submanifolds of $S^2_g$ which gives a
foliation of $S^2_g$ which is impossible since its Euler characteristic is not zero. – Vitali Kapovitch Feb 24 '12 at 14:23
Atiyah and Hirzebruch gave a rather dramatic answer to your question in their paper "Spin Manifolds and Group Actions": if $M$ is a compact smooth spin manifold of dimension $4k$ whose $\
hat{A}$-genus is nonzero then no compact Lie group can act on $M$ nontrivially, let alone transitively! The proof uses Atiyah and Bott's Lefschetz fixed point theorem in a clever way.
Unfortunately I don't have a simple example of such a manifold lying around, though I know that there are plenty of examples among 4-manifolds. It's possible that some 4-dimensional lens spaces would do the job.
I guess my answer doesn't rule out the possibility that $M$ is a homogeneous space for a non-compact Lie group, but perhaps it's still interesting. – Paul Siegel Feb 24 '12 at 3:03
5 there are indeed many such 4-manifolds. connected sum of any number of $K3$-surfaces is the most standard example. so if anything like this is a homogenous space $G/H$ then the maximal
compact subgroup of $G$ must be zero-dimensional which means that $G$ must be contractible. This is of course impossible which means that none of such manifolds can be homogeneous. –
Vitali Kapovitch Feb 24 '12 at 3:13
3 @Paul : You should perhaps say "no compact connected Lie group", as there might be finite order symmetries. This is equivalent to "no non trivial $S^1$ action". – BS. Feb 24 '12 at
It is a result of Mostow that any compact homogeneous manifold must have nonnegative Euler characteristic:
That should provide plenty of counterexamples. :)
Now I see this is also implied by Vitali's post... I wonder if Mostow knew this? – Dylan Wilson Feb 24 '12 at 6:59
This is also proven in R. Hermann "Compactification of homogeneous spaces. I." J. Math. Mech. 14 1965 655–678. Apparently Mostow did not know that paper. – Johannes Ebert Feb 24 '12 at
Are you sure they aren't assuming the group is compact? Mostow doesn't assume this. He also has some classification results. – Dylan Wilson Feb 24 '12 at 19:20
2 @Dylan: I am pretty sure Hermann, as well as Mostow, does not assume compactness of $G$, only of $G/H$. If $G$ is compact, then in fact the result is much older and, nowadays, fairly
easy. If $G$ is connected and compact and the rank of $H$ equals the rank of $G$, then there is a fibre bundle $H/T \to G/T \to G/H$, where $T$ is the maximal torus. By Hopf-Samelson,
the Euler numbers of $G/T$ and $H/T$ are given by the order of the Weyl groups, hence both positive, so $\chi(G/H)>0$. – Johannes Ebert Feb 24 '12 at 20:54
2 If $H$ has smaller rank, $S \subset H$ is a maximal torus of $H$, $S \subset T \subset G$ a maximal torus of $G$. The fibre bundle $T/S \to G/S \to G/T$ shows that $\chi(G/S)=0$. The fibre bundle $H/S \to G/S \to G/H$ shows $0=\chi(G/S) = \chi(H/S) \chi(G/H) = \#W_H \chi(G/H)$ and so $\chi(G/H)\geq 0$. – Johannes Ebert Feb 24 '12 at 20:57
I would think that many examples from 3-manifold theory would work. Take any compact, oriented, irreducible 3-manifold $M$ whose torus decomposition is nontrivial and has at least one
hyperbolic piece. Such examples at least have no locally homogeneous Riemannian metric, as a consequence of Thurston's analysis of the 8 geometries of 3-manifold theory. A specific example of
this sort can be obtained from a hyperbolic knot complement in $S^3$ by deleting an open solid torus neighborhood of the knot and doubling across the resulting 2-torus boundary; the doubling
torus produces a characteristic $Z^2$ subgroup of $\pi_1(M)$. These examples have universal cover homeomorphic to $R^3$, and so they have trivial $\pi_2$. By the homotopy exact sequence there would be a quotient group $\pi_1(G) / \pi_1(H)$ identified with a subgroup of $\pi_1(G/H)$ whose quotient set is $\pi_0(H)$. Perhaps, in order to get a proof, one can analyze this situation by considering the intersection of the $Z^2$ subgroup with the $\pi_1(G) / \pi_1(H)$ subgroup.
This is a bit over my head, but a nice answer nonetheless. Thank you. – MTS Feb 24 '12 at 22:01
np. I'll throw in that this idea of analyzing the $Z^2$ subgroup intersected with $\pi_1(G) / \pi_1(H)$ is kind of what is going on in the proof of Thurston's 8 geometries theorem. It seems
to me that your question, in the 3-manifold context, is a kind of generalization of the 8 geometries theorem. – Lee Mosher Feb 24 '12 at 23:28
Here is a proof that any closed surface $S$ of genus at least 2 cannot support a homogeneous Riemannian metric. In fact, let $g$ be any such metric. Being homogeneous, $g$ has constant
curvature, and e.g. by Gauss-Bonnet such a curvature must be negative. Therefore, $(S,g)$ is homothetic to a hyperbolic surface, and it is well-known that the isometry group of any closed hyperbolic surface is finite (in fact, if $h$ is the genus of $S$, then $(S,g)$ admits at most $84(h-1)$ isometries).
This is a nice argument, but I am not asking for a homogeneous Riemannian metric. This doesn't rule out the possibility of having a transitive action of a Lie group that is not an action
by isometries for any Riemannian metric. – MTS Feb 24 '12 at 1:05
@MTS: if $M$ is a homogeneous space for a Lie group $G$ and $x \in M$ is a point with compact stabilizer, then you can choose a $G$-invariant inner product on $T_x(M)$ and homogeneity
gives you a $G$-invariant Riemannian metric on $M$, doesn't it? I guess it is possible that there are no points with compact stabilizers, but in any case this shows at least that any
compact surface of genus at least two is not a homogeneous space of a compact Lie group. – Qiaochu Yuan Feb 24 '12 at 1:36
Qiaochu, I think what you are saying is the following: if $M = G/H$, then the tangent bundle of $M$ is the homogeneous vector bundle induced by the adjoint representation of $H$ on $\
mathfrak{g}/\mathfrak{h}$. If $H$ is compact then we can integrate with respect to the Haar measure of $H$ to get an invariant inner product on $\mathfrak{g}/\mathfrak{h}$. Then
translating around with $G$ gives the invariant metric on $M$. Is that right? I don't think you can choose a $G$-invariant inner product on $T_x(M)$ since $G$ doesn't preserve the point
$x$ in general. – MTS Feb 24 '12 at 4:45
@MTS: sorry, I meant a $\text{Stab}(x)$-invariant inner product. – Qiaochu Yuan Feb 24 '12 at 17:42
In the homotopy exact sequence of the fiber bundle $G\to G/H$ the group $\pi_i(G/H)$ sits between $\pi_i(G)$ and $\pi_{i-1}(H)$. For example, if $i=1$, then $\pi_1(G)$ is abelian, and $\
pi_0(H)$ is finite (as $H$ is compact). Thus $\pi_1(G/H)$ has abelian subgroup of finite index. Surely there are lots of manifolds that do not have this property, e.g. any closed
negatively curved manifold does not. Connected sum of several lens spaces is another example.
By the way, it is my opinion that the class of compact homogeneous spaces is very rich, and their finer topological properties are still poorly understood. For example, classifying homogeneous spaces up to diffeomorphism in a given homotopy type is quite challenging, and it is not easy to cook up a homogeneous space with prescribed topological features.
1 the OP is interested in a more general situation than homogeneous Riemannian manifolds (see his comments above). thus one can not assume that $H$ is compact. So things like quotients by
uniform lattices in semisimple Lie groups or nilmanifolds are fair game. – Vitali Kapovitch Feb 24 '12 at 19:30
@Vitali: I was answering to OP's request "in finding an example of a compact manifold which is not a homogeneous space of any compact Lie group". If a compact Lie group acts
transitively on a manifold, then the isotropy subgroup is closed, hence compact. – Igor Belegradek Feb 24 '12 at 22:55
@Igor sorry, this was not clear from your answer. the OP states the question 3 times and only mentions compactness once. He should modify the question IMO. It's also clear from this
comments that he is interested in the general case and it creates confusion because some answers assume that G is compact and others do not. – Vitali Kapovitch Feb 25 '12 at 0:21
A Rectangular Conducting Plate Lies In The Xy Plane, ... | Chegg.com
Image text transcribed for accessibility: A rectangular conducting plate lies in the xy plane, occupying the region 0 < x < a, 0 < y < b. An identical conducting plate is positioned directly above and parallel to the first, at z = d. The region between the plates is filled with material having conductivity σ(x) = σ₀e^(−x/u), where σ₀ is a constant. Voltage V₀ is applied to the plate at z = d; the plate at z = 0 is at zero potential. Find, in terms of the given parameters: (a) the electric field intensity E within the material; (b) the total current flowing between the plates; (c) the resistance of the material.
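A symbolic check of the standard approach (a sketch, not an official solution; it takes the decay constant in the exponent to be a length `u`, as transcribed): since both plates are equipotentials and σ varies only with x, the field between them is uniform, E = −(V₀/d) ẑ, so J = σE, and the total current and resistance follow by integrating J over one plate.

```python
import sympy as sp

x, y = sp.symbols('x y')
a, b, d, u, sigma0, V0 = sp.symbols('a b d u sigma_0 V_0', positive=True)

# (a) Uniform field between the plates: E_z = -V0/d, so |J_z| = sigma(x) V0 / d.
sigma = sigma0 * sp.exp(-x / u)
Jz = sigma * V0 / d

# (b) Total current: integrate the current density over one plate.
I = sp.simplify(sp.integrate(Jz, (x, 0, a), (y, 0, b)))

# (c) Resistance of the material.
R = sp.simplify(V0 / I)

print(I)  # = sigma_0 * b * u * V0 * (1 - exp(-a/u)) / d
print(R)  # = d / (sigma_0 * b * u * (1 - exp(-a/u)))
```

Note that R is independent of V₀, as it should be for an ohmic material.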
Electrical Engineering
GNU Scientific Library 1.9 released
GNU Scientific Library 1.9 released
From: Brian Gough
Subject: GNU Scientific Library 1.9 released
Date: Wed, 21 Feb 2007 11:29:22 +0000
User-agent: Wanderlust/2.14.0 (Africa) Emacs/21.3 Mule/5.0 (SAKAKI)
Version 1.9 of the GNU Scientific Library (GSL) is now available.
GSL provides a large collection of well-tested routines for numerical
computing in C.
This release adds support for Non-symmetric Eigensystems, Basis
Splines (Patrick Alken) and Mathieu functions (Lowell Johnson), as
well as bug fixes. The full NEWS file entry is appended below.
The file details are:
ftp://ftp.gnu.org/gnu/gsl/gsl-1.9.tar.gz (2.5 MB)
ftp://ftp.gnu.org/gnu/gsl/gsl-1.9.tar.gz.sig (GPG signature)
81dca4362ae8d2aa1547b7d010881e43 (MD5 checksum)
The GSL project home page is at http://www.gnu.org/software/gsl/
GSL is free software distributed under the GNU General Public
Thanks to everyone who reported bugs and contributed improvements.
--
Brian Gough
(GSL Maintainer)
Network Theory Ltd
Commercial support for GSL --- http://www.network-theory.com/gsl/
----------------------------------------------------------------------
* What is new in gsl-1.9:
** Added support for nonsymmetric eigensystems (Patrick Alken)
** Added Mathieu functions (Lowell Johnson)
** Added a new BFGS2 minimisation method, which requires substantially fewer
function and gradient evaluations than the existing BFGS minimiser.
** Added new functions for basis splines (Patrick Alken)
** Fixed the elliptic integrals F,E,P,D so that they have the correct
behavior for phi > pi/2 and phi < 0. The angular argument is now
valid for all phi. Also added the complete elliptic integral
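The domain extension described for F and E can be illustrated with SciPy's analogous Cephes-based routines (an illustration only, not GSL itself): for phi beyond pi/2 the incomplete integrals continue via F(phi + n*pi | m) = F(phi | m) + 2nK(m), and F is odd in phi.

```python
import math
from scipy.special import ellipk, ellipkinc

m = 0.5
# Incomplete integral at phi = pi equals twice the complete integral:
# F(pi | m) = 2 K(m), which only makes sense once phi > pi/2 is allowed.
print(ellipkinc(math.pi, m))   # ~3.708
print(2 * ellipk(m))           # same value

# Negative angular argument: F(-phi | m) = -F(phi | m).
print(ellipkinc(-0.3, m), -ellipkinc(0.3, m))
```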
** The beta functions gsl_sf_beta_e(a,b) and gsl_sf_lnbeta_e(a,b) now
handle negative arguments a,b. Added new function gsl_sf_lnbeta_sgn_e
for computing magnitude and sign of negative beta values, analogous to
** gsl_cheb_eval_mode now uses the same error estimate as
** Improved gsl_sf_legendre_sphPlm_e to avoid underflow with large
** Added updated Knuth generator, gsl_rng_knuthran2002, from 9th
printing of "The Art of Computer Programming". Fixes various
weaknesses in the earlier version gsl_rng_knuthran. See
** The functions gsl_multifit_fsolver_set, gsl_multifit_fdfsolver_set
and gsl_multiroot_fsolver_set, gsl_multiroot_fdfsolver_set now have a
const qualifier for the input vector x, reflecting their actual usage.
** gsl_sf_expint_E2(x) now returns the correct value 1 for x==0,
instead of NaN.
** The gsl_ran_gamma function now uses the Marsaglia-Tsang fast gamma
method of gsl_ran_gamma_mt by default.
** The matrix and vector min/max functions now always propagate any
NaNs in their input.
** Prevented NaN occurring for extreme parameters in
gsl_cdf_fdist_{P,Q}inv and gsl_cdf_beta_{P,Q}inv
** Corrected error estimates for the angular reduction functions
gsl_sf_angle_restrict_symm_err and gsl_sf_angle_restrict_pos_err.
Fixed gsl_sf_angle_restrict_pos to avoid possibility of returning
small negative values. Errors are now reported for out of range
negative arguments as well as positive. These functions now return
NaN when there would be significant loss of precision.
** Corrected an error in the higher digits of M_PI_4 (this was beyond
the limit of double precision, so double precision results are not affected).
** gsl_root_test_delta now always returns success if two iterates are
the same, x1==x0.
** A Japanese translation of the reference manual is now available
from the GSL webpage at http://www.gnu.org/software/gsl/ thanks to
Daisuke TOMINAGA.
** Added new functions for testing the sign of vectors and matrices,
gsl_vector_ispos, gsl_vector_isneg, gsl_matrix_ispos and
** Fixed a bug in gsl_sf_lnpoch_e and gsl_sf_lnpoch_sgn_e which caused
the incorrect value 1.0 instead of 0.0 to be returned for x==0.
** Fixed cancellation error in gsl_sf_laguerre_n for n > 1e7 so that
larger arguments can be calculated without loss of precision.
** Improved gsl_sf_zeta_e to return exactly zero for negative even
integers, avoiding less accurate trigonometric reduction.
** Fixed a bug in gsl_sf_zetam1_int_e where 0 was returned instead of
-1 for negative even integer arguments.
** When the differential equation solver gsl_odeiv_apply encounters a
singularity it returns the step-size which caused the error code from
the user-defined function, as opposed to leaving the step-size unchanged.
Finding the Cost of Pets in a Pet Shop
Date: 2/26/96 at 13:27:46
From: Larry Braun
Subject: Pet store word problem
I have tried for several days to help my son with this problem. Now
I am seeking your help.
A Cub Scout troop went to a very large pet store. The troop leader
wanted to find out how much the pets were, because some of the
scouts said they wanted one. He noticed some signs at various places
in the store. The sign above the fish tank said, "buy sixteen fish and
eight cats for the price of seven dogs." The sign above the cat cage
said, "Buy eleven cats and seven dogs for the price of nine fish."
The sign above the dog cage said, "Buy three dogs and nine fish for
the price of eight cats."
The troop leader ended up helping one of his scouts buy a fish and a
dog. The fish cost $42 less than the dog. At this time the Scout
Master discovered that exactly one of the three signs was incorrect.
Which sign was wrong, and how much did the cat cost?
Date: 2/26/96 at 14:22:1
From: Doctor Steve
Subject: Re: Pet store word problem
I suppose you did something like what I did at first, set up a series of
equations to represent the facts symbolically. In general, my
approach to such problems is to see if I can combine the different
pieces of information in a way that makes equations with fewer
unknowns. We start out with three unknowns here: the cost of a dog,
the cost of a cat, and the cost of a fish.
>The sign above the fish tank said, "buy sixteen fish and eight cats
>for the price of seven dogs."
16f + 8c = 7d
>"The sign above the cat cage said, "Buy eleven cats and seven dogs
>for the price of nine fish."
11c + 7d = 9f
>The sign above the dog cage said," Buy three dogs
>and nine fish for the price of eight cats."
3d + 9f = 8c
>The troop leader ended up
>helping one of his scouts buy a fish and a dog.
>The fish cost $42 less than the dog.
f = d-$42
Right away I'm suspicious of the second sign (11c + 7d = 9f).
According to the first one, seven dogs equals sixteen fish and eight
cats so the idea of 9 fish, which are much less expensive than the
dogs, equaling eleven cats and seven dogs makes me wonder what's
going on.
To solve further I would combine the information from the
equations that I have more confidence in. For instance, notice that
the third equation tells us what 8c is in terms of fish and dogs and 8c
is also used in the first equation. What happens if I use 3d + 9f in
the first equation instead of 8c, since they are the same? This way I
won't have any "cats" in the equation; one less unknown to think
about for now.
16f + (3d + 9f) = 7d
This tells me 25f=4d or one dog costs 25/4ths (six and a quarter
times) what a fish costs (d = 25/4 f).
I can then combine this information with the other bit of information
I haven't used yet, namely f = d - $42 and find out what a fish costs.
Do you see how to do that, similar to what I did with 8c above? I
won't have any dogs in my new equation, only fish and that'll tell me
how much the fish cost. You might want to see if you can solve the
rest before reading on.
f = (25/4 f) - $42
(I'll let you work out the calculations here. Please ask if you want
$8 = f
So, if we know how much a fish is, we can figure out what a dog is,
since it costs $42 more. Then we can figure out what a cat costs.
And finally, we should be able to see if our answers work for every
sign but the second. If they don't then we'll have to choose one of
the other signs as the culprit.
Please let us know what you come up with or where you need
further explanation.
-Doctor Steve, The Math Forum
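Doctor Steve's last step can be checked in a few lines (a sketch using exact fractions, so nothing is lost to rounding):

```python
from fractions import Fraction

# f = (25/4) f - 42  =>  (21/4) f = 42  =>  f = 8
f = Fraction(42) / (Fraction(25, 4) - 1)
print(f)   # 8

# Then d = (25/4) f, and indeed d - f = 42 as required.
d = Fraction(25, 4) * f
print(d)   # 50
```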
Date: 2/26/96 at 14:37:39
From: Doctor Byron
Subject: Re: answer to word problem
Hi Larry,
I think the best thing to do in the case of this pet store
question is to start by putting the equation in algebraic form.
First, we have the assertions made by the three signs, which
may or may not be true:
16f + 8c = 7d
11c + 7d = 9f
3d + 9f = 8c
Where c, d, and f are the prices of cats, dogs, and fish, respectively.
Rearranging these into a slightly more standard form we have:
8c - 7d + 16f = 0
11c + 7d - 9f = 0
-8c + 3d + 9f = 0
We also know that the following equation _must_ be true:
f = d - 42
Now, since we know f in terms of d, we can substitute this
expression for d into all the places where f appears in the above
system of equations. We then get the following set of three equations
in only two variables:
8c - 7d + 16(d - 42) = 0
11c + 7d - 9(d - 42) = 0
-8c + 3d + 9(d - 42) = 0
These simplify to:

8c + 9d = 672
11c - 2d = -378
-8c + 12d = 378
Now, if all of these equations were really true, we could pick any
two of them and get a reasonable answer to the problem. For
example, adding the first and third equations eliminates the variable
c, and allows us to find that d = $50.00 and c = $27.75. These two
seem to give a good answer to the problem, as we might expect. At this
point, we should be suspicious that the second equation may not be
true. You can confirm this by solving the remaining two sets of
equations. You will see that either 1 and 2 or 2 and 3 together yield
a negative solution for c.
The sign above the cat cage is therefore false, and the pets have the
following prices:
Dog - $50.00
Cat - $27.75
Fish - $8.00
(This is all assuming, of course, that the shopkeeper doesn't intend
to pay people to take his pets away.)
-Doctor Byron, The Math Forum
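Doctor Byron's elimination can be reproduced mechanically (a sketch: solve each pair of sign-equations together with f = d - 42, and only the pair that omits the cat-cage sign gives positive prices):

```python
from fractions import Fraction

# Each sign as (coef_c, coef_d, constant) after substituting f = d - 42:
signs = {
    "fish tank": (8, 9, 672),    #  8c + 9d = 672
    "cat cage": (11, -2, -378),  # 11c - 2d = -378
    "dog cage": (-8, 12, 378),   # -8c + 12d = 378
}

def solve_pair(eq1, eq2):
    # Solve {a1 c + b1 d = k1, a2 c + b2 d = k2} by Cramer's rule.
    (a1, b1, k1), (a2, b2, k2) = eq1, eq2
    det = a1 * b2 - a2 * b1
    return Fraction(k1 * b2 - k2 * b1, det), Fraction(a1 * k2 - a2 * k1, det)

for omitted in signs:
    kept = [eq for name, eq in signs.items() if name != omitted]
    c, d = solve_pair(*kept)
    print(f"{omitted} assumed false: cat={c}, dog={d}, fish={d - 42}")
# Only "cat cage assumed false" yields positive prices:
# cat = 111/4 ($27.75), dog = 50, fish = 8.
```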
7th International Conference on Multiphase Flow
ICMF 2010, Tampa, FL USA, May 30-June 4, 2010
Study on the Critical Stable Region of Magnetically Fluidized Beds
Based on Voids Fluctuation Analysis
Keting Gui and Guihuan Yao
School of Energy and Environment, Southeast University
Nanjing, 210096, China, ktgui@seu.edu.cn
Keywords: Magnetically fluidized beds, Critical stable region, Voids fluctuation
With the aid of the voids fluctuation analysis, a criterion of critical stable region of MFBs is derived in this paper. Our
criterion has two differences compared with that of Rosensweig. First, there is a region of critical stability instead of a
sharp boundary located between stable and unstable fluidization, which accords with the experimental results well.
Second, the criterion includes two additional parameters of fluidization, i.e., the particle terminal velocity u_t and the equivalent density ρ*, which are not considered in the criterion of Rosensweig but are included in the analysis of Geldart about fluidization.
In order to substantiate the critical stable region derived from the voids fluctuation analysis, an experiment on the
stability of MFBs is reported. It is shown that the critical stable region derived from the voids fluctuation analysis corresponds well with the experimental results. The correspondence between theory and experiments indicates that the
voids fluctuation analysis is a useful approach to analyze the gas-solid flows in MFBs.
between the stable fluidization and the unstable one.
However, because of the random property of gas-solid
flows in MFBs, the experimental results reported by
some other authors show that it is not a sharp boundary
but a critical stable region that is located between the
stable and unstable fluidization (Cohen et al., 1991). In
this paper, therefore, we analyze voids fluctuation of
MFBs by a wave-theory approach, and derive a criterion
of the critical stable region based on the assumption of
the stable propagation of voids fluctuation in the bed.
Finally we also show that such a theoretical criterion
captures the main features of the critical stable region
that arises during our experiments.
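The nomenclature that follows refers to the statistics used in the voids-fluctuation analysis: the mean E[X(t)], the auto-correlation function R_x(t) of the voidage signal, and its dominant frequency F_d. A minimal sketch of how such quantities are typically estimated from a sampled voidage signal (an illustration only; the helper names and estimator choices here are ours, not the paper's):

```python
import numpy as np

def autocorr(x):
    """Biased sample auto-correlation R_x(tau) for lags tau >= 0."""
    x = np.asarray(x, dtype=float) - np.mean(x)        # remove E[X(t)]
    r = np.correlate(x, x, mode="full")[len(x) - 1:]   # keep tau >= 0
    return r / len(x)

def dominant_frequency(x, fs):
    """Dominant frequency F_d from the spectrum of the auto-correlation."""
    r = autocorr(x)
    spectrum = np.abs(np.fft.rfft(r))
    freqs = np.fft.rfftfreq(len(r), d=1.0 / fs)
    return freqs[np.argmax(spectrum[1:]) + 1]          # skip the DC bin

# Synthetic voidage signal: mean voidage plus a 5 Hz bubble-like oscillation.
np.random.seed(0)
fs = 200.0
t = np.arange(0, 10, 1 / fs)
x = 0.45 + 0.05 * np.sin(2 * np.pi * 5.0 * t) + 0.01 * np.random.randn(t.size)
print(dominant_frequency(x, fs))   # ~5 Hz
```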
C the relative velocity of the kinetic wave
D_m dimensionless group representing the ratio of the kinetic energy and the magnetic energy
D(χ_s) dimensionless group related to χ_s
D(u_t) dimensionless group related to u_t
The magnetically fluidized beds (MFBs) constitute a new
technology in the application of fluidization. As the
external magnetic fields suppress gas bubbles and
improve the contact between gas and solids in the beds,
MFBs have found many potential applications in industry,
such as in filtration (Wang et al., 2008; Meng et al.,
2003), separation (Fan et al., 2002; Hristov et al., 2007)
and synthesis reactions (Atwater et al., 2003; Graham et
al., 2006; Hao et al., 2008: Liu et al., 2009) most of
which requires that MFBs achieve stable fluidization
with the aid of magnetic fields. This type of MFBs has
been called magnetically stabilized fluidized beds (MSBs)
by some authors (Liu et al., 1991: Ganzha & Saxena,
2000). Research work on MSBs has been reported in the
literature (Rosensweig 1979; Lin & Leu, 2001; Zeng et
al., 2008), among which the most distinguished one is
that by Rosensweig. His criterion of the magnetically
stable fluidization sets up a sharp theoretical boundary
The Void Fluctuation in MFBs
A Two-Phase Model of MFBs
In order to study the void fluctuation in MFBs, we need
to model MFBs with a voids fluctuation equation. Such
an equation can be derived from the following two-phase
model of MFBs, which was developed by Gui et al. (1997):

∂ε/∂t + ∂(ε u_i)/∂r_i = 0
∂(1 − ε)/∂t + ∂[(1 − ε) v_i]/∂r_i = 0
ρ_g ∂(ε u_i)/∂t + ρ_g ∂(ε u_i u_j)/∂r_j = ∂(ε E_ij)/∂r_j − n f_i
ρ_s ∂[(1 − ε) v_i]/∂t + ρ_s ∂[(1 − ε) v_i v_j]/∂r_j = n f_i + n f_{m,i}        (1)
where ε is the voidage, u_i and v_i are the velocities of the gas and the particles, n f_m is the magnetic force resulting from the external magnetic field and g is the acceleration owing to gravity. Subscripts i, j denote the three components in the directions x, y, z. The parameters E_ij, n f and n f_m are calculated as follows, which were also developed by Gui et al. (1997):
E_ij = −p_g δ_ij + μ(∂u_i/∂r_j + ∂u_j/∂r_i)        (2)
n f_i = (18μ/d_o²) ξ(ε) (1 − ε)(u_i − v_i) − (1 − ε) ρ_s g_i        (3)
n f_{m,i} = [2μ₀ (1 − ε) M_s² / (3 + (3 − 2ε) χ_s)] ∂ε/∂r_i        (4)
In equations (2)-(4), p_g is the gas pressure, μ is the viscosity and d_o is the particle diameter. δ_ij denotes the Kronecker symbol and ξ(ε) is the correcting coefficient of the drag force due to voids. μ₀ represents the magnetic conductivity, while χ_s models the magnetic susceptibility of the solid and M_s models the solid magnetization.
Substituting (2)-(4) into equation (1), we obtain
d_o the particle diameter
E_ij the stress tensor of the fluid
E[X(t)] average value of the voids fluctuation signal X(t)
F_d dominant frequency of the auto-correlation function of the voids fluctuation signal X(t)
f_i external forces acting on the gas and solid phases
G_m dimensionless number related to the magnetic energy and voids
G_t dimensionless number related to the terminal velocity of particles
g the acceleration owing to gravity
H the magnetic field intensity
M_s the solid magnetization
N the number of X_i samples
N_m dimensionless group representing the ratio of the kinetic energy and the magnetic energy
N_ε a similar dimensionless group, but related to voids
n f the drag force between gas and particles
n f_m the magnetic force resulting from the external magnetic field
R_x(t) the auto-correlation function of X(t)
U_w the continuity-wave velocity of voids
U_k the kinetic-wave velocity of voids
u_t the terminal velocity of particles
a_w the relative velocity of the continuity wave
W the propagation velocity of voids
w the relative velocity of voids propagation
X_i the voids fluctuation signal
Greek letters
δ_ij the Kronecker symbol
μ the viscosity of the fluid
μ₀ the magnetic conductivity
ρ* the equivalent density
ξ(ε) the correcting coefficient of the drag force due to voids
σ square-error of the voids fluctuation signal X(t)
χ_s the magnetic susceptibility
ω the frequency of the voids wave
i, j the three components in the directions x, y, z
∂ε/∂t + ∂(ε u_i)/∂r_i = 0    (5)

∂(1 − ε)/∂t + ∂[(1 − ε) v_i]/∂r_i = 0    (6)
7th International Conference on Multiphase Flow
ICMF 2010, Tampa, FL USA, May 30-June 4, 2010
∂ε'/∂t + U_i ∂ε'/∂r_i + ε₀ ∂u_i'/∂r_i = 0    (13)

−∂ε'/∂t − V_i ∂ε'/∂r_i + (1 − ε₀) ∂v_i'/∂r_i = 0    (14)
ρ_g (∂u_i/∂t + u_j ∂u_i/∂r_j) = f_gi    (7)

ρ_s (∂v_i/∂t + v_j ∂v_i/∂r_j) = f_si    (8)
In equations (7)-(8), the terms on the right-hand sides represent the external forces acting on gas and solids in MFBs. If we let f_gi and f_si denote these two external forces in the two equations respectively and subtract (8) from (7), we obtain

ρ_g (∂u_i/∂t + u_j ∂u_i/∂r_j) − ρ_s (∂v_i/∂t + v_j ∂v_i/∂r_j) = f_gi − f_si    (15)
By taking the partial differentiation of equation (15) with
respect to r; and the partial differentiation of equations
(13) and (14) with respect to t and rj respectively, and
rearranging the terms, we obtain the following equation.
dt2 o -- o r + Bt so s
8rr2E' pU,UIU p,VV _= d(sf)
+ (16)
Similarly, taking the partial differentiation of equation
(12) with respect to r;, we have
~(+ +~U -E V (17)
According to the results of wave theory, the force δf_i resulting from a small perturbation can be considered as a function of the voidage ε, the gas velocity u_i and the total flux j_i of the gas and solid phases (Wallis, 1969).
Clearly, the external force δf_i is a function of u_i, v_i, ε and ∂ε/∂r_i. Mathematically, we have

f_i = f_i(u_i, v_i, ε, ∂ε/∂r_i)    (10)
Void Fluctuant Equations in MFBs
Consider the situation in which a small perturbation takes place in the gas-solid system. The variables in equations (5), (6) and (9) can be expressed as the sum of an average value and a small fluctuating value. Mathematically, we write
f_i = f_i(j_i, ε, u_i)    (9)

u_i = U_i + u_i',   v_i = V_i + v_i',   ε = ε₀ + ε',   f_i = F_i + δf_i    (11)

The relationship between the total flux j_i and the voidage ε can be expressed as

j_i = ε u_i + (1 − ε) v_i    (19)
Hence, following from equations (18) and (19), we have

j_i' = ε₀ u_i' + (1 − ε₀) v_i' + (U_i − V_i) ε'    (20)
Substituting equation (20) into equation (17), we obtain
In equation (11), δf_i represents the effect of a small perturbation on the gas-solid system, which can be calculated as

δf_i = (∂f_i/∂ε) ε' + (∂f_i/∂u_j) u_j' + (∂f_i/∂v_j) v_j' + [∂f_i/∂(∂ε/∂r_j)] (∂ε'/∂r_j)    (12)
Substituting equation (11) into equations (5), (6) and (9), and ignoring all small quantities of order greater than one, we obtain
p, +--
St Sr
8(8f ) 1 af; Se de' Sf a
-+-(UI so( d6,( )
dr, s du, dt dr, e S,
+7Ea, )7;a: (21)
Note that in equation (21), the coefficient of the gradient ∂ε'/∂z coincides with the formula for the velocity of the continue wave, denoted U_w, given by Wallis (1969), which is written as
U,, = U, e~ = U +E el (22)
Substituting equation (21) into equation (16), we obtain
the voids fluctuation equation of the MFBs as follows.
P2 e s 2e' p UpV
8~st2 o+--P o)2- + o s o
Br,8rrd so -s
dso BPu, t B, 8888)Br8?
In the above equation, let
P, P
P* = "+ (24)
U + ,I (25)
A = +(2)
P So u 7
where U₀ defined by equation (25) can be regarded as the average velocity of the kinetic wave (Wallis, 1969), which satisfies the following equation:

C_i C_j = U₀i U₀j − A²_ij    (28)
∂²ε'/∂t² + 2U₀i ∂²ε'/∂t∂r_i + A²_ij ∂²ε'/∂r_i∂r_j + B(∂ε'/∂t + U_wi ∂ε'/∂r_i) = 0    (29)
We term equation (29) the voids fluctuant equation derived from the two-phase model of MFBs. This partial differential equation is not straightforward to solve analytically. Hence we introduce and discuss a simplification of equation (29) in the next section.
The Critical Stable Region of MFBs
The Condition of Stabilized Fluidization
In order to further discuss the stability of MFBs with the voids fluctuant equation, we first need to simplify equation (29), as an analytical solution of it is hard to obtain. Note that because the direction of the main flow in a vertically fluidized bed is perpendicular to the horizontal plane, we can assume that r_i = r_j = z, and thus the subscripts i (j) can be omitted in equation (29). With these simplifications, equation (29) becomes

∂²ε'/∂t² + 2U₀ ∂²ε'/∂t∂z + A² ∂²ε'/∂z² + B(∂ε'/∂t + U_w ∂ε'/∂z) = 0    (30)
For this simplified version of the voids fluctuant equation, the general form of its solution is

ε' = ε'₀ e^(at) cos[ω(t − z/W)]    (31)
Equation (31) shows that ε' is the voids fluctuation resulting from a voids perturbation which propagates along the direction z. In equation (31), the symbols ω, W and a represent the frequency, the propagation velocity and the amplifying factor of the amplitude of the voids fluctuation, respectively. The value of a should be negative to guarantee that the amplitude of the voids fluctuation does not grow without limit in time. The relative velocity of the voids fluctuation, w, and that of the continue wave, u_w, can be defined by subtracting the average velocity of the kinetic wave, U₀, from the propagation velocity W and the continue wave velocity U_w, respectively. Mathematically,
w = W − U₀,    u_w = U_w − U₀    (32), (33)

With equations (24)-(27), equation (23) can then be written in the form of equation (29) above.
Substituting equation (31) into (30) and separating the real part from the imaginary part, we obtain the expressions for a and ω²:

a = (B/2)(u_w²/w² − 1)    (34)

ω² = B² (u_w² − w²)(w² − C²) / (4w⁴)    (35)
The square of the frequency, i.e., ω² in equation (35), should be positive. Hence the square of the relative velocity of the voids fluctuation, w², must lie between the square of the kinetic wave velocity, C², and that of the continue wave, u_w². If u_w² > C², then u_w² > w² > C². Hence u_w²/w² in equation (34) is always larger than 1 and thus a is positive. This means that the perturbation of the voids fluctuation grows without any limit and the state of fluidization is unstable. If u_w² < C², then C² > w² > u_w². Hence u_w²/w² in equation (34) is smaller than 1 and thus a is negative. This implies that the disturbance of the voids decays as it propagates and the fluidized bed will remain in a stable state.
In summary, the velocities of the kinetic wave and the continue wave can be considered as the criterion to determine the stability of MFBs. Specifically, if C² > u_w², the MFB is in stable fluidization. Otherwise, it may be in an unstable state.
Determination of the Marginally Stable Zone
According to the analysis above, the condition of stabilized fluidization can be expressed as

C² > u_w²    (36)
where u_t is the terminal velocity of a particle. u_t can be calculated from the Stokes resistance equation because the size of the particles in MFBs is small. n in equation (39) is the Richardson-Zaki constant, which is in the range of 2-5 (Foscolo et al., 1984). Substituting equation (39) into equation (22), the velocity of the continue wave in MFBs can be calculated as
U_w = u_t ε₀^(n−1) [n − ε₀(n + 1)]    (40)
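The velocities just named can be sketched in a few lines. The Stokes-law formula is the standard one the text refers to; the continuity-wave form U_w = u_t ε₀^(n−1)[n − (n+1)ε₀] is my reading of the partially garbled equation (40), and all property values below are invented for illustration:

```python
# Illustrative sketch of the velocities entering the stability criterion:
# Stokes-law terminal velocity of a small particle, and the continue-wave
# velocity U_w = u_t * eps^(n-1) * (n - (n+1)*eps) (one reading of eq. 40).
# All property values are made up for illustration only.

def stokes_terminal_velocity(rho_s, rho_g, d, mu, g=9.81):
    """Stokes-law terminal velocity u_t = (rho_s - rho_g) g d^2 / (18 mu)."""
    return (rho_s - rho_g) * g * d ** 2 / (18.0 * mu)

def continue_wave_velocity(u_t, eps, n):
    """Continue-wave velocity from the Richardson-Zaki drift-flux derivative."""
    return u_t * eps ** (n - 1) * (n - (n + 1) * eps)

# Hypothetical iron particle in air: rho_s=7800, rho_g=1.2 kg/m^3,
# d=50 micrometres, mu=1.8e-5 Pa s.
u_t = stokes_terminal_velocity(rho_s=7800.0, rho_g=1.2, d=50e-6, mu=1.8e-5)
U_w = continue_wave_velocity(u_t, eps=0.45, n=3.0)
print(u_t, U_w)
```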
Another important parameter in (38) is ∂f_m/∂(∂ε/∂z). The force related to ∂ε/∂z in MFBs is the magnetic force expressed in equation (4), so ∂f_m/∂(∂ε/∂z) can be calculated as

∂f_m/∂(∂ε/∂z) = 2μ₀ (1 − ε₀) M_s² χ_s / [3 + (3 − 2ε₀) χ_s]    (41)
Substituting equations (40) and (41) into (38), and dividing both sides of (38) by U_w², we obtain a stability criterion in dimensionless form
U~2 ~/$~ pU+ pV
1 2uo (1- so)M? g ,
U~p 3 +(3 2E )X soU 1pV s
Substituting equations (28) and (33) into equation (36),

2 U_w U₀ − U_w² − A² > 0    (37)
3 + (3 2Eo)Z,
D, (X ) =
2(1- so)
D, (ti) ti q _~ [So (n + 1)]
D = P "
By further substituting the formulas for U_w, U₀ and A² into equation (37), we obtain an expression of the stability criterion
2 U s u )( ,U l-p,V j
1 if PgU2 psV2
p' l(eh -so
UI + so 6 > 0 (38)
In fluidized beds, the following relationship exists between the gas velocity u and the voidage ε (Foscolo et al., 1984):

u = u_t ε₀^n (1 − ε₀)    (39)
where D_χ(χ_s) and D_u(u_t) are dimensionless groups related to the susceptibility χ_s and the terminal velocity u_t, respectively. D_m is also a dimensionless group, and represents the ratio between the kinetic energy and the magnetic energy. Equation (42) can then be simplified into
S+ E,E:I [I R(n + 1)j
2 P Pp
E 2p so,- 1-so
E 2p so 1-. so
D,(,)2 um > 0
The symbol V in equation (46) represents the average velocity of the particles. Since the characteristics of the particles or gas in different fluidized beds are not the same, the average velocity V of the particles may be anywhere between 0 and a maximum value V_max in an arbitrary fluidized bed. According to results from the numerical simulation of two-phase flows in MFBs, the particle velocity V is observed to be less than half of the gas velocity U (Gui et al., 1997). Hence we make the assumption that 0 ≤ V ≤ U/2    (47).
Figure 1: The marginally stable zone of a magnetically fluidized bed
To simplify the expression of equations (48) and (49), let
G_u = D_u(u_t) + 1    (51)
To simplify the stability criterion (46), let us consider two extreme conditions, V → 0 and V → U/2. When V → 0, in situations where ρ* ≫ ρ_g, the stability criterion becomes

D_m > [D_χ(χ_s)] [D_u(u_t) + 1]    (48)

When V → U/2, the stability criterion becomes

D_m > [D_χ(χ_s)] {[D_u(u_t)]² + D_u(u_t) + …}    (49)
Let D_m be the abscissa and U/U_mf be the ordinate. The critical curve C₁ of stable fluidization for V → 0 is drawn in Fig. 1 based on equation (48). The critical stable curve C₂ for V → U/2 is also drawn in Fig. 1 according to equation (49). The range on the left-hand side of curve C₁ represents the unstable fluidized range, and that on the right-hand side of C₂ is the stable one. The area between the two curves can be regarded as the critical stable region of the MFBs. The region of critical stability reported on the basis of experiments by some authors is also shown as the shaded area in Fig. 1 (Cohen et al., 1991). It is easy to see that the theoretical stable region between the curves C₁ and C₂ coincides with the experimental one.
G_m in equation (50) can be treated as a dimensionless number related to the magnetic energy and voids. Similarly, we regard the term G_u in equation (51) as a dimensionless number related to the terminal velocity of particles. With these two dimensionless numbers, the criterion for the critical stable region of MFBs can be expressed as
- critical stabl>(2
Gm, > Gu,,
U f
G > G, > (G~ -G,, +
-2 3 U
Gm, <(G~,, G,+ ) -
4 U,,,
We compare this with the stability criterion of MFBs derived by Rosensweig from the basic equations of MFBs, written as (Rosensweig, 1979)

N_m N_v < 1:  stable
N_m N_v = 1:  critically stable    (53)
N_m N_v > 1:  unstable
[1+ D,(t)]2 +
the magnetic field is along the axis of the fluidized bed, and stable fluidization can be obtained easily (Rosensweig, 1979).
The voids fluctuant signals X(t) in the magnetically fluidized bed are measured using a mini-capacitance probe with the influence of the magnetic field eliminated (Gui et al., 1994). The square-error σ_x² and the auto-correlation function R_xx(t) of the voids fluctuant signal X(t) are calculated by the following equations, respectively.
A,,, (54)
47r(3 2so)
Ny = [-1+1+(1-so)Z (55)
a, (1- so)
In equation (54), N_m is a dimensionless group representing the ratio between the kinetic energy and the magnetic energy. N_v in equation (55) is also a dimensionless group, but related to voids.
Let us compare criterion (53) with criterion (52). There are two differences between the two criteria. First, criterion (53) does not lead to a region of critical stability. Instead it only produces a sharp boundary between stable and unstable fluidization, which is located in the region of critical stability shown in Fig. 1 by a dotted line. Second, two additional parameters, i.e., the particle terminal velocity u_t and the equivalent density ρ*, are taken into account in criterion (52). These two parameters are important in view of Geldart's analysis of fluidization: in general, there are two important factors influencing the stability of fluidized beds. One is the difference of densities between the two phases and the other is the size of the particles (Geldart, 1973). In criterion (52), the difference of the densities ρ_s and ρ_g is reflected by the equivalent density ρ*. The effects of the particle size are also captured by the particle terminal velocity u_t in criterion (52). The dimensionless groups N_m and N_v in Rosensweig's criterion, however, only involve the density ρ_s of the solid particles, but fail to recognize the influence of the gas density ρ_g and that of the particle sizes.
Experimental Facilities and Data Processing
In order to substantiate the critical stable region derived from the voids fluctuation analysis, an experiment on the stability of MFBs is reported in this paper. A schematic diagram of the experimental facilities is shown in Fig. 2. The bed column, fabricated from Plexiglas, is 100 mm in diameter and the height of the fixed bed is 115 mm. The ferromagnetic particles are made from cast iron and the average size of the particles is 0.56 mm. To generate the external magnetic field, a coil 400 mm in length is used, and the bed is located in the middle of the coil, where the magnetic flux is homogeneous. The direction of
R_xx(jΔt) = [1/(N − j)] Σ_{i=1}^{N−j} X_i X_{i+j},   (j = 0, 1, …, 20)    (57)
1-Digital collection system; 2-Capacitance meter; 3-Capacitance probe; 4-Fluidized bed; 5-Ampere meter; 6-Direct-current supply; 7-Coil; 8-Valve; 9-Orifice flowmeter; 10-Blower

Figure 2: Schematic diagram of the experimental facilities
In equations (56) and (57), the X_i are the measured voids fluctuant signal samples indexed by the subscript i (j), N is the number of X_i samples, and E[X(t)] is the average value of the X_i.
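The statistics in equations (56)-(57) amount to a sample mean, a square-error (variance) and an auto-correlation of the voidage signal. The exact normalization used in the paper is not fully legible here, so the plain-Python sketch below uses one standard mean-removed form; the function names are mine, not the paper's:

```python
# Sketch of the signal statistics behind equations (56)-(57): sample mean,
# square-error (variance), and the auto-correlation
#   R(j) = (1/(N-j)) * sum_i (X_i - m)(X_{i+j} - m)
# of a voidage fluctuation record X.  One standard definition; the paper's
# exact normalization may differ.

def mean(x):
    return sum(x) / len(x)

def square_error(x):
    """Mean squared deviation from the sample mean (the 'square-error')."""
    m = mean(x)
    return sum((xi - m) ** 2 for xi in x) / len(x)

def autocorrelation(x, j):
    """Mean-removed auto-correlation at lag j, averaged over N - j products."""
    m = mean(x)
    n = len(x)
    return sum((x[i] - m) * (x[i + j] - m) for i in range(n - j)) / (n - j)
```

At lag j = 0 the auto-correlation reduces to the square-error, which is a quick consistency check on any implementation.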
Based on the stochastic analysis of the voids fluctuant signal in MFBs, Gui et al. proposed an experimental criterion of stable fluidization in MFBs (Gui et al., 1994). This criterion involves two characteristic parameters. One is the dominant frequency F_d of the auto-correlation
slowly with the increase in H at first, then falls rapidly after reaching a maximum point, and finally tends to zero. In contrast, Fig. 3 shows that σ_x² decreases monotonically at a steady rate as H increases. This indicates that when H increases at the early stage, the volume of the bubbles becomes smaller while the quantity of bubbles increases. It implies that the magnetic field can cause bubble splitting. Moreover, when the magnetic intensity H attains a certain value, for instance H' on curve 1 (U/U_mf = 2.05) as shown in Fig. 4, F_d reaches its maximum value, but σ_x² continues to fall to a relatively small value. In other words, if we increase the magnetic intensity further, F_d decreases steeply while σ_x² keeps on decreasing. This indicates that the bubbles almost disappear and the bed turns to particulate fluidization at the moment when the voids fluctuation becomes very small in both amplitude and frequency. Consequently, the extreme point at which F_d reaches its maximum value, shown in Fig. 4, corresponds to the critical transition point from aggregate fluidization to particulate fluidization. This transition point can be used to distinguish stable and unstable fluidization in our experiment.
We draw the five transition points of Fig. 4 in Fig. 1 and obtain five points located in the critical stable region determined by our theoretical voids fluctuation analysis. Such a coincidence between the theoretical region of critical stability of MFBs and the experimental results substantiates the validity of the stability criterion derived in this paper.
Conclusions
In this paper, we analyze the voids fluctuation of MFBs with wave theory and derive the criterion of critical stability of the bed. The criterion determining the critical stable region captures the effects of the parameters ρ_s and ρ_g as well as that of the particle size, which are shown to be important to the stability of gas-solid fluidization according to Geldart's analysis. We further show that the theoretical critical stable region derived from the voids fluctuation analysis corresponds well with our experimental results. Such a correspondence between theory and experiments shows that voids fluctuation analysis is a useful approach to analyzing the gas-solid flows in MFBs.
function of the voids fluctuant signal X(t), which can be obtained as the reciprocal of the delay time at which the first peak of R_xx(t) beyond zero appears. The other parameter is the square-error σ_x² of X(t).
Experimental Criterion of Stable Fluidization in MFBs

U/U_mf: 1-2.05; 2-1.88; 3-1.78; 4-1.62; 5-1.44
Figure 3: Relationship between σ_x² and H (abscissa H in kA/m)
U/U_mf: 1-2.05; 2-1.88; 3-1.78; 4-1.62; 5-1.44
Figure 4: Relationship between F_d and H
Fig. 3 and Fig. 4 illustrate the mean square value σ_x² and the dominant frequency F_d against the magnetic intensity H, respectively. The five curves in each of the figures correspond to five different superficial gas velocities. The variation pattern of F_d as the magnetic intensity H increases is illustrated in Fig. 4. Specifically, F_d rises
reforming. Powder Technol. 2008, 183 (1): 46-52
Hristov J, Fachikov L, An overview of separation by
magnetically stabilized beds: State-of-the-art and
potential applications. China Particuology 2007, 5(1):
Lin Y C, Leu L P, Voidage profiles in magnetically
fluidized beds. Powder Technol. 2001, 120(3):
Liu C Z, Wang F. Fan O Y. Ethanol fermentation in a
magnetically fluidized bed reactor with immobilized
saccharomyces cerevisiae in magnetic particles.
Bioresour. Technol. 2009, 100(2): 878-882
Liu Y A, Hamby R K, Collbery R D. Fundamental and
practical development of magnetofluidized beds: a
review. Powder Technol. 1991, 64 (1): 3-41
Meng X K, Mu X H, Zong B N et al., Purification of
caprolactam in magnetically stabilized bed reactor.
Catal. Today 2003, 79: 21-27
Rosensweig R E. Magnetic stabilization of the state of
uniform fluidization. Ind. Eng. Chem. Fundam. 1979,
18 (3):260-269
Wallis G B. One-dimensional Two-phase Flow. New
York: Mc Graw-Hill, 1969, 184-356
Wang Y H, Gui K T, Shi M H et al, Removal of dust
from flue gas in magnetically stabilized fluidized bed.
Particuology 2008, 6(2): 116-119
Zeng P, Zhou T, Yang J S, Behavior of mixtures of
nano-particles in magnetically assisted fluidized bed.
Chem. Eng. Prog. 2008, 47(1): 101-108
Acknowledgements
The authors would like to thank the National Natural Science Foundation of China for financial support of this work (No. 50576013).
Atwater J E, Akse J R, Jovanovic G N et al., Porous
cobalt spheres for high temperature gradient
magnetically assisted fluidized beds. Mater. Res. Bull.
2003, 38(3): 395-407
Cohen A H, Chi T. Aerosol filtration in a magnetically
stabilized fluidized bed. Powder Technol. 1991,
Fan M M, Chen Q R, Zhao Y M et al., Magnetically
stabilized fluidized beds for fine coal separation.
Powder Technol. 2002, 123(2): 208-211
Foscolo P U, Gibilaro L G. A fully predictive criterion for the transition between particulate and aggregate fluidization. Chem. Eng. Sci. 1984, 39(2): 1667-1678.
Ganzha V L, Saxena S C, Hydrodynamic behavior of
magnetically stabilized fluidized beds of magnetic
particles. Powder Technol. 2000, 107(1): 31-35
Geldart D. Types of gas fluidisation. Powder Technol.
1973, 7(3): 285-292
Graham L J, Atwater J E, Jovanovic G N,
Chlorophenol dehalogenation in a magnetically
stabilized fluidized bed reactor. AIChE J. 2006, 52(3):
Gui K, Chao J, Shi M, etc. A two-phase model of
gas-solid magnetically fluidized beds. Journal of
Southeast University (in Chinese). 1997, 27(5): 36-45
Gui K, Wang R. Determination of critical transition
point by void fluctuant signal in a magnetically
fluidized bed. Journal of Chemical Industry and
Engineering (in Chinese), 1994, 45(5): 589-594
Hao Z G, Zhu Q S, Jiang Z et al., Fluidization
characteristics of aerogel Co/Al2O3 catalyst in a
magnetic fluidized bed and its application to CH4-CO, | {"url":"http://ufdc.ufl.edu/UF00102023/00378","timestamp":"2014-04-17T16:11:02Z","content_type":null,"content_length":"48130","record_id":"<urn:uuid:f0c889be-204e-41c5-889f-c169d5255f0f>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00253-ip-10-147-4-33.ec2.internal.warc.gz"} |
Highbridge, NY SAT Math Tutor
Find a Highbridge, NY SAT Math Tutor
...If you're outside NYC, let's talk about doing online lessons! I am a graduate of Harvard University and a native New Yorker who has been tutoring students of all ages in everything from
standardized tests to Spanish to Math to dance for over six years. The SAT is a specialty of mine, and I love helping students discover all the tips and tricks necessary to getting their dream
31 Subjects: including SAT math, reading, English, TOEFL
Hello. I am an experienced mathematics and science teacher, with a wide range of interests and an extensive understanding of physics and mathematics. I love to talk with students of all ages
about these subjects, and I would like to help you to appreciate their fundamental simplicity and beauty while getting you to your academic goals.
25 Subjects: including SAT math, chemistry, physics, calculus
...I taught mathematics at universities for years and can help you excel! The first job of a good tutor is to listen. I will focus on your questions, obstacles, and goals.
10 Subjects: including SAT math, calculus, statistics, geometry
Hello! My name is Lawrence and I would like to teach you math! Since 2004, I have been tutoring students in mathematics one-on-one.
9 Subjects: including SAT math, calculus, geometry, algebra 1
...College level mathematics including: statistics, pre-calc, algebra, geometry, business calculus, calculus I, II, & III, linear algebra, etc. (This list does not include finance courses.) What
makes me different from other tutors: I have a unique approach in which I value my students as individua...
31 Subjects: including SAT math, English, Spanish, calculus
Related Highbridge, NY Tutors
Highbridge, NY Accounting Tutors
Highbridge, NY ACT Tutors
Highbridge, NY Algebra Tutors
Highbridge, NY Algebra 2 Tutors
Highbridge, NY Calculus Tutors
Highbridge, NY Geometry Tutors
Highbridge, NY Math Tutors
Highbridge, NY Prealgebra Tutors
Highbridge, NY Precalculus Tutors
Highbridge, NY SAT Tutors
Highbridge, NY SAT Math Tutors
Highbridge, NY Science Tutors
Highbridge, NY Statistics Tutors
Highbridge, NY Trigonometry Tutors
Nearby Cities With SAT math Tutor
Allerton, NY SAT math Tutors
Beechhurst, NY SAT math Tutors
Bronx SAT math Tutors
Castle Point, NJ SAT math Tutors
Fort George, NY SAT math Tutors
Fort Lee, NJ SAT math Tutors
Hamilton Grange, NY SAT math Tutors
Hillside, NY SAT math Tutors
Inwood Finance, NY SAT math Tutors
Manhattanville, NY SAT math Tutors
Morsemere, NJ SAT math Tutors
Parkside, NY SAT math Tutors
Rochdale Village, NY SAT math Tutors
West Englewood, NJ SAT math Tutors
West Fort Lee, NJ SAT math Tutors | {"url":"http://www.purplemath.com/Highbridge_NY_SAT_Math_tutors.php","timestamp":"2014-04-17T11:03:15Z","content_type":null,"content_length":"24136","record_id":"<urn:uuid:5fcdb2ad-9f29-4e09-b300-17942d3146cc>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00013-ip-10-147-4-33.ec2.internal.warc.gz"} |
Limiting Reagent Problems
Now let's consider limiting reagent problems. You should remember those. They are the problems that are a little more difficult than the kind that you were just working with, because you have to decide which of two starting chemicals to base your calculations on. Limiting reagent problems that are solved using balanced equations are very much the same as what you were working with a few lessons ago. It's just that you now use the balanced equation to give you the relationship between the chemicals.
There is a sequence of steps that should be taken to solve these types of problems. First, write an equation for the reaction and balance it. Second, determine the mole and weight relationships among
the chemicals in the reaction. Third, determine the limiting reagent in the same manner you did a few lessons ago. Fourth, carry out the necessary calculations using the mole and weight relationships
determined from the balanced equation.
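The first three of those steps can be illustrated with a short script. The reaction 2 H2 + O2 -> 2 H2O and the rounded molar masses below are my own example, not one of the workbook problems:

```python
# Worked sketch of the limiting-reagent decision for 2 H2 + O2 -> 2 H2O.
# Molar masses (g/mol) are rounded, and the amounts are illustrative only.

def limiting_reagent(grams, molar_mass, coeff):
    """Given the grams on hand, the molar masses, and the balanced-equation
    coefficients for each reactant, return the name of the limiting reagent."""
    # moles available divided by the stoichiometric coefficient: the
    # reactant with the smallest ratio runs out first
    ratios = {name: grams[name] / molar_mass[name] / coeff[name]
              for name in grams}
    return min(ratios, key=ratios.get)

grams      = {"H2": 4.0,  "O2": 48.0}   # 2.0 mol H2, 1.5 mol O2
molar_mass = {"H2": 2.0,  "O2": 32.0}
coeff      = {"H2": 2,    "O2": 1}
print(limiting_reagent(grams, molar_mass, coeff))  # -> H2 (2 mol / 2 = 1.0 < 1.5)
```

Once the limiting reagent is known, the product amount follows from its moles and the mole relationships in the balanced equation, exactly as in the fourth step.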
Two examples are worked through for you on the pages in this section, one with weights (example 7 from your workbook) and one with moles (example 8 from your workbook). Look through them and then try your hand at the practice problems that follow (example 9 from your workbook). The answers to those practice problems are on the last page of this section.
Top of Page
Back to Course Homepage
E-mail instructor: Eden Francis
Science Department
19600 South Molalla Avenue
Oregon City, OR 97045
(503) 594-3352
TDD (503) 650-6649
Distance Learning questions
Clackamas Community College
©1998, 2002 Clackamas Community College, Hal Bender | {"url":"http://dl.clackamas.edu/ch104-04/limiting.htm","timestamp":"2014-04-18T00:15:14Z","content_type":null,"content_length":"6034","record_id":"<urn:uuid:b3a2dd8d-1d78-4667-ba65-c53c5e4adcf7>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00538-ip-10-147-4-33.ec2.internal.warc.gz"} |
Zentralblatt MATH
Publications of (and about) Paul Erdös
Zbl.No: 129.31402
Autor: Erdös, Pál; Neveu, J.; Rényi, Alfréd
Title: An elementary inequality between the probabilities of events (In English)
Source: Math. Scand. 13, 90-104 (1963).
Review: The reviewer (Zbl 064.13005) has proved that for any n events A[1],A[2],...,A[n] such that Pr(A[i]) = \omega[1] for i = 1,2,...,n and Pr(A[i] \cap A[j]) = \omega[2] for i \ne j we have the
\omega[2] \geq \omega[1]^2-{\omega[1](1-\omega[1]) \over n-1}+{(n\omega[1]-[n\omega[1]]) (1-n\omega[1]+[n \omega[1]]) \over n(n-1)}  (1)
with [n\omega[1]] denoting the integral part of n\omega[1], and that this inequality is an equality for some collection of events A[1], A[2],...,A[n] whatever \omega[1] and n.
Here the authors consider the closely related, more general problem of determining, for any natural n and \alpha in (0,1), the constant \epsilon[n](\alpha) defined as the least real number \epsilon such that for any collection of events A[1],A[2],...,A[n] subject to the only condition (2) Pr(A[i] \cap A[j]) \leq \alpha^2 for i \ne j we have the inequality sum Pr(A[i]) \leq n\alpha+\epsilon. With \nu denoting the largest integer such that \nu(\nu-1) \leq n(n-1)\alpha^2, the constant sought for is found to be given by
\epsilon[n](\alpha) = (1-\alpha)/2+(n\alpha-\nu)((n-1)\alpha-\nu)/(2\nu).
The second term in this formula vanishes if n\alpha or (n-1)\alpha is an integer; otherwise for n -> oo it is of the order of 1/n. An explicit extremal collection of events A[1],A[2],...,A[n] is
constructed in the case of \alpha = 1/2 and n \equiv 3 (mod 4) by the use of the method of quadratic residues.
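The closed form for \epsilon[n](\alpha) is easy to evaluate directly; the sketch below is my own illustration (not part of the review) and also exhibits the vanishing of the second term when n\alpha is an integer:

```python
# Evaluate eps_n(alpha) from the review: with nu the largest integer such
# that nu*(nu-1) <= n*(n-1)*alpha**2,
#   eps_n(alpha) = (1 - alpha)/2 + (n*alpha - nu)*((n-1)*alpha - nu)/(2*nu)

def eps_n(n, alpha):
    nu = 1
    while (nu + 1) * nu <= n * (n - 1) * alpha ** 2:
        nu += 1
    return (1 - alpha) / 2 + (n * alpha - nu) * ((n - 1) * alpha - nu) / (2 * nu)

# n*alpha integer (n=4, alpha=1/2): nu = 2 = n*alpha, so the second term is 0
print(eps_n(4, 0.5))   # -> 0.25, i.e. (1 - 1/2)/2 exactly
# A case where the correction term does not vanish:
print(eps_n(7, 0.45))  # slightly below (1 - 0.45)/2 = 0.275
```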
Reviewer: S.Zubrzycki
Classif.: * 60C05 Combinatorial probability
60E05 General theory of probability distributions
Index Words: probability theory
© European Mathematical Society & FIZ Karlsruhe & Springer-Verlag
│Books │Problems │Set Theory │Combinatorics │Extremal Probl/Ramsey Th. │
│Graph Theory │Add.Number Theory│Mult.Number Theory│Analysis │Geometry │
│Probabability│Personalia │About Paul Erdös │Publication Year│Home Page │ | {"url":"http://www.emis.de/classics/Erdos/cit/12931402.htm","timestamp":"2014-04-18T08:10:35Z","content_type":null,"content_length":"5649","record_id":"<urn:uuid:a030022d-96ca-47f1-8b38-1c067d708d9b>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00076-ip-10-147-4-33.ec2.internal.warc.gz"} |
The differentiation theorem for Laplace transforms states that

L{x'(t)} = s X(s) - x(0),

where X(s) denotes the Laplace transform of x(t).

Proof: This follows immediately from integration by parts:

L{x'(t)} = integral_0^oo x'(t) e^{-st} dt
         = [x(t) e^{-st}]_0^oo + s integral_0^oo x(t) e^{-st} dt
         = s X(s) - x(0),

provided x(t) e^{-st} -> 0 as t -> oo.

Corollary: Integration Theorem

L{ integral_0^t x(tau) dtau } = X(s) / s.

Thus, successive time derivatives correspond to successively higher powers of s.
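The theorem can be sanity-checked numerically. The sketch below is my own illustration (not part of the original page): it approximates the Laplace integrals for x(t) = e^{-2t} with a crude trapezoid rule and compares L{x'} against s X(s) - x(0):

```python
import math

def laplace(f, s, T=40.0, steps=20000):
    """Crude trapezoid approximation of integral_0^T f(t) e^{-st} dt."""
    h = T / steps
    total = 0.5 * (f(0.0) + f(T) * math.exp(-s * T))
    for k in range(1, steps):
        t = k * h
        total += f(t) * math.exp(-s * t)
    return total * h

x  = lambda t: math.exp(-2.0 * t)          # x(t) = e^{-2t},  X(s) = 1/(s+2)
dx = lambda t: -2.0 * math.exp(-2.0 * t)   # x'(t)

s = 1.0
lhs = laplace(dx, s)                # L{x'}(s)
rhs = s * laplace(x, s) - x(0.0)    # s X(s) - x(0)
print(lhs, rhs)  # both close to -2/3
```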
Next | Prev | Up | Top | Index | JOS Index | JOS Pubs | JOS Home | Search [How to cite this work] [Order a printed hardcopy] [Comment on this page via email] | {"url":"https://ccrma.stanford.edu/~jos/filters/Differentiation.html","timestamp":"2014-04-19T06:10:06Z","content_type":null,"content_length":"7816","record_id":"<urn:uuid:4a30bf38-89e5-4b28-a79f-13cd142cbbe2>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00121-ip-10-147-4-33.ec2.internal.warc.gz"} |
Linear System Analyzer
Overview of the Linear System Analyzer (LSA)
LSA Problem Domain
One of the most frequently encountered problems in scientific computing is the manipulation and solution of large-scale, sparse, unstructured systems of equations Ax = b. Here large now means O(10^4)
to O(10^6) unknowns or more. The term sparse means most of the entries in the matrix are zeros. By storing only the nonzero terms, the memory requirements are often reduced from O(n^2) to O(n), where
n is the number of unknowns. Here unstructured means the nonzeros in the matrix do not follow a simple pattern, such as tridiagonal or upper Hessenberg (see http://www.ee.ic.ac.uk/hp/staff/dmb/matrix
/ for a dictionary of some of the matrix terminology used here). In particular, this means the matrix is nonsymmetric - which introduces severe robustness problems for many solution methods. Much
research in the past thirty years has targeted solving these systems, and several sophisticated techniques and codes are available. Often these are classified as direct or iterative methods.
Direct Solvers
Direct methods perform a factorization such as Gaussian elimination on the matrix A to get upper and lower triangular matrices L and U . Then the system is solved using forward and backward
substitution on the vector b . Unfortunately, fill-in can occur: entries that were originally zero in the matrix A can be nonzero in the factors L and U . In the worst case this can lead to O(n^2)
storage requirements, clearly unacceptable for systems with n = 10 000 000 or more. Direct methods handle this by applying reordering methods, which permute the rows and/or columns of the matrix A to
reduce fill-in. Since pivoting in Gaussian elimination implicitly permutes the rows of the matrix also, the benefits of reordering can be undone during the factorization. Strategies such as Markovitz
pivoting seek to balance pivoting for numerical stability against retaining the original lower fill-in ordering. Modern sparse direct solvers also are parameterized to allow efficient block linear
algebraic operations, to help achieve high computation rates.
All of these techniques (reordering, Markovitz pivoting, blocking parameterization) can lead to large differences in the numerical quality of solution, amount of additional storage required, and
computational efficiency achieved. Choosing good settings for the parameters involved requires experimentation and extensive testing.
Iterative Solvers
Iterative methods use simple operations with the matrix: two of the most common computational kernels are matrix-vector multiplication (multiplying a vector d by A ), or transpose matrix-vector
multiplication. A whole alphabet soup of iterative methods has been developed in the past fifty years: BiCG, CGS, QMR, OrthoMin, GCR, etc. Although the additional storage for iterative methods
typically is small (a few n-vectors), in all but the simplest cases they don't work: convergence is slow or nonexistent for problems coming from nontrivial applications. Instead, the iterative method
is applied to a preconditioned system
M^-1Ax = M^-1b,
where the matrix M is a cheaply computed approximation to A. Virtually all of the important research
is based on incomplete factorization: Gaussian elimination is performed on A, but entries outside a certain sparsity pattern or below a cut-off numerical value are simply discarded during the
factorization. The approximate LU factors then define the preconditioner M. In that sense, practical iterative solvers are really hybrids between direct and iterative methods.
Iterative solvers also present a large parameter space to navigate: solver method, preconditioner method, target sparsity patterns for preconditioners, numerical cut-offs for numerical dropping,
stopping tests and tolerances, etc. Furthermore, analyzing what has happened in the iterative solution of a particular problem can be difficult.
Crafting a Solution Strategy
Choosing a solution strategy for solving large sparse linear systems in realistic applications is still an art form. For every currently existing solution method, an application area can be found that gives linear systems which make the solver fail, either by requiring too much memory or through nonconvergence. Finding a solution strategy for an application still relies heavily on experimentation and extensive testing.
Current Methodology for Developing Solution Strategies
A common approach for developing a solution strategy is to "extract" the linear system and write it to a file in some standard format. Then the linear algebraist draws upon a collection of programs
for applying transformations on the linear system, which read it in, perform manipulations, and then write it out to another file. Other programs apply various solution methods, reading in the linear
system and writing out summaries of the results of the computation, such as error estimates, memory usage, and time required. Control parameters for these transformation and solver programs are typically taken from input files, command-line arguments, or a GUI. The linear algebraist tries several combinations of these programs, comparing their results. If a program runs only on a certain machine,
the user can either try to port it, or transfer a file with the linear system to the remote machine and transfer the results back. Applications now routinely generate and solve linear systems with O(10^5) unknowns and O(10^6-10^7) stored elements, requiring hundreds of Mbytes of internal memory to represent. Unless the linear algebraist is lucky and immediately finds a good combination of
software for the problem, much of the time gets spent in file I/O and transfer.
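This file-based workflow can be sketched, for example, with the Matrix Market exchange format through SciPy (the file name and system size are illustrative):

```python
import os
import tempfile

import numpy as np
import scipy.io as sio
import scipy.sparse as sp

# "Extract" a small illustrative sparse system to a file in Matrix Market
# format, then read it back as a separate tool would.
A = sp.random(100, 100, density=0.05, format="csr", random_state=0) + sp.eye(100)
path = os.path.join(tempfile.mkdtemp(), "system_A.mtx")
sio.mmwrite(path, A)           # one program writes the linear system out ...
B = sio.mmread(path).tocsr()   # ... another reads it in for manipulation
print(np.allclose(A.toarray(), B.toarray()))
```

For systems with millions of stored elements, the round trip above is exactly where the file I/O and transfer time goes.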
For the applications user who tries to manage all of this without expert help the situation is worse; much of the time and effort is spent in recompiling code, trying to understand adjustable
parameters in the solution methods, and trying to form a coherent picture of results from a variety of output from the codes. The LSA is built to address these problems.
The Role of the LSA
The LSA is a problem-solving environment for application scientists who wish to quickly experiment with a variety of solution strategies and methods for large sparse linear systems of equations,
without having to integrate large amounts of disparate code into their application. Once an effective solution strategy is developed, the LSA can provide source code that can be integrated into the
user's application.
The LSA allows a user to
• Create a solution strategy without knowing the implementation details of the computational routines.
• Navigate a potentially large parameter space quickly, with expert advice supplied by the LSA when needed.
• Analyze and compare results from multiple solution strategies.
• Encapsulate a solution strategy as exportable Fortran/C code which can then be incorporated into an application code.
Last updated: Tue Jan 26 12:31:05 1999
Increasing FreeBSD threads
For network apps that create one thread per connection (like Pound), the thread count can become a bottleneck on the number of concurrent connections you can serve.
I'm running FreeBSD 8 x64:
$ sysctl kern.maxproc
kern.maxproc: 6164
$ sysctl kern.threads.max_threads_per_proc
kern.threads.max_threads_per_proc: 1500
$ limits
Resource limits (current):
cputime infinity secs
filesize infinity kB
datasize 33554432 kB
stacksize 524288 kB
coredumpsize infinity kB
memoryuse infinity kB
memorylocked infinity kB
maxprocesses 5547
openfiles 200000
sbsize infinity bytes
vmemoryuse infinity kB
pseudo-terminals infinity
swapuse infinity kB
I want to increase kern.threads.max_threads_per_proc to 4096. Assuming each thread starts with a stack size of 512k, what else do I need to change to ensure that I don't hose my machine?
freebsd threads sysctls
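A back-of-envelope check of the question's own numbers (the 512k per-thread stack is the question's assumption, not a measured value):

```shell
# Worst-case virtual address space reserved for thread stacks:
threads=4096
stack_kb=512
echo "$((threads * stack_kb / 1024)) MB of address space for stacks"

# If that looks acceptable, the limit can be raised at runtime (as root)
# and persisted in /etc/sysctl.conf:
#   sysctl kern.threads.max_threads_per_proc=4096
```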
For a sane application you shouldn't be running into the max_threads limit. But then, scaling applications is rarely sane. Performance suffers when you have thousands of threads. Instead of screwing with the limits, which are pretty high, it's worth considering replacing Pound with Pen, or another load balancer which is select() based. That will get you a bit further before you run into the OS limits. – pehrs Apr 21 '10 at 21:13
select() has fairly low limits as well, though you'll see much better performance when you've got a thousand connections going. On the BSDs kqueue is the way to go, on Linux epoll. But this is a
discussion for the developers (whom you should be contacting, and requesting they change their app, thread-per-connection is old-school). – Chris S♦ Apr 21 '10 at 23:01
Let's assume that Pound is in there for a reason (it is) and can't be trivially replaced (it can't). – sh-beta Apr 21 '10 at 23:18
1 Answer
FWIW, I set kern.threads.max_threads_per_proc to 4096 without modifying any other settings and haven't seen any ill effects. Pound even got there a couple times (eating up 2GB of RAM while doing so).
Robert Soule : Hiding Secret Points amidst Chaff
Robert Soule
New York University
Hiding Secret Points amidst Chaff
Motivated by the representation of biometric and multimedia objects, we consider the problem of hiding noisy point-sets using a secure sketch. A point-set X consists of s points from a d-dimensional discrete domain [0, N−1]^d. Under permissible noises, for every point ⟨x_1, ..., x_d⟩ ∈ X, each x_i may be perturbed by a value of at most δ. In addition, at most t points in X may be replaced by other points in [0, N−1]^d. Given an original X, we want to compute a secure sketch P. A known method constructs the sketch by adding a set of random points R, and the description of (X ∪ R) serves as part of the sketch. However, the dependencies among the random points are difficult to analyze, and there is no known non-trivial bound on the entropy loss. In this paper, we first give a general method to generate R and show that the entropy loss of (X ∪ R) is at most s(d log Δ + d + 0.443), where Δ = 2δ + 1. We next give improved schemes for d = 1, and special cases for d = 2. Such improvements are achieved by pre-rounding, and careful partition of the domains into cells. It is possible to make our sketch short, and avoid using randomness during construction. We also give a method in d = 1 to demonstrate that using the size of R as the security measure would be misleading.
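The entropy-loss bound can be evaluated numerically; a small sketch (reading the bound as s·(d·log₂ Δ + d + 0.443) with Δ = 2δ + 1; the logarithm base and the parameter values below are illustrative, not from the paper):

```python
import math

def entropy_loss_bound(s, d, delta):
    """Upper bound on the entropy loss of (X u R): s*(d*log2(Delta) + d + 0.443)."""
    Delta = 2 * delta + 1
    return s * (d * math.log2(Delta) + d + 0.443)

# e.g. s = 10 points in d = 2 dimensions, each coordinate perturbed by delta = 1
print(round(entropy_loss_bound(10, 2, 1), 2))
```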
AUTHORS: Ee-Chien Chang and Qiming Li.
Theme Organizers: Ray Spiteri (Saskatchewan) and Buks van Rensburg (York)
For the first time at the CAIMS annual meeting, the two distinct communities of computational PDE and discrete mathematics will be connected in this theme. Discretizations of PDE lead to (large)
discrete dynamical systems, and a popular strategy of studying discrete dynamical systems is to go in the opposite direction (ie, limits of large numbers, continuum limits). As a consequence, we hope
that linking the two communities will lead to a cross-fertilization of ideas. The session will focus on scientific computing and numerical modeling of complex systems such as percolation, polymers,
Monte Carlo simulations of protein folding and numerical simulation of biological and biomedical systems including knotting and linking in biopolymers.
Dhavide Aruliah (University of Ontario Institute of Technology
Robert Bridson (University of British Columbia)
Matthew Emmett (University of North Carolina)
Peter Grassberger (University of Calgary)
Chen Greif (University of British Columbia)
Ray Kapral (University of Toronto)
Vince Lyzinski (John Hopkins University)
Felicia Magpantay (McGill University)
Colin Macdonald (University of Oxford)
Peter Minev (University of Alberta)
Enzo Orlandini (University of Padova)
George Patrick (University of Saskatchewan)
Thomas Prellberg (Queen Mary University, London)
Oluwaseun Sharomi (Saskatchewan)
Erkan Tuzel (Worcester Polytechnic Institute)
Buks van Rensburg (York)
Ivona Bezakova (Rochester Institute of Technology) (talk cancelled)
Daniel Stefankovic (University of Rochester) (talk cancelled)
Plenary Speaker
Ted Kim (Santa Barbara)
Subspace Simulation of Non-linear Materials
Everyday materials such as biological tissue exhibit geometric and constitutive non-linearities which are crucial in applications such as surgical simulation and realistic human animation.
However, these same non-linearities also make efficient finite element simulation difficult, as computing non-linear forces and their Jacobians can be computationally intensive. Reduced order
methods, which limit simulations to a low-dimensional subspace, have the potential to accelerate these simulations by orders of magnitude. In this talk, I will describe a subspace method we have
developed that efficiently computes all the necessary non-linear quantities by evaluating them at a discrete set of "cubature" points, an online integration method that can accelerate simulations
even when the subspace is not known a priori, and a domain decomposition method that efficiently adds deformations to discretely partitioned, articulated simulations. Using our methods, we have
observed speedups of one to four orders of magnitude.
Bio: Theodore Kim joined the faculty of Media Arts and Technology Program at the University of California, Santa Barbara, in the Fall of 2011 as an Assistant Professor. He conducts research in
physically-based simulation for computer graphics, particularly in fluid dynamics, solid mechanics, and multi-core techniques. Techniques from his research have been used in over a dozen movies.
Previously, he has been an Assistant Professor in Computer Science at the University of Saskatchewan, a Post-Doctoral Associate at Cornell University, and a Post-Doctoral Fellow at IBM TJ Watson
Research Center. He received his PhD in Computer Science from the University of North Carolina at Chapel Hill in 2006.
Dhavide Aruliah (University of Ontario Institute of Technology)
Toward Accurate Methods for Forensic Bloodstain Pattern Analysis
At present, bloodstain pattern analysis (BPA) is largely an empirical, qualitative sub-discipline of forensic science. Forensic BPA specialists analyze bloodstains found at crime scenes and
attempt to infer the location of the blood-letting impact (or impacts) that caused the bloodstains. Traditional BPA methods for reconstructing crime scenes (notably stringing) ignore the effects of gravity and aerodynamic drag on the trajectories of blood droplets in flight. Such assumptions may produce acceptable qualitative predictions under certain conditions (e.g., when analyzing bloodstains caused by droplets moving at high speeds due to discharge of a firearm). However, in many circumstances, back-tracking blood droplets along straight lines from bloodstains to a static impact location is misleading (e.g., when bloodstains are caused by assault with a blunt instrument or an edged weapon). Our ultimate aim is to develop software tools for quantitative analysis
to support forensic BPA analysts.
We describe our framework for recording simulated blood-letting events and extracting droplet flight trajectories. The simulations consist of fake blood encased in ballistic gel being splattered
by projectiles. The resulting blood flight trajectories are recorded by a stereo pair of high-speed cameras and the bloodstains are photographed to provide a collection of video and static image
data sets to validate our inversion framework. We employ a sophisticated algorithm incorporating background removal, segmentation, 2D motion tracking and 3D reconstruction to extract meaningful
flight trajectories from the video data.
Robert Bridson (University of British Columbia)
Generic implicit constrained dynamics and algebraic preconditioners for graphics
The emerging approach of "virtual practical effects" in feature film production leverages artists' intuition for the real world by simulating physics in a virtual world - instead of struggling to
control complex software models, artists set up effects as they might in reality and let physics do the rest. This demands robust numerical solvers which can couple diverse physics together in
unanticipated ways. We present a framework which unifies many previous elements (from viscous incompressible fluids to rigid body contact to inextensible cloth) and reduces to a sequence of
least-squares-like problems. We further explore a new approach to algebraic almost-black-box domain decomposition which shows promise for tackling linear systems of this category.
Matthew Emmett (University of North Carolina)
The Parallel Full Approximation Scheme in Space and Time (PFASST) algorithm
The Parallel Full Approximation Scheme in Space and Time (PFASST) algorithm for parallelizing PDEs in time is presented. To efficiently parallelize PDEs in time, the PFASST algorithm decomposes
the time domain into several time slices. After a provisional solution is obtained using a relatively inexpensive time integration scheme, the solution is iteratively improved using a deferred
correction scheme. To further improve parallel efficiency, the PFASST algorithm uses a hierarchy of discretizations at different spatial and temporal resolutions and employs an analog of the
multi-grid full approximation scheme to transfer information between the discretizations.
Peter Grassberger (University of Calgary)
"Who said that he understands percolation?"
Although percolation theory was considered a mature subject several years ago, recent progress has changed this radically. While "classical" or "ordinary" percolation (OP) is a second order phase
transition between long range connectivity and disconnectedness on diluted regular lattices or random graphs, examples have now been found where this transition can range from infinite order to
first order. The latter is of particular importance in social sciences, where first order percolation transitions show up as a consequence of synergistic effects, and I will point out analogies
with the relationship between percolation and rough interfaces in physics. Another case where first order percolation transitions show up is interdependent networks, although first claims about
this have to be substantially modified -- in some cases of interdependent networks the transition is second order but in a new universality class. A similar but even more unexpected result holds
for so-called "Achlioptas processes" that were originally claimed to show first order transitions, but which actually show second order transitions with a completely new type of finite-size scaling. Finally, I will present "agglomerative percolation" (AP), a model originally introduced to understand the claim that network renormalization can demonstrate the fractality of some small-world networks. Due to a spontaneously broken symmetry on bipartite graphs, AP leads e.g. to different scaling behaviors on square and triangular 2-d lattices, in flagrant violation of universality.
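The connectivity transition of ordinary percolation is easy to experiment with. Below is a toy sketch (illustrative parameters; for 2-d site percolation p_c ≈ 0.5927) estimating the probability of a top-to-bottom spanning cluster with union-find:

```python
import random

def find(parent, x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]   # path halving
        x = parent[x]
    return x

def union(parent, a, b):
    ra, rb = find(parent, a), find(parent, b)
    if ra != rb:
        parent[ra] = rb

def spans(L, p, rng):
    """One sample: does an L x L site-percolation lattice span top to bottom?"""
    occ = [rng.random() < p for _ in range(L * L)]
    top, bot = L * L, L * L + 1          # two virtual terminal nodes
    parent = list(range(L * L + 2))
    for i in range(L * L):
        if not occ[i]:
            continue
        r, c = divmod(i, L)
        if r == 0:
            union(parent, i, top)
        if r == L - 1:
            union(parent, i, bot)
        for nr, nc in ((r + 1, c), (r, c + 1)):   # right and down neighbors
            if nr < L and nc < L and occ[nr * L + nc]:
                union(parent, i, nr * L + nc)
    return find(parent, top) == find(parent, bot)

rng = random.Random(1)
crossing = {p: sum(spans(20, p, rng) for _ in range(200)) / 200
            for p in (0.4, 0.75)}
print(crossing)
```

Well below p_c the spanning probability is near 0, well above it near 1; sharpening of this step with lattice size L is the second-order transition referred to above.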
Chen Greif (University of British Columbia)
Numerical Solution of Saddle-Point Linear Systems
Constrained partial differential equations and optimization problems typically require solving special linear systems known as saddle-point systems. When the matrices are very large and
sparse, iterative methods must be used. A challenge here is to derive and apply solution methods that exploit the properties and the structure of the given discrete operators, and yield fast
convergence while imposing reasonable computer storage requirements. In this talk I will provide an overview of solution techniques. In particular, we will discuss effective preconditioners and
their spectral properties, bounds on convergence rates, and computational challenges.
Ray Kapral (Toronto)
Mesoscopic Dynamics of Biopolymers and Protein Machines
The dynamics of polymers and biopolymers in solution and in crowded molecular environments which mimic some features of the interior of a biochemical cell will be discussed. In particular, the
dynamics of protein machines that utilize chemical energy to effect cyclic conformational changes will be described. The investigation of the dynamics of such complex systems requires knowledge
of the time evolution on physically relevant long distance and time scales. This often necessitates a coarse grained or mesoscopic treatment of the dynamics. A particle-based mesoscopic dynamical
method, multiparticle collision dynamics, which conserves mass, momentum and energy, has been constructed and utilized to study the dynamics of these systems. The dynamics can be described by a
Markov chain model in the full phase space of the system and, using projection operator or other methods, can be shown to lead to the continuum Navier-Stokes or reaction-diffusion descriptions on
long distance and time scales.
Vince Lyzinski (Johns Hopkins)
Strong Stationary Duality for Diffusions
Strong stationary duality has had a wide-ranging impact on Markov chain theory since its conception by Diaconis and Fill in 1990. Its diverse applications range from perfect sampling extensions
of Markov Chain Monte Carlo to the establishment of cut-off phenomena for wide classes of Markov chains. We extend the idea of strong stationary duality to one-dimensional diffusion processes and
in doing so recover some classical Markov chain results in the diffusion setting. This is joint work with my PhD advisor James Allen Fill.
Colin Macdonald (University of Oxford)
Mathematics and Algorithms for Simple Computing on Surfaces
The Closest Point Method is a simple numerical technique for solving partial differential equations (PDEs) on curved surfaces or other general domains. The method works by embedding the surface
in a higher-dimensional space and solving the PDE in that space. Numerically, we can use simple finite difference and interpolation schemes on uniform Cartesian grids. This presentation provides
a brief overview of the Closest Point Method and outlines some current results. In the spirit of a minisymposium that is examining links between continuous and discrete computational mathematics,
I will discuss the issue of finding a narrow band surrounding a complicated surface (this is useful for an efficient implementation) and how we approach this (discrete) problem.
Joint work with Tom Maerz, Ingrid von Glehn, and Yujia Chen (Oxford) and Steve Ruuth (SFU).
Felicia Magpantay (York University)
Stability of backward Euler applied to a model state dependent delay differential equation
We consider the stability properties of the backward Euler method for delay differential equations (DDEs) with respect to a test equation. We consider two cases: (i) constant delay (ii) state
dependent delay. Runge-Kutta methods applied to DDEs have been studied by many authors who have mainly considered stability regions independent of the delay and/or require the step-size to be a
submultiple of the delay (for the constant delay case). These assumptions put too much restriction on the method and cannot be used in the state dependent case. We direct attention to the
dependence of the stability regions to the delay function and present results that use general step sizes. The techniques used are derived from the method of Lyapunov functionals to directly
prove the stability of the zero solution. We also prove the local stability of backward Euler, for any stepsize used, in an important case.
Joint work with A.R. Humphries and N. Guglielmi.
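For intuition on the ODE case only (not the DDE test equation of the talk): backward Euler applied to y' = λy has amplification factor 1/(1 − hλ), whose magnitude is below 1 for every stepsize h > 0 whenever Re(λ) < 0:

```python
# Backward Euler on y' = lam*y gives y_{n+1} = y_n + h*lam*y_{n+1},
# i.e. y_{n+1} = y_n / (1 - h*lam).  Check |amplification| < 1 for several h.
lam = -5.0
amps = {h: abs(1.0 / (1.0 - h * lam)) for h in (0.1, 1.0, 10.0)}
print(all(a < 1.0 for a in amps.values()))
```

Delay and state dependence break this simple picture, which is why the talk's stepsize-independent stability results are nontrivial.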
Peter Minev (University of Alberta)
A Direction Splitting Algorithm for Flow Problems in Complex/Moving Geometries
An extension of the direction splitting method for the incompressible Navier-Stokes equations proposed in [1], to flow problems in complex, possibly time dependent geometries will be presented.
The idea stems from the idea of the fictitious domain/penalty methods for flows in complex geometry. In our case, the velocity boundary conditions on the domain boundary are approximated with a
second-order of accuracy while the pressure subproblem is harmonically extended in a fictitious domain such that the overall domain of the problem is of a simple rectangular/parallelepiped shape.
The new technique is still unconditionally stable for the Stokes problem and retains the same convergence rate in both, time and space, as the Crank-Nicolson scheme. A key advantage of this
approach is that the algorithm has a very impressive parallel performance since it requires the solution of one-dimensional problems only, which can be performed very efficiently in parallel by a
domain-decomposition Schur complement approach. Numerical results illustrating the convergence of the scheme in space and time will be presented. Finally, the implementation of the scheme for
particulate flows will be discussed and some validation results for such flows will be presented.
[1] J.L. Guermond, P.D. Minev, A new class of massively parallel direction splitting for the incompressible Navier-Stokes equations. Computer Methods in Applied Mechanics and Engineering, 200 (2011), 2083–2093.
Enzo Orlandini (Padova)
Dynamics of knotted polymers
Knots are frequent in long polymer rings at equilibrium and it is now well established that their presence can affect to some extent the static properties of the chain. On the other hand, the
presence of topological constraints (knots) in circular and linear polymers may influence also their dynamical properties. This has been indeed shown in recent experiments where the motion of a
single knotted DNA has been followed within a viscous solution and in the presence of a stretching force. These experiments raise interesting challenges to the theoretical comprehension of the
problem, an issue which is still in its infancy. As a first step towards the understanding of the mechanism underlying the mobility of a knot along the ring backbone and its effect on the overall
polymer dynamics, we investigate, by Monte Carlo and molecular dynamics simulations, the dynamics of knotted rings under good solvent conditions. By using an algorithm that detects the position and
size of the knot we are able to monitor the motion of the topologically entangled sub region both in space and along the ring backbone.
This allows identifying in knotted rings a novel, slow topological timescale, whose origin can be related to a self-reptation motion of the knotted region.
For open chains, knotted configurations no longer represent an equilibrium state. However, under suitable conditions (for example, very tight knots or quite rigid chains), knotted metastable states persist for a very long time, and a statistical description of their dynamical properties is then possible. By performing off-lattice molecular dynamics simulations of a semiflexible polymer, we estimate the average lifetime and the survival probability as a function of the initial conditions (size of the initial knot) and knot type. This analysis has been extended to the case in
which the linear polymer is subject to an external stretching force.
George Patrick (University of Saskatchewan)
Automatically generated variational integrators
Many fundamental physical systems have variational formulations, such as mechanical systems in their Lagrangian formulation. Discretization of the variational principles leads to (implicit)
symplectic and momentum preserving one step integration methods. However, such methods can be very complicated.
I will describe some advances in the basic theory of variational integrators, and a software system called AUDELSI, which converts any ordinary one step method into a variational integrator of
the same order.
Thomas Prellberg (Queen Mary University London)
Rare event sampling with stochastic growth algorithms
We discuss uniform sampling algorithms that are based on stochastic growth methods, using sampling of extreme configurations of polymers in simple lattice models as a motivation. We shall show
how a series of clever enhancements to a fifty-odd year old algorithm, the Rosenbluth method, leads to a suite of cutting-edge algorithms capable of uniform sampling of equilibrium statistical
mechanical systems of polymers in situations where competing algorithms failed to perform well. Examples range from collapsed homo-polymers near sticky surfaces to models of protein folding.
Oluwaseun Sharomi (University of Saskatchewan)
The Significance of Order in the Numerical Simulation of the Bidomain Equation in Parallel
The propagation of electrical activity in the heart can be modelled by the bidomain equations. However to obtain clinically useful data from the bidomain equations, they must be solved with many
millions of unknowns. Naturally, to obtain such data in real time is the ultimate goal, but at present we are still an order of magnitude or two away from being able to do so. The spectral/hp
element method can be considered to be a high-order extension of the traditional finite or spectral element methods, where convergence is not only possible through reducing the mesh size h
but also through increasing the local polynomial order p of the basis functions used to expand the solution. We are interested in evaluating the effectiveness of a high-order method in serial
against that of a low-order method in parallel. We find that high-order methods in serial can outperform low-order methods in parallel. These findings suggest software developers for the bidomain
equations should not forego the implementation of high-order methods in their efforts to parallelize their solvers.
Erkan Tuzel (Physics, Worcester Polytechnic Institute)
Constrained Polymer Dynamics in a Mesoscale Solvent
While modeling polymer solutions, the presence of multiple time scales, such as the intermolecular bond potentials, makes quantitative analysis of results difficult, and simulations costly. Here,
we show how these degrees of freedom can be replaced by rigid bond constraints as commonly done in Brownian dynamics for polymers embedded in a recently introduced hydrodynamic solvent known as
Stochastic Rotation Dynamics (SRD) (or Multi-Particle Collision Dynamics - MPCD). We then discuss applications of this approach to various systems of biological interest.
EJ Janse van Rensburg (York University)
Statistics of knotted lattice polygons
In this talk I discuss the implementation of the GAS algorithm using BFACF elementary moves which we implement to sample knotted polygons in the cubic lattice. The GAS algorithm is an approximate
enumeration algorithm, and I show how it can be implemented to estimate the number of distinct polygons of given length and fixed knot types. The numerical results we obtain make it possible to
examine the scaling of knotted lattice polygons. For example, our data indicate that unknotted polygons dominate cubic lattice polygon statistics up to lengths of about 170,000 steps; thereafter, polygons whose knot type is the trefoil become more numerous. In addition, the relative frequencies of various knot types can be determined -- we found that trefoil knots are about 28 times more
likely to occur than figure eight knots in long polygons.
Contributed Talks
*Ken Roberts, Western University
Coauthors: S. R. Valluri, Muralikrishna Molli, M. ChandraSekhar, K. Venkataramaniah, P. C. Deshmukh
A Study of Polylogarithmic Equations for Thermoelectric Nanomaterials
In the design of thermoelectric nanomaterials, various expressions arise which involve the polylogarithms. A computational study of some of those equations has led to curious results, suggesting
additional properties to be explored, either in the mathematics of polylogarithms or in the statistical mechanics underlying the material models that lead to polylogarithms.
We will present a progress report on our efforts to explore and understand these relationships. There are possibly some insights to be gained into the statistical mechanics of thermoelectric
materials, via utilizing polylogarithms of complex order, or via generalizing polylogarithms to a form related to the Dirichlet L-series in analytic number theory.
Thomas Humphries
Department of Mathematics and Statistics, Memorial University of Newfoundland
Coauthors: Ronald Haynes, Lesley James
Simultaneous optimization of oil well placement and control using a hybrid global-local strategy
Two important decisions in maximizing production from an oil field are where to place injection and production wells, and how to control the flow rates at these wells. In this presentation we
address the reservoir optimization problem using an automatic approach that hybridizes two black-box optimization methods: particle swarm optimization (PSO) and generalized pattern search (GPS).
PSO provides a semi-random, global exploration of search space, while GPS systematically explores local regions of space. We present simulation results showing that this hybridized approach
outperforms the independent application of PSO to this problem.
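A bare-bones PSO sketch (purely illustrative; not the paper's reservoir-simulation setup) minimizing a toy objective, showing the semi-random global exploration the hybrid builds on:

```python
import random

def pso(f, dim=2, n=30, iters=200, seed=1):
    """Minimize f over [-5, 5]^dim with a standard inertia-weight PSO."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]                 # personal bests
    gbest = min(pbest, key=f)[:]                # global best
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
                if f(pos[i]) < f(gbest):
                    gbest = pos[i][:]
    return gbest

best = pso(lambda x: sum(v * v for v in x))     # 2-d sphere function
print(best)
```

In the hybrid described above, a pattern search would then refine such a candidate locally.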
Erin Moulding, Department of Mathematics, UBC
Coauthors: Chen Greif, UBC, and Dominique Orban, École Polytechnique
Bounds on the Eigenvalues of 3x3 Block Matrices arising from Interior-Point Methods
Interior-point methods feature prominently in the solution of constrained optimization problems, and involve solving linear systems with a sequence of 3x3 block matrices which become increasingly
ill-conditioned. A review of the literature suggests that most practitioners reduce these to either 2x2 block saddle-point matrices or 1x1 block normal equations. In this talk we explore whether
it pays off to perform such reductions. We use energy estimates to obtain bounds on the eigenvalues of the unreduced matrix, which indicate that in terms of spectral structure, it may be better
to keep the matrix in its original form.
Wayne Enright , University of Toronto
Coauthors: Bo Wang
Parameter Estimation for ODEs using a Cross-Entropy Approach
Parameter estimation for ODEs and DDEs is an important topic in numerical analysis. In this paper, we present a novel approach to address this inverse problem. Cross-entropy algorithms are general algorithms which can be applied to solve global optimization problems. The main steps of cross-entropy methods are first to generate a set of trial samples from a certain distribution, then to update the distribution based on these generated trial samples. To overcome the prohibitive computation of standard cross-entropy algorithms, we develop a modification combining local search techniques. The modified cross-entropy algorithm can speed up the convergence rate and improve the accuracy simultaneously.
Two different coding schemes (continuous coding and discrete coding) are also introduced. Continuous coding uses a truncated multivariate Gaussian to generate trial samples, while discrete coding
reduces the search space to a finite (but dense) subset of the feasible parameter values and uses a Bernoulli distribution to generate the trial samples (which are fixed-point approximations of
the actual parameters). Extensive numerical and real experiments are conducted to illustrate the power and advantages of the proposed methods. Compared to other existing state-of-the-art
approaches on some benchmark problems for parameter estimation, our methods have three main advantages: 1) they are robust to noise in the data to be fitted; 2) they are not sensitive to the
number of observation points (in contrast to most existing approaches); 3) the modified versions exhibit faster convergence than the original versions, so they are more efficient without
sacrificing accuracy.
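The two main steps named in the abstract (generate trial samples from a distribution, then update the distribution using the best trials) can be sketched for a generic continuous problem. The loop below is an illustrative cross-entropy minimizer on a toy quadratic objective, not the authors' actual algorithm; all tuning constants are invented.

```python
import numpy as np

def cross_entropy_minimize(f, dim, n_samples=100, n_elite=10, n_iters=50, seed=0):
    """Generic cross-entropy minimization with a Gaussian sampling distribution.

    Each iteration (1) draws trial samples from N(mu, diag(sigma^2)) and
    (2) refits mu and sigma to the elite (lowest-cost) trials.
    """
    rng = np.random.default_rng(seed)
    mu, sigma = np.zeros(dim), np.full(dim, 5.0)
    for _ in range(n_iters):
        samples = rng.normal(mu, sigma, size=(n_samples, dim))
        costs = np.array([f(x) for x in samples])
        elite = samples[np.argsort(costs)[:n_elite]]
        mu, sigma = elite.mean(axis=0), elite.std(axis=0) + 1e-8
    return mu

# Toy objective: recover the "parameters" (3, -2) of a quadratic bowl.
best = cross_entropy_minimize(lambda x: (x[0] - 3.0)**2 + (x[1] + 2.0)**2, dim=2)
print(best)
```

The abstract's modification (adding local search and alternative coding schemes) would replace or augment the sampling step; the skeleton above only shows the plain sample-then-refit cycle.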
Ivona Bezakova (RIT)
Counting and sampling minimum cuts in weighted planar graphs
We will discuss two minimum cut problems in weighted planar graphs: minimum source-sink cuts and contiguous minimum single-source-multi-sink cuts.
A source-sink cut is a set of vertices containing the source vertex and not the sink vertex (or, in the case of multiple sinks, not containing any of the sink vertices). A cut is minimum if the sum
of the weights of the cut edges, connecting a vertex in the cut set with a vertex outside the cut set, is the smallest possible. A cut is contiguous if the cut set can be separated from the remaining
vertices by a simply connected planar region whose boundary intersects only the cut edges.
We will present an O(n^2) algorithm counting all minimum source-sink cuts in weighted planar graphs, where n is the number of vertices. We will also sketch an O(n^3) algorithm counting all contiguous
minimum single-source-multi-sink cuts. In both cases, having completed the counting part, subsequent sampling is very fast: a uniformly random cut can be produced in additional linear time.
The counting algorithms share a common outline. First, we reduce the problem to the problem of counting a different type of cuts in an unweighted planar directed acyclic graph (these cuts can also be
thought of as maximal antichains in the corresponding partially ordered set). These cuts correspond to certain cycles in the planar dual graph and we employ dynamic programming to count them. We will
discuss the first algorithm in detail and briefly sketch the issues encountered by the contiguity requirement.
Minimum source-sink and contiguous minimum single-source-multi-sink cuts have applications in computer vision and medical imaging, where the underlying graph often forms a 2D grid (lattice).
Based on joint work with Adam Friedlander and Zach Langley.
Daniel Stefankovic (Rochester)
Connection between counting and sampling
Counting problems arise in a wide variety of areas, for example, computer science, enumerative combinatorics, and statistical physics (estimating the value of a partition function is a counting
problem). I will talk about the following aspect of approximate counting: given a sampling algorithm, how can one efficiently translate it into a counting algorithm?
back to top | {"url":"http://www.fields.utoronto.ca/programs/scientific/11-12/CAIMS_SCMAI/computational-and-discrete-math.html","timestamp":"2014-04-21T14:54:30Z","content_type":null,"content_length":"55378","record_id":"<urn:uuid:9d55d497-bf0e-4f9e-98c6-02b8f04f7d71>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00561-ip-10-147-4-33.ec2.internal.warc.gz"} |
[FOM] Ultrafinitism
Walter Read read at csufresno.edu
Tue Feb 10 13:35:53 EST 2009
It seems to me that the response to Jean Paul Van Bendegem's comment on ultrafinitism
has been going down a less interesting path, concentrating on parses of "writing down a
numeral". I think that the more interesting issue is distinguishing "any numeral" from "all
numerals", i.e., unbounded from infinite or, in older terminology, "potential" from "actual"
infinities. Since Cantor - more precisely, since a generation or so after Cantor - these
concerns have largely faded away, but at one time they engaged the best minds of the era.
Walt Read
Computer Science, MS ST 109
CSU, Fresno
Fresno, CA 93740
Email: read at csufresno.edu
Tel: 559 278 4307
559 278 4373 (dept)
Fax: 559 278 4197
----- Original Message -----
From: Alex Blum <blumal at mail.biu.ac.il>
Date: Tuesday, February 10, 2009 8:10 am
Subject: [FOM] Ultrafinitism
To: Foundations of Mathematics <fom at cs.nyu.edu>
> Jean Paul Van Bendegem presents a putative counterexample to a
> generalization of mathematical induction. He writes, in part:
> "(a) I can write down the numeral 0 (or 1, does not matter),
> (b) for all n, if I can write down n, I can write down n+1 (or the
> successor of n),
> hence, by mathematical induction,
> (c) I can write down all numerals."
> Keith Brian Johnson questions (b), for, he writes: "One might have
> just
> enough time to write down some large number n before dying, but not
> enough time to write down n[+1]. Or one might run out of paper (or
> the
> amount of material in the universe might limit how many numbers
> could
> actually be written down). Or one might be limited, when
> conceiving of
> numbers, by his own brain's finitude. So, as a practical matter,
> (b)
> might be false. Naturally, I would think the argument should be
> so formulated as to render such practical considerations
> irrelevant,
> e.g., with an "in principle" inserted: for all n, if I can in
> principle
> write down n, then I can in principle write down n+1. I.e., if a
> hypothetical being unconstrained by spacetime limitations or mental
> finitude could conceive of n, then that being could conceive of
> n+1.
> (Whether such a being *would* conceive of n+1 is unimportant; what
> matters is that there is no mathematical reason why he couldn't.)
> Similarly, it's clearly false that I personally physically can
> write
> down all numerals, but "I can, in principle, write down all
> numerals,"
> where "in principle" is so construed as to leave me unconstrained
> by
> spacetime limitations or mental finitude, doesn't seem similarly
> false
> (unless one picks on the notion on writing down numerals as
> necessarily
> physical, in which case I would replace my writing down of numerals
> by that hypothetical being's conception of numbers)."
> ...
> The properties of numbers in mathematical induction hold of numbers
> irrespective of how they are named. Since a number may be named
> in a
> notation which could never be completed, (b), even if true,
> need not
> be true, and thus the predicate 'I can write down the numeral' is
> inappropiate for mathematical induction.
> Alex Blum
> _______________________________________________
> FOM mailing list
> FOM at cs.nyu.edu
> http://www.cs.nyu.edu/mailman/listinfo/fom
More information about the FOM mailing list | {"url":"http://www.cs.nyu.edu/pipermail/fom/2009-February/013401.html","timestamp":"2014-04-17T21:59:17Z","content_type":null,"content_length":"6621","record_id":"<urn:uuid:81fe849b-0ed0-4b66-a143-99b318410c07>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00229-ip-10-147-4-33.ec2.internal.warc.gz"} |
Blowin' in the wind: solution
March 2008
In last issue's Outer space,
we developed a formula for the power output per unit time of a windmill:
The answer lies in understanding that the average speed is not a good guide. If, for example, the wind speed was
This is four times more than what we got in the original calculation.
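Under the cubic law for wind power (P = k v^3, with k lumping together air density, rotor area and efficiency), the factor of four arises, for example, when the wind is calm half the time and blows at twice the average speed for the other half: the average speed matches a steady wind, but the delivered power is four times larger. A quick numerical check with illustrative constants:

```python
# Wind power scales with the cube of wind speed: P = k * v**3.
k = 1.0       # lumps air density, rotor area and efficiency (illustrative)
v_bar = 6.0   # average wind speed in m/s (illustrative)

# Steady wind at the average speed:
steady_power = k * v_bar**3

# Intermittent wind: calm half the time, 2*v_bar the other half.
# Its average speed is still v_bar, but its average power is not.
intermittent_power = 0.5 * k * 0.0**3 + 0.5 * k * (2.0 * v_bar)**3

print(intermittent_power / steady_power)  # 4.0
```

The ratio is independent of k and v_bar, since both powers scale by the same k * v_bar**3 factor.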
Back to Outer space
Submitted by Anonymous on September 14, 2012.
If forward wind speeds are 4, 6, 8, 10 m/sec What is the tip speed of a 40m blade at each input rate? (formula?) | {"url":"http://plus.maths.org/content/plus-magazine-12","timestamp":"2014-04-16T10:16:31Z","content_type":null,"content_length":"24601","record_id":"<urn:uuid:81d39c94-c2ee-4526-83ab-eff28f3fddc6>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00574-ip-10-147-4-33.ec2.internal.warc.gz"} |
Geometry Problems and Questions with Answers for Grade 9
Grade 9 geometry problems and questions with answers are presented. These problems deal with finding the areas and perimeters of triangles, rectangles, parallelograms, squares and other shapes.
Several problems on finding angles are also included. Some of these problems are challenging and require a good understanding of the problem before attempting to find a solution. Solutions and
detailed explanations are also included.
1. Angles A and B are complementary and the measure of angle A is twice the measure of angle B. Find the measures of angles A and B.
2. ABCD is a parallelogram such that AB is parallel to DC and DA parallel to CB. The length of side AB is 20 cm. E is a point between A and B such that the length of AE is 3 cm. F is a point
between points D and C. Find the length of DF such that the segment EF divide the parallelogram in two regions with equal areas.
3. Find the measure of angle A in the figure below.
4. ABC is a right triangle. AM is perpendicular to BC. The size of angle ABC is equal to 55 degrees. Find the size of angle MAC.
5. Find the size of angle MBD in the figure below.
6. The size of angle AOB is equal to 132 degrees and the size of angle COD is equal to 141 degrees. Find the size of angle DOB.
7. Find the size of angle x in the figure.
8. The rectangle below is made up of 12 congruent (same size) squares. Find the perimeter of the rectangle if the area of the rectangle is equal to 432 square cm.
9. ABC is a right triangle with the size of angle ACB equal to 74 degrees. The lengths of the sides AM, MQ and QP are all equal. Find the measure of angle QPB.
10. Find the area of the given shape.
11. Find the area of the shaded region.
12. The vertices of the inscribed (inside) square bisect the sides of the second (outside) square. Find the ratio of the area of the outside square to the area of the inscribed square.
Answers to the Above Questions
1. measure of A = 60 degrees, measure of B = 30 degrees
2. length of DF = 17 cm
3. measure of A = 87 degrees
4. size of angle MAC = 55 degrees
5. size of angle MBD = 72 degrees
6. size of angle DOB = 93 degrees
7. size of angle x = 24 degrees
8. perimeter of large rectangle = 84 cm
9. measure of angle QPB = 148 degrees
10. area of given shape = 270 square cm
11. area of shaded region = 208 square cm
12. ratio of area of outside square to area of inscribed square = 2:1
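Answers 1 and 12 can be verified directly; the short script below (not part of the original page) checks the angle equations and the 2:1 area ratio.

```python
# Answer 1: A and B are complementary (A + B = 90) with A = 2B.
B = 90 / 3     # from 2B + B = 90
A = 2 * B
print(A, B)    # 60.0 30.0

# Answer 12: a square whose vertices bisect the sides of an outer square.
s = 2.0                                  # outer side length (any value works)
outer_area = s * s
inner_area = (s / 2)**2 + (s / 2)**2     # inner side squared, by Pythagoras
print(outer_area / inner_area)           # 2.0, i.e. the 2:1 ratio
```

The remaining answers depend on the figures, which are not reproduced here, so they cannot be checked the same way.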
More Middle School Math (Grades 6, 7, 8, 9) - Free Questions and Problems With Answers
More Primary Math (Grades 4 and 5) with Free Questions and Problems With Answers
Author - e-mail
Home Page
Updated: 4 April 2009 (A Dendane) | {"url":"http://www.analyzemath.com/middle_school_math/grade_9/geometry.html","timestamp":"2014-04-20T21:22:52Z","content_type":null,"content_length":"10020","record_id":"<urn:uuid:3d36a868-f8fb-4086-86f5-d99f0712ebe8>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00523-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Forum Discussions
Topic: [MATHEDU] Re: Common Finals
Replies: 2 Last Post: May 27, 1997 7:05 AM
Re: [MATHEDU] Re: Common Finals
Posted: May 27, 1997 12:22 AM
It appears to me that the discussion of *common finals*
has splintered into directions that no longer have much to do with
the original question.
The original question concerned the process where a department
wishes to have some inputs on the pros and cons of scheduling a
final examination in a multiple-sectioned beginning level
calculus course at a *common* time.
Pros and cons should be accompanied by a fairly detailed description
of the local scene. What works at one location may be totally
unreasonable at another location. Philosophical and theoretical
discussions without any accompanying scenario do not provide much
Many departments may decide, for administrative
reasons, to give one set of examination questions to all the
students taking the same course. In fact, there is no reason
why this has to be the case. Indeed, it is quite common that
several versions of the exams questions may be given to lessen
the chance of students copying from neighbors in crammed spaces.
In universities with adequate resources, it is perfectly feasible
for the exams to be customized by the individual teachers so that
the only thing that is *common* about the exam is the *time*.
Moreover, the weighting of the final exam can also be customized. There
is really no shortage of ideas which allow faculty members to implement
As for trusting individual teachers, in institutions where
most of the teaching/learning take place in recitations
staffed by teaching assistants, there is often a tremendous
variation in the past experiences of the teachers. It is
often quite helpful that the less experienced teachers are
not burdened with the decision on the make up of a final
examination. Of course, input from all the teaching staff
should be heard. One can easily include choices on the
exam. For example, students can be told that there are
11 questions on the exam, 10 would be considered as a *perfect*
score and the *11-th* would be considered as *bonus*. A
*common content* can, in fact, be a fair test on how well
students have mastered the content of the course because
the test may include problems that had not been touched
by the individual teachers.
Having a common time often has the following advantages:
Students taking the exam at a later time would no longer
be spending a large amount of time trying to get hold
of the content of the earlier exams.
Students taking the exam at an earlier time would no longer
spend time dreaming up excuses in order to take the exam
at a later time.
In the case of a *common content*, the question of *uniform grading*
may be dealt with by having a *grading party* where each teacher
is assigned to grading one problem for all the sections (it could be
increased to two or three depending on the number of problems and
the number of teachers). With the distribution of a grading key,
it is not all that difficult to achieve *reasonable uniformity in
grading*. One rationale for *uniform content* is often connected
with the fact that a beginning course is most likely to be a
pre-requisite or co-requisite to other courses. As such, *final*
exam has more to do with the fact that it is the *last* exam for
that course, rather than *the last* exam which tests for a
comprehensive understanding of the field. In addition,
one should also note that U.S. is a litigation-minded society. There
are many cases where students lodge official complaints about unfair
grades. In such cases, having a *final exam* in a course provides
some evidence in terms of student performance in the course.
In some U.S. institutions, undergraduate students may have to
pass a *comprehensive* exam in their major subject. Unlike many of
the European universities, U.S. students do not usually declare a
major until the end of their second year. In many cases, students
are still taking courses to satisfy their *distribution* requirements
during their junior and senior year. Thus, a comprehensive exam may
not be appropriate.
Ultimately, the question on how best to assess the students
is best dealt with on a *local basis*.
Han Sah, sah@math.sunysb.edu
This is an unmoderated distribution list discussing post-calculus teaching
and learning of mathematics. Please keep postings thoughtful and productive.
No cute one-liners please----David.Epstein@warwick.ac.uk
Get guidelines before posting: email majordomo@warwick.ac.uk saying
get mathedu guidelines
(Un)subscribe to mathedu(-digest)by email to majordomo@warwick.ac.uk saying:
(un)subscribe mathedu(-digest) <type in your email address here>
Date Subject Author
5/26/97 [MATHEDU] Re: Common Finals Matthias Kawski
5/27/97 Re: [MATHEDU] Re: Common Finals Chih-Han sah
5/27/97 Re: [MATHEDU] Re: Common Finals Colin Johnson | {"url":"http://mathforum.org/kb/thread.jspa?threadID=433313&messageID=1371897","timestamp":"2014-04-17T21:35:19Z","content_type":null,"content_length":"23103","record_id":"<urn:uuid:ae682fe1-15d4-4c21-a132-822ebf660c0f>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00314-ip-10-147-4-33.ec2.internal.warc.gz"} |
GENLIN fails to produce pooled parameter estimates with multiple-imputed data set
Technote (troubleshooting)
When running the GENLIN (Analyze->Generalized Linear Models) procedure on a multiple-imputed data set, I have been unable to obtain pooled estimates, although the results for the original sample
and all of the imputation samples appear to be stable and reasonable. For example, I ran a logistic regression model in GENLIN with a data set that had 5 imputation samples. Estimates were provided for
the original data and all 5 imputed samples. However, the Pooled section of the Parameter Estimates table was empty, with the footnote:
'At least one of the vectors or matrices in the model being added is different in size from the corresponding vector or matrix in previously added models. This model cannot be added.'
When I used the LOGISTIC REGRESSION command (Analyze->Regression->Binary Logistic) to run the same model on this imputed data set, that procedure produced the same parameter estimates as GENLIN for
all predictors in the original sample and all 5 imputations. The LOGISTIC REGRESSION run also printed pooled parameter estimates.
Why does GENLIN fail to produce pooled estimates for a well-defined model? Would it be appropriate to manually pool the results using Rubin's rules?
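For reference, Rubin's rules pool the per-imputation estimates and standard errors as follows: the pooled estimate is the mean of the estimates, and the pooled variance is the mean within-imputation variance plus (1 + 1/m) times the between-imputation variance. A generic sketch (plain Python rather than SPSS syntax, with invented numbers):

```python
import math

def pool_rubin(estimates, std_errors):
    """Pool estimates from m imputations using Rubin's rules."""
    m = len(estimates)
    q_bar = sum(estimates) / m                             # pooled point estimate
    w = sum(se**2 for se in std_errors) / m                # within-imputation variance
    b = sum((q - q_bar)**2 for q in estimates) / (m - 1)   # between-imputation variance
    total_var = w + (1 + 1 / m) * b
    return q_bar, math.sqrt(total_var)

# Invented coefficients and standard errors from 5 imputations:
est, se = pool_rubin([0.52, 0.48, 0.55, 0.50, 0.51],
                     [0.10, 0.11, 0.10, 0.12, 0.10])
print(round(est, 3), round(se, 3))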
Resolving the problem
This problem was filed as a defect and fixed in Release 18.0.1. The error occurred because the pooling of estimates was halted when any of the coefficients had an empty standard error ("Std. Error")
cell. The parameter labeled "(Scale)" had no standard error in the affected models, but this condition is legitimate for many of the models available in GENLIN. The pooling was successful with the
LOGISTIC REGRESSION procedure results for the same model because that procedure does not print a "(Scale)" Parameter, so all of the nonredundant parameters had nonempty Standard Error cells.
In Release 18.0.1 and later versions, the pooling is not halted when the Scale parameter has an empty standard error cell. When the model is only available through GENLIN, applying Rubin's rules to
the nonpooled parameter estimates is a workaround for Releases prior to 18.0.1. | {"url":"http://www-01.ibm.com/support/docview.wss?uid=swg21478760","timestamp":"2014-04-16T19:22:04Z","content_type":null,"content_length":"21668","record_id":"<urn:uuid:a42b7826-5135-4631-9261-089f44b3f7ac>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00514-ip-10-147-4-33.ec2.internal.warc.gz"} |
Re: st: How to calculate the time elapsed between two transactions
Re: st: How to calculate the time elapsed between two transactions
From Svend Juul <SJ@SOCI.AU.DK>
To <statalist@hsphsun2.harvard.edu>
Subject Re: st: How to calculate the time elapsed between two transactions
Date Thu, 24 Jul 2008 23:31:20 +0200
Bea wrote:
I have data about time of transactions and I would like to calculate
the difference (i.e. how much time elapsed between two transactions).
For example I have a transaction at 09:23:03 and the next at 10:43:53,
so the time interval is 1 hour 20 minutes and 50 seconds.
Is there a procedure to calculate the difference between a transaction
and the next one for the whole sample?
If you have Stata 10, read the output from:
. help dates and times
Prior to Stata 10, there were no official commands handling time
(clock) information, but there were some unofficial functions
in the -egenmore- package. First:
. ssc install egenmore
Now you have access to some useful -egen- functions. For your
purposes, the -hms()- and -tod()- functions are useful.
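As a quick cross-check of the arithmetic in the question (written in Python purely for illustration; it is not Stata syntax), the 1 hour 20 minutes 50 seconds figure comes out directly:

```python
from datetime import datetime

# The two transaction times from the example:
t1 = datetime.strptime("09:23:03", "%H:%M:%S")
t2 = datetime.strptime("10:43:53", "%H:%M:%S")

delta = t2 - t1
print(delta)                  # 1:20:50
print(delta.total_seconds())  # 4850.0
```

In Stata the same subtraction is done on clock values produced by the functions mentioned above.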
Hope this helps
Svend Juul
Institut for Folkesundhed, Afdeling for Epidemiologi
(Institute of Public Health, Department of Epidemiology)
Vennelyst Boulevard 6
DK-8000 Aarhus C, Denmark
Phone, work: +45 8942 6090
Phone, home: +45 8693 7796
Fax: +45 8613 1580
E-mail: sj@soci.au.dk
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/ | {"url":"http://www.stata.com/statalist/archive/2008-07/msg00911.html","timestamp":"2014-04-18T18:19:16Z","content_type":null,"content_length":"6130","record_id":"<urn:uuid:f9fee466-1aee-4e4f-9807-cb51281baaf9>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00128-ip-10-147-4-33.ec2.internal.warc.gz"} |
12-08-2005 #1
Registered User
Join Date
Nov 2005
I'm having trouble with my Scheme program and I was wondering if anyone knew of any boards that deal with the Scheme programming language so that perhaps I could get some help there. Anyone know of any?
What do you need help with? Fire away the questions.
;readTillNum void -> num
(define (readTillNum)
(local [(define answer (read))]
(if (number? answer)
;usersNumbers void -> void
(define (usersNumbers)
(display "Type in a number")
(define num1 (readTillNum))
(display "Type in another number")
(define num2 (readTillNum))
(display "Type in another number")
(define num3 (readTillNum))
(define sum (sum num1 num2 num3))
(define avg (average num1 num2 num3))
(define numPos (count-positive num1 num2 num3))
(display "the sum is")
(display sum)
(display "the average is")
(display average)
(display "the number of positive numbers is")
(display numPos))
I'm trying to write a program that reads in 3 numbers from the user and prints the sum, average, and number of positive numbers... but when I try to run it I get a
define: expected only one expression for the function body, but found at least one extra part in: (define num1 (readTillNum))
error. I'm new to Scheme and I can't understand the reason for the error.
The problem is that in Scheme, a function body can only consist of one expression. In your case your function contains multiple expressions.
There's one way around this, that I know of. There may be a better way but this is the method I know:
For example:
(define (func)
The begin statement executes every statement in the block, allowing you to have infinite number of statements. Note that all but the last statement's return values are disregarded, so the
function will return the value of the last statement executed.
(define (func)
(+ 3 5)
(+ 1 2)))
Will output 3.
Also, you can't define variables in a begin statement, so you'll need a local around that if you want to define any variables:
(define (func)
(local [(define myvar 0)]
(set! myvar (readTillNum))
Also, I noticed there's a couple areas of your code that use wrong variable names (average where I think you meant avg).
;;sum num num num -> num
;(define (sum num1 num2 num3)
; (+ num1 num2 num3))
;"Examples of sum"
;(sum 0 0 0) "should be 0"
;(sum 0 0 1) "should be 1"
;(sum 0 1 1) "should be 2"
;(sum 1 2 3) "should be 6"
;(sum -1 0 1) "should be 0"
;;average num num num -> num
;(define (average num1 num2 num3)
; (/ (+ num1 num2 num3) 3))
;"Examples of average"
;(average 0 0 0) "should be 0"
;(average 0 0 3) "should be 1"
;(average 0 3 3) "should be 2"
;(average 3 3 3) "should be 3"
;(average -1 0 1) "should be 0"
;;count-positive num num num -> num
;(define posNum 0)
;(define (count-positive num1 num2 num3)
; (cond [(and (and (positive? num1) (positive? num2)) (positive? num3)) (+ posNum 3)]
; [(or (or (and (positive? num1) (positive? num2)) (and (positive? num1) (positive? num3))) (and (positive? num2) (positive? num3))) (+ posNum 2)]
; [(or (or (positive? num1) (positive? num2)) (positive? num3)) (+ posNum 1)]
; [(and (and (= num1 0) (= num2 0)) (= num3 0)) (+ posNum 0)]
; [(or (or (negative? num1) (negative? num2)) (negative? num3)) (+ posNum 0)]))
;"Examples of count-positive"
;(count-positive 0 0 0) "should be 0"
;(count-positive 0 0 3) "should be 1"
;(count-positive 3 3 0) "should be 2"
;(count-positive -3 -3 -3) "should be 0"
;(count-positive -1 0 1) "should be 1"
;(count-positive 14 4 2) "should be 3"
;(count-positive 0 0 -2) "should be 0"
;read display newline
;readTillNum void -> num
(define (readTillNum)
(local [(define answer (read))]
(if (number? answer)
;usersNumbers void -> void
(define (usersNumbers)
(local [(define sum (sum num1 num2 num3)) (define average (average num1 num2 num3))(define numPos (count-positive num1 num2 num3))]
(display "Type in a number")
(define num1 (readTillNum))
(display "Type in another number")
(define num2 (readTillNum))
(display "Type in another number")
(define num3 (readTillNum))
(display "the sum is")
(display sum)
(display "the average is")
(display average)
(display "the number of positive numbers is")
(display numPos))))
That's the entire code; I didn't send you the entire definitions window earlier. So I altered my code with your suggestions; however, that created a new problem: how do I pass parameters
that I don't have yet... any ideas? PS: thanks for your help thus far, I appreciate it.
Just define them with some default value, much like you'd do in another language.
C Version:
int main()
{
    int a = 0;
    // Input for a here
    return 0;
}
Do the same in Scheme, by keeping the original code you had, but in the local definitions assign some default value to the variables.
Thanks... here's my code thus far
;sum num num num -> num
(define (sum num1 num2 num3)
(+ num1 num2 num3))
"Examples of sum"
(sum 0 0 0) "should be 0"
(sum 0 0 1) "should be 1"
(sum 0 1 1) "should be 2"
(sum 1 2 3) "should be 6"
(sum -1 0 1) "should be 0"
;average num num num -> num
(define (average num1 num2 num3)
(/ (+ num1 num2 num3) 3))
"Examples of average"
(average 0 0 0) "should be 0"
(average 0 0 3) "should be 1"
(average 0 3 3) "should be 2"
(average 3 3 3) "should be 3"
(average -1 0 1) "should be 0"
;count-positive num num num -> num
(define posNum 0)
(define (count-positive num1 num2 num3)
(cond [(and (and (positive? num1) (positive? num2)) (positive? num3)) (+ posNum 3)]
[(or (or (and (positive? num1) (positive? num2)) (and (positive? num1) (positive? num3))) (and (positive? num2) (positive? num3))) (+ posNum 2)]
[(or (or (positive? num1) (positive? num2)) (positive? num3)) (+ posNum 1)]
[(and (and (= num1 0) (= num2 0)) (= num3 0)) (+ posNum 0)]
[(or (or (negative? num1) (negative? num2)) (negative? num3)) (+ posNum 0)]))
"Examples of count-positive"
(count-positive 0 0 0) "should be 0"
(count-positive 0 0 3) "should be 1"
(count-positive 3 3 0) "should be 2"
(count-positive -3 -3 -3) "should be 0"
(count-positive -1 0 1) "should be 1"
(count-positive 14 4 2) "should be 3"
(count-positive 0 0 -2) "should be 0"
;readTillNum void -> num
(define (readTillNum)
(local [(define answer (read))]
(if (number? answer)
;usersNumbers void -> void
(define num1 0)
(define num2 0)
(define num3 0)
(define (usersNumbers)
(local [(define sum (sum num1 num2 num3)) (define average (average num1 num2 num3))(define numPos (count-positive num1 num2 num3))]
(display "Type in a number")
(= num1 (readTillNum))
(display "Type in another number")
(= num2 (readTillNum))
(display "Type in another number")
(= num3 (readTillNum))
(display "the sum is")
(display sum)
(display "the average is")
(display average)
(display "the number of positive numbers is")
(display numPos))))
OK, so it runs through sum, average and count-positive smoothly, matching the test cases and all... when it gets to the running of usersNumbers, however, I get an error message saying
local variable used before its definition: sum
Odd... because I defined it earlier... :/
Never mind, I realized my error in the names... now it runs, but I'm getting odd results from the numbers I'm inputting...
here's the updated code...
;sum num num num -> num
(define (sum num1 num2 num3)
(+ num1 num2 num3))
"Examples of sum"
(sum 0 0 0) "should be 0"
(sum 0 0 1) "should be 1"
(sum 0 1 1) "should be 2"
(sum 1 2 3) "should be 6"
(sum -1 0 1) "should be 0"
;average num num num -> num
(define (average num1 num2 num3)
(/ (+ num1 num2 num3) 3))
"Examples of average"
(average 0 0 0) "should be 0"
(average 0 0 3) "should be 1"
(average 0 3 3) "should be 2"
(average 3 3 3) "should be 3"
(average -1 0 1) "should be 0"
;count-positive num num num -> num
(define posNum 0)
(define (count-positive num1 num2 num3)
(cond [(and (and (positive? num1) (positive? num2)) (positive? num3)) (+ posNum 3)]
[(or (or (and (positive? num1) (positive? num2)) (and (positive? num1) (positive? num3))) (and (positive? num2) (positive? num3))) (+ posNum 2)]
[(or (or (positive? num1) (positive? num2)) (positive? num3)) (+ posNum 1)]
[(and (and (= num1 0) (= num2 0)) (= num3 0)) (+ posNum 0)]
[(or (or (negative? num1) (negative? num2)) (negative? num3)) (+ posNum 0)]))
"Examples of count-positive"
(count-positive 0 0 0) "should be 0"
(count-positive 0 0 3) "should be 1"
(count-positive 3 3 0) "should be 2"
(count-positive -3 -3 -3) "should be 0"
(count-positive -1 0 1) "should be 1"
(count-positive 14 4 2) "should be 3"
(count-positive 0 0 -2) "should be 0"
;readTillNum void -> num
(define (readTillNum)
(local [(define answer (read))]
(if (number? answer)
;usersNumbers void -> void
(define num1 0)
(define num2 0)
(define num3 0)
(define (usersNumbers)
(local [(define newSum (sum num1 num2 num3)) (define newAverage (average num1 num2 num3))(define numPos (count-positive num1 num2 num3))]
(display "Type in a number")
(= num1 (readTillNum))
(display "Type in another number")
(= num2 (readTillNum))
(display "Type in another number")
(= num3 (readTillNum))
(display "the sum is")
(display newSum)
(display "the average is")
(display newAverage)
(display "the number of positive numbers is")
(display numPos))))
And no matter what I type in as the numbers... I get 0 as the answers, i.e. the sum is 0, the average is 0, and the count of positive numbers is 0.
The definitions of the variables in the local (newSum, etc.) are executed before the input; that is why they are all 0 right away. You need to set the values of these variables after they have
been input.
Please note, this help is borderline walkthrough, so I would like you to make an attempt to fix the rest of the errors yourself. Think through your program logically, comment out code that
doesn't work and comment it in slowly until your program breaks. Then figure out why it broke and fix it, one step at a time.
Well yeah, I realized this... and I have been working through it, hence the multiple postings... I mean, you really only helped me with one thing, which I wouldn't call a "walkthrough", but
if you don't want to help me anymore that's fine. I thank you for your help thus far.
anyways i figured it out myself, thanks
Many of these problems can easily be solved using some simple debugging practices. As I said, stepping through your code one line at a time is the best way to do so. The only way to get better at
debugging your own code is to actually do it.
>>anyways i figured it out myself, thanks
Why would any sane language insist on wrapping every statement in brackets?
| {"url":"http://cboard.cprogramming.com/brief-history-cprogramming-com/73309-scheme.html","timestamp":"2014-04-17T08:45:58Z","content_type":null,"content_length":"104645","record_id":"<urn:uuid:f1fc8d4e-fec0-4864-a514-2fc3dd8b9f45>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00250-ip-10-147-4-33.ec2.internal.warc.gz"} |
[R] paraPen in gam [mgcv 1.4-1.1] and centering constraints
Simon Wood s.wood at bath.ac.uk
Mon Feb 9 11:07:28 CET 2009
There's nothing built in to constrain the coefficients in this way, but it's
not too difficult to reparameterize to impose any linear constraint. The key
is to find an orthogonal basis for the null space of the constraint. Then it's
easy to re-parameterize to work in that space. Here's a simple linear model
based example ....
## simulated problem: linear model subject to constraint
## i.e y = X b + e subject to C b=0
X <- matrix(runif(40),10,4) ## a model matrix
y <- rnorm(10) ## a response
C <- matrix(1,1,4) ## a constraint matrix so C b = 0
## Get a basis for null space of constraint...
qrc <- qr(t(C))
Z <- qr.Q(qrc,complete=TRUE)[,(nrow(C)+1):ncol(C)]
## absorb constraint into basis...
XZ <- X%*%Z
## fit model....
b <- lm(y~XZ-1)
## back to original parameterization
beta <- Z%*%coef(b)
sum(beta) ## it works!
## If there had been a penalty matrix, S, as well,
## then it should have been transformed as follows....
S <- t(Z)%*%S%*%Z
... So provided that you have the model matrix for the term that you want to
penalize in `gam' then it's just a matter of transforming that model matrix
and corresponding penalty/ies, using a null space matrix like Z.
Note that the explicit formation of Z is not optimally efficient here, but
this won't have a noticeable impact on the total computational cost in this
context anyway (given that mgcv is not able to make use of all that lovely
sparsity in the MRF, :-( ).
Hope this helps.
On Saturday 07 February 2009 21:00, Daniel Sabanés Bové wrote:
> Dear Mr. Simon Wood, dear list members,
> I am trying to fit a similar model with gam from mgcv compared to what I
> did with BayesX, and have discovered the relatively new possibility of
> incorporating user-defined matrices for quadratic penalties on
> parametric terms using the "paraPen" argument. This was really a very
> good idea!
> However, I would like to constraint the coefficients belonging to one
> penalty matrix to sum to zero. So I would like to have the same
> centering constraint on user-defined penalized coefficient groups like
> it is implemented for the spline smoothing terms. The reason is that I
> have actually a factor coding different regions, and the penalty matrix
> results from the neighborhood structure in a Gaussian Markov Random
> Field (GMRF). So I can't choose one region as the reference category,
> because then the structure in the other regions would not contain the
> same information as before...
> Is there a way to constrain a group of coefficients to sum to zero?
> Thanks in advance,
> Daniel Sabanes
> Simon Wood, Mathematical Sciences, University of Bath, Bath, BA2 7AY UK
> +44 1225 386603 www.maths.bath.ac.uk/~sw283
More information about the R-help mailing list | {"url":"https://stat.ethz.ch/pipermail/r-help/2009-February/187686.html","timestamp":"2014-04-18T02:58:52Z","content_type":null,"content_length":"5780","record_id":"<urn:uuid:a6531410-83eb-43aa-9bb3-13db3528798d>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00342-ip-10-147-4-33.ec2.internal.warc.gz"} |
Proving that sigma_k(n) is odd if n is a square or double a square
November 23rd 2009, 08:55 PM #1
If the integer $k \geq 1$, prove that $\sigma_k(n)$ is odd if and only if $n$ is a square or double a square.
I tried the reverse direction first.
Let $n = p_1^{\alpha_1}p_2^{\alpha_2}\cdots p_z^{\alpha_z}$.
$\sigma_k(n) = (p_1^{\alpha_1 k} + p_1^{\alpha_1 k - k} + \cdots + 1)(p_2^{\alpha_2 k} + p_2^{\alpha_2 k - k} + \cdots + 1)\cdots(p_z^{\alpha_z k} + p_z^{\alpha_z k - k} + \cdots + 1)$
Now each alpha must be even if n is to be a square or double a square.
$\sigma_k(n)$ is odd only if each factor $(p_i^{\alpha_i k} + p_i^{\alpha_i k - k} + \cdots + 1)$, for $1 \leq i \leq z$, is odd, i.e. if the sums are odd. If $p_i$ is 2, then the term becomes even, and so the whole product becomes even, which doesn't make sense, or am I doing something wrong?
Thanks guys.
A suggestion
I think you've pretty much gotten the result already. As you said, for a square the exponents are all even. That means that the sum of powers
$(p^{\alpha} + p^{\alpha - 1} + \cdots + p + 1)$
is odd for odd p because alpha is even.
For the factor of 2,
$(2^{\alpha} + 2^{\alpha - 1} + \cdots + 2 + 1)$
is odd regardless of alpha, so the exponent could be even or odd.
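The statement being proved in this thread can also be checked by brute force; a quick script (naive divisor sum, so only practical for small n) confirms that $\sigma_k(n)$ is odd exactly when $n$ is a square or twice a square:

```python
import math

def sigma_k(n, k):
    # sum of the k-th powers of the divisors of n (naive, fine for small n)
    return sum(d ** k for d in range(1, n + 1) if n % d == 0)

def square_or_double_square(n):
    r = math.isqrt(n)
    s = math.isqrt(n // 2)
    return r * r == n or 2 * s * s == n

for k in (1, 2, 3):
    for n in range(1, 300):
        assert (sigma_k(n, k) % 2 == 1) == square_or_double_square(n)
print("verified for n < 300 and k = 1, 2, 3")
```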
Nov 2009 | {"url":"http://mathhelpforum.com/number-theory/116429-proving-sigma_k-n-odd-if-n-square-double-square.html","timestamp":"2014-04-25T02:12:35Z","content_type":null,"content_length":"34755","record_id":"<urn:uuid:a7f76e1a-d413-408b-b49b-06a7bfee755c>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00371-ip-10-147-4-33.ec2.internal.warc.gz"} |
st: gllapred vs gllasim take two
From <Colin.Vance@dlr.de>
To <statalist@hsphsun2.harvard.edu>
Subject st: gllapred vs gllasim take two
Date Fri, 24 Jun 2005 20:29:07 +0200
This question follows up on a previous inquiry about the difference between gllapred and gllasim but with a few additional details. I'm estimating a probit model on panel data and would like to use the results to simulate outcomes over different values of one of the explanatory variables, holding the other variables fixed at their mean. My question is whether gllapred or gllsim is a better tool for this (and my hunch is the latter).
Specifically, I'd like to do the following. 1. Estimate a panel probit
model using:
gllamm y age other1 other2, i(persid) family(binom) link(probit) nip(20)
2. Holding other1 and other2 fixed at their mean values, generate
predicted values of y over different integer values of age, say, ranging
between 18 and 65.
To implement step 2, I've created an artificial data set in which other1
and other2 are fixed at their mean values and age varies over the range
of interest. So I first estimate the model on the real data. Then I open
the artificial data and type:
gllasim pred1, linpred fsample
The above seems like the right approach given my objective, but I've noted a few interesting things:
1. the approach doesn't work using gllapred unless the dependent variable is included in the artificial data (which I guess has to do with the fact that gllapred includes empirical Bayes).
2. gllapred produces the same answer when used repeatedly, whereas gllasim always produces a different answer, suggesting that the latter may be sampling from a distribution of the parameter estimates
I guess point 2 would actually be another good reason to use gllasim in my case. I could set up a simple code to implement the command, say 1000 times, and, after taking the mean, I'd have something akin to a Monte Carlo simulation of a predicted value.
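The averaging idea above can be sketched outside Stata. The numbers below are purely hypothetical (they are not estimates from this model), but they show the mechanics of redrawing the coefficient from its sampling distribution on each replication and averaging, akin to calling gllasim repeatedly:

```python
import math
import random

random.seed(1)

# Hypothetical probit estimates, purely illustrative:
BASE = -1.5                   # intercept plus other covariates at their means
B_AGE, SE_AGE = 0.03, 0.01    # age coefficient and its standard error

def Phi(z):
    # standard normal CDF
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def mc_prediction(age, reps=2000):
    # redraw the coefficient each replication, as if simulating many
    # times, and average the resulting probabilities
    total = 0.0
    for _ in range(reps):
        b = random.gauss(B_AGE, SE_AGE)
        total += Phi(BASE + b * age)
    return total / reps

for age in (18, 40, 65):
    print(age, round(mc_prediction(age), 3))
```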
One problem is that there is very little documentation of gllasim in the manual or elsewhere, so it's hard to know if my idea is even in the ballpark. Any insights offered would be greatly appreciated.
Many thanks,
Colin Vance, Ph.D.
German Aerospace Center
Rutherfordstrasse 2
12209 Berlin | {"url":"http://www.stata.com/statalist/archive/2005-06/msg00748.html","timestamp":"2014-04-20T08:24:54Z","content_type":null,"content_length":"7113","record_id":"<urn:uuid:48a39c6a-53e7-4cf2-8907-23f1fdfc7421>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00064-ip-10-147-4-33.ec2.internal.warc.gz"} |
Physics Forums - View Single Post - Tension in a string along with a Pseudo force
1. The problem statement, all variables and given/known data
A block of mass 50g is suspended from the ceiling of an elevator.Find the tension in the string if the elevator goes up with an acceleration of [tex]1.2m/s^2[/tex]
2. Relevant equations
3. The attempt at a solution
I have reached here:... but with this I don't get the book's answer... The book takes the net acceleration to be 0.
If F be the pseudo force... then,
So solving this I get an answer that differs from the one given in a problem book... that is 55N (the book's answer).
Ive realized that we get 55N if we take [tex]a_{net}=0[/tex].Now how is this possible? | {"url":"http://www.physicsforums.com/showpost.php?p=1311538&postcount=1","timestamp":"2014-04-19T02:23:24Z","content_type":null,"content_length":"9538","record_id":"<urn:uuid:39668aa2-77e3-489c-9ec8-e0d391a1e281>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00227-ip-10-147-4-33.ec2.internal.warc.gz"} |
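The numbers can be checked directly: in the elevator's frame the pseudo-force ma acts downward and the block is in equilibrium, which is what the book means by taking the net acceleration to be 0. With m = 50 g the tension works out to 0.55 N, so the "55" quoted in the thread is presumably 0.55 N:

```python
# In the elevator's (non-inertial) frame the block is at rest, so
#   T - m*g - m*a = 0   =>   T = m*(g + a)
m = 0.050   # 50 g in kilograms
g = 9.8     # m/s^2
a = 1.2     # upward acceleration of the elevator, m/s^2

T = m * (g + a)
print(round(T, 2))   # 0.55 (newtons)
```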
Monte Sereno, CA Algebra Tutor
Find a Monte Sereno, CA Algebra Tutor
...At that time the use of hard-to-learn words will be eliminated for more commonly used words. My background is in physics and math. But I am currently teaching useful SAT techniques to 3
32 Subjects: including algebra 1, algebra 2, reading, calculus
...Calculus is so much easier for students when they understand the physical significance of derivatives and integrals, so I spend a great deal of time working on that with them. As an engineer,
it's very rewarding to be able to share the relevance of these concepts for solving practical problems s...
10 Subjects: including algebra 1, algebra 2, calculus, geometry
...Let me help your child to build the confidence they need to be successful. I'm an Australian high school mathematics and science teacher, with seven years experience, who has recently moved to
the bay area because my husband found employment here. I'm an enthusiastic teacher, who loves helping students to succeed to the best of their ability, and loves all facets of mathematics and
11 Subjects: including algebra 1, algebra 2, physics, chemistry
...We will work at your computer, not from a book. My Bachelors' degree training and my Ph.D. training each included course work in Genetics. I am the co-author of two peer-reviewed scientific
articles which revealed novel information about the genetics of South American monkeys.
17 Subjects: including algebra 1, algebra 2, chemistry, statistics
...What makes me good at tutoring? Knowing math, knowing my students, being good at drawing people out, and being good at adjusting how I teach so that it suits the unique individual I am working
with. To learn, students must feel comfortable, interested, and challenged.
22 Subjects: including algebra 2, algebra 1, English, reading
Monte Sereno, CA Trigonometry Tutors | {"url":"http://www.purplemath.com/Monte_Sereno_CA_Algebra_tutors.php","timestamp":"2014-04-17T19:41:40Z","content_type":null,"content_length":"24120","record_id":"<urn:uuid:f9b04a38-2d07-4b0d-bd3e-11a379b5cef1>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00064-ip-10-147-4-33.ec2.internal.warc.gz"} |
What is the problem with the axiom of choice?
November 2nd, 2007 by Thorsten
It is strange: on the one hand the axiom of choice is singled out as the source of non-constructivity in classical theories, on the other it is actually provable in a constructive theory like Martin-Löf's Type Theory. What is the cause of this apparent contradiction? I showed that while the proof-relevant axiom of choice is provable in Type Theory, a proof-irrelevant version
where we are not allowed to make choices based on the witness of an existential proof is not and even worse we can show that it implies the principle of excluded middle for propositional reasoning.
This is a construction due to Diaconescu which I sketched on the whiteboard. I also noticed that the countable axiom of choice or indeed ore general the axiom of choice over any type not involving
quotients (this excludes in particular the Reals) is implied by the setoid model and hence type-theoretically hunky dory. I finished with the question, how to generalize the axiom of predicative
topoi (which do not entail those choice principles) to fully reflect the setoid model.
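The proof-relevant axiom of choice mentioned above is literally a one-line program once the existential is a Σ-type; here is a sketch in Lean 4 (the names are mine, and the Type-valued P is exactly what makes this provable, in contrast to the proof-irrelevant Prop version):

```lean
-- Proof-relevant AC: if every x has a witness y with P x y packaged
-- in a Σ-type, we can compute a choice function and its correctness
-- proof simply by projecting the witness out.
def choice {A B : Type} {P : A → B → Type}
    (h : (x : A) → (y : B) × P x y) :
    (f : A → B) × ((x : A) → P x (f x)) :=
  ⟨fun x => (h x).1, fun x => (h x).2⟩
```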
I forgot to mention that my presentation was based on a discussion on the Epigram mailing list early this year, in particular in reply to an issue raised by Bas Spitters. | {"url":"http://sneezy.cs.nott.ac.uk/fplunch/weblog/?p=80","timestamp":"2014-04-18T03:01:25Z","content_type":null,"content_length":"9576","record_id":"<urn:uuid:e10b8e14-aa73-4f33-899d-e8bb7ceb1f6c>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00107-ip-10-147-4-33.ec2.internal.warc.gz"} |
Blowin' in the wind: solution
March 2008
In last issue's Outer space we developed a formula for the power output per unit time of a windmill, which grows as the cube of the wind speed v.
The answer lies in understanding that the average speed is not a good guide. If, for example, the wind speed was 2v for half of the time and zero for the other half, the average speed would still be v, but the average power would be proportional to (2v)³/2 = 4v³.
This is four times more than what we got in the original calculation.
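The arithmetic behind the factor of four can be checked directly, under the assumption (consistent with the conclusion above) that the wind blows at 2v for half the time and is calm otherwise, with power scaling as the cube of the speed:

```python
v = 10.0   # any reference wind speed, m/s

# Wind at 2v half the time, calm the other half:
avg_speed = 0.5 * (2 * v) + 0.5 * 0.0        # still v
avg_power = 0.5 * (2 * v) ** 3 + 0.5 * 0.0   # power ~ v**3
steady_power = v ** 3                        # steady wind at speed v

print(avg_speed == v)             # True
print(avg_power / steady_power)   # 4.0
```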
Submitted by Anonymous on September 14, 2012.
If forward wind speeds are 4, 6, 8, 10 m/sec What is the tip speed of a 40m blade at each input rate? (formula?) | {"url":"http://plus.maths.org/content/plus-magazine-12","timestamp":"2014-04-16T10:16:31Z","content_type":null,"content_length":"24601","record_id":"<urn:uuid:81d39c94-c2ee-4526-83ab-eff28f3fddc6>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00574-ip-10-147-4-33.ec2.internal.warc.gz"} |
Basic Statistics Tales of Distributions 10th Edition Chapter 12 Solutions | Chegg.com
A repeated-measures design doesn't always need the same subject experiencing each of the treatments. It is sufficient to have similar subjects experiencing each of the treatments. (A
repeated-measures design is actually a matched-pair design with more than 2 treatments.)
We might want to design an experiment testing the differences of learning Statistics with three different methods: an online course, a traditional classroom course, and giving the student a textbook
and learning by self-study. We would then administer the same final exam to all three treatment groups, with the final exam score as the dependent variable.
Rather than assigning subjects randomly to the 3 treatment groups, we would match students by their SAT Math score. We would match 3 people with the same SAT Math score. Then we would put one of them
in the online class, one in the traditional classroom course, and would assign the third person to the textbook self-study course. We would repeat this procedure with another group of 3 matched
students, and again, until we had the desired number of students in each of the treatment groups. This repeated-measures design would minimize the variability caused by different math abilities in
measuring the success in learning Statistics through the 3 different methods. | {"url":"http://www.chegg.com/homework-help/basic-statistics-tales-of-distributions-10th-edition-chapter-12-solutions-9780495808916","timestamp":"2014-04-21T12:53:00Z","content_type":null,"content_length":"25194","record_id":"<urn:uuid:eed785cf-aa98-4277-87a1-261feccc3972>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00196-ip-10-147-4-33.ec2.internal.warc.gz"} |
The Quadrature of the Circle and Hippocrates' Lunes - Tantalizing Prospects
Having found the means to effect the quadrature of any polygonal figure (as we saw in Elements, II.14), the inability to square the circle by similarly direct means stood as an inviting challenge to
geometers of the fifth and fourth centuries. So when Hippocrates' lune was found to be equal to a triangle, a tantalizing hope was raised that some similar analysis might effect the quadrature of the
full circle. Thus Hippocrates' result stands as one of the more important advances in geometry of that time. But the problem of the quadrature of the circle remained unresolved as no way was found to
replicate Hippocrates' success in application to the full circle. The best that could be done was to achieve approximate quadratures.
Antiphon, a contemporary of Hippocrates who taught in Athens, attempted to resolve the problem by inscribing a polygon within the circle and effecting the quadrature of the polygon by finding lower
bound approximations for the area of the circle. He performs a similar process to find an upper bound approximation for the area of the circle by circumscribing a polygon about its circumference.
Then, by successively doubling the number of sides of these polygons, one inside and the other outside the circle, the error of approximation can be reduced. Effectively, the area of the circle is
"exhausted" by taking more and more sides for these approximating polygons. Because the sources we have are quite fragmentary, we cannot be quite sure whether Antiphon believed that this procedure
would actually result in finding the exact quadrature of the circle, or whether he knew that the best this would accomplish is the determination of a pair of values which are close but never equal to
the true area.
Figure 11: "Exhausting" the area of the circle with inscribed and circumscribed polygons.
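Antiphon's squeeze is easy to reproduce numerically: for a unit circle, the inscribed regular n-gon has area (n/2)·sin(2π/n) and the circumscribed one has area n·tan(π/n), and doubling the number of sides tightens the bracket around π:

```python
import math

def inscribed(n):
    # area of a regular n-gon inscribed in the unit circle
    return 0.5 * n * math.sin(2 * math.pi / n)

def circumscribed(n):
    # area of a regular n-gon circumscribed about the unit circle
    return n * math.tan(math.pi / n)

n = 6
while n <= 96:
    # lower bound, pi, upper bound
    print(n, round(inscribed(n), 5), round(circumscribed(n), 5))
    n *= 2
```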
Another contemporary, Bryson of Heraclea, is said to have asserted that the area of the circle was at once greater than the area of any and all possible polygons that could be inscribed within it and
smaller than the area of any and all possible polygons that could be circumscribed about it. This principle would come to fruition in the work of Eudoxus of Cnidos, who working at the start of the
fourth century BCE would give the first proof of Elements, XII.2, the theorem that circles are to each other as the squares on their diameters. This method of exhaustion would be one that later
geometers would return to again and again over many centuries to apply to quadratures of a variety of curved shapes. Archimedes (in the third century BCE) used it often, to demonstrate results he had
discovered regarding the quadrature of regions like the parabolic segment^44 and the cubature^45 of the sphere. Much later, when Greek geometry was studied once again in the Europe of the sixteenth
and seventeenth centuries CE, a resurgence of interest in these methods took place. Geometers like Gregoire a Saint-Vincent (1584-1667) and his student Alfonso Antonio de Sarasa (1618-1667), both
Jesuit scientists, applied the method to show that the area between a hyperbola and its asymptote behaves like a logarithm.
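Saint-Vincent and de Sarasa's observation can be checked numerically: the area under y = 1/x from 1 to a obeys the logarithm's defining addition law, area(1..ab) = area(1..a) + area(1..b). A midpoint-rule sketch:

```python
import math

def hyperbola_area(a, steps=100000):
    # midpoint-rule estimate of the area under y = 1/x between 1 and a
    h = (a - 1.0) / steps
    return sum(h / (1.0 + (i + 0.5) * h) for i in range(steps))

# The logarithm-like additivity Saint-Vincent and de Sarasa noticed:
lhs = hyperbola_area(6.0)                        # area over [1, 2*3]
rhs = hyperbola_area(2.0) + hyperbola_area(3.0)  # areas over [1,2] and [1,3]
print(abs(lhs - rhs) < 1e-6)                             # True
print(abs(hyperbola_area(2.0) - math.log(2.0)) < 1e-6)   # True
```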
As for the original problem of the quadrature of the circle, final resolution would not come until the nineteenth century, when it was shown using the tools of modern algebra (not geometry!) that
although the Greek construction tools of straightedge and compass are capable of producing an infinite number of line segments, these segments all have lengths that belong to a restricted subset of
real numbers, and p is not in this set, making the quadrature, in its strictest form, impossible.
The work of the fifth century geometers set a course for the pursuit of quadratures that led to the generation of much new and important mathematics. As geometers came across new types of curves,
they considered new sorts of plane regions and solids and asked questions about their quadrature and cubature. This work came to a high point in the seventeenth and eighteenth centuries CE with the
development of integral calculus. The fifth century area problem was the first mathematical "program," an enterprise representing the efforts of many individuals over a long period of time, all
contributing to the understanding of a single type of problem through the development of new and more powerful mathematical methods. Even early on, as we have seen from direct reference to some of
the texts written at the time, it involved surprising discoveries, patient systematization, and the realization that to adequately resolve the problem would require not just cleverness, but
willingness to change the way in which the problem was being considered. These features have characterized progress in mathematical programs throughout history.
^44Like a circular segment, a parabolic segment is the closed region bounded on one side by an arc of a parabola and on the other by a line that cuts the parabola in two points.
^45Naturally, cubature is for three-dimensional solids what quadrature is to two-dimensional regions. The cubature of a solid is obtained by constructing a line segment equal to the side of a cube
which is equal in volume to the given solid. | {"url":"http://www.maa.org/publications/periodicals/convergence/the-quadrature-of-the-circle-and-hippocrates-lunes-tantalizing-prospects","timestamp":"2014-04-21T06:55:49Z","content_type":null,"content_length":"104934","record_id":"<urn:uuid:cdc93636-a6d7-40c4-82db-a75ea1360068>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00384-ip-10-147-4-33.ec2.internal.warc.gz"} |
Synchronizing random automata
Evgeny Skvortsov, Yulia Zaks
The conjecture that any synchronizing automaton with $n$ states has a reset word of length $(n-1)^2$ was made by Černý in 1964. Despite attracting a lot of attention from researchers, it has remained unproven since then. In this paper we study a random automaton that is sampled uniformly at random from the set of all automata with $n$ states and $m(n)$ letters. We show that for $m(n)>36 \ln n$ the random automaton is synchronizing with high probability. For $m(n)>n^\beta, \beta>1/2$ we also show that the random automaton with high probability satisfies the Černý conjecture.
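For context, the extremal family behind the conjecture is easy to check by brute force: the Černý automaton C_n (a cyclic shift together with a letter that merges state 0 into state 1) has a shortest reset word of length exactly $(n-1)^2$, which a breadth-first search over subsets of states confirms for small n:

```python
from collections import deque

def cerny(n):
    # Černý automaton C_n: 'a' is a cyclic shift, 'b' sends state 0
    # to state 1 and fixes every other state.
    a = [(i + 1) % n for i in range(n)]
    b = [1 if i == 0 else i for i in range(n)]
    return (a, b)

def shortest_reset_length(n):
    # BFS on subsets of states: length of the shortest word that
    # collapses the full state set to a single state.
    letters = cerny(n)
    start = frozenset(range(n))
    seen = {start}
    queue = deque([(start, 0)])
    while queue:
        s, d = queue.popleft()
        if len(s) == 1:
            return d
        for f in letters:
            t = frozenset(f[q] for q in s)
            if t not in seen:
                seen.add(t)
                queue.append((t, d + 1))

for n in (2, 3, 4, 5, 6):
    assert shortest_reset_length(n) == (n - 1) ** 2
print("reset lengths match (n-1)^2 for n = 2..6")
```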
Full Text:
PDF PostScript | {"url":"http://www.dmtcs.org/dmtcs-ojs/index.php/dmtcs/article/viewArticle/1456","timestamp":"2014-04-20T04:36:48Z","content_type":null,"content_length":"11217","record_id":"<urn:uuid:9ecd9736-a3d1-4228-9838-b3781f8f8583>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00473-ip-10-147-4-33.ec2.internal.warc.gz"} |
Quantitative Trading
First, an iPad version of this blog has been launched, so if you are reading this on an iPad, the look will be different. If you want to go back to the old look, just hit Page Turn in the bottom left
corner and choose the option there. Any comments or suggestions on this new look are most welcome!
Second, and this is probably irrelevant to most of you reading this blog, a Chinese translation of my book Quantitative Trading is now available.
Third, and most interesting, Larry Connors will be hosting a webinar on "How to Trade High Probability Stock Gaps" on Tuesday, May 1, 2:00pm ET. (Click on link to register.) It is sheer coincidence that I was just writing about stock gaps in my previous post! I have always found Larry's strategies to be clear, concise, and simple - exactly the ingredients for out-of-sample as opposed to in-sample returns!
59 comments:
Hi Ernest, do you have any plan to publish your second book?
I find that Connors doesn't provide risk-adjusted returns or robustness tests in his books, although I have to say that I enjoy reading them. EC, do you still reside in Toronto?
Hi Anon,
Yes, I just started writing a second book and plan to publish sometime in 2013.
ek: We shouldn't take any published strategies "as-is". We should always backtest and modify them ourselves.
I reside in Niagara-on-the-Lake.
Hi all,
Just a quick reminder about Larry's webinar tomorrow -- make sure you register online beforehand!
I missed the webinar today. Do we have it recorded? If yes, could you please put it online? Thanks a lot!
Actually, Larry postponed it till end of May. Please register online to get on the email list.
If a trader could make money with the methods he is teaching, then (a) he would not have time to teach, and (b) he would not reveal his edge.
No rational trader will ever teach a method that works. People that teach trading methods are either irrational or do not trade. There may be another class that deceives traders purposely with
losing methods and takes the opposite side, just like market makers do. This is speculation on my part.
I expect the counterarguments and I tell you that teaching trading is not like teaching math. Math is a positive sum game whereas trading is a negative sum game.
Hi Anon,
Naturally, I disagree with you!
My counter-argument is based on my own experience.
I used to work at banks and hedge funds as their prop trader. We are not allowed to discuss our strategies even with colleagues in another group. The result: I never made a dime for my employers,
nor do I know of any colleagues that have done so consistently.
Once I started to write my blog and discuss bits and pieces of my strategies, and further teach classes on the basics of these strategies, I have learned far more from my readers and "students",
so much so that I have no problem coming up with live trading profits anymore. Most of my strategies come from ideas that are triggered by readers' comments or emails. (An example was in my
previous post's comment section.)
As I emphasized in my book, nobody will disclose a profitable strategy in its entirety to you. But books and classes are still valuable because they serve as inspirations for your own ideas, and
they teach basic techniques that are valuable to any strategy.
Professional traders are often on the phone many times a day with colleagues in another institution. Do you think their colleagues will spill all their secrets to them? If not, what do they talk
about? The weather? No, the sum of information increases between the communicating parties, at the expense of traders who do not communicate. So while it is a zero-sum game, it is our own loss if
we don't give and take from other traders.
Hi Ernie, in the previous post you said: "I have seen some strategies that have the opposite behavior: poor performance prior to 2009, and stellar performance since then"
Could you give us some info?
Hi anon,
Generally, I find short-term momentum strategies didn't work as well before 2009.
Hi Ernie,
I have a couple of back-tested only strategies on the SP emini that I am not sure whether to continue investigating as I am new to quantitative trading. The holding periods would vary from a few
minutes to a few hours. Both rely on tick data.
Both enter at a price target and exit at a count of ticks. The first has a shorter holding period, and the average profit per trade is 1 tick before costs and slippage. It has an unlevered Sharpe ratio of 4. It is based upon tick bars of 5000. The second has an average profit per trade of 2 ticks, and an unlevered Sharpe ratio of 3. It is based upon tick bars of 25000.
Given your experience, are either of these worth pursuing further?
I am concerned that the profit per trade will be very tight, if it would exist at all, after I take into account the bid/ask spread, slippage, and commissions.
I have used TickData for historical data, and suppose if I did proceed I would need to begin by finding a live data feed from a brokerage house (like IB) whose data is a relatively close match to it.
Any suggestions or comments would be greatly appreciated.
Hi millman,
If your ES strategy can be implemented with limit orders, then 1 tick round trip profit is reasonable. But we don't know what opportunity cost you will incur. The only way to find out for sure is
to paper trade it, or even better, trade it live with small size.
Hello Dr Chan,
in your 2nd book, can you discuss how to do stat arbs or spreading for spot currency?
Spot currency is rather different, as by nature it is already quoted in pairs.
thank you.
Hi Anon,
Yes, in fact I have a section on trading FX pairs in the new book.
Hi M chan,
Based on your book and blog, you seem to use co-integration in pair trading. Does your coming book treat the problem of co-integrated systems with more than two variables? You have talked before about methods close to the one of Avellaneda and Lee. Have you had any luck with other methods to build such systems? Do you mention these methods in your next book?
In particular I have two problems with their method:
1/ They use a single linear equation with an ADF test instead of a VAR/VECM approach with the Johansen test.
2/ They use all the stocks (series) to build their model which results in high transaction costs.
One simple way to go about the second problem would be to use best subset, forward-backward selection or least angle regression to select the best model (all of these based on a co-integration statistic). Any thoughts on that?
Thanks in advance and sorry for the multiple questions.
Hi Zarbouzou,
Yes, in my new book I discussed using Johansen test for cointegration, which can test multiple time series together.
Typically, the Johansen test will give you a subset of time series which cointegrates best, with large coefficients for only a few stocks, so you won't have a problem with transaction costs.
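The property these tests detect can be illustrated with a toy simulation (no Johansen machinery here, just the idea): two series that individually wander, while a fixed linear combination of them stays stationary:

```python
import random

random.seed(7)

# x is a random walk; y tracks 2*x plus a stationary AR(1) spread,
# so y - 2*x mean-reverts even though both series are nonstationary.
n = 5000
x = [0.0]
spread = [0.0]
for _ in range(n - 1):
    x.append(x[-1] + random.gauss(0.0, 1.0))
    spread.append(0.9 * spread[-1] + random.gauss(0.0, 1.0))
y = [2.0 * xi + si for xi, si in zip(x, spread)]

def variance(v):
    m = sum(v) / len(v)
    return sum((t - m) ** 2 for t in v) / len(v)

resid = [yi - 2.0 * xi for xi, yi in zip(x, y)]
# The hedged combination stays bounded while the walk wanders:
print(variance(resid) < variance(x))   # True
```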
Hi all,
An update on Larry Connors' webinar on gap trading: it will happen next week. You can register here: http://presentations.tradingmarkets.com/1580186/
Thanks for your answer,
Weirdly enough, while there are numerous papers on pair trading, I can't find any paper that uses Johansen tests for mean-reverting baskets (>2 securities). Do you have any reference?
Hi Zarbouzou,
No, I am afraid I have not read any papers on using Johansen test on trading stock baskets. I am afraid you would have to do the research yourself.
Hi Ernest,
I am following your blog with great interest, thanks for sharing your ideas! As someone interested in the industry, I wonder how lucrative quant trading is for the providers of the algorithms. Say I managed a proprietary quant fund for one or more investors that generates 20% return per year. What percentage of that would I be able to keep as the algo provider? In other words, how is the return split between the providers and the investors?
For the record, I am not asking about your paycheck, more about what is common in the industry.
Hi Anon,
Typical incentive fee due the fund manager is 20% of the profits.
Hello Dr Chan,
May I know, for an intraday cointegration test, how many data points are considered necessary and sufficient?
Hi Anon,
I don't think running cointegration tests on intraday prices makes sense, since you can't concatenate all the different days' data together (assuming we are dealing with stock prices).
Hello Dr Chan,
I was thinking of testing for cointegration on m5 and higher bars for forex.
Hi Anon,
For any markets, there is no reason to test for cointegration at any frequency other than daily. Higher frequency data does not give better test statistics, since the data would be serially correlated.
Hello Dr Chan,
Many thanks. I was thinking that if we test for cointegration using different time frames, we can get different degrees of cointegration.
E.g. M5 may show 2.5 SD while H1 will show 1.5 SD, so we can make trades on M5.
Am I being mistaken?
best regards.
Does this mean that you don't trade pairs (based on cointegration) intraday, or just that you don't test for cointegration intraday?
While I theoretically agree with you about concatenation, wouldn't this be the same problem for any model-driven strategy?
Hi anon,
You can certainly trade at different time scales. But cointegration is not the way to compute that. If a time series does not cointegrate in a long time frame, it won't cointegrate even if you
increase the sampling frequency. However, you can indeed test cointegration on this high frequency data one day at a time, without testing all these days together.
Hi Z,
I certainly trade pairs intraday. I just would not use cointegration to test on the concatenated data set, since that is a meaningless procedure for my trading strategy.
I don't know what you mean by saying this is a common problem for any strategy. For strategies that hold overnight positions, it is obviously useful and important to test for cointegration on
daily prices concatenated together.
Hi Ernie,
What I meant is that if you concatenate the series of, say, 5-min bars of different days and build a model on price/return, you will have a problem fitting a model to such a series. The returns from the close of one day to the open of the next will certainly not have the same magnitude as the returns from 5-min bars, which could be seen as a deterministic heteroscedastic effect.
Completely unrelated, do you know good references (reviews are even better) on optimal order execution algorithms? I'm not thinking of very large orders but more of optimally sending orders for equity/futures quant trading.
Hello Dr Chan,
Thanks for the explanation.
Just to check if I am correct:
1) Check for cointegration on daily prices.
2) If a pair is cointegrated on the daily TF, we can trade it on an intraday TF to get better entry prices.
I have another question:
If a pair is cointegrated on the daily TF, does it mean it will also be cointegrated on the weekly TF?
best regards.
Hi Z,
I agree with exactly what you just said. If you are not trading FX, which has data 24 hours a day, it makes no sense to concatenate intraday data to test for cointegration. But that certainly does not prevent me from trading stock pairs intraday!
For optimal execution, the classic paper is "Optimal Execution of Portfolio Transactions" by Almgren and Chriss.
Hi Anon,
Yes, if a pair is cointegrated on daily prices, it should be cointegrated on weekly prices too.
Hi Ernest,
I have learned a lot from your blog and book. I have a basic question regarding the usage of data for FX trading.
Given that FX trades 24 hours a day, the open, high, low, and close data obtained might differ between a user in, say, Asia versus London versus the US.
Does it matter in backtesting which data to use, or if a strategy works on one set of data, is it bound to work on datasets from other time zones?
Hi Anon,
Yes, you should make sure that the open and close refer to 17:00 ET. If not, you have to be careful with strategies that refer to specific times of open/close.
I found your blog very useful in gaining knowledge about trading. I am a great fan of Larry Connors. I found your articles on quantitative trading very useful, and your first book is remarkable. I am eagerly waiting for your second book.
Hi Ernie,
On a slightly unrelated topic, are the subfolders and util folder for your cadf function still available on your website? I purchased your book, and went on to the premium content page but
couldn't find it there.
Hi Noah,
I did not create the cadf function. You can get it from the spatial-econometrics.com package. Remember to add ALL the subfolders of that package to your Matlab path in order to use any of its functions.
Dr Chan,
For mechanical trading systems, have you ever tried Design of Experiments techniques to test and optimize the parameters?
I don't normally spend much time optimizing parameters. I believe that optimal parameters in-sample are usually not optimal out-of-sample.
I tried the link again. Is there any place to view the recorded webinar?
The recorded webinar can be viewed at http://tradingmarkets.adobeconnect.com/p7jkbjhy3tx/
Hi Ernie,
regarding half life. You advocate regressing change in spread onto the spread as in a OU process.
How about fitting a standard AR(1) to the spread instead and compute the half life based on the ar-coefficient? Would that be correct?
Hi Thomas,
No. A mean-reverting process is represented by an error-correction model, i.e., an autoregressive model of the first differences, not of the prices themselves. So you cannot obtain the half-life from an autoregression of the prices.
See the documentation of the spatial-econometrics.com package for details.
Thanks Ernie. But an AR(1) is simply the discrete-time counterpart of an OU process. So it should be fine to fit an AR(1) to the spread (not the prices), I think.
Hi Thomas,
I know that the Wiki article on OU claims that AR(1) is the discrete version of OU, but I disagree. Error correction models are not the same as AR(1).
Also, when I said "price", I meant the price of the spread. The price of one side of the spread does not mean-revert, and so we don't even want to model it.
Well, it is a well-established fact, not only something Wikipedia claims.
OU in discrete time:
x_{t+1} - x_t = theta*mu - theta*x_t + dW
Rewrite as AR(1): x_{t+1} = theta*mu + (1-theta)*x_t + dW
Hence the constants are identical and the autoregressive coefficients tightly linked.
The half-lives are not identical but close enough.
I don't think one loses any information by simply estimating the half-life from a standard AR(1).
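To make the procedure concrete, here is a minimal Python/NumPy sketch (not from the original thread) of the half-life estimate under discussion: regress the one-step change of the spread on its lagged level and convert the fitted mean-reversion rate into a half-life. The simulated OU parameters below are purely illustrative.

```python
import numpy as np

def half_life(spread):
    """Estimate the mean-reversion half-life of a spread by regressing
    its one-step change on its lagged level (error-correction style fit)."""
    y = np.asarray(spread, dtype=float)
    dy = y[1:] - y[:-1]                          # first differences
    X = np.column_stack([np.ones(len(y) - 1), y[:-1]])
    beta = np.linalg.lstsq(X, dy, rcond=None)[0]
    theta = -beta[1]                             # mean-reversion rate
    return np.log(2) / theta

# Illustrative check on a simulated OU process with theta = 0.1:
rng = np.random.default_rng(0)
y = np.zeros(20000)
for t in range(1, len(y)):
    y[t] = y[t - 1] + 0.1 * (0.0 - y[t - 1]) + 0.1 * rng.standard_normal()
print(half_life(y))  # should come out near log(2)/0.1 ≈ 6.9
```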
If I buy the Chinese translation of your book, can I find the password also?
Hi Dennis,
Yes, the password is also in example 3.1 in the Chinese translation.
Hi Ernie,
Is it possible to put constraints on the regression.betas from the spatial econometrics function ols? If not, is it possible using the Matlab function regress?
Thanks a lot,
Hi Noah,
No, ols does not accept constraints. If you have Matlab's Statistics Toolbox, you might be able to find a constrained regression function. I know they have, for example, stepwise regression.
Hi Ernie,
Thanks for a great blog.
Do you think it is important to use intraday data when backtesting pairs trading as opposed to only closing prices?
I have realized that many strategies look much better than they actually are when I use closing prices.
The problem is that you can't trade on closing prices...
Any thoughts on this?
Hi Jack,
Yes, if you can use intraday prices to backtest, then it eliminates/reduces slippage as one source of transaction cost. But you can also use primary exchange closes to backtest: such closing
prices can be achieved in reality with little slippage.
I bought the Chinese translated book in Malaysia ^_^
Hi Swiss_Dragon,
Respected Sir,
Let me first congratulate and thank you for your wonderful post. Sir, my name is Amit, from India. I am a nanotech professional, and I also want to learn analytics for trading. Please guide me on how to start from scratch, as I am a novice in this field. Your little guidance will change the life of many laymen like me.
Sir, I can offer free online/offline assistance with any assignment.
Warm Regards
AMIT (bigfm987@yahoo.in)
Hi Amit,
I suggest you start by reading my book "Quantitative Trading".
Camilo Dagum^1, Stéphane Mussard^2, Françoise Seyte^2, Michel Terraza^2
with SOCREES’ participation
^1UNIVERSITY of OTTAWA
^2 LAMETA, UNIVERSITY of MONTPELLIER I
Camilo Dagum, Emeritus Professor of the University of Ottawa and Doctor Honoris Causa of the University of Montpellier I, introduced in 1997 the decomposition of the Gini index by population
subgroups. This method allows you to compute overall income inequality and to break it down into within-group and between-group income inequality.
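As an illustration of the decomposition just described (a Python/NumPy sketch, not the downloadable software), the overall Gini index splits exactly into a within-group component and a gross between-group component; the further split of the latter into net between-group inequality and transvariation is omitted here for brevity.

```python
import numpy as np

def gini(y):
    """Gini index in its pairwise (mean absolute difference) form."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    return np.abs(y[:, None] - y[None, :]).sum() / (2 * n**2 * y.mean())

def dagum_decomposition(y, g):
    """Split the overall Gini into within-group (Gw) and gross
    between-group (Ggb) parts; G = Gw + Ggb holds exactly."""
    y, g = np.asarray(y, dtype=float), np.asarray(g)
    n, mu = len(y), y.mean()
    groups = [y[g == k] for k in np.unique(g)]
    p = [len(v) / n for v in groups]                     # population shares
    s = [len(v) * v.mean() / (n * mu) for v in groups]   # income shares
    Gw = sum(gini(v) * pj * sj for v, pj, sj in zip(groups, p, s))
    Ggb = 0.0
    for j in range(len(groups)):
        for h in range(j):
            yj, yh = groups[j], groups[h]
            Gjh = np.abs(yj[:, None] - yh[None, :]).sum() / (
                len(yj) * len(yh) * (yj.mean() + yh.mean()))
            Ggb += Gjh * (p[j] * s[h] + p[h] * s[j])
    return Gw, Ggb
```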
Michel Terraza is a Professor of Economics at the University of Montpellier I. He applied this decomposition when studying wage inequalities in the Languedoc-Roussillon region (see the bibliography), in collaboration with Françoise Seyte (Associate Professor) and Stéphane Mussard (Assistant Professor).
The SOCREES© corporation, a statistics and economic studies company, developed the first version of the Gini decomposition software.
THE SOFTWARE
The software you can download consists of Excel macros. To try them, you only need income or wage panel data; some sample data is provided below. The programs include the decomposition of the Gini index and either the measures derived from the generalized entropy inequality measures, such as Theil, Hirschman-Herfindahl, and Bourguignon, which are additively decomposable, or the weakly decomposable measures such as the squared coefficient of variation and the α-Gini. Indeed, the latter can be decomposed according to the same method as the Gini index. All the information needed to run the programs is available on sheet 1 of the downloadable Excel files and in the paragraph "THE USE" below.
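For comparison, the additive within/between split of the Theil index, one of the additively decomposable entropy measures mentioned above, can be sketched as follows (an illustrative Python/NumPy sketch, not the Excel macro):

```python
import numpy as np

def theil(y):
    """Theil T index: mean of r*ln(r) with r = income / mean income."""
    r = np.asarray(y, dtype=float)
    r = r / r.mean()
    return float(np.mean(r * np.log(r)))

def theil_decomposition(y, g):
    """Additive split T = within + between:
    within  = sum_j s_j * T_j   (s_j = income share of group j)
    between = sum_j s_j * ln(mean_j / overall mean)."""
    y, g = np.asarray(y, dtype=float), np.asarray(g)
    mu, total = y.mean(), y.sum()
    within = between = 0.0
    for k in np.unique(g):
        v = y[g == k]
        s = v.sum() / total
        within += s * theil(v)
        between += s * np.log(v.mean() / mu)
    return within, between
```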
The data at your disposal (strictly positive reals) consist of three columns. The first contains the individuals' codes, which give a number to each person (from 1 to n). In the second column, modality 1 represents the USA and modality 2 Japan. The third column contains the growth rates of American and Japanese real wages between 1960 and 1996. This first example is provided to help you get used to the tool.
DOWNLOAD / THE USE
I. Arranging the Data
Put your data in a new Excel sheet and select them as in the following table:
Individuals’ Codes Groups Incomes
… … …
Column 1 represents the individuals' codes that give an index to each person (for instance, from 1 to 103 for a sample of 103 individuals).
Column 2 deals with the different groups of the population. The individuals belonging to the same subpopulation must be put together. In the table example there are 5 groups.
Column 3 represents the wages of the individuals belonging to the groups of column 2.
II. Code 1: Dagum’s Gini Decomposition and the Additive Decompositions of three Generalized Entropy Inequality Indices
To obtain the results:
1) Download the software then open the file "Dagum" in Excel ("yes" to activate the macro);
2) Download the data, open the file "donnees" in Excel, type "Alt+F8", select "Dagum.xls!CalculDagum", then "execute", and enter the number of groups, "2", then "OK".
3) You should obtain the following results: Download the results. Follow the theoretical decomposition to make relevant interpretations.
III. Code 2: the (α,β)-Decomposition
This program is a generalization of the previous one. This new version focuses only on the pair-based inequality measures that are weakly decomposable in Ebert's (2010) sense. The (α,β)-decomposition is an adaptation of Dagum's (1997a, 1997b) decomposition to all the weakly decomposable measures, and it allows a parameter of inequality aversion, denoted α, as well as a parameter of sensitivity towards transvariation, β, to be included in the calculation of the various components. A first attempt at generalization had been proposed by Chameni (2011) and had been programmed by Fattouma Souissi and Pauline Mornet of the University of Montpellier 1 (LAMETA PhD students).
This new program is inspired by theoretical research: Chameni (2006, 2011), Mussard and Terraza (2009), and Ebert (2010). Besides capturing inequality aversion, the (α,β)-decomposition makes it possible to decompose the Gini index (obtained when α=1, for all β≥1) as well as the squared coefficient of variation (when α=2, for all β≥1). This generalization thus brings out a link between the Gini index and an entropic measure [see Chameni, 2006, 2011].
Following Ebert (2010), the main axioms PC [resp. PD], PP, NM, and SM are satisfied as long as α>0 [resp. α≥1]. Furthermore, for any α≥2, the principle of strong diminishing transfers is also satisfied by such inequality measures [see Mornet, P., Zoli, C., Mussard, S., Sadefo-Kamdem, J., Seyte, F. and M. Terraza, 2013].
To apply the decomposition:
1) Download the software then open the file "(alpha, beta)-decomposition (en)" in Excel ("yes" to activate the macro);
2) Download the data, open the file "data" in Excel, and copy the information into "sheet2". Type "Alt+F8" and select "The_2_parameters_weak_decomposition", then "Execute". Enter the number of groups and your sensitivity parameters (any positive real values), then "OK".
3) You should obtain the following results: Download the results. Follow the theoretical papers to make relevant interpretations.
a) Ebert, U. (2010), “The Decomposition of Inequality Reconsidered: Weakly Decomposable Measures”, Mathematical Social Sciences 6 (2), 94-103.
b) Chameni Nembua C. (2011), “A generalisation of the Gini coefficient: Measuring economic inequality”, Mimeo.
c) Mussard, S. et Terraza M. (2009), “ La décomposition du coefficient de Gini et des mesures dérivées de l'entropie : les enseignements d'une comparaison”, Recherches Economiques de Louvain 75 (2),
d) Chameni Nembua C. (2006),”Linking Gini to Entropy: Measuring Inequality by an interpersonal class of indices”, Economics Bulletin 4 (5), 1-9.
OTHER RELATED SOURCES
* You can obtain the same results more quickly than with the Excel macro by using a GAUSS program, which can handle more than 64,000 observations:
By Michele Costa, University of Bologna: Gini.g
* With SAS : used in Koubi, M. Mussard, S., F. Seyte et M. Terraza (2005):
By Malik Koubi, INSEE: Gini-SAS
* If your incomes are composed of several income sources (wages + income taxes + transfers + etc.), you may try the GAUSS Gini multi-decomposition (a non generalized program), used in Mussard (2006):
By Stéphane Mussard: g-revenu.g
* If your incomes are composed of several income sources (wages + income taxes + transfers + etc.) and several partitions of groups, use the GAUSS Gini multi-decomposition in multi-levels, used in
Mussard S., Pi-Alprein M.-N., Seyte F. and Terraza M. (2006):
By Stéphane Mussard: g-revenu-2.g
Hey there, I'm trying, and failing miserably, to get an answer to this question. I am trying to work out percentages of my exam grade. I have a grade of 65, forming 10% of my end grade, and 59, forming 40%; my end exam is worth 60%, so what grade would I need to ensure an overall grade of 60 or above? How do I work this out? After a while I came up with the answer 50, but this definitely doesn't sound right.
Yes, I know I should be studying rather than working out the maths, but I'm just doing this in one of my breaks in order to put my mind at rest!
Thanks for the help!
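For what it's worth, the weighted-average arithmetic can be checked in a couple of lines (a quick sketch, not part of the original post):

```python
# Overall grade = 10% * 65 + 40% * 59 + 60% * exam mark
fixed = 0.10 * 65 + 0.40 * 59        # 30.1 points already banked
needed = (60 - fixed) / 0.60         # exam mark needed for an overall 60
print(needed)                        # about 49.83, so 50 was right after all
```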
Design and Manufacturing of a High-Precision Sun Tracking System Based on Image Processing
International Journal of Photoenergy
Volume 2013 (2013), Article ID 754549, 7 pages
Research Article
Faculty of Mechanical Engineering, K. N. Toosi University of Technology, Pardis Avenue, Molla-Sadra Avenue, Vanak Sq., P.O. Box 19395-1999, Tehran 19991 43344, Iran
Received 30 May 2013; Revised 5 August 2013; Accepted 5 August 2013
Academic Editor: Mohammad A. Behnajady
Copyright © 2013 Kianoosh Azizi and Ali Ghaffari. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and
reproduction in any medium, provided the original work is properly cited.
Concentration solar arrays require greater solar tracking precision than conventional photovoltaic arrays. This paper presents a high precision, low cost, dual axis sun tracking system based on image processing for concentration photovoltaic applications. An imaging device is designed according to the principle of pinhole imaging: sun rays pass through a pinhole and are received on a screen as a sun spot. The location of the spot is used to adjust the orientation of the solar panel. A fuzzy logic controller is developed to achieve this goal. A prototype was built, and experimental results have proven the good performance of the proposed system and its low tracking error. The operation of this system is independent of geographical location, initial calibration, and periodic maintenance.
1. Introduction
Renewable energies have received great attention during the last two decades. As an important source of alternative energy, solar energy has effectively unlimited reserves, is widely available, and is pollution-free. The optimum of solar energy is obtained when sun rays are incident normally on the transforming part of solar systems, such as solar thermal collectors, solar cells, and other solar equipment. Among the various types of solar cells, high concentration solar cells allow a significant increase in the amount of energy collected by solar arrays per unit area.
arrays per area unit. However, the performance of solar cells with concentrators decreases drastically if the sun pointing error is greater than a small value and, therefore, a low tracking error
must be achieved for this kind of solar photovoltaic arrays (PV) compared with conventional PV arrays [1]. Several kind of solar tracking systems are proposed in the literature; one can classify them
according to their degrees of freedom (DoFs) and/or control strategy. Regarding DoF, there are three main types of trackers [2]: fixed devices, single-axis trackers (see [3]), and dual-axis trackers
(see [4]). It is shown that the annual gain in energy production of the dual and single-axis trackers is 1.5 and 1.40–1.45, respectively, compared with the nontracking systems [1]. Regarding control
strategy, three main types of solar trackers exist [5]: passive, open-loop, and closed-loop controlled trackers. The passive trackers have no electronic sensors or actuators [6]. The open-loop ones
have no sensors either but use a microprocessor and are based on mathematical formulae to predict the sun's position [7, 8]. The third kind of tracker uses the information of electro-optic sensors to estimate the location of the sun [5, 9]. Open-loop trackers depend on geographical location and the start-time configuration of the system, while closed-loop systems do not have these limitations.
An open-loop controller does not observe the output of the process it is controlling. Consequently, an open-loop system cannot correct any errors it makes and may not compensate for disturbances in the system. Such a system is simpler and cheaper than the closed-loop type of sun tracking system [10]. In open-loop mode, a computer or processor calculates the
compensate for disturbances in the system. The system is simpler and cheaper than the closed-loop type of sun tracking systems [10]. In open-loop mode, the computer or a processor calculates the
sun’s position from formulae or algorithms using its time/date and geographical information to send signals to the electromotor. However, in some cases many sensors are used to identify specific
positions [11]. The systems proposed in [12, 13] are in this category.
In closed-loop systems, a number of inputs are transferred to a controller from sensors that detect relevant parameters induced by the sun; these are processed in the controller, which then yields outputs [10]. In this type, differential illumination of electro-optical sensors produces a differential control signal, which is used to drive the motor and orient the apparatus in the direction where the illumination of the sensors becomes equal and balanced. In addition, the photodiodes can be mounted on tilted planes in order to increase the photocurrent sensitivity, and, very commonly
in concentrator PV applications, the shading device is presented as a collimating tube which prevents diffuse irradiation from entering the sensor and masking a precise measurement of the sun
alignment position [14]. Such trackers, with high accuracy, are intended mainly for concentrator solar systems. These trackers are complex and, therefore, expensive and also unreliable [11]. Examples
of these systems are proposed in [15–17]. More comparison between various types of sun tracking systems can be achieved in review articles.
The target of this study is to design and manufacture a low cost and precise sun tracking system for concentration photovoltaic panels (CPVs). The sun tracking systems provided in recent studies are applicable to PV panels, not CPVs, and have the following weaknesses:
(i) The tracking precision of low cost systems is not mentioned in most studies.
(ii) Most of these tracking systems are inappropriate for CPV because of their low precision.
(iii) The precise systems (for instance, those manufactured by the Kipp & Zonen Company) are relatively expensive.
According to the literature, in most low cost sun trackers, discrete elements, light dependent resistors (LDRs), are used as sensors. In this method of sun tracking, the differences between the outputs of at least four LDRs are used to generate error signals in the two directions of tracking. In these systems, for instance, when the outputs of the eastern and western LDRs become equal, the east-to-west tracking ends. But when the outputs of the LDRs are the same, the tracking error is actually not zero. This LDR-based tracking error is not acceptable for CPV applications. The error exists because the four sensors are never exactly identical and because weather conditions influence this type of sensor. So this kind of sensor is not appropriate for CPV applications.
Solar trackers have nonlinear behavior and are subject to disturbances. For such systems it is recommended to apply soft computing methods. In recent years, fuzzy logic has become an important approach to designing nonlinear controllers because of its simplicity and ease of design and implementation [18].
In this paper a high-precision and low cost dual-axis sun tracker (DAST) is proposed for PV applications. To achieve optimal solar tracking, a fuzzy algorithm is developed. At first, the orientation of the solar panel is approximately adjusted using four LDRs, and then a strategy based on image processing decreases the tracking error as much as possible. In other words, adding a camera and an image processing-based method strongly diminishes the tracking errors produced by LDR-based tracking methods. This system does not need any astronomical calculations to find the sun's location and is independent of geographical position and initial configuration.
The remainder of the paper is organized in four sections as follows. In Section 2 the fundamental operation of the system is explained. The overall system and control strategy are described in detail in Section 3. Experimental results are shown in Section 4, and finally, in Section 5 the conclusions of this work are drawn.
2. Principle of the System Operation
There are two noticeable motions of the sun with respect to an observer on the earth. One is the azimuthal motion (from east to west), and the other is the altitudinal motion (referring to the changing height of the sun in the sky). Therefore, it is necessary to consider two degrees of freedom for the motion of any solar tracking power generator. The proposed system mainly consists of the imaging device, image acquisition and processing parts, the controller, and the electromechanical structure. The working principle is shown in Figure 1.
The tracking algorithm has two modes: an LDR-based mode and an image processing-based mode. An LDR is a discrete element; one of the disadvantages of this type of sensor is its high sensitivity to weather conditions such as temperature and humidity. To overcome this disadvantage and to reduce tracking error, a tracking algorithm based on image processing has been proposed. The imaging device is shown in Figure 2. It consists of a box, a transparent screen in the middle, and a camera at the bottom. A suitably small hole is opened in the middle of the top of the box; sunlight can enter only through this hole. A glass lid is set on the top of the box to protect the transparent screen and imaging device against the penetration of rain and dust. According to the principle of pinhole imaging, the sun rays entering the chamber form a beam that produces a small bright spot on the receiving screen (Figure 3(a)). The transparent screen is assumed to lie in the XY plane as shown in Figure 3(b). At any given moment, the sun spot on the transparent screen has coordinates (x, y). Since the receiving screen is parallel to the surface of the solar panel, the target coordinates of the sun spot are (0, 0).
After image acquisition, the position of the sun spot is determined through image processing. Then, the error signal is fed to a fuzzy logic controller (FLC), and suitable control signals are emitted to the motors. This procedure is repeated until the tracking error reaches the desired range.
3. System Description
3.1. Dual-Axis Electromechanical Structure
The electromechanical structure has two degrees of freedom, motorized by two DC motors: a base platform moving around the vertical axis and a suspended platform with the PV panel moving around the horizontal axis. The positions of the base platform and the suspended frame can vary in the range of ±90°, ensuring alignment of the panel in azimuth and elevation, respectively. Several components of this structure, such as the motors, gearboxes, and thrust ball bearing, are shown in Figure 4.
3.2. Image Acquisition and Processing Unit
An A4TECH (PK-836F) commercial webcam offering 640*480 pixels was used. A polarized filter is set under the pinhole, inside the box, to prevent saturation of the camera's charge-coupled device under intense solar radiation. A diaphragm of calque (tracing) paper is used as the transparent screen. The webcam was connected to a personal computer, and a simple MATLAB program was used for image processing. This processing algorithm calculates the coordinates (x, y) of the center of the sun spot.
3.3. Control Unit
Fuzzy logic can encode human knowledge into a knowledge base to control a plant using linguistic descriptions. The advantages of an FLC, including wide applicability and high fault tolerance, make it suitable for nonlinear control systems. An FLC has five parts: fuzzifier, database, rule base, inference mechanism, and defuzzifier.
At first, sunlight radiates on the four LDRs, and the voltage differences between the eastern-western LDRs and the southern-northern LDRs are delivered to the controller; the motors then actuate until these voltage differences become zero. Then the image processing-based mode operates. After calculation of the coordinates of the sun spot, x and y are fed to the controller as error signals (e). When the coordinates of the sun spot are (0, 0), the controller does not act. Since the sun moves very slowly, a fast rotating speed of the solar tracking device is not necessary. Fuzzy control brings advantages such as reduced motor power consumption and fast, smooth positioning. The motors may be controlled independently. The rotation angle of a motor is considered the output variable of the fuzzy controller (u). The membership functions are shown in Figures 5 and 6. Five fuzzy control rules are used, as shown in the following:
R1: If e is positive large (PL), then u is PL,
R2: If e is positive small (PS), then u is PS,
R3: If e is zero (Z), then u is Z,
R4: If e is negative small (NS), then u is NS,
R5: If e is negative large (NL), then u is NL.
Product inference is applied for fuzzy inference, and the center of gravity method is adopted for defuzzification. The input-output (error to rotation angle) relation of the proposed controller is shown in Figure 7. The rotation angle of a DC motor is proportional to two variables: the time t during which voltage is fed to the motor and the level V of this voltage. In other words, the control command is a combination of t and V. Figure 8 shows the functions t(e) and V(e), obtained heuristically to satisfy the relation shown in Figure 7. The control commands emitted by the MATLAB program are fed to the motors through a microcontroller.
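A minimal numerical sketch of such a controller is given below (Python/NumPy; the triangular membership breakpoints and the ±10° output range are assumed for illustration, since the paper specifies the exact shapes only in its figures):

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

# Five linguistic terms on the normalized error e in [-1, 1] ...
E = {'NL': (-1.5, -1.0, -0.5), 'NS': (-1.0, -0.5, 0.0), 'Z': (-0.5, 0.0, 0.5),
     'PS': (0.0, 0.5, 1.0), 'PL': (0.5, 1.0, 1.5)}
# ... and on the rotation-angle output u in [-10, 10] degrees.
U = {'NL': (-15, -10, -5), 'NS': (-10, -5, 0), 'Z': (-5, 0, 5),
     'PS': (0, 5, 10), 'PL': (5, 10, 15)}
RULES = [('NL', 'NL'), ('NS', 'NS'), ('Z', 'Z'), ('PS', 'PS'), ('PL', 'PL')]

def fuzzy_angle(err):
    """Fire each rule, scale its consequent by the firing strength
    (product inference), aggregate, then defuzzify by centre of gravity."""
    u = np.linspace(-10, 10, 2001)          # discretized output universe
    agg = np.zeros_like(u)
    for ante, cons in RULES:
        w = float(tri(np.asarray(err, dtype=float), *E[ante]))
        agg = np.maximum(agg, w * tri(u, *U[cons]))
    return float((u * agg).sum() / agg.sum()) if agg.sum() > 0 else 0.0
```

With single-antecedent rules, product inference simply scales each consequent set by its rule's firing strength before aggregation and centre-of-gravity defuzzification, so zero error yields zero commanded rotation and the command grows smoothly with the error.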
Note that the LDR-based mode of tracking is also active if the sun spot is not detected on the transparent screen. Furthermore, an energy saving mode has been included, which is active when the solar radiation is not great enough for the system to produce electricity. In these conditions, the system is held at the last known position for some minutes, and then the tracking algorithm starts again. Figure 9 illustrates the flow chart of the system operation.
4. Experiments and Results
The performance of the designed DAST was evaluated and monitored during several days in September and October 2012. Since only flat PV solar panels were available, several sheets of aluminum forming a netted plate were mounted perpendicularly on two 5-watt amorphous-type PV panels (Figure 10). This netted plate guarantees that no solar radiation reaches the PV panel when the tracking error is greater than a few degrees, so the results of the proposed installation are logically close to those of a CPV solar panel. The parameters of the PV panels used are provided in Table 1.
Figure 11 shows the experimental power (the product of the open-circuit voltage and short-circuit current) attained using the proposed tracking system, as well as that obtained from the fixed panel, on October 5. The output power showed a considerable increase during the early and late hours of the day. In fact, the average improvement with the tracking system was about 81% and 105% in the morning (7:30–9:00) and in the afternoon (15:00–17:00), respectively, while the improvement was about 30–41% during mid-day. The mean power generated by the sun-tracking panel during the whole day was 60.45% higher than that of the fixed one. Figure 12 illustrates the absolute and relative increases in generated power using the sun tracker.
In Figure 13, the coordinates of the center of the sun spot are shown for a period of time on the experiment day. It can be seen in this figure that the strategy based on image processing pushed the center of the sun spot into a circle with a radius of 0.6 pixels. This value corresponds to 0.1432° of sun-ray incidence angle, according to the webcam resolution and the geometry of the imaging device. So the tracking error is less than 0.15°. An open-loop solar tracker that provides a tracking error in the range from 0.236° to 0.693° is presented in [8], so the closed-loop system presented in this work has higher precision than the open-loop one.
5. Conclusion
A new sun tracking system that provides the high tracking precision needed by concentration PVs has been developed using fuzzy logic and image processing. The system consists of two main modes: one is LDR-based and adjusts the orientation of the solar panel approximately; the other is based on image processing and provides small tracking errors (less than 0.15 degrees). An energy saving mode has also been included to prevent power overconsumption under cloudy skies. Experimental results showed a significant increase in the power generated by the solar panel. The proposed system has low cost and is independent of correct initial configuration and geographical position. It can be used with various collectors and solar concentrators, as well as in astronomical research as a guiding system for telescopes.
Conflict of Interests
The authors declare that there is no conflict of interests regarding the publication of this paper.
References
1. F. R. Rubio, M. G. Ortega, F. Gordillo, and M. López-Martínez, "Application of new control strategy for sun tracking," Energy Conversion and Management, vol. 48, no. 7, pp. 2174–2184, 2007.
2. N. H. Helwa, A. B. G. Bahgat, A. M. R. El Shafee, and E. T. El Shenawy, "Maximum collectable solar energy by different solar tracking systems," Energy Sources, vol. 22, no. 1, pp. 23–34, 2000.
3. C. S. Chin, A. Babu, and W. McBride, "Design, modeling and testing of a standalone single axis active solar tracker using MATLAB/Simulink," Renewable Energy, vol. 36, no. 11, pp. 3075–3090, 2011.
4. S. Ozcelik, H. Parkash, and R. Challoo, "Two-axis solar tracker analysis and control for maximum power generation," Procedia Computer Science, vol. 6, pp. 457–462, 2011.
5. H. Arbab, B. Jazi, and M. Rezagholizadeh, "A computer tracking system of solar dish with two-axis degree freedoms based on picture processing of bar shadow," Renewable Energy, vol. 34, no. 4, pp. 1114–1118, 2009.
6. M. J. Clifford and D. Eastwood, "Design of a novel passive solar tracker," Solar Energy, vol. 77, no. 3, pp. 269–280, 2004.
7. R. Ranganathan, W. Mikhael, N. Kutkut, and I. Batarseh, "Adaptive sun tracking algorithm for incident energy maximization and efficiency improvement of PV panels," Renewable Energy, vol. 36, no. 10, pp. 2623–2626, 2011.
8. K. K. Chong and C. W. Wong, "General formula for on-axis sun-tracking system and its application in improving tracking accuracy of solar collector," Solar Energy, vol. 83, no. 3, pp. 298–305, 2009.
9. W. Batayneh, A. Owais, and M. Nairoukh, "An intelligent fuzzy based tracking controller for a dual-axis solar PV system," Automation in Construction, vol. 29, pp. 100–106, 2013.
10. C.-Y. Lee, P.-C. Chou, C.-M. Chiang, and C.-F. Lin, "Sun tracking systems: a review," Sensors, vol. 9, no. 5, pp. 3875–3890, 2009.
11. H. Mousazadeh, A. Keyhani, A. Javadi, H. Mobli, K. Abrinia, and A. Sharifi, "A review of principle and sun-tracking methods for maximizing solar systems output," Renewable and Sustainable Energy Reviews, vol. 13, no. 8, pp. 1800–1818, 2009.
12. J. Cañada, M. P. Utrillas, J. A. Martinez-Lozano, R. Pedrós, J. L. Gómez-Amo, and A. Maj, "Design of a sun tracker for the automatic measurement of spectral irradiance and construction of an irradiance database in the 330–1100 nm range," Renewable Energy, vol. 32, no. 12, pp. 2053–2068, 2007.
13. M. Alata, M. A. Al-Nimr, and Y. Qaroush, "Developing a multipurpose sun tracking system using fuzzy control," Energy Conversion and Management, vol. 46, no. 7-8, pp. 1229–1245, 2005.
14. I. Luque-Heredia, J. M. Moreno, P. H. Magalhães, R. Cervantes, G. Quéméré, and O. Laurent, "Inspira's CPV sun tracking," Springer Series in Optical Sciences, vol. 130, pp. 221–251, 2007.
15. K. Aiuchi, K. Yoshida, M. Onozaki, Y. Katayama, M. Nakamura, and K. Nakamura, "Sensor-controlled heliostat with an equatorial mount," Solar Energy, vol. 80, no. 9, pp. 1089–1097, 2006.
16. M. Abouzeid, "Use of a reluctance stepper motor for solar tracking based on a programmable logic array (PLA) controller," Renewable Energy, vol. 23, no. 3-4, pp. 551–560, 2001.
17. S. Gagliano and N. Savalli, “Two-axis sun tracking system: design and simulation,” Eurosun 2006, 2006.
18. E. T. El Shenawy, M. Kamal, and M.A. Mohamad, “Artificial intelligent control of solar tracking system,” Journal of Applied Sciences Research, vol. 8, pp. 3971–3984, 2012. | {"url":"http://www.hindawi.com/journals/ijp/2013/754549/","timestamp":"2014-04-19T05:16:52Z","content_type":null,"content_length":"60636","record_id":"<urn:uuid:44a0e35e-54f4-421a-9d04-b734e82044e0>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00199-ip-10-147-4-33.ec2.internal.warc.gz"} |
Thermodynamics: Building Blocks
We have introduced Thermodynamics using a statistical, quantum-based approach and have not relied on postulates. However, historically Thermodynamics was analyzed in terms of four separate unverified
statements known as the Laws of Thermodynamics. We have more tools to verify the statements, though, and you may be surprised at the simplicity of the laws.
Zeroth Law
The Zeroth Law supposes that we have three systems in which the first two are each in thermal equilibrium with the third. Then the Law claims that the first two are likewise in thermal equilibrium
with each other. Recall that the equilibrium condition was that the temperatures be equal. Then we have: if τ₁ = τ₃ and τ₂ = τ₃, then τ₁ = τ₂. It isn't hard to see why this is so.
First Law
The First Law has many formulations. Historically, the Law is stated as such: the work done in taking an isolated system from one state to another is independent of the path taken. We know from
previous study of mechanics that energy behaves the same way. It turns out that this work can be called heat, and therefore a sleeker definition of the First Law is: Heat is a form of energy. The
path independence follows from this simple statement.
Second Law
The Second Law has an overwhelming number of formulations. We shall present two here, one that makes sense given the statistical origins we've focused on, and one that has historical value and will
be useful later when we deal with engines.
Statistically, we say that: if a closed system is not in equilibrium, then the most probable future is that the entropy will increase with each passing bit of time, and will not decrease. The more
foreign formulation, useful later (see Heat, Work, and Engines), known as the Kelvin-Planck formulation, is: it is impossible for any cyclic process to occur whose sole effect is the extraction of
heat from any reservoir and the performance of an equivalent amount of work. The popularized version of the second law looks more like the first explanation and has been recently challenged by
considerations of the physics of black holes.
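Writing σ for the entropy (the convention that pairs with the fundamental temperature τ used above), the statistical formulation amounts to the statement that, with overwhelming probability,

```latex
\frac{d\sigma}{dt} \,\ge\, 0 \qquad \text{for a closed system,}
```

with equality once the system has reached equilibrium.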
Third Law
Qualitatively, the Third Law claims that as a system approaches absolute zero, or T = 0 , it becomes increasingly ordered, and thus exhibits a low entropy. Strictly, we say that: the entropy of a
system approaches a constant value as the temperature approaches zero. This constant value is near or at zero, usually. Consider a system with a non-degenerate (i.e. having a multiplicity function
value of one) ground state. Then the entropy of that state is zero. As the temperature decreases, the system becomes more and more likely to be found in the ground state, as we shall see in
Statistics and Partition Function. Thus the entropy will approach a small, near zero value. | {"url":"http://www.sparknotes.com/physics/thermodynamics/bblocks/section3.rhtml","timestamp":"2014-04-20T18:51:08Z","content_type":null,"content_length":"53045","record_id":"<urn:uuid:3cf9209b-4e4e-4698-b801-6ad05df0af8f>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00132-ip-10-147-4-33.ec2.internal.warc.gz"} |
Mathematical comm. question need help!!!
August 8th 2008, 08:33 AM #1
Aug 2008
You are given a finite collection of distinct quantities and told their sum. One of them is chosen (although you are not told which one this is), and you are told the sum of every pair of distinct quantities that contains it. You deduce that the chosen quantity is equal to the difference between this sum of pairs and the first given pair, divided by two less than the total number of quantities.
Express this result in symbols and hence explain why it is true.
I don't really understand how to go about this question, so any help would be much appreciated, thanks
Hello, agrabham!
We need some clarification . . . Some of it is not clear.
You are given a finite collection of distinct quantities and told their sum.
The collection is: . $\{a_1, a_2, a_3, \hdots, a_n\}$
And their sum is: . $a_1 + a_2 + a_3 + \hdots + a_n \:=\:S$
One of them is chosen (you are not told which one),
and you are told the sum of every pair of distinct quantities that contains it.
Suppose the chosen number is $a_x$
The sums are: . $\begin{Bmatrix}a_x+a_1 \\ a_x+a_2 \\ a_x + a_3 \\ \vdots \\ a_x+a_n \end{Bmatrix}\qquad \text{ There are }n-1\text{ sums.}$
Their total is: . $T \;=\;(n-1)a_x + (a_1 + a_2 + \hdots + a_n)$
. . . . . . . . . . . $T \;=\;(n-2)a_x + \underbrace{(a_x + a_1 + a_2 + \hdots + a_n)}_{\text{This is }S}$
. . . . . . . . . . . $T \;=\;(n-2)a_x + S$
You deduce that the chosen quantity is equal to
the difference between this sum of pairs and the first given pair ??
divided by two less than the total number of quantities.
Do this mean: . $\frac{T - {\color{blue}(a_1 + a_2)}}{n-2}$?
If so, it doesn't work out . . .
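For what it's worth, if "the first given pair" is read as "the first given sum" (i.e., $S$), the stated rule does hold; a quick numeric check in Python (the values below are my own illustration):

```python
# Hypothetical example: a collection of n = 5 distinct quantities
a = [3, 7, 12, 20, 41]
S = sum(a)                       # the first given sum, S = 83
x = 2                            # index of the chosen quantity, a_x = 12

# T = sum of every pair of distinct quantities that contains a_x
T = sum(a[x] + a[i] for i in range(len(a)) if i != x)

n = len(a)
assert T == (n - 2) * a[x] + S   # Soroban's identity above
assert a[x] == (T - S) // (n - 2)  # the rule recovers the chosen quantity
print(a[x])  # 12
```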
yep, that was the problem i was having, all the data i was given was what i have posted and it seemed to me i might have missed something, but there is nothing else on the sheet
{"url":"http://mathhelpforum.com/pre-calculus/45565-exclamation-mathematical-comm-question-need-help.html","timestamp":"2014-04-19T07:03:58Z","content_type":null,"content_length":"39405","record_id":"<urn:uuid:4d0ae3f9-d307-4ee2-a1fa-309efd5f6092>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00660-ip-10-147-4-33.ec2.internal.warc.gz"}
Hangman 1
Re: Hangman 1
You did say there was no R, up there in post #286?
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof. | {"url":"http://www.mathisfunforum.com/viewtopic.php?pid=248456","timestamp":"2014-04-18T21:31:21Z","content_type":null,"content_length":"32234","record_id":"<urn:uuid:3d0d14c9-4553-4e10-a08c-456a07f20240>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00452-ip-10-147-4-33.ec2.internal.warc.gz"} |
Everything Is an Expression
Mathematica handles many different kinds of things: mathematical formulas, lists, and graphics, to name a few. Although they often look very different, Mathematica represents all of these things in
one uniform way. They are all expressions.
A prototypical example of a Mathematica expression is f[x,y]. You might use f[x,y] to represent a mathematical function f(x,y). The function is named f, and it has two arguments, x and y.
You do not always have to write expressions in the form f[x,y,…]. For example, x+y is also an expression. When you type in x+y, Mathematica converts it to the standard form Plus[x, y]. Then, when it prints it out again, it gives it as x+y.
The same is true of other "operators", such as ^ (Power) and / (Divide).
In fact, everything you type into Mathematica is treated as an expression.
x+y+z Plus[x,y,z]
xyz Times[x,y,z]
x^n Power[x,n]
{a,b,c} List[a,b,c]
a->b Rule[a,b]
a=b Set[a,b]
Some examples of Mathematica expressions.
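This uniform head-plus-arguments representation is easy to imitate in other languages; here is a small illustrative Python sketch (the Expr class and its methods are my own invention, not part of Mathematica):

```python
class Expr:
    """A Mathematica-style expression: a head plus a tuple of arguments."""
    def __init__(self, head, *args):
        self.head, self.args = head, args

    def full_form(self):
        # Atoms (no arguments) print as themselves, like x or 2.
        if not self.args:
            return str(self.head)
        inner = ",".join(a.full_form() if isinstance(a, Expr) else str(a)
                         for a in self.args)
        return f"{self.head}[{inner}]"

# x+y+z is stored internally with head Plus and three arguments
e = Expr("Plus", Expr("x"), Expr("y"), Expr("z"))
print(e.head)         # Plus        (what Head[expr] would return)
print(e.full_form())  # Plus[x,y,z] (what FullForm[expr] would show)
```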
You can see the full form of any expression by using FullForm[expr].
The object f in an expression f[x,y,…] is known as the head of the expression. You can extract it using Head[expr]. Particularly when you write programs in Mathematica, you will often want to test the head of an expression to find out what kind of thing the expression is.
Head[f[x,y]] gives the "function name" f.
Head[x+y] gives the name of the "operator", Plus.
Head[expr] give the head of an expression: the f in f[x,y]
FullForm[expr] display an expression in the full form used by Mathematica | {"url":"http://reference.wolfram.com/mathematica/tutorial/EverythingIsAnExpression.html","timestamp":"2014-04-18T16:19:43Z","content_type":null,"content_length":"38140","record_id":"<urn:uuid:b10b5729-3d6f-402d-92f1-5cfec085af0d>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00141-ip-10-147-4-33.ec2.internal.warc.gz"} |
Portability ghc
Stability unstable
Maintainer Andy Gill <andygill@ku.edu>
Translate is the main abstraction inside KURE, and represents a rewriting from a source to a target of a possibly different type.
Rewrite (defined in Language.KURE.Rewrite) is a synonym for a Translate with the same source and target type.
data Translate m dec exp1 exp2 Source
Translate is a translation or strategy that translates between exp1 and exp2, with the posiblity of failure, and remembers identity translations.
(Monad m, Monoid dec) => Failable (Translate m dec a)
runTranslate :: (Monoid dec, Monad m) => Translate m dec exp res -> dec -> exp -> m (Either String (res, dec))Source
runTranslate executes the translation, returning either a failure message, or a success and the new parts of the environment.
translate :: (Monoid dec, Monad m) => (exp1 -> RewriteM m dec exp2) -> Translate m dec exp1 exp2Source
translate is the standard way of building a Translate, where if the translation is successful it is automatically marked as a non-identity translation.
Note: translate $ _ e -> return e is not an identity rewrite, but a successful rewrite that returns its provided argument.
What are Central Limit Theorems and why are they called so?
I know two opinions:
1) "Central" means "very important" (as it was central problem in probability for many decades), and CLT is a statement about Gaussian limit distribution. If the limit distribution of fluctuations is
not Gaussian, we should not call such statement CLT.
2) "Central" comes from "fluctuations around centre (=average)", and any theorem about limit distribution of such fluctuations is called CLT.
Which is correct?
pr.probability ho.history-overview reference-request
The way I see it is that the space of all probability distributions (satisfying the conditions required for CLT) has a distinguished point, namely the Gaussian. That's the center of the space. –
Deane Yang Oct 29 '10 at 13:54
1 I'm assuming of course that one has normalized the mean and variance. – Deane Yang Oct 29 '10 at 13:55
2 Wikipedia (en.wikipedia.org/wiki/Central_limit_theorem#History) suggests #1: "The actual term "central limit theorem" (in German: "zentraler Grenzwertsatz") was first used by George Pólya in 1920
in the title of a paper.[7](Le Cam 1986) Pólya referred to the theorem as "central" due to its importance in probability theory." – Qiaochu Yuan Oct 29 '10 at 14:01
In my probability theory book (the one by H. Bauer) the name is also attributed to George Pólya, but nothing is written about the meaning. – Someone Oct 29 '10 at 14:48
Both are correct. The first one if one uses "history" to evaluate correctness, and the second one if one uses "closeness to the center" to evaluate "correctness" ;-) – Suvrit Oct 29 '10 at 18:33
3 Answers
One of my teachers in Probability once told us that this name (Central Limit Theorem) was just used (at the beginning) to stress the importance of the result - which plays a central role in the theory. Besides, the ambiguity led to several different translations, corresponding to both interpretations of the term "central". (e.g. in French, we can find "théorème central limite" and "théorème de la limite centrale")
Interesting. I had thought that 2) was the reason, but I guess it was just speculation. By analogy with the usage "central tendency" for discussion of mean, median, mode, etc... – Gerald Edgar Oct 29 '10 at 14:40
1 The German version clearly shows that "central" refers to the theorem, not to the limit. A correct French translation would be "théorème central de la limite", which I have never seen.
– Laurent Moret-Bailly Oct 29 '10 at 15:29
I was told recently by Firas Rassoul-Agha that 1) is the reason for the name. One of the French translations above (I'm not sure which since my French is bad) emphasizes that the
theorem is the Central (i.e., fundamental) limit theorem in probability. Firas claimed that the other French translation arose by translating Central Limit Theorem back from English to
French - thus obscuring the original meaning. – Jon Peterson Oct 29 '10 at 15:59
add comment
From the introduction to History of the Central Limit Theorem: From Laplace to Donsker by Hans Fischer:
The term “central limit theorem” most likely traces back to Georg Pólya. As he recapitulated at the beginning of a paper published in 1920, it was “generally known that the appearance of
the Gaussian probability density $e^{-x^2}$” in a great many situations “can be explained by one and the same limit theorem,” which plays “a central role in probability theory” [Pólya
1920, 171]. Laplace had discovered the essentials of this fundamental theorem in 1810, and with the designation “central limit theorem of probability theory,” which was even emphasized
in the paper’s title, Pólya gave it the name that has been in general use ever since.
Fischer refers to the paper by G. Pólya, Über den zentralen Grenzwertsatz der Wahrscheinlichkeitsrechnung und das Momentenproblem, Mathematische Zeitschrift, 8 (1920), pp. 171–181.
Edit. The paper is reprinted in George Pólya: Collected Papers, Volume 4, MIT Press, 1984. R.M.Dudley mentions in his comment on the paper that
Although the name "central limit theorem" for the normal limit law seems to have been articulated in the mathematical folklore by 1920, Feller in his famous text attributes to Pólya the
first written use of this term.
There's a nice little book on this subject: The Life and Times of the Central Limit Theorem. The book goes over the history of the theorem from its embryonic form to its more or less final form 200 years later. It also gives precise statements of some of the modern variations of the theorem and indicates directions of research.
{"url":"http://mathoverflow.net/questions/44132/what-are-central-limit-theorems-and-why-are-they-called-so","timestamp":"2014-04-19T15:30:11Z","content_type":null,"content_length":"68632","record_id":"<urn:uuid:1b38983a-124c-4c36-a080-f606aee8eedd>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00179-ip-10-147-4-33.ec2.internal.warc.gz"}
How to Calculate Carbs on a Dry Matter Basis
Looking at the Guaranteed Analysis on the label, add up the values for protein, moisture, fat, fiber, and ash (if listed - sometimes it is not). Then subtract from 100 - the difference is the wet matter carbs. So, if we have a food with 78% moisture, 11% protein, 5% fat, 2% fiber and 1.5% ash, the calculation would look like this:
78.00 + 11.00 + 5.00 + 2.00 + 1.50 = 97.50. Subtract that from 100, and the remainder, 2.50, is the WET matter carbs.
However, when comparing the carbohydrate contents of any foods, it must be done on a dry matter basis - even with dry food, as dry food does contain some small amount of moisture. So, looking at the GA on the label once again, subtract the moisture content from 100 - in this case, the difference is 22. Then divide your wet matter carbs by this number. So...
2.50 divided by 22 ≈ 0.114, or about 11% carbs on a dry matter basis.
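The whole calculation is easy to script; here is an illustrative Python helper (the function name and interface are my own, not from any existing tool):

```python
def dry_matter_carbs(moisture, protein, fat, fiber, ash=0.0):
    """Estimate carbohydrate % on a dry matter basis from a Guaranteed
    Analysis. All inputs are percentages of the food as fed (wet weight)."""
    wet_carbs = 100.0 - (moisture + protein + fat + fiber + ash)
    dry_matter = 100.0 - moisture
    return wet_carbs / dry_matter * 100.0

# The label from the example: 78% moisture, 11% protein, 5% fat,
# 2% fiber, 1.5% ash
print(round(dry_matter_carbs(78, 11, 5, 2, 1.5), 1))  # 11.4 (about 11%)
```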
This is an easy formula, and once you've done it a few times, you will be able to look at the GA and pretty much know what the carb content is based on the values given. Remember, tho, that
ingredients are just as important as the carb content - you want food with no grains (including rice or soy) no veggies, fruits, glutens, or cornstarch, and no gravy. Broth is fine, as long as it is
not thickened with starches. | {"url":"http://www.yourdiabeticcat.com/faqs/carbcalc.htm","timestamp":"2014-04-19T19:34:48Z","content_type":null,"content_length":"2284","record_id":"<urn:uuid:c245af8c-56d4-4f83-952e-0ae39665c492>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00528-ip-10-147-4-33.ec2.internal.warc.gz"} |
MathGroup Archive: January 2013 [00137]
[Date Index] [Thread Index] [Author Index]
Prime numbers and primality tests
• To: mathgroup at smc.vnet.net
• Subject: [mg129449] Prime numbers and primality tests
• From: johnfeth at gmail.com
• Date: Mon, 14 Jan 2013 23:30:04 -0500 (EST)
• Delivered-to: l-mathgroup@mail-archive0.wolfram.com
• Delivered-to: l-mathgroup@wolfram.com
• Delivered-to: mathgroup-newout@smc.vnet.net
• Delivered-to: mathgroup-newsend@smc.vnet.net
A straightforward way to test a prime number candidate is the Miller-Rabin test (sometimes called the Rabin-Miller test). This well known and popular test is commonly executed 50 times on a candidate prime and has a proven probability of missing a non-prime of no more than 0.25 for each execution. Note that after passing 50 Miller-Rabin tests (which is a de facto standard), the probability of non-primality is at most 0.25^50 ~ 7.9*10^-31, so I'm satisfied that the number NextPrime gives me is "prime enough". Mathematica uses the Miller-Rabin test, although it is not clear how many iterations are used. As I understand it, Mathematica also applies the Lucas pseudoprime test to the Miller-Rabin output.
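For readers who want to experiment, here is a generic sketch of the Miller-Rabin test in Python (an illustration of the algorithm under discussion, not Mathematica's internal implementation; the small-prime pre-check is just a convenience):

```python
import random

def miller_rabin(n, rounds=50):
    """Probabilistic primality test. A composite n survives one round
    with probability at most 1/4, so 50 rounds bound the error by
    0.25**50 (about 7.9e-31)."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    # Write n - 1 as d * 2**s with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a is a witness that n is composite
    return True  # prime with overwhelming probability

print(miller_rabin(2**61 - 1))  # True: a Mersenne prime
print(miller_rabin(561))        # False: a Carmichael number (3 * 11 * 17)
```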
It is interesting to note, however, that the Lucas pseudoprime method of primality testing apparently does not have the handy "feature" of the Miller-Rabin test, namely, the provable and bounded low probability of a wrong answer, from which an estimate of primality for any number can be made without finding a counterexample!
I've read that there have been no counterexamples (viz., no non-primes that pass the Lucas pseudoprime test), but then again, I've never found an oyster with a pearl inside.
Is the Miller-Rabin a better test than the Lucas pseudoprime test?
Printable Math Dittos Download - Free Download Printable Math Dittos
Printable Math Dittos free downloads
My Math Quiz Sheets v2.4 - Allows you to create printable, math-related, homework sheets. My Math Quiz Sheets lets you ... (81/0)
My Math Sheets Lite v1.5a - My Math Sheets Lite is a program designed to create printable math homework sheets. A number ... optional answer sheet. My Math Sheets offers a quick ... (69/0)
Math Quiz Creator 2.1 - Math Quiz Creator is an ... lets educators quickly generate printable math quizzes with answer keys ... (5/0)
The Math Riddler Worksheet Generator - Whole Numbers Sampler 1.1 - The Math Galaxy Whole Numbers Worksheet ... an unlimited number of printable math worksheets, with numbers created ... tutor for
K-12 math, guiding you through the ... (2/0)
Buildbug math for kids 10 - Buildbug kids math online game. Offers free math lessons and homework help ... experiencing some delays. The Math Forum's Internet Math Library is a comprehensive ...
cooperative lesson into our math class. Take math grades once a week ... (2/0)
EMSolution Trigonometry short 3.0 - fully explained solutions, related math theory and easy-to ... definition, rule and underlying math formula or theorem. A ... a way to learn math lexicon in a
foreign ... (44/0)
EMSolution Hyperbolic short 3.0 - fully explained solutions, related math theory and easy-to ... a way to learn math lexicon in a foreign ... options facilitate development of printable math tests
and homeworks, and ... (31/0)
EMSolution Trigonometry Equations short 3.0 - fully explained solutions, related math theory and easy-to ... definition, rule and underlying math formula or theorem. A ... a way to learn math
lexicon in a foreign ... (20/0)
EMMentor_Light 3.0 - solving software offers 500+ math problems, a variety of ... more than 500 of math problems, a variety of ... exercises for building missing math knowledge and skills. A ... (18
EMSolution Algebra Equations short 3.0 - fully explained solutions, related math theory and easy-to ... definition, rule and underlying math formula or theorem. A ... a way to learn math lexicon in
a foreign ... (42/0)
EMSolution Arithmetic 3.0 - fully explained solutions, related math theory and easy-to ... definition, rule and underlying math formula or theorem. A ... a way to learn math lexicon in a foreign ...
EMSolutionLight 3.0 - through more than 500 math problems with guided solutions ... a way to learn math lexicon in a foreign ... options facilitate development of printable math tests and homeworks,
and ... (129/0)
EMMentor Algebra short 3.0 - exercises for building missing math knowledge and skills. A ... a way to learn math lexicon in a foreign ... options facilitate development of printable math tests and
automate preparation ... (11/0)
EMMentor Algebra Inequalities short 3.0 - exercises for building missing math knowledge and skills. A ... a way to learn math lexicon in a foreign ... options facilitate development of printable
math tests and automate preparation ... (14/0)
Math Dittos 2 1.7 - Fact Controlled subtraction for special learners. Twenty-eight blackline masters comprise the basic package, with a complement of twenty-eight word problem worksheets. Each page
contains fact bars presenting and previewing the facts to be used on the page. Worksheets follow a logical and thorough progression through the operation of subtraction. Complete ... (12/0)
EQUALS Math Jigsaw Puzzles 1.02 - EQUALS Math Jigsaw Puzzles uses the ... help kids learn basic math the easy, fun and ... and struggling students, EQUALS math puzzles help kids begin ... (132/0)
Math Flash Cards 1 - increasing demands for basic math skills, Math Flash Cards helps provide ... accelerate his or her math abilities. Math Flash Cards really work ... children to develop their
math skills. It will be ... (162/0) | {"url":"http://www.softlist.net/search/printable-math-dittos/","timestamp":"2014-04-20T08:20:13Z","content_type":null,"content_length":"60710","record_id":"<urn:uuid:514d203e-6649-4f6b-856d-3d81f9c38c2b>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00102-ip-10-147-4-33.ec2.internal.warc.gz"} |
Hapeville, GA Algebra Tutor
Find a Hapeville, GA Algebra Tutor
...I look forward to helping you! All the best, Aris. One of the most difficult classes, calculus can be a killer. I probably have taught and tutored this subject more than any other, having taught it for the last 15 years without a break and having graded the AP calculus test a few years back.
20 Subjects: including algebra 2, algebra 1, calculus, statistics
...As an undergraduate and graduate student in genetics, this subject is one that I know inside and out. I can tutor basic Mendelian genetics, Complex patterns of inheritance, Molecular biology/
genetics, and eukaryotic and prokaryotic genetics. I have also tutored genetics to undergraduate students.
15 Subjects: including algebra 1, algebra 2, geometry, chemistry
...My experience in teaching philosophy includes experience teaching students skills in reading and writing in English. Most of my courses have been writing intensive, and several have been for
writing credit. Students in my courses must learn to be able to read, comprehend, analyze, and explain difficult texts, often from primary sources.
9 Subjects: including algebra 1, algebra 2, English, reading
Hello! My name is Jessica Coates and I am currently a graduate student at Emory University. I am working to complete my PhD in Microbiology and Molecular Genetics.
18 Subjects: including algebra 2, biology, geometry, algebra 1
...Tutoring with me is comprehensive, systematic and fun. A little about me: I have strong interests in mathematics, software development, electrical engineering, and I possess excellent
troubleshooting skills. My love for these fields is the reason why I focused in Mathematics, communication and ...
27 Subjects: including algebra 2, algebra 1, French, calculus | {"url":"http://www.purplemath.com/Hapeville_GA_Algebra_tutors.php","timestamp":"2014-04-16T04:13:14Z","content_type":null,"content_length":"23927","record_id":"<urn:uuid:4ebb60d1-85fb-4b0d-889f-2f5d0da062a3>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00064-ip-10-147-4-33.ec2.internal.warc.gz"} |
Dynamics of a Network of Single Compartmental Cells
In order to gain a better understanding of many biological processes, it is often necessary to implement a theoretical model of a neuronal network. In the paper Rate Models for Conductance-Based
Cortical Neuronal Networks, Shriki et al. present a conductance-based model for simulating the dynamics of a neuronal network [1]. The work done in this module is an implementation of their model.
In his module Dynamics of the Firing Rate of Single Compartmental Cells, Yangluo Wang shows how to model the dynamics of an isolated cell using the Hodgkin and Huxley model. We will build on the work
presented by Wang to model the dynamics of cells within a neuronal network driven by some external current. We then apply this model to a network of cells within a hypercolumn in primary visual cortex. | {"url":"http://cnx.org/content/m22294/latest/?collection=col10523/latest","timestamp":"2014-04-16T16:07:21Z","content_type":null,"content_length":"167250","record_id":"<urn:uuid:ae8337bf-0be4-4329-8b78-943c100e06a0>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00179-ip-10-147-4-33.ec2.internal.warc.gz"}
Vector space classification
Next: Document representations and measures Up: irbook Previous: References and further reading Contents Index
Vector space classification
The document representation in Naive Bayes is a sequence of terms or a binary vector of term occurrences. In this chapter we adopt a different representation for text classification, the vector space model, developed in Chapter 6. It represents each document as a vector with one real-valued component, usually a tf-idf weight, for each term. Thus, the document space X, the domain of the classification function, is R^|V|.
The basic hypothesis in using the vector space model for classification is the contiguity hypothesis.
Contiguity hypothesis. Documents in the same class form a contiguous region and regions of different classes do not overlap.
There are many classification tasks, in particular the type of text classification that we encountered in Chapter 13 , where classes can be distinguished by word patterns. For example, documents in
the class China tend to have high values on dimensions like Chinese, Beijing, and Mao whereas documents in the class UK tend to have high values for London, British and Queen. Documents of the two
classes therefore form distinct contiguous regions as shown in Figure 14.1 and we can draw boundaries that separate them and classify new documents. How exactly this is done is the topic of this
Whether or not a set of documents is mapped into a contiguous region depends on the particular choices we make for the document representation: type of weighting, stop list etc. To see that the
document representation is crucial, consider the two classes written by a group vs. written by a single person. Frequent occurrence of the first person pronoun I is evidence for the single-person
class. But that information is likely deleted from the document representation if we use a stop list. If the document representation chosen is unfavorable, the contiguity hypothesis will not hold and
successful vector space classification is not possible.
The same considerations that led us to prefer weighted representations, in particular length-normalized tf-idf representations, in Chapters 6 and 7 also apply here. For example, a term with 5 occurrences in a document should get a higher weight than a term with one occurrence, but a weight 5 times larger would give too much emphasis to the term. Unweighted and unnormalized counts should not be used in vector space classification.
We introduce two vector space classification methods in this chapter, Rocchio and kNN. Rocchio classification (Section 14.2) divides the vector space into regions centered on centroids or prototypes, one for each class, computed as the center of mass of all documents in the class. Rocchio classification is simple and efficient, but inaccurate if classes are not approximately spheres with
similar radii.
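To make the idea concrete, here is a minimal sketch of Rocchio training and classification (the toy data, class names, and function names are my own invention, not from the book):

```python
import numpy as np

def train_rocchio(docs, labels):
    """Compute one centroid (center of mass) per class from document vectors."""
    return {c: np.mean([d for d, l in zip(docs, labels) if l == c], axis=0)
            for c in sorted(set(labels))}

def classify_rocchio(centroids, doc):
    """Assign the class whose centroid is nearest (Euclidean distance)."""
    return min(centroids, key=lambda c: np.linalg.norm(doc - centroids[c]))

# Toy 2-D "document space": two well-separated contiguous regions.
docs = np.array([[0.9, 0.1], [0.8, 0.2], [0.1, 0.9], [0.2, 0.8]])
labels = ["China", "China", "UK", "UK"]
centroids = train_rocchio(docs, labels)
print(classify_rocchio(centroids, np.array([0.7, 0.3])))  # China
```

The "spheres with similar radii" caveat shows up directly here: nearest-centroid assignment implicitly assumes each class occupies a roughly ball-shaped region of about the same size.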
kNN or k nearest neighbor classification (Section 14.3) assigns the majority class of the k nearest neighbors to a test document.
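A minimal sketch of kNN over document vectors (the toy data and names are my own; ties are broken arbitrarily):

```python
from collections import Counter
import numpy as np

def classify_knn(docs, labels, doc, k=3):
    """Majority vote among the k training documents nearest to doc."""
    dists = np.linalg.norm(docs - doc, axis=1)
    nearest = np.argsort(dists)[:k]
    return Counter(labels[i] for i in nearest).most_common(1)[0][0]

docs = np.array([[0.9, 0.1], [0.8, 0.2], [0.7, 0.1],
                 [0.1, 0.9], [0.2, 0.8], [0.1, 0.7]])
labels = ["China", "China", "China", "UK", "UK", "UK"]
print(classify_knn(docs, labels, np.array([0.75, 0.2])))  # China
```

Unlike nearest-centroid classification, kNN makes no spherical-class assumption; it adapts to whatever shape the contiguous regions happen to have, at the cost of storing all training documents.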
A large number of text classifiers can be viewed as linear classifiers - classifiers that classify based on a simple linear combination of the features (Section 14.4). Such classifiers partition the space of features into regions separated by linear decision hyperplanes, in a manner to be detailed below. Because of the bias-variance tradeoff (Section 14.6), more complex nonlinear models are not systematically better than linear models. Nonlinear models have more parameters to fit on a limited amount of training data and are more likely to make mistakes for small and noisy data sets.
When applying two-class classifiers to problems with more than two classes, there are one-of tasks - a document must be assigned to exactly one of several mutually exclusive classes - and any-of tasks - a document can be assigned to any number of classes, as we will explain in Section 14.5. Two-class classifiers solve any-of problems and can be combined to solve one-of problems.
© 2008 Cambridge University Press | {"url":"http://nlp.stanford.edu/IR-book/html/htmledition/vector-space-classification-1.html","timestamp":"2014-04-18T03:17:41Z","content_type":null,"content_length":"12381","record_id":"<urn:uuid:2917e2dd-88e9-4f22-a864-b1011d68829c>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00251-ip-10-147-4-33.ec2.internal.warc.gz"}
21-110: Finding a formula for a sequence of numbers
It is often useful to find a formula for a sequence of numbers. Having such a formula allows us to predict other numbers in the sequence, see how quickly the sequence grows, explore the mathematical
properties of the sequence, and sometimes find relationships between one sequence and another. If the sequence is counting something (for example, the number of polyominoes of each area), having a
formula helps us to catch omissions or duplicates if we are trying to make a complete list.
If we have a (partial) sequence of numbers, how can we guess a formula for it? There is no method that will always work for all sequences, but there are methods that will work for certain types of
sequences. Here we explore one method for guessing a polynomial formula for a sequence.
Recall that a polynomial (in the variable x) is an algebraic expression that is the sum or difference of one or more terms (but not infinitely many), where each term consists of a real-number
coefficient multiplied by a nonnegative, whole-number power of x. (Remember that x^0 is allowed, which is the same as 1, so a polynomial can have a constant term too.) The degree of a polynomial is
the highest power of x that appears. For example, 3x^5 − 0.5x^2 + 8 is a polynomial of degree 5.
A simple sequence
Suppose we have n possibly overlapping squares that share exactly one vertex (corner); in other words, there is one point that is a vertex of each of the squares, but no other point is a vertex of
more than one square. The figure below shows one way to draw five such squares (i.e., this is the case n = 5). How many vertices do the squares have in all?
We begin by drawing some examples, counting the number of vertices, and constructing the following table. We are using the notation f(n) for the total number of vertices of n squares to emphasize
that the number of vertices is a function of n, though we don’t know a formula for the function yet.
We would like to find a formula for f(n) in terms of n. In this case the pattern is fairly easy to see: each value of f(n) is 3 more than the previous value. We can see this if we look at the
differences between successive numbers in the sequence, writing these differences in a row beneath the f(n) row.
n: 1 2 3 4 5 6
f(n): 4 7 10 13 16 19
differences: 3 3 3 3 3
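This difference-taking step is easy to automate; the short helper below is my own illustration, not part of the original handout:

```python
def differences(seq):
    """Differences between successive terms of a sequence."""
    return [b - a for a, b in zip(seq, seq[1:])]

f = [4, 7, 10, 13, 16, 19]
print(differences(f))  # [3, 3, 3, 3, 3]
```

A constant row of differences, as here, is the telltale sign of a linear formula.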
If we plot the points in our table, with n on the horizontal axis and f(n) on the vertical axis, we see that the points lie on a straight line. This leads us to guess that the function f(n) can be
described by a linear polynomial, that is, a polynomial of degree 1.
This line has a slope of 3 (the same 3 that is the common difference we saw above), so the equation of the line is f(n) = 3n + b for some value of b. One way to find the value of b is to know that it
represents the y-intercept of the line; from our plot above, we see that b = 1.
Alternatively, we can choose a value of n that we know and the corresponding value of f(n), plug them into the incomplete formula f(n) = 3n + b, and solve for the value of b. Let’s use n = 1 and f(n)
= 4. We get the equation
4 = 3⋅1 + b,
from which it is clear that b must be 1.
So our guess for the formula of f(n) is
f(n) = 3n + 1.
Let’s check this formula with the data we know. When n = 5, for example, we know that f(n) = 16 (from our experimentation). And our formula says
f(5) = 3⋅5 + 1 = 16,
so it seems to check out. You can check the other points from our table; they should agree with the values given by our formula.
A tougher sequence
Let’s try another problem. How many diagonals does a regular n-gon have? (Recall that a regular n-gon is a shape with n equal sides and n equal angles. A diagonal is a line from one vertex of the
n-gon to another, except that edges of the n-gon don’t count.)
As before, we start by drawing some examples. The smallest value of n that makes sense is n = 3 (a 3-gon is a triangle); we continue up to n = 8 (an 8-gon is an octagon).
We see that a triangle has no diagonals, a square has two, a pentagon has five, a hexagon has nine, and so on. We record this information in a table, using n for the number of sides and d(n) for the
number of diagonals. [We could have reused f(n), but since we already used the letter f with a different meaning earlier we chose a different letter this time. On the other hand, we are reusing the
letter n with a different meaning here. It’s not terribly important what you name your variables; just be consistent within any one problem.]
As before, we’ll start by taking differences between successive terms of the sequence.
n: 3 4 5 6 7 8
d(n): 0 2 5 9 14 20
differences: 2 3 4 5 6
This time we don’t have constant differences. But we notice that the differences themselves seem to have a pattern—each difference is 1 more than the previous difference. So, if we take the
differences of the differences, we will get a constant row:
n: 3 4 5 6 7 8
d(n): 0 2 5 9 14 20
first differences: 2 3 4 5 6
second differences: 1 1 1 1
Since we had to take differences twice before we found a constant row, we guess that the formula for the sequence is a polynomial of degree 2, i.e., a quadratic polynomial. (In general, if you have
to take differences m times to get a constant row, the formula is probably a polynomial of degree m.) The general form of a function given by a quadratic polynomial is
d(n) = an^2 + bn + c,
where the coefficients a, b, and c are real numbers. In our case these coefficients are unknown; we are trying to find them. Since we have three unknowns, we will need a system of three equations. To
get these equations, we can use some of the values of n and d(n) that we know. For example, plugging in n = 3, d(n) = 0 into the general quadratic d(n) = an^2 + bn + c gives the equation
0 = a⋅9 + b⋅3 + c.
Similarly, plugging in n = 4, d(n) = 2 gives
2 = a⋅16 + b⋅4 + c,
and n = 5, d(n) = 5 gives
5 = a⋅25 + b⋅5 + c.
This gives us the following simultaneous system of three linear equations in three unknowns. (The numbers in parentheses to the left of the equations are simply labels so that we can refer to the
equations; they are not part of the equations themselves.)
(1) 9a + 3b + c = 0
(2) 16a + 4b + c = 2
(3) 25a + 5b + c = 5
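As a cross-check on the hand calculation that follows, the same 3-by-3 system can be solved exactly by Cramer's rule with rational arithmetic. This snippet is my own sketch, not part of the original handout:

```python
from fractions import Fraction

def det3(m):
    """Determinant of a 3x3 matrix by cofactor expansion along the first row."""
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def cramer3(m, rhs):
    """Solve a 3x3 linear system exactly by Cramer's rule."""
    d = det3(m)
    sols = []
    for j in range(3):
        # Replace column j of m with the right-hand side.
        mj = [[rhs[r] if k == j else m[r][k] for k in range(3)] for r in range(3)]
        sols.append(Fraction(det3(mj), d))
    return sols

a, b, c = cramer3([[9, 3, 1], [16, 4, 1], [25, 5, 1]], [0, 2, 5])
print(a, b, c)  # 1/2 -3/2 0
```

Using Fraction keeps the answers exact, so there is no floating-point rounding to second-guess.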
We can solve this system algebraically to find the values of the coefficients a, b, and c. (For a review of some useful algebraic techniques, see Appendix A in Problem Solving Through Recreational Mathematics.)
One way to solve a system of linear equations is to systematically substitute or eliminate variables one at a time until only one variable remains. Solving for the single remaining variable is easy,
and then you can back-substitute the values of the variables you know to find the values of the variables that were eliminated.
Let’s start by solving equation (1) for c.
This gives us an expression to substitute for the variable c. If we plug this expression into the other two equations, (2) and (3), we get a simultaneous system of two linear equations in two
16a + 4b + (−9a − 3b) = 2
25a + 5b + (−9a − 3b) = 5
After we combine like terms, we get the following system.

(5) 7a + b = 2
(6) 16a + 2b = 5
Now b looks like a good candidate to eliminate. If we multiply equation (5) by −2 and add it to equation (6), the b’s will cancel:
−14a − 2b = −4
16a + 2b = 5
(7) 2a = 1
We have substituted or eliminated all but one variable, whose value we can now find. From equation (7) we see that a = 0.5. Now we begin the process of back-substituting the values of variables we
know into previous equations in order to find the values of the other variables. We can plug the value a = 0.5 into equation (6), say, to get the equation
8 + 2b = 5,
so 2b = 5 − 8 = −3, which means b = −1.5. Now we know the values of both a and b, so we can plug them into equation (4) to get
c = −9(0.5) − 3(−1.5) = 0.
So we have found the coefficients of the quadratic polynomial: a = 0.5, b = −1.5, and c = 0. So we guess that the formula for d(n) is
d(n) = 0.5n^2 − 1.5n.
Let’s check this formula with some data from our table, to make sure we didn’t make an algebraic error somewhere. We’ll try n = 8, for which we should get d(n) = 20:
d(8) = 0.5(8^2) − 1.5(8) = 20.
It checks out. Try a few more values of n and make sure this formula works for the data we have.
Higher-degree polynomial sequences and nonpolynomial sequences
In the examples above, we looked at one sequence that was described by a linear (degree-1) polynomial, and another that was described by a quadratic (degree-2) polynomial. Some sequences might be
described by polynomials of higher degree. A polynomial of degree 3 is called a cubic polynomial, one of degree 4 is called quartic, and one of degree 5 is called quintic; polynomials of degree
higher than 5 aren’t usually given special names. Formulas for sequences described by such polynomials can be found using the technique described above; there will just be more algebra to do (but not
harder algebra).
For example, if you are investigating a sequence described by a quartic (degree-4) polynomial, you will find that you need to take successive differences four times before you reach a constant row
(this tells you the degree of the polynomial), and you will need to solve for five unknown coefficients, because the general form of a quartic polynomial (in the variable n) is
an^4 + bn^3 + cn^2 + dn + e.
So you will need a system of five equations, which you can obtain (as we did above) by plugging in five sets of known values. Solving a system of five equations can be done just like solving a system
of three equations; it just requires a bit more patience.
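In fact, for extending a polynomial sequence you can skip solving for coefficients entirely by using the standard Newton forward-difference formula, f(n0 + k) = sum over j of Δ^j f(n0) · C(k, j). The sketch below is my own illustration, not part of the original handout:

```python
from math import comb

def next_terms(seq, how_many):
    """Extend a polynomial sequence using its forward-difference table."""
    # Leading entry of each difference row: the j-th forward difference at the start.
    leading, row = [], list(seq)
    while row:
        leading.append(row[0])
        row = [b - a for a, b in zip(row, row[1:])]
    n = len(seq)
    return [sum(d * comb(k, j) for j, d in enumerate(leading))
            for k in range(n, n + how_many)]

# Diagonals of an n-gon for n = 3..8; predict n = 9 and n = 10.
print(next_terms([0, 2, 5, 9, 14, 20], 2))  # [27, 35]
```

This works for any degree, because a degree-m polynomial sequence has all forward differences beyond the m-th equal to zero.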
On the other hand, some sequences cannot be described by polynomials. For example, consider the sequence of powers of 2, given by the formula 2^n:
n: 0 1 2 3 4 5 6 7 8 9 10 …
2^n: 1 2 4 8 16 32 64 128 256 512 1024 …
You can start taking successive differences of these numbers, but no matter how long you go, you will never reach a constant row. (Try it and see.) This sequence grows too quickly to be a polynomial
sequence. It is an exponential sequence instead, meaning that the variable n is in the exponent in the formula 2^n. The method described here will not work for sequences (like this one) that are not
polynomial sequences. There are other methods that can be used to find formulas for certain types of nonpolynomial sequences, such as exponential sequences, but the details of such methods are
probably beyond the scope of this course.
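You can confirm the claim about powers of 2 with a two-line experiment (my own sketch, not part of the original handout): the differences of 2^n satisfy 2^(n+1) − 2^n = 2^n, so each difference row reproduces the original sequence and never becomes constant.

```python
powers = [2 ** n for n in range(10)]  # 1, 2, 4, 8, ..., 512
diffs = [b - a for a, b in zip(powers, powers[1:])]
print(diffs)  # [1, 2, 4, 8, 16, 32, 64, 128, 256]
```

The difference row is just the original sequence again (minus its last term), which is exactly why the difference table never flattens out.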
From guesses to proofs
It is important to note that this method produces only a guess for a formula—it doesn’t actually prove that the formula is correct in general. For example, in the squares problem it is conceivable
that the formula works only for values of n up to 6, and then fails after that; or maybe it works for all values of n up to a million, but doesn’t work for n = 1,000,001. We can test more values of
n, drawing more and more squares, counting the number of vertices, and comparing this with the value predicted by the formula, but we won’t ever be able to test every possible value of n. So how can
we be positive that the formula will always work?
This is the fundamental question that provides the inspiration for all of mathematics: How do we know that a guess is correct; how do we know it is always true? The answer lies in the concept of
mathematical proof. When we guess a formula for f(n), we have made a conjecture, an educated guess based on a body of evidence. But it is not a theorem, an established mathematical fact, until we can
provide a proof, a logical justification that explains why it must always be true. Proof is important; it is what separates statements that are known to be true from statements that merely seem to be
true. Many conjectures that seem true at first turn out to be false when they are studied more extensively. It is not uncommon for a mathematician to make a conjecture and even announce it publicly
(as a conjecture, of course), only to be proved wrong later by someone who finds a counterexample, an exception to the conjecture. This is part of the process by which mathematical knowledge grows
and evolves.
Finding a proof for a conjecture can be difficult and may require a very creative way of looking at the problem. There are some conjectures in math that have eluded all attempts at proof for hundreds
of years. On the other hand, many conjectures can be proved with just a bit of thought. A mathematical proof does not have to follow a strict format or be encrusted with strange-looking symbols; all
that is required is an explanation following a careful, logical train of thought that shows why the conjecture must be true.
Let’s try to prove our conjecture that the total number of vertices in the squares problem is given by the formula f(n) = 3n + 1. We might first ask ourselves, “Why does the formula end with ‘+1’?
What significance does that have? What does it correspond to?”
Let’s see if there is any single object that might give us the “+1” in the formula. Since the formula is counting vertices, the “+1” seems to point to one special “extra” vertex that has to be added
in at the end. Looking at a picture, we might guess that the “+1” corresponds to the single vertex that is shared by all of the squares. If so, then the remaining vertices should be accounted for by
the “3n.” Aha! The variable n represents the number of squares, and each square has 3 unshared vertices that are its own and do not belong to any other square. So the total number of these unshared
vertices is given by 3n. To this we must add 1 for the shared vertex, giving us a total of 3n + 1 vertices in all, which is the formula we guessed.
Once a mathematician has figured out the idea for a proof, he or she usually rewrites the proof in a more succinct and polished form. (It is worth remembering this when reading a proof in a book—what
you are reading is a final draft, with all of the messy trial-and-error thought processes that were required to produce the proof hidden. Nobody, not even an experienced mathematician, writes a
complete, well-written proof from start to finish the first time they try.) A polished presentation of the conjecture we just proved (now a theorem!) might look like the following.
Theorem. The total number of vertices for n squares that share exactly one common vertex is given by the formula f(n) = 3n + 1.
Proof. Each of the n squares has 3 vertices that are not shared with any other square; this gives 3n unshared vertices in all. In addition there is a single vertex that is shared among all the
squares. So the total number of vertices is 3n + 1.
Now that we have a proof for this theorem, we know that the formula f(n) = 3n + 1 always works. This is an amazing and beautiful thing. We have proved a statement that is true for infinitely many
values of n—the formula for f(n) works for any positive integer n—even though we can never check every case one by one! So, for example, we now know with complete certainty that if a million squares
share exactly one vertex, the total number of vertices is
f(1,000,000) = 3⋅1,000,000 + 1 = 3,000,001.
Of course we could never accurately draw this many squares and count their vertices by hand, but that’s not necessary to know that our answer is right.
Can you prove the formula we guessed for d(n), the number of diagonals of a regular n-gon? It is not easy to see what is going on when the formula is written as d(n) = 0.5n^2 − 1.5n. But we can
factor this expression and write it in a more suggestive form:
d(n) = n(n − 3)/2.

Why is this formula correct?
Last updated 12 February 2010. Brian Kell <bkell@cmu.edu> | {"url":"http://www.math.cmu.edu/~bkell/21110-2010s/formula.html","timestamp":"2014-04-21T00:15:49Z","content_type":null,"content_length":"27553","record_id":"<urn:uuid:8bd207a2-de74-45a7-b303-783f473f25da>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00279-ip-10-147-4-33.ec2.internal.warc.gz"} |
Posts about free energy on Nerd Wisdom
In my last post, I discussed phase transitions, and how computing the free energy for a model would let you work out the phase diagram. Today, I want to discuss in more detail some methods for
computing free energies.
The most popular tool physicists use for computing free energies is “mean-field theory.” There seems to be at least one “mean-field theory” for every model in physics. When I was a graduate student,
I became very unhappy with the derivations for mean-field theory, not because there were not any, but because there were too many! Every different book or paper had a different derivation, but I
didn’t particularly like any of them, because none of them told you how to correct mean-field theory. That seemed strange because mean-field theory is known to only give approximate answers. It
seemed to me that a proper derivation of mean-field theory would let you systematically correct the errors.
One paper really made me think hard about the problem; the famous 1977 “TAP” spin glass paper by Thouless, Anderson, and Palmer. They presented a mean-field free energy for the
Sherrington-Kirkpatrick (SK) model of spin glasses by “fait accompli,” which added a weird “Onsager reaction term” to the ordinary free energy. This shocked me; maybe they were smart enough to write
down free energies by fait accompli, but I needed some reliable mechanical method.
Since the Onsager reaction term had an extra power of 1/T compared to the ordinary energy term in the mean field theory, and the ordinary energy term had an extra power of 1/T compared to the entropy
term, it looked to me like perhaps the TAP free energy could be derived from a high-temperature expansion. It would have to be a strange high-temperature expansion though, because it would need to be
valid in the low-temperature phase!
Together with Antoine Georges, I worked out that the “high-temperature” expansion (it might better be thought of as a “weak interaction expansion”) could in fact be valid in a low-temperature phase,
if one computed the free energy at fixed non-zero magnetization. This turned out to be the key idea; once we had it, it was just a matter of introducing Lagrange multipliers and doing some work to
compute the details.
It turned out that ordinary mean-field theory is just the first couple terms in a Taylor expansion. Computing more terms lets you systematically correct mean field theory, and thus compute the
critical temperature of the Ising model, or any other quantities of interest, to better and better precision. The picture above is a figure from the paper, representing the expansion in a
diagrammatic way.
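To give a flavor of what the expansion produces for Ising spins, here are the first few terms at fixed magnetizations m_i, written from memory rather than quoted from the papers, so check the references for exact signs and conventions:

```latex
\beta F(\{m_i\}) = \sum_i \left[ \frac{1+m_i}{2}\ln\frac{1+m_i}{2}
    + \frac{1-m_i}{2}\ln\frac{1-m_i}{2} \right]
  - \beta \sum_{i<j} J_{ij}\, m_i m_j
  - \frac{\beta^2}{2} \sum_{i<j} J_{ij}^2 \,(1 - m_i^2)(1 - m_j^2)
  + O(\beta^3)
```

The first sum is (minus) the entropy, the second is the ordinary mean-field energy, and the third is the Onsager reaction term; each successive term carries one more power of beta, which is exactly the pattern noted above.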
We found out, after doing our computations but before submitting the paper, that in 1982 Plefka had already derived the TAP free energy for the SK model from that Taylor expansion, but for whatever
reason, he had not gone beyond the Onsager correction term or noted that this was a technique that was much more general than the SK model for spin glasses, so nobody else had followed up using this approach.
If you want to learn more about this method for computing free energies, please read my paper (with Antoine Georges) “How to Expand Around Mean-Field Theory Using High Temperature Expansions,” or my
paper “An Idiosyncratic Journey Beyond Mean Field Theory.”
This approach has some advantages and disadvantages compared with the belief propagation approach (and related Bethe free energy) which is much more popular in the electrical engineering and computer
science communities. One advantage is that the free energy in the high-temperature expansion approach is just a function of simple one-node “beliefs” (the magnetizations), so it is computationally
simpler to deal with than the Bethe free energy and belief propagation. Another advantage is that you can make systematic corrections; belief propagation can also be corrected with generalized belief
propagation, but the procedure is less automatic. Disadvantages include the fact that the free energy is only exact for tree-like graphs if you add up an infinite number of terms, and the theory has
not yet been formulated in a nice way for "hard" (infinite-energy) constraints.
If you’re interested in quantum systems like e.g. the Hubbard model, the expansion approach has the advantage that it can also be applied to them; see my paper with Georges, or the lectures by
Georges on his related “Dynamical Mean Field Theory,” or this recent paper by Plefka, who has returned to the subject more than 20 years after his original paper.
Also, if you’re interested in learning more about spin glasses or other disordered systems, or about other variational derivations for mean-field theory, please see this post. | {"url":"http://nerdwisdom.com/tag/free-energy/","timestamp":"2014-04-18T18:38:18Z","content_type":null,"content_length":"67624","record_id":"<urn:uuid:ac568f25-e214-4fa2-b1ac-6b4feec05f46>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00444-ip-10-147-4-33.ec2.internal.warc.gz"} |
Forest Hills, NY Algebra Tutor
Find a Forest Hills, NY Algebra Tutor
...I'm a scientist, and I love science! I've been tutoring for the ACT for ten years. Unfortunately, even AP level science classes often will not prepare a student for the ACT science.
17 Subjects: including algebra 1, algebra 2, calculus, geometry
...I focused primarily on political philosophy and moral philosophy but have experience in most areas of the subject. I've TAed numerous classes in those areas at both institutions and have
taught high school philosophy courses at a private high school. I studied A-level mathematics and further mathematics in the UK and received As in both subjects.
40 Subjects: including algebra 2, algebra 1, English, chemistry
...When teachers explain something in class they assume that the students have a certain knowledge about math based on what they learned in previous years. If a student is lacking in a certain
area of what they are assumed to know, they won't understand the new topic, and it creates a snowball effe...
21 Subjects: including algebra 1, statistics, algebra 2, calculus
...Cheers! I have a Master's degree in philosophy from Queens College CUNY, with a focus in Applied Ethics, 19th Century German Philosophy, and Political Theory. I am also knowledgeable in many
other areas of Western philosophy and religion.
36 Subjects: including algebra 1, Spanish, English, reading
...I look forward to working with you! I am currently an instructor for an AP Biology Enrichment course where I develop lesson plans, quizzes, exams, and homeworks to make sure students stay on
top of their regular AP Biology course and I relate what they learn to real-world examples so they have a...
8 Subjects: including algebra 1, algebra 2, chemistry, biology
| {"url":"http://www.purplemath.com/forest_hills_ny_algebra_tutors.php","timestamp":"2014-04-21T12:40:30Z","content_type":null,"content_length":"24067","record_id":"<urn:uuid:a81b868a-5ba8-4ba3-9a3d-eec4e63d9a60>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00187-ip-10-147-4-33.ec2.internal.warc.gz"}
simple but puzzling proofs
May 5th 2009, 05:27 PM
I am a bit puzzled on this. It is a bit embarrassing.
Is the following statement true or false?
For all y there exists an x s.t. x > y^2 + 1
The problem says nothing else about x or y, and this is confusing. I have a feeling it is true because I can always find a number that is bigger than another number. Am I thinking correctly? Or is it that I choose y to be zero and x to be zero, and thus it is false by counterexample? I think that this is incorrect because the claim is for some x, not for all x.
May 5th 2009, 05:44 PM
Let y be any real number. Define x so that $x = y^2 + 2$.
Then show that $x > y^2 + 1$
You might find Part I, Chapters 1-3, of my book Topology and the Language of Mathematics useful. There's a free download available through here: Bobo Strategy - Topology
Hope that helps...
May 5th 2009, 05:53 PM
Thanks, but could you please explain what gives me the right to define x = y^2 + 2?
May 5th 2009, 06:21 PM
You are supposed to show that for any y, you can come up with an x so that x is bigger than $y^2 + 1$. That's what we've done.
For any y, we have shown there is an x that is bigger than $y^2 + 1$. And we've defined it as a function of the arbitrary y.
Note you could also define x = 323 - it just wouldn't be useful as it wouldn't always be bigger than $y^2 + 1$ (for example when y = 20). | {"url":"http://mathhelpforum.com/discrete-math/87692-simple-but-puzzling-proofs-print.html","timestamp":"2014-04-16T07:36:25Z","content_type":null,"content_length":"7181","record_id":"<urn:uuid:fa0ea923-42b3-435f-baa1-b61665bb248d>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00562-ip-10-147-4-33.ec2.internal.warc.gz"} |
Graphing software
December 14th 2011, 08:40 PM #1
May 2011
Graphing software
I'm looking for a free program to graph relatively straightforward high school functions. I'd like it to produce decently high resolution so I can print it out and make high school math worksheets.
Which program should I use?
Re: Graphing software
Excel works
Also see: List of information graphics software - Wikipedia, the free encyclopedia
Re: Graphing software
Thanks for sharing, it really helps me.
Re: Graphing software
Thanks for the list, very useful for me.
Re: Graphing software
Thanks a lot, have a nice day. Thanks for sharing.
Re: Graphing software
For a very straightforward graph without any of the advanced functions, I use this website: https://www.desmos.com/calculator
You can simply type in a function and it'll show it on the graph, no other complications.
Re: Graphing software
There is a lot of graphing software available; you can search the software sites and draw graphs with it.
Debra Fine
Re: Graphing software
Now a Days lots of Graphics software available in the market, I am working with MindsEye – A nice freeware 3D modeling and rendering suite for Linux users.
miami website designers
Re: Graphing software
Now a Days lots of Graphics software available in the market, I am working with MindsEye – A nice freeware 3D modeling and rendering suite for Linux users.
miami website designers
Can you tell where MindsEye is mentioned in the link you provided?
Re: Graphing software
I found some by looking up 'drawing graphs' on Google.
Re: Graphing software
I have used a program called Advanced Grapher to prepare graphs for papers, presentations, and one book. It costs $30 (I have enjoyed lifetime upgrades) and is currently available here
Plot graphs, perform regression analysis with Advanced Grapher
Re: Graphing software
The 2 I like best are both free:
□ WinPlot: http://math.exeter.edu/rparris/winplot.html
□ Graph: Graph
Re: Graphing software
There are also several online utilities you can use.
Fooplot ( www.fooplot.com ) will let you save graphs, and if you choose one of the vector formats (PDF, EPS, SVG) they will be fantastic for printing, though you need a program that supports
vector image embedding. If you're doing your worksheets in LaTeX this is ideal. Fooplot also has PNG export for those using less-professional programs.
Desmos ( www.desmos.com ) seems to only let you save after logging in, though you could always screenshot it.
graph.tk ( graph.tk ) also lets you save, but doesn't support vector formats.
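If you don't mind a few lines of scripting, matplotlib (free, Python) also produces print-quality output. Something like the sketch below works; the function, figure size, and filename are just examples:

```python
import matplotlib
matplotlib.use("Agg")  # render straight to a file; no display needed
import matplotlib.pyplot as plt
import numpy as np

x = np.linspace(-3, 3, 400)
plt.figure(figsize=(5, 4))
plt.plot(x, x ** 2 - 2, label="y = x^2 - 2")
plt.axhline(0, color="black", linewidth=0.8)  # draw the axes through the origin
plt.axvline(0, color="black", linewidth=0.8)
plt.grid(True)
plt.legend()
plt.savefig("parabola.png", dpi=300)  # high dpi prints cleanly
```

Bumping the dpi (or saving as PDF/SVG instead of PNG) gives output sharp enough for printed worksheets.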
| {"url":"http://mathhelpforum.com/math-software/194296-graphing-software.html","timestamp":"2014-04-17T05:58:08Z","content_type":null,"content_length":"65544","record_id":"<urn:uuid:8eb09943-cac6-440d-8005-dc2dd0c55e30>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00563-ip-10-147-4-33.ec2.internal.warc.gz"}
Alpharetta Geometry Tutor
Find an Alpharetta Geometry Tutor
...I am qualified to assist students with many of the sub-fields including set theory, recursion, and proofs. I graduated with honors with my Bachelor's degree in Computer Engineering from Embry-Riddle Aeronautical University. As with most undergraduate degree programs, my undergraduate studies provided the necessary knowledge base to approach problems I would face in my...
24 Subjects: including geometry, calculus, statistics, algebra 1
I am a state certified teacher of students in grades K-8. Through over 12 years of teaching children in multiple subject areas, I have discovered that children excel in learning when they are
motivated through positive encouragement, the content is presented in a fun and exciting manner, and multip...
7 Subjects: including geometry, reading, algebra 1, grammar
...Physics is my strongest subject and I enjoy it the most, particularly kinematics, Newton's second law of motion, and rotational motion. I also enjoy tutoring the work-energy theorem.
11 Subjects: including geometry, calculus, physics, ASVAB
...I have been a tutor in the Johns Creek, Duluth and Alpharetta area for the past 5 years. Prior to teaching, I worked as a Civil and Environmental Engineer at a private consulting firm. During
that time, I achieved my master's degree in Engineering.
12 Subjects: including geometry, chemistry, calculus, algebra 1
...To see the big picture correctly all the pieces must be in their proper place. Algebra is just a component of the puzzle. I will help you find the proper place where it belongs so that you will
see how it connects to lower and upper math concepts.
26 Subjects: including geometry, English, reading, algebra 2 | {"url":"http://www.purplemath.com/Alpharetta_geometry_tutors.php","timestamp":"2014-04-16T16:50:37Z","content_type":null,"content_length":"23908","record_id":"<urn:uuid:35235da5-2cb2-431c-b3e3-88504fde30ca>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00261-ip-10-147-4-33.ec2.internal.warc.gz"} |
Show that (ab)^n=(a^n)(b^n) for an Abelian group - My Math Forum
February 3rd, 2011, 01:22 PM #2
Global Moderator
Re: Show that (ab)^n=(a^n)(b^n) for an Abelian group
Well, what is (ab)^n ??
It's just (ab)(ab)(ab)(ab).........(ab)
n times.
But since you're talking abelian, you can move the elements around, so you can write
(aaaaaaaaaaaaaaaa.........a)(bbbbbbbbbbbbbbbbbb........b)
Where each of a and b is written n times, but that's just
(a^n)(b^n)
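The argument above tightens into a short induction (a sketch; $e$ denotes the identity, and commutativity is used once in the step):

```latex
\text{Base case: } (ab)^0 = e = a^0 b^0. \\
\text{Step: if } (ab)^n = a^n b^n \text{, then} \\
(ab)^{n+1} = (ab)^n(ab) = a^n b^n a b = a^n (b^n a) b = a^n (a\, b^n) b = a^{n+1} b^{n+1},
```

where $b^n a = a b^n$ follows from $ba = ab$ by its own easy induction.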
Thanks: 2 This is not true in non-abelian groups. | {"url":"http://mymathforum.com/abstract-algebra/17263-show-ab-n-n-b-n-abelian-group.html","timestamp":"2014-04-18T14:11:18Z","content_type":null,"content_length":"41615","record_id":"<urn:uuid:8abaa917-ee19-4026-9c30-843db5b1f16e>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00603-ip-10-147-4-33.ec2.internal.warc.gz"} |
Realistic Goal For Graduates: Accumulate Double Your Annual Salary By Age 40
Enough with the fluffy stuff, how about some firm numbers. Imagine that a young college grad actually has the forethought to even think about what they need for retirement. They check out an online
retirement calculator, and see their needed amount is… 5.7 bajillion dollars!^1 Shocked, they shake their head, walk away, and promise themselves to revisit it again in a few years… hopefully.
A more attainable goal: You should aim to accumulate double your salary by age 40. Doesn’t that sound more reasonable? This is the solution proposed by this Wall Street Journal article A $1 Million
Retirement Fund: How to Get There From Here. (Thanks Don for the tip.) Why double?
Let’s say your salary has hit that $80,000, you have amassed $160,000 in savings, you are socking away 12% of your pretax income each month and your investments earn 6% a year. Over the next 12
months, your $160,000 portfolio would balloon to $179,518, or $19,518 more. Your monthly savings would account for $9,600 of that growth. But the other $9,918 would come from investment gains.
In other words, you’ve got to the crossover point, where the biggest driver of your portfolio’s growth is now investment earnings, not the actual dollars you’re socking away.
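The year of growth described above is easy to reproduce. Here is a minimal sketch, assuming monthly compounding at 6%/12 with end-of-month contributions (the article's exact $179,518 presumably reflects a slightly different compounding convention):

```python
def grow_one_year(balance, salary, save_rate=0.12, annual_return=0.06):
    """Grow a portfolio for 12 months with monthly contributions."""
    monthly_rate = annual_return / 12
    contribution = salary * save_rate / 12   # $800/month on an $80,000 salary
    saved = 0.0
    for _ in range(12):
        balance = balance * (1 + monthly_rate) + contribution
        saved += contribution
    return balance, saved

end_balance, saved = grow_one_year(160_000, 80_000)
gains = end_balance - 160_000 - saved
print(f"end balance: ${end_balance:,.0f}")     # end balance: $179,737
print(f"new savings: ${saved:,.0f}")           # new savings: $9,600
print(f"investment gains: ${gains:,.0f}")      # investment gains: $10,137
```

Under this convention the gains come out a bit above the article's $9,918, but the point stands: the investment gains are on the same order as the annual contributions.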
My only beef is that the math in the article is a bit vague. First, the article means double your expected salary at age 40, by age 40. Now, is the 6% assumed return supposed to be real or nominal?
Are we assuming this is all in a 401(k)? How much inflation-adjusted money will this give you at age 65?
However, the main points remain. Money saved now will be worth a lot more than money saved later. Once you generate a “critical mass” in your retirement funds, they really do seem to gain a life of
their own.
The graph on the right shows three investors, each of whom invests just $1,000 a year until age 65. However, one begins at age 25, investing a total of $40,000; one at age 35, investing a total of $30,000; and one at age 45, investing a total of $20,000.
The result? The early bird ends up with more than double the one who waits until age 35 and more than four times the one who waits until age 45.^2
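The early-bird comparison is just the future value of an annuity; the ICI chart's assumed return isn't stated, so the 7% used below is an assumption:

```python
def future_value(annual_contribution, years, rate=0.07):
    """Future value of equal contributions made at each year's end."""
    return annual_contribution * ((1 + rate) ** years - 1) / rate

start_25 = future_value(1_000, 40)   # $40,000 contributed over 40 years
start_35 = future_value(1_000, 30)   # $30,000 contributed over 30 years
start_45 = future_value(1_000, 20)   # $20,000 contributed over 20 years
print(round(start_25), round(start_35), round(start_45))
print(start_25 / start_35 > 2, start_25 / start_45 > 4)   # True True
```

At 7% the ratios come out to roughly 2.1x and 4.9x, matching the "more than double" and "more than four times" claims.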
I’ve certainly experienced this. As our own retirement balances have grown, the recent stock gains alone are often thousands of dollars each month. So what are you waiting for? Get started with just
$50 per month!
^1 Actually if you plugged in 21 years old and $40,000, the goal would be $2,591,000. Still big!
^2 Source: Investment Company Institute
1. Dennis says:
I start work in about two weeks and will be diving deeper into investments.
2. Don says:
I don’t have many complaints about the article (after all, I suggested it). I noticed that the math was as you say: savings at age 40 amounting to double your salary at age 40. Still it seems
like a really nice and tangible goal to keep in mind.
After all, if you have savings at age 35 amounting to double your salary at age 35, you can probably expect you are ahead of the curve. You can probably expect your savings to grow faster than
your salary, given that it is earning a return and you are at the same time still contributing.
I’m a few years from 40 myself, and my savings aren’t double my salary, but I can well imagine that they will be close based on my contribution level and my approximate expected return.
Naturally, that pleases me.
And if you reach 40 and find yourself behind, then you know what to do. You should probably look for a way to save more. At 40, you are young enough still to make modest adjustments and see big
results in the end. It seems like a great age to lay down a guide post to see if you are on track.
To summarize, I think it is ingenious. It is mathematically simple, the measure is taken at a useful time of life, and it works for all income levels (at least according to the usual wisdom that
your retirement income should be some percentage of you pre-retirement income).
3. John says:
I’ve heard this before, but I think the number was 1x salary by 40 rather than 2x salary. All the retirement calculators I’ve seen (except for this one) assume more than a 6% return, so at 7% to
9% return 1x income may well be plenty.
Furthermore, the rule doesn’t exactly apply to those with professional/doctorate degrees, as they (and me) get out of school later and often have a pile of debt (not me). This delay has to be
made up through higher contributions as soon as a regular income becomes available.
4. Bill Bailey says:
I’m 38, have more than double my salary, put in 20% a year (thanks to matching), and still the Fidelity retirement calculator says I probably won’t reach my retirement goals by 65. What’s up with
5. pfodyssey says:
While the idea is straightforward enough, I don’t know if it buys me much other than being “easy”.
As Bill Bailey wrote, doubling his salary doesn’t seem to get him over the hump. I advocate using a number of tools available to you in order to get a sense for whether or not you are on track.
6. heather says:
I think that the Fidelity calculator is totally absurd. I’m in the same position as Bill and Fidelity wants us to bump our savings to 50% of gross. I think that calculator is “Fidelity’s
Retirement Calculator.”
Your assessment of the article is spot on. My additional assumption is that the author intended any money you can tap for retirement to be included in the "double salary" figure… so if you buy a house (or are saving for a down payment), that money still counts because you can do a reverse mortgage or sell it in retirement. But money saved for a car, college… not so much.
7. Don says:
Now, I’m not 40 yet, but I look to be on track by the article. I’ve also checked against the Fidelity calculator. According to the calculator, in average markets I could stop saving right now and
be fine. In poor markets I’d run short.
But also according to the calculator, if I continue to save at approximately the same rate, I can expect to be fine in basically any market.
So to my eye the measures seem about on. I expect to have about double my salary at age 40, and the Fidelity calculator is laying pretty good odds that I’ll have enough retirement income. Perhaps
the fact that most of my savings are in Roth accounts means more. After all, without tax at the end, that money can be expected to go farther.
8. Mike says:
I think that the secret to the Fidelity calculator is in the number that you can’t adjust. “Your goal represents assets needed to replace 85% of your pre-retirement income before taxes” Man, I
wish I currently lived off of 85% of my income before taxes! After taxes and savings, I’m under 30%, so I’m not sure I’d even know how to nearly triple my spending in retirement.
9. Sasha says:
Interesting article. I'm only 24, so even a goal for age 40 seems a bit intangible. The retirement calculators I've used just seem so abstract (can they REALLY estimate what I'll need in 40 or 50 years?) so I basically just save as much as I can afford to. Which is good since I've only been working about a year and compounding interest is on my side - but after a while, that strategy won't be the best.
10. W.H. says:
I am wondering if there is a calculator that can handle a change of investing strategy. For example, be aggressive when young and gradually switch to more conservative approach.
11. Grant says:
A noteworthy goal, indeed.
Of course, another goal could be to save half your net salary each year… Of course, the goal must be realistic and tangible, and will be different for everyone.
Speak Your Mind Cancel reply | {"url":"http://www.mymoneyblog.com/realistic-goal-for-graduates-accumulate-double-your-annual-salary-by-age-40.html","timestamp":"2014-04-16T16:15:03Z","content_type":null,"content_length":"59750","record_id":"<urn:uuid:20dc697b-0c49-4060-bf2c-21f6a0ab3e01>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00169-ip-10-147-4-33.ec2.internal.warc.gz"} |
trig word problem question
June 15th 2009, 05:03 PM #1
Below is a word problem that is driving me nuts. I'll bet the answer is probably an obvious one but for some reason I'm not making any sense of it. I am asking for help in HOW to solve it, not
the actual answer...I'd rather work that out and post the answer so it can be checked. Here is the word problem:
Two wheels are rotating in such a way that the rotation of the smaller wheel causes the larger wheel to rotate. The radius of the smaller wheel is 6.9 cm and the radius of the larger wheel is
10.9 cm. Through how many degrees will the larger wheel rotate if the smaller one rotates 108 degrees?
Does the 108 degrees of the smaller circle have any relevance beyond the fact that it should rotate more/faster because it is the smaller one, or is it just a matter of solving the angle using s=
ar (s=arc length, a = angle, r = radius)? If that is the case, and s=ar, then a = s/r. s= 360 deg, divided by radius 10.9, = 33.02752294, or roughly 33 degrees.
Or am I completely off base and need to work this from a different perspective?
You have part of the idea.
You have to assume that the circles touch at only one point at the start of the rotation.
The arc length through 108 degrees of the smaller circle is
$2r \pi \times \frac{108}{360} = 13.006$
13.006 / 10.9 is the radian measure through which the larger circle rotated.
See, the arc length will be the same for both the wheels
for smaller wheel, s = ar
for bigger wheel, s = AR
ar = AR
$108^\circ \times 6.9=A\times 10.9$
$A = 68.36^\circ$
the bigger wheel will rotate $68.36^\circ$
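Both replies agree, as a quick numeric check confirms (the variable names here are mine):

```python
import math

r_small, r_big = 6.9, 10.9
theta_small = 108.0   # degrees

# Method 1: equate arc lengths, working in radians.
arc = r_small * math.radians(theta_small)     # about 13.006 cm
theta_big = math.degrees(arc / r_big)

# Method 2: s = a*r works in degrees too, since the angle unit cancels.
theta_big_alt = theta_small * r_small / r_big

print(theta_big, theta_big_alt)   # both about 68.367 degrees
```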
Aug 2008 | {"url":"http://mathhelpforum.com/trigonometry/92968-trig-word-problem-question.html","timestamp":"2014-04-18T12:36:26Z","content_type":null,"content_length":"39043","record_id":"<urn:uuid:dfc117c7-b5b3-414b-9bcc-8aac4ea42e0d>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00381-ip-10-147-4-33.ec2.internal.warc.gz"} |
efficient way to calc log base 2 of 64-bit unsigned int
Is there a more efficient way to calculate the log base 2 of a 64-bit unsigned integer than looping through the bits? I couldn't find an algorithm on the internet that works on 64-bit numbers.
PS. assume that the input is a power of 2 (exactly one bit is set)
Thank you very much
#include <stdio.h>

int findBit ( unsigned long val ) {
    int result = 0;
    if ( val >= 0x10000 ) {
        result += 16;
        val >>= 16;
    }
    if ( val >= 0x100 ) {
        result += 8;
        val >>= 8;
    }
    if ( val >= 0x10 ) {
        result += 4;
        val >>= 4;
    }
    if ( val >= 0x4 ) {
        result += 2;
        val >>= 2;
    }
    if ( val >= 0x2 ) {
        result += 1;
        val >>= 1;
    }
    return result + val;
}

int main ( ) {
    unsigned long tests[] = {
        0, 0x1, 0x100000, 0x400000, 0x80000000
    };
    int i;
    for ( i = 0 ; i < sizeof tests / sizeof *tests ; i++ ) {
        printf("%08lx = %d\n", tests[i], findBit(tests[i]) );
    }
    return 0;
}
$ gcc foo.c
$ ./a.exe
00000000 = 0
00000001 = 1
00100000 = 21
00400000 = 23
80000000 = 32
64 bits is just one more test.
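The same binary-search idea extends to 64 bits with one more test, and is easy to sanity-check outside C; a quick sketch (Python here, verified against the built-in bit_length):

```python
def find_bit(val):
    """1-based index of the highest set bit (0 if val == 0),
    mirroring the C findBit above, widened to 64 bits."""
    result = 0
    for shift in (32, 16, 8, 4, 2, 1):
        if val >= (1 << shift):
            result += shift
            val >>= shift
    return result + val

# Agrees with k + 1 for every 64-bit power of two 2**k.
assert all(find_bit(1 << k) == k + 1 for k in range(64))
print(find_bit(0), find_bit(0x80000000), find_bit(1 << 63))   # 0 32 64
```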
thank you for the code, took me a while to understand, but I think I do now.
The "val >=" and "result +=" lines can be implemented using bitwise & and |=, resp.
thank you for the suggestion. I changed the code accordingly:
typedef unsigned long long Uint64;   /* e.g. uint64_t from <stdint.h> */

unsigned int findBit ( Uint64 bb ) {
    unsigned int result = 0;
    if ( bb & ~0x1FFFFFFFFLL ) {
        result |= 32;
        bb >>= 32;
    }
    if ( bb & ~0x1FFFF ) {
        result |= 16;
        bb >>= 16;
    }
    if ( bb & ~0x1FF ) {
        result |= 8;
        bb >>= 8;
    }
    if ( bb & ~0x1F ) {
        result |= 4;
        bb >>= 4;
    }
    if ( bb & ~0x7 ) {
        result |= 2;
        bb >>= 2;
    }
    if ( bb & ~0x3 ) {
        result |= 1;
        bb >>= 1;
    }
    return (result + bb);
}
however, to my surprise, through benchmarking (a simple loop that executes this function many times), it appears that the improvement in efficiency is unnoticeable or even non-existent. Is this what is supposed to happen? or has my compiler (gcc 4.1) done some magic to the old code?
Mario F.
bitwise operations don't experience the performance improvement of old days.
Always a nice thing to do, I expect. Especially when there are plans to turn new code into old code or use new code in old systems. However, as far as modern compilers (and processors?)are
concerned, explicit bitwise operations in source code don't have too much of an impact any more under most common situations.
I'm surprised - I wasn't sure about the "val >=", but I was pretty sure that the compiler wasn't smart enough to realize that each of the "result +=" could be replaced with |= (since it requires
seeing that the different increments have no bits in common, and that the initial value of result is 0) and that |= would be faster than +=.
The compiler probably doesn't replace += with |=, but += and |= are the same number of clocks either way on a modern x86-based processor - one cycle latency for the actual OR operation, and then
whatever additional time to access the arguments (in this example no time at all, since on side is constants and most likely result is in a register - or it's cached so not much of a latency
there either).
Where it may make a difference is if you're OR-ing to data that is bigger than a register, say a 32-bit value on a 16-bit x86 processor. But that was a while back. I'm pretty sure that OR and ADD
are the same number of cycles even on a 486.
If you wanted to go "all out", and you're using an x86, there's always the "BSR—Bit Scan Reverse" instruction to play with.
> it appears that the improvement in efficiency is unnoticeable or even non-existant.
> Is this what is supposed to happen?
Why would you imagine that there would be some vast difference in performance?
Personally, I prefer mine, because it's more intuitively obvious (well I think so) as to what is going on.
thank you for all your comments
I was under the (wrong) impression that bitwise operations are faster than add.
thank you for clarifying it for me
in this case, I would prefer Salem's original version, too, for the readability.
by the way, i am using this function in the inner loop of a chess playing program, that's why I wanted to optimize it
so it would matter if I am OR-ing to 64-bit data on a 32-bit CPU? what kind of difference would that make?
Thank you
Whilst testing this out, I've found that Visual Studio VC7 doesn't do this very well [because it touches the second half of the 64-bit word anyways]- but when I tried to do it in assembler, I
found that the result isn't noticably better - most likely due to bad branch prediction. If I use a constant instead of a variable, then the assembler version is much faster than the C-version
with variable input, but the compiler realizes that the input is constant and optimizes the C-version out of the loop, so it takes "zero time" to run the loop!
I see... thank you
My advice is to keep the board reading code as simple and understandable as possible. Look for performance gains by enhancing your evaluation function, using a transposition table, and improving
move ordering for alpha-beta search. You can get very large gains this way, whereas optimizing the board access function will lead to only a linear speedup. You're not going to make your code
twice as fast by fiddling with this function.
One of the NASA engineers who designed the control code for the antenna beam on the space shuttle once told me, "Don't bother making something faster unless you can make it 4 times faster."
My advice is to keep the board reading code as simple and understandable as possible. Look for performance gains by enhancing your evaluation function, using a transposition table, and improving
move ordering for alpha-beta search. You can get very large gains this way, whereas optimizing the board access function will lead to only a linear speedup. You're not going to make your code
twice as fast by fiddling with this function.
Thank you for your advice, it's much appreciated. I am still on my board handling and move generation code, not yet onto the AI part, but I am already planning to implement various optimizations
such as transposition tables as I have one month to work on the AI if needed (I am still a high school student).
One of the NASA engineers who designed the control code for the antenna beam on the space shuttle once told me, "Don't bother making something faster unless you can make it 4 times faster."
well, the change from my original looping algorithm to this algorithm is a change from linear search to binary search (if I understand this algorithm correctly) and the speed up is well over 4
times =) | {"url":"http://cboard.cprogramming.com/cplusplus-programming/92048-efficient-way-calc-log-base-2-64-bit-unsigned-int-printable-thread.html","timestamp":"2014-04-21T13:56:34Z","content_type":null,"content_length":"25012","record_id":"<urn:uuid:dd6e0983-15c8-4e62-bcbc-d38691b57265>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00560-ip-10-147-4-33.ec2.internal.warc.gz"} |
Classify the following as an example of a nominal, ordinal, interval, or ratio level of measurement, and state why it represents this level: eye color of family members
Eye color is an example of a type of measurement in which the name of the measurement is really the only thing that is recorded. That makes it "nominal" -- nominal means name, more or less.
Here's a site with some good examples of different types of measurements: http://infinity.cos.edu/faculty/woodbury/stats/tutorial/Data_Levels.htm Good luck!
Thank you so much....
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/updates/508bed13e4b077c2ef2e9cac","timestamp":"2014-04-19T15:35:56Z","content_type":null,"content_length":"30464","record_id":"<urn:uuid:5ed842c8-3381-4959-8487-db32394b3ea7>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00341-ip-10-147-4-33.ec2.internal.warc.gz"} |
Mean Value Theorem
August 13th 2009, 06:21 PM #1
I'm having trouble with this question on mean value theorem
1. x+(96/x) on interval [6,16]
the problem with this equation is I get 0/10, and when I check, the answer is 4sq.rt(6). I'm using (f(b)-f(a))/(b-a), so what am I missing?
Solve for x.
that comes out to (22-22)/10? because the answer in the book says it comes out to 4sq.rt(6),
hahaha, I guess we just found out were I got lost at. when you say I have only done the right side of the equation what do you mean.
Equations have right sides, and then they have left sides. Soooooo, generally when we "solve" equations, we isolate the variable. In this case, the variable would be x. So, your task is to move
everything to one side of the equation except x.
The MVT says that there will be at least one place in an interval [a,b] where the slope of the tangent line to the graph will have the same slope as the secant line through [a,f(a)] and [b,f(b)].
You have found the slope of the secant line. So, you're not done.
oh, lamo, well I finally got the right answer when I finished the other side lol. Oh and just realized where the thanks button was lol.
Good. I hate asking people for thank yous, and if I didn't help you, don't thank me. But I think that you'll find that once you start clicking that thank you button, people will start busting
down your door trying to help you.
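For the record, the two sides of this problem are quick to check numerically: the secant slope over [6,16] really is 0, and solving f'(c) = 0 recovers the book's 4√6:

```python
import math

f = lambda x: x + 96 / x
a, b = 6, 16

secant_slope = (f(b) - f(a)) / (b - a)   # (22 - 22) / 10 = 0.0

# MVT: find c in (a, b) with f'(c) equal to the secant slope.
# f'(x) = 1 - 96/x**2, so 1 - 96/c**2 = 0  =>  c = sqrt(96) = 4*sqrt(6)
c = math.sqrt(96)
print(secant_slope, c)   # 0.0 and about 9.798, which lies inside (6, 16)
```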
August 13th 2009, 08:09 PM #8 | {"url":"http://mathhelpforum.com/calculus/97986-mean-value-theorem.html","timestamp":"2014-04-16T20:48:34Z","content_type":null,"content_length":"52963","record_id":"<urn:uuid:7cc1ecb7-b829-4072-886a-3124aa740791>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00181-ip-10-147-4-33.ec2.internal.warc.gz"} |
TASI Lectures: Introduction to Cosmology - M. Trodden & S.M. Carroll
2.6. Geometry, Destiny and Dark Energy
In subsequent lectures we will use what we have learned here to extrapolate back to some of the earliest times in the universe. We will discuss the thermodynamics of the early universe, and the
resulting interdependency between particle physics and cosmology. However, before that, we would like to explore some implications for the future of the universe.
For a long time in cosmology, it was quite commonplace to refer to the three possible geometries consistent with homogeneity and isotropy as closed (k = 1), open (k = - 1) and flat (k = 0). There
were two reasons for this. First, if one considered only the universal covering spaces, then a positively curved universe would be a 3-sphere, which has finite volume and hence is closed, while a
negatively curved universe would be the hyperbolic 3-manifold H^3, which has infinite volume and hence is open.
Second, with dust and radiation as sources of energy density, universes with greater than the critical density would ultimately collapse, while those with less than the critical density would expand
forever, with flat universes lying on the border between the two. For the case of pure dust-filled universes this is easily seen from (40) and (44).
As we have already mentioned, GR is a local theory, so the first of these points was never really valid. For example, there exist perfectly good compact hyperbolic manifolds, of finite volume, which
are consistent with all our cosmological assumptions. However, the connection between geometry and destiny implied by the second point above was quite reasonable as long as dust and radiation were
the only types of energy density relevant in the late universe.
In recent years it has become clear that the dominant component of energy density in the present universe is neither dust nor radiation, but rather is dark energy. This component is characterized by
an equation of state parameter w < - 1/3. We will have a lot more to say about this component (including the observational evidence for it) in the next lecture, but for now we would just like to
focus on the way in which it has completely separated our concepts of geometry and destiny.
For simplicity, let's focus on what happens if the only energy density in the universe is a cosmological constant, with w = - 1. In this case, the Friedmann equation may be solved for any value of
the spatial curvature parameter k. If $\Lambda > 0$, the solutions are

$a(t) \propto \begin{cases} \cosh\left(\sqrt{\Lambda/3}\,t\right), & k = +1 \\ \exp\left(\sqrt{\Lambda/3}\,t\right), & k = 0 \\ \sinh\left(\sqrt{\Lambda/3}\,t\right), & k = -1 \end{cases}$   (50)

where we have encountered the k = 0 case earlier. It is immediately clear that, in the $t \to \infty$ limit, all of these solutions expand exponentially; in fact, they all describe the same spacetime - de Sitter space - just in different coordinate systems. These features of de Sitter space will resurface
crucially when we discuss inflation. However, the point here is that the universe clearly expands forever in these spacetimes, irrespective of the value of the spatial curvature. Note, however, that
not all of the solutions in (50) actually cover all of de Sitter space; the k = 0 and k = - 1 solutions represent coordinate patches which only cover part of the manifold.
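As a quick consistency check, each solution in (50) satisfies the Friedmann equation with only a cosmological constant, $\dot{a}^2 = (\Lambda/3)a^2 - k$. For instance, take $H \equiv \sqrt{\Lambda/3}$ and normalize the $k = +1$ solution as $a(t) = H^{-1}\cosh(Ht)$ (the normalization is chosen here so the check closes exactly):

```latex
\dot{a} = \sinh(Ht)
\quad\Longrightarrow\quad
\dot{a}^2 = \sinh^2(Ht) = \cosh^2(Ht) - 1 = \frac{\Lambda}{3}\,a^2 - k \qquad (k = +1).
```

The $k = 0$ and $k = -1$ cases work the same way.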
For completeness, let us complete the description of spaces with a cosmological constant by considering the case $\Lambda < 0$. This spacetime is called Anti-de Sitter space (AdS), and it should be clear from the Friedmann equation that such a spacetime can only exist in a space with spatial curvature k = - 1. The corresponding solution for the scale factor is

$a(t) \propto \sin\left(\sqrt{-\Lambda/3}\,t\right)$
Once again, this solution does not cover all of AdS; for a more complete discussion, see [20]. | {"url":"http://ned.ipac.caltech.edu/level5/Sept03/Trodden/Trodden2_6.html","timestamp":"2014-04-17T06:44:32Z","content_type":null,"content_length":"5771","record_id":"<urn:uuid:3fbbf8ca-ae60-4987-a754-f01487f7aaf0>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00527-ip-10-147-4-33.ec2.internal.warc.gz"} |
The Reflective Educator
I've normally started my classes with a description of what math we will be learning, and a class discussion about what the math means.
When I first started teaching, I would lecture for 30 minutes and students would work for 60 minutes during double-block math classes (which is what I started out teaching), and in a 45 minute lesson, I would still lecture for 30 minutes, leaving students only 15 minutes to practice and do other activities.
I discovered early on in my teaching that the less time I talked, the more time students had to work on activities and exercises, and this led to improved understanding. I read research suggesting
that adolescents could actively pay attention for about 10 - 15 minutes, so I focused on getting the lecture portion of my lesson down to this length, and on embedding more questions and subsequent
discussion into my lecture.
Today I tried something new. I found questions (with an emphasis on real world application) related to exponential functions that students had never seen before, and started class by handing them out
as a package, and asking students to work on these problems in groups. I then spent class circulating around the room, answering the occasional student question (but being very careful what types of
questions I answered) and pushing students to try finding multiple solutions to the problems. When students were completely stuck, I offered support, but by asking them questions, rather than just
giving them the solution.
Now, I've definitely had classes where I haven't taught an idea to the entire class before, but this is the first time I've introduced a completely new topic without either presenting a lecture on
the topic ahead of time or using some sort of guided instructional aid for the students (like a video prepared in advance of the lesson).
Here are some observations I had while I was circulating around the classroom.
• Not one student asked me "is this solution right?"
• Students were actively engaged in the problem solving process.
• The questions I overheard from students (to each other) were often about the nuances in the problems, rather than "how did you do this?"
• Every group of students found the most efficient standard solution to the problem, as well as 2 other ways of solving the problem.
• No one attempted to Google for the solutions, or even open their textbook to see what information it had.
• My students were thinking.
At the end of class, I asked students to continue working in groups and come up with notes to explain the topic. As the students will be taking an exam in about a year and half on all of the material
they are writing, I recommended that they write the notes for their future self that might not remember having worked on these problems. Next class, I plan on having students form new groups, and
collaborate to construct meaningful notes for the future, and then work on some more related problems.
I've flipped the classroom. Instead of me presenting the ideas, my students look for solutions, and I help them. Instead of me giving notes to students, they make their own notes. Instead of the
classroom being about the content, it's about the process.
There were no videos, no notes in advance, no computer assessed exercises; just a focus on changing who was doing the thinking.
So interesting
Submitted by
on Wed, 10/26/2011 - 14:55.
Thanks for sharing your experience David, it motivates and encourages us in flipping our classrooms too.
Thanks again from Spain!
Not just good teaching?
Submitted by
Evan Weinberg
on Thu, 10/27/2011 - 04:32.
Hi David,
It's always interesting to read about your experiences and experiments. I agree on the power of giving students some interesting problems they haven't seen before and letting them go - I just posted
on my blog about doing this in physics and being really pleased with the results. I find it pretty neat to watch them feel like writing down important things and figuring out problems rather than
telling them what to do.
I just wonder about whether what you are doing is just flipping or just overall good pedagogy. Don't get me wrong, I love the concept of flipping when it makes sense. I've found it really makes me decide whether a part of a lesson I've done before (especially in Calculus) is worth doing in class, as an exploration, or as a video that preloads students with some basic information before class.
Shifting the focus of class time to figuring things out is making your students active learners rather than receivers of information. Asking questions instead of answering them puts the power in the
students' hands to figure things out. Giving an assignment and knowing your students well enough to know that they can solve it given the time to get help from one another shows that you believe in
their abilities and want them to be challenged.
You are certainly justified in calling this flipping, as it does reverse the traditional roles of student and teacher in the classroom. I just don't think you need the gimmick of the 'flip' brand :)
It isn't about videos
Submitted by
Brian Bennett
on Thu, 10/27/2011 - 10:28.
Hey David,
I love reading your posts because of the thought and process you put into them. Thanks for sharing your ideas.
I do, however, want to stress again that the flipped class is NOT about the videos. Yes, what you've done is a "flipped class" because YOU were not the focus of the content. Video did not need to
help with the content building that happened in your room. Video is simply a tool that can help accomplish a goal. If you don't need the video, don't use it.
The "flip" is not a brand, it's not a copyrighted term, and it isn't ONE way or the highway. The flip is an idea that can help you get kids to think and interact in meaningful ways. Video helps some
people do it well, others (like yourself) don't need it and don't (shouldn't?) use it. Again, it is a tool that can help teachers do more things like this work.
I'm glad the experiment worked...I would love to be able to do that with my classes someday, but they're just not there right now. Until I'm at that point, I'll be using technology that helps enhance
their learning process.
Thanks again-
Submitted by
on Thu, 10/27/2011 - 10:49.
Interesting post David. Is this not just experiential learning? Similar to what we do at outward bound?
Flipped Class Post
Submitted by
Philip McIntosh
on Thu, 10/27/2011 - 11:16.
Very interesting.
Your approach sounds a bit like the Moore Method, in which minimal information is given and the students are tasked with figuring it out from there. I do not think that is an approach that is
appropriate for middle school, or any other school where the learners have insufficient background and self-confidence to figure things out on their own.
I agree that videos are not necessary. In fact, I think we will soon move beyond the "flipped class" and move more directly into the "self-directed" class. I think you are already doing that, but
most 7th graders I know are not yet ready for it.
Submitted by
on Tue, 05/15/2012 - 16:36.
I am currently trying the same process. Have you collected any data? Improved grades? Parental support? Etc.
Submitted by
on Thu, 05/17/2012 - 01:04.
Hola David,
Very interesting approach. I like that it empowers students tremendously.
I flip my classes, but because I am a foreign language teacher, I find that the videos are important.
Thank you for sharing this.
PS: I also teach online and I teach two students from Stratford Hall...small world!
Solution problem, please help
October 4th 2012, 06:16 AM
Solution problem, please help
Hi, please explain to me how question 3 (attached PDF) is worked out, step by step. The answer we were told is
I'm VERY confused so please take it step by step and be as descriptive and clear as possible.
Thanks very much
October 4th 2012, 06:29 AM
Re: Solution problem, please help
Question 3: $x^2 - (p-2)x + 1 = p(x-2)$ is satisfied by only one value for $x$. What are the possible values of $p$?
If you simplify and collect like terms, it's a quadratic equation in $x$ with coefficients that involve $p$. Right?
Now, what do you know about real solutions to quadratic equations? Sometimes there are two of them, sometimes there's one, and sometimes there are none. Right? OK - so what determines that? In
particular, there's something about a quadratic that must be true in order that it have only one real solution. Write down that expression. Since that expression will involve the coefficients of
the equation, and those coefficients involve $p$, the condition "this quadratic has exactly one real root" will translate into an equation for $p$. You can then solve it and then you'll have your
answer to Question #3.
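For reference, here is that recipe worked through (an editorial check of the method, so verify the algebra yourself). Collecting everything on one side:

$$x^2-(p-2)x+1=px-2p \quad\Longrightarrow\quad x^2-(2p-2)x+(2p+1)=0$$

A quadratic has exactly one real root when its discriminant is zero:

$$\Delta=(2p-2)^2-4(2p+1)=4p^2-16p=4p(p-4)=0$$

which gives $p=0$ or $p=4$. (Check: $p=0$ yields $x^2+2x+1=(x+1)^2$, and $p=4$ yields $x^2-6x+9=(x-3)^2$, each with a single root.)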
October 4th 2012, 07:36 AM
Re: Solution problem, please help
Okay thanks, I'll try again
Velocity Reviews - Preferred Style Question
In article <XnsA0D9CE5D36A8Emyfirstnameosapriee@216.196.109.131>,
> > What are some accepted ways to do the following calculation - and
> > avoid the compiler warnings that expose conversion data loss? (I'm
> > trying to compute the total time in seconds that a person has to
> > complete an event, based on a stated pace.) TIA
> >
> > double dDistance = 6.214;
> > time_t tPace = 600;
> > time_t tEstTime;
> >
> > tEstTime = tPace * dDistance;
> Now you immediately see that the datatype for Pace is wrong - why it
> should be time_t if it is not in seconds? It should be double as well.
No, it's a computed "seconds per mile", derived from a prompted value
that's in "minutes per mile". That is, the user is asked for "pace",
and the user responds in minutes. After that, the "seconds" value is
calculated and used for numerous calculations and displays.
> OTOH, in this kind of code one can see immediately that the units are
> used correctly. Whenever you see something like esTime_s = distance_m *
> pace_hour_km you know there is a bug.
That may be, but I must determine a value that is compared to a
presented scalar value that's in "seconds". The purpose of this derived
value is to place the user's estimate into one of 30 "slots" to
establish groupings. I want the processing to be precise (and not use
f/p comparisons), and I used to use "int". Now I'm trying to use a more
appropriate "time_t" for all "seconds-based" calculations and displays.
> The conversion warning comes because you want to squeeze the floating-
> point result into an integer, losing precision. If there are no other
> prevailing reasons, I would say that precision loss should be avoided. As
> we have now got rid of meaningless type prefixes, it is easy to change
> the datatype:
> double distance_m = 6.214; // distance in meters
> double pace_s_m = 600; // pace, in seconds per meter
> double estTime_s = distance_m * pace_s_m; // estimated time in seconds
Since I have to work in seconds throughout my processing, use of
"double" isn't going to help here. I really want to produce "time_t"
values and work from them.
Transforming rectangles into squares, with applications to strong colorings
Posted on September 26, 2011 by saf in categories: Publications.
Abstract: It is proved that every singular cardinal $\lambda$ admits a function $\textbf{rts}:[\lambda^+]^2\rightarrow[\lambda^+]^2$ that transforms rectangles into squares.
That is, whenever $A,B$ are cofinal subsets of $\lambda^+$, we have $\textbf{rts}[A\circledast B]\supseteq C\circledast C$, for some cofinal subset $C\subseteq\lambda^+$.
As a corollary, we get that for every uncountable cardinal $\lambda$, the classical negative partition relation $\lambda^+\nrightarrow[\lambda^+]^2_{\lambda^+}$ coincides with the following higher
arity statement. There exists a function $c:[\lambda^+]^2\rightarrow\lambda^+$ such that for
• every positive integer $n$,
• every coloring $d:n\times n\rightarrow\lambda^+$, and
• every family $\mathcal A\subseteq[\lambda^+]^n$ of size $\lambda^+$ of mutually disjoint sets,
there exist $a,b\in\mathcal A$ with $\max(a)<\min(b)$ such that $$c(a_i,b_j)=d(i,j)\text{ for all }i,j<n.$$ (here, $a_i$ denotes the $i_{th}$-element of $a$, and $b_j$ denotes the $j_{th}$-element of $b$.)
Citation information:
A. Rinot, Transforming rectangles into squares, with applications to strong colorings, Adv. Math., 231(2): 1085-1099, 2012.
One Response to Transforming rectangles into squares, with applications to strong colorings
1. saf says:
Submitted to Advances in Mathematics, March 2011.
Accepted June 2012.
0 likes
Leave a Reply Cancel reply
This entry was posted in Publications and tagged 03E02, Club Guessing, Minimal Walks, Square-Brackets Partition Relations, Successor of Singular Cardinal. Bookmark the permalink.
Santa Fe Springs Trigonometry Tutor
Find a Santa Fe Springs Trigonometry Tutor
...This really hits hard when students face the SAT, and all the problems look just a little different from what they saw in class. Same concepts but different wording, and suddenly the “rules”
don’t apply the way students are used to. The good news is that the problem can be solved with the right kind of preparation.
24 Subjects: including trigonometry, Spanish, physics, writing
...I will not judge you, I will just help you. I will meet you at your level of understanding, wherever that is, and we will build from there. Many students were never taught the basic concepts
behind their courses.
63 Subjects: including trigonometry, reading, chemistry, writing
...It is not an achievement test; therefore, it acts as a common denominator for schools in measuring a student’s academic capabilities, regardless of school record. When used for admission by
independent schools, the test is only one piece of information that is considered. Schools also review th...
18 Subjects: including trigonometry, geometry, ASVAB, GRE
...Typically a little bit of differential equations is taught in the third calculus class (usually the calculus course which covers multivariable calculus). The topics covered in the differential
equations classes that I completed were as follows: 1) Separation of variables 2) Homogeneous equations...
34 Subjects: including trigonometry, English, chemistry, physics
...I have taken both General Chemistry and a competitive Organic Chemistry class with other pre-health (medicine, dental, etc.) majors. I did well in the class, and I also touched upon some
Organic Chemistry concepts in my Biochemistry, Genetics and Physiology classes while in medical school. Overall, I have a thorough understanding of the concepts taught in Organic Chemistry.
62 Subjects: including trigonometry, English, chemistry, reading
classifying SUGRAs
Besides the fact that U(1)*SU(2)*SU(3) should be the result, are there any other ideas why one should compactify this theory in a certain why?
The only fermion in the basic representation of SUGRA in 11D is a 3/2 particle with 128 degrees of freedom. Its bosonic partners are the graviton, which has 44 degrees of freedom, and an (antisymmetric, iirc) 3-form, A_{\MNR}, with 84 degrees of freedom.
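As a quick arithmetic aside (an editorial check, using the standard SO(9) little-group counting): the bosonic degrees of freedom quoted above do add up to the gravitino's 128, as supersymmetry requires.

```python
from math import comb

# On-shell degrees of freedom in 11D supergravity (little group SO(9)).
graviton = 9 * 10 // 2 - 1   # symmetric traceless tensor: 44
three_form = comb(9, 3)      # antisymmetric 3-form A_{MNR}: 84
gravitino = 128              # quoted in the post

print(graviton, three_form)  # 44 84
assert graviton + three_form == gravitino
```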
Now this A_{\MNR}, having three indexes, generates a force tensor with four indexes, which allows one to single out a four-dimensional manifold when solving the Einstein equations to get compact+big dimensions. This is called the Freund-Rubin solution or something like that. You need some extra handwaving to explain why it is 4+7 instead of 7+4.
Haelfix, I think that this tensor A relates to the extended object you are asking for. It is because of it that you can tell that 11D SUGRA is a limit of M-theory. My opinion is that the stringy
completion provides the flavour. Back in the 80s, the flavour (the number of generations) was sought in the topology of the manifold.
So much for the dimension. As for why the standard model: once you are in seven dimensions, it is either that or the sphere. In a sense, the manifold with the standard model is "bigger" than the one of the sphere.
Probability density function
January 14th 2009, 07:10 AM
Probability density function
I have the following pdf, which relates to X square ft of a (round) pizza:
f(x)= 6(x-x^2) for 0 < x < 1
0 elsewhere
I had to find the mean and variance, which I got as 0.5 and 0.05 respectively.
Now I have been asked to find the proportion of pizzas that won't fit in a box of size 1 ft square.
The pizzas are circular, so I found the maximum size of pizza that would fit in the box, which is 0.25π. So I need to find the area under the graph above that value. The trouble is that
integrating between these two values just gives me a silly answer, so I know I am doing something wrong; I just can't place what (by silly answer I mean negative).
January 14th 2009, 08:30 AM
In fact, you're looking for this probability :
$\mathbb{P}(0<x<M)$ (where M is the maximum area in sq. ft. of the pizza), which is, by definition of the pdf :
$\mathbb{P}(0<x<M)=\int_0^M f(x) ~dx$
Assuming that the box is a square, you're correct in finding the value $M=0.25 \pi$
So now just calculate $\int_0^{0.25 \pi} 6(x-x^2) ~dx$ (because $0.25 \pi <1$, so you're in an interval contained in (0,1) and hence $f(x)=6(x-x^2)$)
and you'll have the proportion (or probability) you're looking for.
January 15th 2009, 03:28 AM
mr fantastic
In fact, you're looking for this probability :
$\mathbb{P}(0<x<M)$ (where M is the maximum area in sq. ft. of the pizza), which is, by definition of the pdf :
$\mathbb{P}(0<x<M)=\int_0^M f(x) ~dx$
Assuming that the box is a square, you're correct in finding the value $M=0.25 \pi$
So now just calculate $\int_0^{0.25 \pi} 6(x-x^2) ~dx$ (because $0.25 \pi <1$, so you're in an interval contained in (0,1) and hence $f(x)=6(x-x^2)$)
and you'll have the proportion (or probability) you're looking for.
NB: This integral gives the proportion of pizzas that do fit in the box. The proportion that don't is $\int_{0.25 \pi}^1 6(x-x^2) ~dx$ which is the same as $1 - \int_0^{0.25 \pi} 6(x-x^2) ~dx$.
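Numerically, those two integrals come out like this (a quick sketch; the antiderivative of $6(x-x^2)$ is $3x^2-2x^3$):

```python
import math

def F(x):
    # Antiderivative of the pdf f(x) = 6(x - x^2) on (0, 1)
    return 3 * x**2 - 2 * x**3

M = math.pi / 4              # area of the largest pizza that fits a 1 ft square box
p_fits = F(M) - F(0)         # P(0 < X < M)
p_too_big = 1 - p_fits       # proportion that won't fit

print(round(p_fits, 4), round(p_too_big, 4))  # 0.8816 0.1184
```

So roughly 12% of pizzas won't fit in the box. Both proportions are comfortably positive, so a negative answer signals swapped limits or an algebra slip.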
Where Math meets Music
Ever wonder why some note combinations sound pleasing to our ears, while others make us cringe? To understand the answer to this question, you’ll first need to understand the wave patterns created
by a musical instrument. When you pluck a string on a guitar, it vibrates back and forth. This causes mechanical energy to travel through the air, in waves. The number of times per second these
waves hit our ear is called the ‘frequency’. This is measured in Hertz (abbreviated Hz). The more waves per second the higher the pitch. For instance, the A note below middle C is at 220 Hz.
Middle C is at about 262 Hz.
Now, to understand why some note combinations sound better, let’s first look at the wave patterns of 2 notes that sound good together. Let’s use middle C and the G just above it as an example:
Now let’s look at two notes that sound terrible together, C and F#:
Do you notice the difference between these two? Why is the first 'consonant' and the second 'dissonant'? Notice how in the first graphic there is a repeating pattern: every 3rd wave of the G matches up with every 2nd wave of the C (and in the second graphic how there is no pattern). This is the secret for creating pleasing sounding note combinations: Frequencies that match up at
regular intervals (* - Please see footnote about complications to this rule).
Now let’s look at a chord, to find out why it’s notes sound good together. Here are the frequencies of the notes in the C Major chord (starting at middle C):
C – 261.6 Hz
E – 329.6 Hz
G – 392.0 Hz
The ratio of E to C is about 5/4ths. This means that every 5th wave of the E matches up with every 4th wave of the C. The ratio of G to E is about 6/5ths. The ratio of G to C is about 3/2. Since every note's frequency matches up well with every other note's frequencies (at regular intervals), they all sound good together!
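These interval ratios are easy to check numerically (using the rounded Hz values from the text):

```python
C, E, G = 261.6, 329.6, 392.0   # equal-tempered frequencies quoted above

print(round(E / C, 4))  # 1.2599 -- about 5/4 (major third)
print(round(G / E, 4))  # 1.1893 -- about 6/5 (minor third)
print(round(G / C, 4))  # 1.4985 -- about 3/2 (perfect fifth)
```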
Now let’s look at the ratios of the notes in the C Major key in relation to C:
C – 1
D – 9/8
E – 5/4
F – 4/3
G – 3/2
A – 5/3
B – 17/9
To tell you the truth, these are approximate ratios. Remember when I said the ratio of E to C is about 5/4ths? The actual ratio is not 1.25 (5/4ths) but 1.2599. Why isn’t this ratio perfect?
That’s a good question. When the 12-note ‘western-style’ scale was created, they wanted not only the ratios to be in tune, but they also wanted the notes to go up in equal sized jumps. Since they
couldn’t have both at the same time, they settled on a compromise. Here are the actual frequencies for the notes in the C Major Key:
| Note     | Perfect Ratio to C | Actual Ratio to C | Ratio off by | Frequency in Hz |
|----------|--------------------|-------------------|--------------|-----------------|
| Middle C |                    |                   |              | 261.6           |
| D        | 9/8 or 1.125       | 1.1224            | 0.0026       | 293.7           |
| E        | 5/4 or 1.25        | 1.2599            | 0.0099       | 329.6           |
| F        | 4/3 or 1.333…      | 1.3348            | 0.0015       | 349.2           |
| G        | 3/2 or 1.5         | 1.4983            | 0.0017       | 392.0           |
| A        | 5/3 or 1.666…      | 1.6818            | 0.0152       | 440.0           |
| B        | 17/9 or 1.888…     | 1.8877            | 0.0012       | 493.9           |
You can see that the ratios are not perfect, but pretty close. The biggest difference is in the C to A ratio. If the ratio was perfect, the frequency of the A above middle C would be 436.04 Hz, which
is off from 'equal temperament' by about 3.96 Hz.
The previous list shows only the 7 notes in the C Major key, not all 12 notes in the octave. Each note in the 12 note scale goes up an equal amount, that is, an equal amount exponentially speaking.
Here is the equation to figure out the Hz of a note:
Hertz (number of vibrations a second) = 6.875 x 2 ^ ( ( 3 + MIDI_Pitch ) / 12 )
The ^ symbol means ‘to the power of’. The MIDI_Pitch value is according to the MIDI standard, where middle C equals 60, and the C an octave below it equals 48. As an example, let’s figure the hertz
for middle C:
Hertz = 6.875 x 2 ^ ( ( 3 + 60 ) / 12 ) = 6.875 x 2 ^ 5.25 = 261.6255
The next note up, C#, is:
Hertz = 6.875 x 2 ^ ( ( 3 + 61 ) / 12 ) = 277.1826
And the next note, D, is:
Hertz = 6.875 x 2 ^ ( ( 3 + 62 ) / 12 ) = 293.6648
The jump between C and C# is 15.56 Hertz, the jump between C# and D is 16.48 Hertz. Although the Hertz jump is not equal between the notes, it is an equal jump in the exponent number and it sounds
like an equal jump to our ears going up the scale. This gives a nice smooth transition going up the scale.
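The formula is easy to drop into a function for experimenting (a quick sketch; pitch numbering follows the MIDI convention described above):

```python
def hertz(midi_pitch):
    # Frequency formula from the text: 6.875 * 2 ** ((3 + pitch) / 12)
    return 6.875 * 2 ** ((3 + midi_pitch) / 12)

for name, pitch in [("C", 60), ("C#", 61), ("D", 62), ("C'", 72)]:
    print(name, round(hertz(pitch), 4))
# C 261.6256, C# 277.1826, D 293.6648 -- matching the worked values above --
# and C' (one octave up) is exactly double middle C
```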
Another important feature of the scale is that it jumps by 2 times each octave. The A below middle C is at 220 Hertz, the A above middle C is at 440 Hertz, and the A above that is at 880 Hertz. This
means that you can move notes into different octaves and still have them sound consonant. For instance, let’s take the case of middle C and G again, except move G into the next octave. We still have
middle C at 261.6Hz, but G is now at 784 Hz. That gives a ratio from G to C of about 3/1 (twice the original ratio of 3/2). The waves still meet up at regular intervals and they still sound
consonant! Another nice feature of having an equal exponential jump is that you can start a scale on any note you wish, including the black keys. For instance, instead of C,D,E,F,G,A,B, you can start
on, say, D# and have D#,F,G,G#,A#,C,D as your scale with the same great sounding combinations of frequencies.
At a certain point frequency ratios are too great to sound consonant. It takes too many waves for them to match up, and our ears just can’t seem to find a regular pattern. At what point is this? The
simple answer is when the ratio’s numerator or denominator gets to about 13. For instance, C# has a frequency ratio to C of about 18/17ths. That’s just too many waves before they meet up, and you can
tell that immediately when you play them together.
So now you’re thinking that we have a scale that goes up in even steps and has reasonably accurate ratios, we’re all set, right? Actually, there are a lot of dissenting opinions on the subject.
Remember those not-quite-accurate ratios? One reason for this was for instruments to be able to be tuned once, and sound reasonably good in all keys. Some of the grumpier musicians still complain,
though, saying that equal temperament makes all keys sound equally bad. If you tune to just one particular key, you can get those ratios perfect (since the human ear can detect a difference of 1Hz,
being off by several Hz can be a problem!).
Maybe more importantly, though, is that there are a lot of undiscovered frequency combinations that can’t be played in the confining 12-note system. Many alternative scales used in India have up to
22 notes per octave. If you're not satisfied with the standard western scale, there are a lot of alternative tuning methods available, such as 'Just Intonation' and 'Lucy Tuning'. With modern digital
equipment, these alternate tunings have become much easier to implement. We should hear some new and incredibly interesting music come out of these tuning methods as they are gradually accepted into
the mainstream.
* - The frequency ratio theory of consonance does not always hold true. See this other article for an explanation.
Here are some links if you’d like to explore this topic further:
Just vs Equal Temperament – 'harmonic tuning' described
A beginner’s guide to temperament – with a little history
American Festival of Microtonal Music
This article is Copyright 2002 Joseph Heimiller - all rights reserved.
Click here to go back to our home page, for info on Music MasterWorks voice-to-note composer and
In-Tune Multi-Instrument Tuner.
Exploring the intersection of math, physics, and computer science
As a high school student in Montreal, Amik St-Cyr never guessed what he'd be doing a few years later. "If someone would have told me then that I was going to be a mathematician, I would have thought
they were crazy," he recalls.
Little did Amik know that his career path was headed to NCAR's Institute for Mathematics Applied to Geosciences (IMAGe), where he works at the intersection of math, physics, and computer science. He
sums up his research this way: "I answer physical questions with the right mathematical tools in an efficient way on supercomputers."
Amik spends much of his time studying numerical methods for solving partial differential equations, also known as PDEs. Mathematicians use these equations, which represent physical properties, to
describe phenomena such as fluid flows, gravitational fields, and electromagnetic fields. PDEs have practical applications for aircraft simulation, computer graphics, weather prediction, and more.
Amik refines and improves existing techniques for solving PDEs and also develops new techniques.
He is currently working on a computer model called the High-Order Multiscale Modeling Environment. HOMME incorporates advanced algorithms and computing techniques that enable it to use tens of
thousands of computer processors. When coupled with NCAR's compact new Blue Gene supercomputer, HOMME may allow scientists to model atmospheric processes in greater detail without requiring computers
that demand more power. "We're researching the ultimate numerical algorithms tied to the ultimate architecture for producing science faster," Amik says.
Another one of his interests is adaptive mesh refinement, a method of localizing one feature in a climate model and viewing it in higher resolution. He also studies ways to improve time-stepping
methods; that is, methods for representing the advance of time in climate models. "The classic time-stepping scheme in atmospheric science, called the semi-Lagrangian scheme, is hard to implement on
modern computers," Amik explains. "We've found a new way to do it that is compatible with modern computers and makes it very efficient."
IBM Blue Gene supercomputer, which is expected to produce simulations of the ocean, weather, and climate phenomena. (©UCAR, photos by Carlye Calvin.)
Amik's favorite thing about his job is problem solving. "I like going from a problem to a solution, implementing the solution and running a test, and then seeing it work on a daily basis," he
says. "It's the whole deal, rather than just considering idealized problems all the time."
Ironically, Amik didn't enjoy math much until early in his college days at the University of Montreal, when he had the chance to study higher-level math and physics. Soon he discovered the field of
numerical analysis. "I knew I wanted to do numerical analysis and apply it to partial differential equations to solve realistic problems," he says.
Amik went on to earn a doctorate in applied mathematics, also at the University of Montreal. Afterward, he held a post-doctoral appointment in parallel computing for computational fluid dynamics at
McGill University in Canada. It wasn't until he came to NCAR in 2003 that he applied his mathematical knowledge to the atmospheric sciences. Before, his main interest was in the fluid flows known as
supersonic flows.
In the future, Amik hopes to explore ways to model tornadoes and other severe weather events. He stays motivated by thoughts of seeing his ideas work. "As scientists, we like the feeling of being the
first to discover something or witness something new," he says.
by Nicole Gordon
May 2005; February 2011
Function SUBST, SUBST-IF, SUBST-IF-NOT, NSUBST, NSUBST-IF, NSUBST-IF-NOT
subst new old tree &key key test test-not => new-tree
subst-if new predicate tree &key key => new-tree
subst-if-not new predicate tree &key key => new-tree
nsubst new old tree &key key test test-not => new-tree
nsubst-if new predicate tree &key key => new-tree
nsubst-if-not new predicate tree &key key => new-tree
Arguments and Values:
new---an object.
old---an object.
predicate---a symbol that names a function, or a function of one argument that returns a generalized boolean value.
tree---a tree.
test---a designator for a function of two arguments that returns a generalized boolean.
test-not---a designator for a function of two arguments that returns a generalized boolean.
key---a designator for a function of one argument, or nil.
new-tree---a tree.
subst, subst-if, and subst-if-not perform substitution operations on tree. Each function searches tree for occurrences of a particular old item of an element or subexpression that satisfies the test.
nsubst, nsubst-if, and nsubst-if-not are like subst, subst-if, and subst-if-not respectively, except that the original tree is modified.
subst makes a copy of tree, substituting new for every subtree or leaf of tree (whether the subtree or leaf is a car or a cdr of its parent) such that old and the subtree or leaf satisfy the test.
nsubst is a destructive version of subst. The list structure of tree is altered by destructively replacing with new each leaf of the tree such that old and the leaf satisfy the test.
For subst, subst-if, and subst-if-not, if the functions succeed, a new copy of the tree is returned in which each occurrence of such an element is replaced by the new element or subexpression. If no
changes are made, the original tree may be returned. The original tree is left unchanged, but the result tree may share storage with it.
For nsubst, nsubst-if, and nsubst-if-not the original tree is modified and returned as the function result, but the result may not be eq to tree.
Examples:
(setq tree1 '(1 (1 2) (1 2 3) (1 2 3 4))) => (1 (1 2) (1 2 3) (1 2 3 4))
(subst "two" 2 tree1) => (1 (1 "two") (1 "two" 3) (1 "two" 3 4))
(subst "five" 5 tree1) => (1 (1 2) (1 2 3) (1 2 3 4))
(eq tree1 (subst "five" 5 tree1)) => implementation-dependent
(subst 'tempest 'hurricane
'(shakespeare wrote (the hurricane)))
=> (SHAKESPEARE WROTE (THE TEMPEST))
(subst 'foo 'nil '(shakespeare wrote (twelfth night)))
=> (SHAKESPEARE WROTE (TWELFTH NIGHT . FOO) . FOO)
(subst '(a . cons) '(old . pair)
'((old . spice) ((old . shoes) old . pair) (old . pair))
:test #'equal)
=> ((OLD . SPICE) ((OLD . SHOES) A . CONS) (A . CONS))
(subst-if 5 #'listp tree1) => 5
(subst-if-not '(x) #'consp tree1)
=> (1 X)
tree1 => (1 (1 2) (1 2 3) (1 2 3 4))
(nsubst 'x 3 tree1 :key #'(lambda (y) (and (listp y) (third y))))
=> (1 (1 2) X X)
tree1 => (1 (1 2) X X)
Side Effects:
nsubst, nsubst-if, and nsubst-if-not might alter the tree structure of tree.
Affected By: None.
Exceptional Situations: None.
See Also:
substitute, nsubstitute, Section 3.2.1 (Compiler Terminology), Section 3.6 (Traversal Rules and Side Effects)
Notes:
The :test-not parameter is deprecated.
The functions subst-if-not and nsubst-if-not are deprecated.
One possible definition of subst:
(defun subst (old new tree &rest x &key test test-not key)
  (cond ((satisfies-the-test old tree :test test
                             :test-not test-not :key key)
         new)
        ((atom tree) tree)
        (t (let ((a (apply #'subst old new (car tree) x))
                 (d (apply #'subst old new (cdr tree) x)))
             (if (and (eql a (car tree))
                      (eql d (cdr tree)))
                 tree
                 (cons a d))))))
The following X3J13 cleanup issues, not part of the specification, apply to this section:
Copyright 1996-2005, LispWorks Ltd. All rights reserved.
word problems sequences and series
06-07-2006, 09:19 PM #1
Please help, I have a lot of questions!!
1.)Starting at 888 and counting backward by 7, a student counts 888, 881 and 874, and so on. Which of the following numbers will be included?
a) 35 b) 34 c) 33 d) 32 e) 31
Ok, so by using the calculator the answer is 34, but how would you calculate this using an equation or something more sensible...
I think I have to use this one
tn=a + (n-1) d
a= 888
d= -7
... now what
__________________________________________________ ___________
2. the sum of the first n even positive integers is h. the sum of the first n odd positive integers is k. then h - k is equal to:
a) n/2
b) n/2 -1
c) n
d) -n
e) n - 1
so... nK and nh
I don't get it...
__________________________________________________ __________
3. the largest four-digit number to be found in the arithmetic sequence
1,6,11,16,21, ... is
t1= 1
t2= 6
t3= 11
t4= 21
9996? because it goes from 1 to 6 the (last number)
2nd last number stayes for 2 ed 1,1,2,2
everything else up one
but is there an equation for this or something?
__________________________________________________ _________
4. the sum of 50 consecutive integers is 3250. the largest of these integers is
I don't know how to do this... is it the same as before?
__________________________________________________ _________
5. Jan 1, 1986 was a Wednesday. Jan 1, 1992 was what day of the week?
86 wed
87 thurs
88 fri
89 sat
90 sun
91 mon
92 tues
what would an equation be
__________________________________________________ _________
6) The total number of digits used to number all the pages of a book was 216. Find the number of pages in the book.
tn= a + (n-1) d
216= 1 + (n- 1) 1
216= 1 + n - 1
is that wrong...
total number of digits? sum?
Sn= n/2 [ 2a + ( n-1) d ]
216= n/2 [ 2(1) + (1-1) 1]
216= n/2 (2)
216= n
Please help!
Re: word problems sequences and series
2. the sum of the first n even positive integers is h. the sum of the first n odd positive integers is k. then h - k is equal to:
a) n/2
b) n/2 -1
c) n
d) -n
e) n - 1
so... nK and nh
I don't get it...
I'll give you a hand with this one. It's getting late.(yawn)
[tex]\text{Use the arithmetic series formula.}[/tex]
[tex]\text{The positive evens would be:}[/tex] [tex]\L\\h=\frac{n}{2}[2(2)+(n-1)(2)][/tex]
[tex]\text{The positive odds would be:}[/tex] [tex]\L\\k=\frac{n}{2}[2(1)+(n-1)(2)][/tex]
Re: word problems sequences and series
Hello, Aka!
Here are a few of them . . .
1) Starting at 888 and counting backward by 7, a student counts 888, 881 and 874, and so on.
Which of the following numbers will be included?
[tex]a)\;35\qquad\qquad b)\;34\qquad\qquad c)\:33\qquad\qquad d)\;32\qquad\qquad e)\;31[/tex]
Your reasoning is correct . . .
We'll use: [tex]\,t_n\:=\:t_1\,+\,d(n\,-\,1)[/tex]
We have: [tex]\,t_1\,=\,888,\;\;d\,=\,-7[/tex]
So: [tex]\,t_n\:=\:888\,-\,7(n\,-\,1)[/tex]
Solve for [tex]n:\;\;n\:=\:\frac{888\,-\,t_n}{7}\,+\,1[/tex]
Since [tex]n[/tex] is positive integer, [tex]888\,-\,t_n[/tex] must be divisible by 7.
And only [tex]t_n\,=\,34[/tex] works . . . answer (b)
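The divisibility argument can also be brute-force checked; this short Python sketch (added for illustration, not part of the original thread) lists the sequence and tests each answer choice:

```python
# Count down from 888 by 7 and collect all positive terms of the sequence.
terms = set(range(888, 0, -7))

choices = [35, 34, 33, 32, 31]
included = [c for c in choices if c in terms]
print(included)  # [34]
```

Only 34 appears in the sequence, confirming answer (b).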
2. The sum of the first [tex]n[/tex] even postive integers is [tex]h[/tex].
The sum of the first [tex]n[/tex] odd positive integers is [tex]k[/tex].
Then [tex]h\,-\,k[/tex] is equal to:
[tex]a)\;\frac{n}{2}\qquad\qquad b)\;\frac{n}{2}\,-\,1\qquad\qquad c)\;n\qquad\qquad d)\;-n\qquad\qquad e)\;n\;-\,1[/tex]
I'll solve this from square-one . . .
The sum of the first [tex]n[/tex] even integers: [tex]\,2\,+\,4\,+\,6\,+\,\cdots\,+\,2n[/tex]
[tex]\;\;[/tex] is an arithmetic series with [tex]t_1\,=\,2[/tex] and [tex]d\,=\,2[/tex]
Hence: [tex]\,h\;=\;S_{\text{even}}\;=\;\frac{n}{2}[2\cdot2\,+\,2(n-1)] \;=\;n(n\,+\,1)[/tex]
The sum of the first [tex]n[/tex] odd integers: [tex]\,1\,+\,3\,+\,5\,+\,\cdots\,+\,(2n-1)[/tex]
[tex]\;\;[/tex]is an arithmetic series with [tex]t_1\,=\,1[/tex] and [tex]d\,=\,2[/tex]
Hence: [tex]\,k\;=\;S_{\text{odd}}\;=\;\frac{n}{2}[2\cdot1\,+\,2(n-1)] \;= \;n^2[/tex]
Therefore: [tex]\,h\,-\,k\;=\;n(n\,+\,1)\,-\,n^2\;=\;n[/tex] . . . answer (c)
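The closed forms h = n(n + 1) and k = n^2 are easy to sanity-check numerically (an illustrative Python sketch, not part of the thread):

```python
# Verify h = n(n+1), k = n^2, and h - k = n for the first few n.
for n in range(1, 50):
    h = sum(2 * i for i in range(1, n + 1))        # first n even positive integers
    k = sum(2 * i - 1 for i in range(1, n + 1))    # first n odd positive integers
    assert h == n * (n + 1) and k == n * n
    assert h - k == n                              # answer (c)
print("verified for n = 1..49")
```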
3. the largest four-digit number to be found in the arithmetic sequence
1, 6, 11, 16, 21, ... is
Your answer (9996) is correct!
We have an arithmetic sequence with [tex]t_1\,=\,1,\;d\,=\,5[/tex]
The [tex]n^{th}[/tex] term is: [tex]\,t_n\;=\;1\,+\,5(n\,-\,1)[/tex]
We want [tex]t_n[/tex] to be less than 10,000 (a five-digit number).
So we have: [tex]\,1\,+\,5(n\,-\,1)\;<\;10,000[/tex]
[tex]\;\;[/tex]Subtract 1: [tex]\,5(n\,-\,1)\;<\;9,999[/tex]
[tex]\;\;[/tex]Divide by 5: [tex]\,n\,-\,1\;<\;1999.8[/tex]
[tex]\;\;[/tex]Add 1: [tex]\,n \;<\;2000.8[/tex]
So, we let [tex]\,n\,=\,2000[/tex]
Then: [tex]t_{_{2000}}\;=\;1\,+\,5(1999)\;=\;9996[/tex] . . . There!
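Equivalently, in Python (an added check, assuming the same sequence t_n = 1 + 5(n - 1)):

```python
# All terms below 10,000: 1, 6, 11, ..., 9996.
terms = range(1, 10000, 5)
print(terms[-1], len(terms))  # 9996 2000
```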
I'm the other of the two guys who "do" homework.
Re: word problems sequences and series
1.)Starting at 888 and counting backward by 7, a student counts 888, 881 and 874, and so on. Which of the following numbers will be included?
a) 35 b) 34 c) 33 d) 32 e) 31
Ok, so I by using the calcuator the aswer is 34, but how would you calcuate this using an equation or something more sensible...
I think I have to use this one
tn=a + (n-1) d
a= 888
d= -7
... now what
Dividing each number of the series by 7 leaves a remainder of 6.
Which of ) 35 b) 34 c) 33 d) 32 e) 31 leaves a remander of 6 when divided by 7?
34/7 = 4 with remainder 6.
No matter how insignificant it might appear, learn something new every day.
Re: word problems sequences and series
2. the sum of the first n even postive integers is h. the sum of the first n odd positive integers is k. then h -k is equal to:
a) n/2
b) n/2 -1
c) n
d) -n
e) n - 1
so... nK and nh
I don't get it...
The sum of the first n even integers is n(n + 1).
The sum of the first n odd integers is n^2.
Therefore, the difference between the first n even integers and the first n odd integers is n(n + 1) - n^2 = n^2 + n - n^2 = n.
No matter how insignificant it might appear, learn something new every day.
Re: word problems sequences and series
3. the largest four-digit number to be found in the arithmetic sequence 1, 6, 11, 16, 21, ... is: t1 = 1, t2 = 6, t3 = 11, t4 = 21
9996? because it goes from 1 to 6 the (last number)
2nd last number stayes for 2 ed 1,1,2,2 everything else up one
but is there an equation for this or something?
9999/5 = 1999.8, i.e., 9999 leaves a remainder of 4 when divided by 5.
Every number in the sequence leaves a remainder of 1 when divided by 5.
Therefore, the highest 4-digit number in the series is 9999 - 3 = 9996.
The number of terms is (9996 - 1)/5 + 1 = 2000.
No matter how insignificant it might appear, learn something new every day.
Re: word problems sequences and series
Hello, Aka!
4. The sum of 50 consecutive integers is 3250. The largest of these integers is:
[tex]a)\;64\qquad\qquad b)\;66\qquad\qquad c)\;112\qquad\qquad d)\;114\qquad\qquad e)\;115[/tex]
The answer is "None of These"!
We have an arithmetic series: [tex]a\,+\,(a+1)\,+\,(a+2)\,+\,\cdots\,+\,(a+49)[/tex]
The sum of the first [tex]n[/tex] terms is: [tex]\,S_n\;=\;\frac{n}{2}[2a_1\,+\,d(n-1)][/tex]
Our first term is [tex]a[/tex], the common difference is [tex]d = 1,\;n\,=\,50,\;S_{50}\,=\,3250[/tex]
We have: [tex]\,3250\;=\;\frac{50}{2}[2\cdot a\,+\,1(50\,-\,1)][/tex]
[tex]\;\;[/tex]then: [tex]\,3250\;=\;25(2a\,+\,49)\;\;\Rightarrow\;\;130\;=\;2a\,+\,49\;\; \Rightarrow\;\;81\;=\;2a[/tex]
But this gives us: [tex]\,a\;=\;40.5[/tex] . . . and [tex]a[/tex] is supposed to be an integer.
So I assume there is a typo in the statement of the problem.
5. January 1, 1986, was a Wednesday.
January 1, 1992, was what day of the week?
There is no "formula" for this problem; you must do some Thinking.
From 01/01/86 to 01/01/92 is six years: [tex]\,6\,\times\,365\:=\:2190[/tex] days.
But 1988 was a leap year, so there are: 2191 days.
Since [tex]2191\,\div\,7\;=\;313[/tex] with no remainder,
[tex]\;\;[/tex]then 01/01/92 is exactly 313 weeks after 01/01/86.
Therefore, January 1, 1992 is also on a Wednesday.
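The calendar arithmetic can be confirmed with Python's standard library (added for illustration):

```python
from datetime import date

d1 = date(1986, 1, 1)   # a Wednesday
d2 = date(1992, 1, 1)
days = (d2 - d1).days
print(days, days % 7)    # 2191 0 -> a whole number of weeks
print(d2.strftime("%A"))  # Wednesday
```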
6) The total number of digits used to number all the pages of a book was 216.
Find the number of pages in the book.
There is no formula for this one either . . . You must baby-talk your way through it.
Pages 1 to 9: nine 1-digit numbers = 9 digits.
Pages 10 to 99: ninety 2-digit numbers = 180 digits.
There are: [tex]\,216\,-\,9\,-\,180\:=\:27[/tex] digits to go.
These are taken up by the first nine 3-digit numbers: 100, 101, 102, ... , 108
Therefore, the last page is number 108.
I'm the other of the two guys who "do" homework.
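The page-numbering count in problem 6 above can be verified by brute force (an added Python sketch):

```python
# Total digits used to number pages 1..n.
def digits_used(n):
    return sum(len(str(page)) for page in range(1, n + 1))

print(digits_used(108))  # 216
```

digits_used(108) is 9 + 180 + 27 = 216, so the book has 108 pages.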
Re: word problems sequences and series
4. the sum of 50 consecutive integers is 3250. the largest of these integers is: a) 64, b) 66, c) 112, d) 114, e) 115
Assuming the first and last numbers are x and y,
(x + y)50/2 = 3250 making x + y = 130
For 50 consecutive integers, y - x = 49
x + y = 130
y - x = 49
2y = 179 making y 89.5
since we are supposed to be dealing with integers, some given information is wrong.
No matter how insignificant it might appear, learn something new every day.
Thanks for everyone's help!
Sorry, I typed #4 wrong; it's
4. the sum of 50 consecutive even integers is 3250. the largest of these integers is
That makes a world of difference then.
The first number of the series is 16. Can you figure out what the last one is now?.
Minimal plat representations of prime knots and links are not unique
Montesinos Amilibia, José María (1976) Minimal plat representations of prime knots and links are not unique. Canadian Journal of Mathematics-Journal Canadien de Mathématiques, 28 (1). pp. 161-167.
ISSN 0008-414X
Restricted to Repository staff only until 31 December 2020.
Official URL: http://cms.math.ca/cjm/v28/cjm1976v28.0161-0167.pdf
J. S. Birman [same J. 28 (1976), no. 2, 264–290] has shown that any two plat representations of a link in S3 are stably equivalent and that stabilization is a necessary feature of the equivalence for
certain composite knots. She has asked whether all 2n-plat representations of a prime link are equivalent. The author provides a negative answer, by exhibiting an infinite collection of prime knots
and links in S3 in which each element L has at least two minimal and inequivalent 6-plat representations. In addition, as an application of another result of Birman [Knots, groups and 3-manifolds
(Papers dedicated to the memory of R. H. Fox), pp. 137–164, Ann. of Math. Studies, No. 84, Princeton Univ. Press, Princeton, N.J., 1975], the 2-fold cyclic covering spaces of S3 branched over such
links L form further examples of closed, orientable, prime 3-manifolds having inequivalent minimal Heegaard splittings, which were first constructed by Birman, F. González-Acuña and the author
[Michigan Math. J. 23 (1976), no. 2, 97–103].
Item Type: Article
Uncontrolled Topology of general 3-manifolds
Subjects: Sciences > Mathematics > Differential geometry
ID Code: 17266
References: J. S. Birman and H. M. Hilden, On the mapping class group of closed, orientable surfaces as covering spaces, Annals of Math. Studies 66, 81-115.
J. S. Birman, On the equivalence of Heegaard splittings of closed, orientable 3-manifolds, Knots, Groups and 3-Manifolds (L. Neuwirth, Editor), Annals of Math. Studies 84
(1975), 137-164.
J. S. Birman, F. González-Acuña and J. M. Montesinos, Heegaard splittings of prime 3-manifolds are not unique, to appear, Michigan Math. J.
J. S. Birman, Braids, links and mapping class groups, Annals of Math. Studies 82 (1975).
J. S. Birman, On the stable equivalence of plat representations of knots and links, to appear, Can. J. Math.
R. Engmann, Nicht-homöomorphe Heegaard-Zerlegungen vom Geschlecht 2 der zusammenhängenden Summe zweier Linsenräume, Abh. Math. Sem. Univ. Hamburg 35 (1970), 33-38.
J. M. Montesinos, Sobre la conjetura de Poincaré y los recubridores ramificados sobre un nudo, Tesis doctoral, Madrid, 1971.
J. M. Montesinos, Variedades de Seifert que son recubridores ciclicos ramificados de dos hojas, Boletin Soc. Mat. Mexicana 18 (1973), 1-32.
K. Reidemeister, Zur dreidimensionalen Topologie, Abh. Math. Sem. Univ. Hamburg 9 (1933), 189-194.
H. Seifert, Topologie dreidimensionaler gefaserter Räume, Acta Math. 60 (1933), 147-238.
J. Singer, Three dimensional manifolds and their Heegaard diagrams, Trans. Amer. Math. Soc. 35 (1933), 88-111.
O. Ja. Viro, Linkings, 2-sheeted branched coverings, and braids, Math. U.S.S.R. Sbornik 16 (1972), 222-236 (English translation).
F. Waldhausen, Eine Klasse von 3-dimensionalen Mannigfaltigkeiten II, Invent. Math 4 (1967), 87-117.
F. Waldhausen, Uber Involutionen der 3-Sphare, Topology 8 (1969), 81-91.
Deposited On: 29 Nov 2012 09:54
Last Modified: 07 Feb 2014 09:44
Z[i]-module Question
December 8th 2008, 06:03 PM #1
Junior Member
Nov 2008
Z[i]-module Question
Let $M$ be the $\mathbb{Z}[i]$-module generated by the elements $v_1$, $v_2$ such that $(1+i)v_1+(2-i)v_2=0$ and $3v_1+5iv_2=0$. Find an integer $r \geq 0$ and a torsion $\mathbb{Z}[i]$-module
$T$ such that $M \cong \mathbb{Z}[i]^r \times T$.
$r=0$ because $M$ is a torsion $\mathbb{Z}[i]$ module. this is very easy to see: $0=3(1-i)[(1+i)v_1 + (2-i)v_2]-2(3v_1+5iv_2)=(3-19i)v_2.$ thus $v_2$ is torsion. similarly you can show that $v_1$
is torsion too. thus $M$ is a torsion module. $\Box$
Thanks! I have only a few questions. How do we show that $v_1$ is torsion? I know how you did it for $v_2$, but getting things to cancel in complex analysis is not so easy [I am having trouble
getting $v_2$ to cancel]. Also, did we find the torsion $\mathbb{Z}[i]$-module such that $M \cong \mathbb{Z}[i]^r \times T$ or does $T=M$?.
$0=(2-i)(3v_1+5iv_2) - 5i[(1+i)v_1 + (2-i)v_2]=(11-8i)v_1.$ so since every $v \in M$ is a linear combination of $v_1,v_2,$ we will have $(11-8i)(3-19i)v=0.$
Also, did we find the torsion $\mathbb{Z}[i]$-module such that $M \cong \mathbb{Z}[i]^r \times T$ or does $T=M$?.
since $M$ is torsion, we can just let $r=0$ and $T=M.$
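The two cancellations can be double-checked with ordinary complex arithmetic, tracking the coefficients of $v_1$ and $v_2$ separately (a Python sketch added for verification; it is not part of the original exchange):

```python
# Relations: (1+i)v1 + (2-i)v2 = 0  and  3 v1 + 5i v2 = 0,
# stored as coefficient pairs (coeff of v1, coeff of v2).
r1 = (1 + 1j, 2 - 1j)
r2 = (3 + 0j, 5j)

# 3(1-i)*r1 - 2*r2 eliminates v1, leaving (3 - 19i) v2 = 0.
c1 = tuple(3 * (1 - 1j) * a - 2 * b for a, b in zip(r1, r2))
# (2-i)*r2 - 5i*r1 eliminates v2, leaving (11 - 8i) v1 = 0.
c2 = tuple((2 - 1j) * b - 5j * a for a, b in zip(r1, r2))
print(c1, c2)  # (0j, (3-19j)) ((11-8j), 0j)
```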
• A large pizza has a diameter 35cm. Two large pizzas cost $19.99. A medium pizza has a diameter 30 cm. Three medium pizzas cost $24.99. Which is the better deal: 2 large pizzas or 3 medium
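One way to compare the deals is cost per square centimetre of pizza; a quick Python check (added for illustration):

```python
import math

large = 19.99 / (2 * math.pi * (35 / 2) ** 2)    # $/cm^2, two large pizzas
medium = 24.99 / (3 * math.pi * (30 / 2) ** 2)   # $/cm^2, three medium pizzas
print(f"{large:.4f} {medium:.4f}")
# The large deal costs less per cm^2, so 2 large pizzas are the better deal.
```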
• From a space probe orbiting Jupiter’s moon Io from an altitude of 552 km, it was observed that the angle of depression from the probe to the surface of this moon was 39.7 degrees. What is the
radius of the moon, Io?
• According to Kepler's third law of planetary motion, the ratio T^2/R^3 has the same value for every planet in our solar system. R is the average radius of the orbit of the planet measured in
astronomical units (AU), and T is the number of years it takes for one complete orbit of the sun. Jupiter orbits the sun in 11.86 years with an average radius of 5.2 AU, whereas Saturn orbits the
sun in 29.46 years. Find the average radius of the orbit of Saturn. (One AU is the distance from the earth to the sun.)
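Setting Jupiter's and Saturn's ratios equal and solving for R gives Saturn's orbital radius; a Python check (added for illustration):

```python
# T^2 / R^3 equal for both planets  =>  R_s = (T_s^2 * R_j^3 / T_j^2)^(1/3)
T_j, R_j, T_s = 11.86, 5.2, 29.46
R_s = (T_s ** 2 * R_j ** 3 / T_j ** 2) ** (1 / 3)
print(round(R_s, 2))  # about 9.54 AU
```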
• The distance from Philadelphia to Sea Isle City is 100 mi. A car was driven this distance using tires with a radius of 14 in. How many revolutions of each tire occurred on the trip?
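The revolution count is the distance divided by the tire circumference, with units converted to inches (an added Python sketch):

```python
import math

distance = 100 * 5280 * 12            # 100 miles in inches
circumference = 2 * math.pi * 14      # inches per revolution
revs = distance / circumference
print(round(revs))  # roughly 72,000 revolutions
```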
• A man has 3 small solid spheres. The radii are 2 mm, 3 mm and 4 mm. He melted all the spheres to make one sphere. What is the radius of the new sphere?
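Because the melted volume is conserved, the new radius satisfies r^3 = 2^3 + 3^3 + 4^3 (a Python check added for illustration):

```python
# (4/3)pi cancels from both sides, leaving r^3 = 8 + 27 + 64 = 99.
r = (2 ** 3 + 3 ** 3 + 4 ** 3) ** (1 / 3)
print(round(r, 2))  # about 4.63 mm
```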
• In the figure below, both circles have the same center, and the radius of the larger circle is R. If the radius of the smaller circle is 3 units less than R, which of the following represents
the area of the shaded region?
• the combined area of two circles is 80π square centimeters. the length of the radius of one circle is twice the length of the radius of the other circle. find the length of the radius of each circle.
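If the smaller radius is r, the combined area is pi r^2 + pi (2r)^2 = 5 pi r^2 = 80 pi, so r^2 = 16 (a Python check added for illustration):

```python
r = (80 / 5) ** 0.5
print(r, 2 * r)  # 4.0 8.0 -> radii of 4 and 8
```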