Kirchhoff's laws
Kirchhoff's laws of electric circuits
Two laws governing electric circuits involving Ohm's-law conductors and sources of electromotive force, stated by Gustav Kirchhoff. They assert that the sums of the outgoing and incoming currents at
any junction in the circuit must be equal, and that the sum of the current-resistance products around any closed path must equal the total electromotive force in it.
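The two laws translate directly into linear equations. As a hedged illustration (the circuit values and function name below are invented for the example, not from the source), here is a two-mesh resistive circuit solved by mesh analysis: KVL supplies the loop equations, and KCL is built into the mesh-current formulation.

```python
# Two-mesh circuit: EMF V in series with R1 in mesh 1; R3 shared between
# the meshes; R2 in mesh 2. KVL around each loop gives:
#   (R1 + R3) * i1 - R3 * i2 = V
#  -R3 * i1 + (R2 + R3) * i2 = 0
# The 2x2 system is solved with Cramer's rule.
def two_mesh_currents(r1, r2, r3, v):
    a11, a12, b1 = r1 + r3, -r3, v
    a21, a22, b2 = -r3, r2 + r3, 0.0
    det = a11 * a22 - a12 * a21
    i1 = (b1 * a22 - a12 * b2) / det
    i2 = (a11 * b2 - b1 * a21) / det
    return i1, i2

i1, i2 = two_mesh_currents(1.0, 2.0, 3.0, 10.0)
```

Checking KVL around the first loop, 10 - 1·i1 - 3·(i1 - i2) = 0, confirms the solution.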
Wellcome to Physics
A couple of days ago I took part in a “packed lunch” discussion at the Wellcome Exhibition, about CERN, the LHC, and my work at UCL. A lot of fun. I turned up with a piece of the LHC we built at UCL
(ok, it was basically a big printed circuit board) and enjoyed a chat with the excellent Dan Glaser followed by some discussion with the audience. The podcast of this is available here.
The Wellcome centre is a brilliant place and it was good to talk about CERN and fundamental physics in a place more used to cutting-edge life-sciences. The packed lunch was really packed too,
standing room only, with people who had dropped in during lunch and some who had travelled specially.
One theme was how being a professor at UCL fits with working at CERN (in Geneva). How do we contribute, and how do we benefit?
CERN is a really international lab. British scientists, including those from UCL, are essential to the success of the whole thing. Only a fraction of those people working on the big experiments (mine
is ATLAS) are actually CERN employees, and those employees themselves are drawn from all over Europe and beyond. I talked more about this in the podcast.
As to how we benefit: as a country there are many ways. But looking at it from the point of view of teaching physics in UCL, the main thing is that if we didn’t participate in CERN we would be cut
off from the energy frontier in physics. This means we’d sit back and watch while the rest of the world found out whether or not the Higgs boson exists (even though Peter Higgs is a British scientist
based in Edinburgh, who spent some time at UCL).*
It’s a critical motivation in my teaching, and as far as I can tell in the students’ learning, that physics is a living subject where the textbooks are still being written and rewritten. CERN is
arguably the most high profile place doing this. Physics is useful and interesting all over the place, but it’s hard to imagine telling the brightest and best physics students “learn all this stuff
because it’s useful, but don’t get carried away trying to do new fundamental physics, we don’t do that here”. I hope we are never put in that position.
Funnily enough one question which came up in the discussion was directly from the first year mathematical methods course I teach. And I didn't know the (exact) answer, which seemed to outrage the
questioner somewhat.
This course is typical of first year physics degree courses; the main goal is to provide students with the mathematical tools they need to do degree-level physics. It contains some techniques for
solving differential equations, doing multi-dimensional integrals, matrix manipulation and coordinate transformations. The sweetener at the end is that we do Einstein’s special relativity. I just
finished marking 150 exam scripts; a relief to all concerned.
The question was “how fast does a proton go in the LHC”. The answer is “nearly the speed of light” of course, as has been the answer for every high energy accelerator for decades. The actual answer
is 0.999999964 times the speed of light now, and when we go up to full energy it will be 0.999999991 times the speed of light. In the first case this is 299792447 metres per second and in the second
it is 299792455 metres per second. So all the effort we’ll go to in 2012 to get up to full energy buys us another 8 metres per second in speed; about as fast as I cycle to work. As I tried to explain
to the questioner, it is actually the energy that matters. Special relativity means the protons can never reach the speed of light. In fact they get heavier and heavier instead as we put in more
energy. Most of my undergraduate class now know how this works. I think I will try and explain it further in a separate blog post soon, 'cos it's not really that tricky.
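The quoted speeds follow from E = γmc², so β = v/c = √(1 − (mc²/E)²). A quick back-of-envelope check (the proton rest energy value and the per-beam energies of 3.5 and 7 TeV are assumptions stated here, not given explicitly in the post):

```python
# Proton speed from beam energy: E = gamma * m * c^2, so
# v/c = sqrt(1 - (m c^2 / E)^2).
import math

C = 299_792_458        # speed of light, m/s (exact by definition)
M_P = 0.938272         # proton rest energy, GeV (assumed value)

def beta(energy_gev):
    return math.sqrt(1.0 - (M_P / energy_gev) ** 2)

for e_gev in (3500.0, 7000.0):   # 3.5 TeV (2010 running) and 7 TeV per beam
    b = beta(e_gev)
    print(f"{e_gev / 1000:.1f} TeV: v/c = {b:.9f}, v = {b * C:.0f} m/s")
```

This reproduces the figures in the text: 0.999999964c versus 0.999999991c, a gain of about 8 m/s.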
I also got gingerbread daleks. Bonus.
*As I said in the discussion, it’s really about understanding electroweak symmetry breaking, but that’s a longer story, some of which I described in the last two paragraphs of this. I also described
my ideas (mentioned in the discussion) on how to find a Higgs here. Other things I mentioned included the aural interpretation of particle collisions at http://www.lhcsound.moonfruit.com/.
3 Responses to Wellcome to Physics
1. I sometimes ask myself this: if, as seems likely, the Higgs boson is indeed found to exist, then the Standard Model is verified! Then where next? Do physicists sit down and play with the
theory and see what else it will predict? Or will nature come up with new entities that will puzzle scientists, as happened in the 20th century?
This entry was posted in Particle Physics, Science and tagged ATLAS, cern, Higgs, LHC, not (yet) on the Guardian, Relativity, teaching, Wellcome.
[Numpy-discussion] Random number generators.
Charles R Harris charlesr.harris at gmail.com
Sun Jun 4 15:41:07 CDT 2006
MWC8222 has good distribution properties; it comes from George Marsaglia and
passes all the tests in the Diehard suite. It was also used, among others, by Jurgen
Doornik in his investigation of the ziggurat method for random normals, and
he didn't turn up any anomalies. Now, I rather like the theory behind MT19937,
based as it is on an irreducible polynomial over Z_2 discovered by brute-force
search, but it is not the be-all and end-all of rng's. And yes, I do
like to generate hundreds of millions of random numbers/sec, and yes, I do
do it in c++ and use boost/python as an interface, but that doesn't mean
numpy can't use a speed up now and then. In particular, the ziggurat method
for generating normals is also significantly faster than the polar method in
numpy. Put them together and on X86_64 I think you will get close to a
factor of ten improvement in speed. That isn't to be sniffed at, especially
if you are simulating noisy images and such.
On 6/4/06, Stephan Tolksdorf <st at sigmasquared.net> wrote:
> > MWC8222:
> >
> > nums/sec: 1.12e+08
> >
> > MT19937:
> >
> > nums/sec: 5.41e+07
> > The times for 32 bit binaries are roughly the same. For generating large
> > arrays of random numbers on 64 bit architectures it looks like MWC8222
> > is a winner. So, the question is, is there a good way to make the rng
> > selectable?
> Although there are in general good reasons for having more than one
> random number generator available (and testing one's code with more than
> one generator), performance shouldn't be the deciding concern for
> selecting one. The most important characteristics of a random number
> generator are its distributional properties, e.g. how "uniform" and
> "random" its generated numbers are. There's hardly any generator which
> is faster than the Mersenne Twister _and_ has a better
> equi-distribution. Actually, the MT is so fast that on modern processors
> the contribution of the uniform number generator to most non-trivial
> simulation code is negligible. See www.iro.umontreal.ca/~lecuyer/ for
> good (mathematical) surveys on this topic.
> If you really need that last inch of performance, you should seriously
> think about outsourcing your inner simulation loop to C(++). And by the
> way, there's a good chance that making the rng selectable has a negative
> performance impact on random number generation (at least if the
> generation is done through the same interface and the current
> implementation is sufficiently optimized).
> Regards,
> Stephan
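Marsaglia's multiply-with-carry construction is simple enough to sketch. The generator below uses his widely circulated 36969/18000 concatenation pair rather than MWC8222 itself (the constants and seeds are illustrative; this shows the recurrence, not the exact generator under discussion):

```python
# Classic Marsaglia concatenated multiply-with-carry generator.
# Each 32-bit state word holds a 16-bit value in its low half and the
# carry in its high half; the recurrence is x = a*(x & 0xFFFF) + (x >> 16).
class MWC:
    def __init__(self, z=362436069, w=521288629):
        self.z, self.w = z, w

    def next_u32(self):
        self.z = (36969 * (self.z & 0xFFFF) + (self.z >> 16)) & 0xFFFFFFFF
        self.w = (18000 * (self.w & 0xFFFF) + (self.w >> 16)) & 0xFFFFFFFF
        return ((self.z << 16) + self.w) & 0xFFFFFFFF
```

For a fixed seed the stream is deterministic, which is what makes such generators easy to benchmark and test against reference output.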
Milliken Algebra 1 Tutor
Find a Milliken Algebra 1 Tutor
...I taught science and physics classes for 4 years at the Wyoming Science and Adventure Center (WySAC). I was also a member of the physics department at Casper College that constructed the iconic
trebuchet. I graduated from Natrona County HS in 2007 with the International Baccalaureate diploma. I took the European chemistry test in order to pass this exam.
31 Subjects: including algebra 1, English, chemistry, writing
...Therefore I am currently offering special pricing for diff eq students of $20/hour. I believe that the choice of college and program of study have a huge influence on a person's potential. I
offer to spend a 1-2 hour session with high school juniors and seniors to help them evaluate the colleges and programs that they are considering.
30 Subjects: including algebra 1, reading, Spanish, calculus
...The absolute proudest moment (and the biggest surprise) of my career so far came when one of these students ran to me from across a dance floor and threw her arms around my neck, saying she'd
scored a 32! As a multidisciplinary learner, I believe adamantly in the educational power of the arts, a...
31 Subjects: including algebra 1, English, reading, Spanish
...I have a deep passion for the art which I enjoy passing on to my students. Having graduated Magna cum laude with my Bachelor's degree in Psychology, and completing my Master's in Botany with a
4.0 GPA, I know what it takes to be successful in school. My graduate coursework involved many biology...
10 Subjects: including algebra 1, reading, writing, biology
...I was involved in jazz band, orchestra, and marching band. In junior high, I also started the French horn, and continued playing that through high school. In junior high, I ran for student
33 Subjects: including algebra 1, reading, English, SAT math
Capital Asset Pricing Model (CAPM) Definition | Investopedia
Capital Asset Pricing Model - CAPM
Definition of 'Capital Asset Pricing Model - CAPM'
A model that describes the relationship between risk and expected return and that is used in the pricing of risky securities.
The general idea behind CAPM is that investors need to be compensated in two ways: time value of money and risk. The time value of money is represented by the risk-free (rf) rate in the formula and
compensates the investors for placing money in any investment over a period of time. The other half of the formula represents risk and calculates the amount of compensation the investor needs for
taking on additional risk. This is calculated by taking a risk measure (beta) that compares the returns of the asset to the market over a period of time and to the market premium (Rm-rf).
Investopedia explains 'Capital Asset Pricing Model - CAPM'
The CAPM says that the expected return of a security or a portfolio equals the rate on a risk-free security plus a risk premium. If this expected return does not meet or beat the required return,
then the investment should not be undertaken. The security market line plots the results of the CAPM for all different risks (betas).
Using the CAPM model and the following assumptions, we can compute the expected return of a stock in this CAPM example: if the risk-free rate is 3%, the beta (risk measure) of the stock is 2 and the
expected market return over the period is 10%, the stock is expected to return 17% (3%+2(10%-3%)).
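The worked example above is a one-line computation; a minimal sketch (the function name is illustrative):

```python
# CAPM expected return: E[R] = rf + beta * (Rm - rf), as given in the text.
def capm_expected_return(risk_free, beta, market_return):
    return risk_free + beta * (market_return - risk_free)

# The article's example: rf = 3%, beta = 2, expected market return = 10%.
r = capm_expected_return(0.03, 2.0, 0.10)   # 17%
```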
Want to know more about CAPM? Read:
Taking Shots at CAPM
The Capital Asset Pricing Model: An Overview
I have described a divergence critic, a computer program which attempts to identify diverging proof attempts and to propose lemmas and generalizations which overcome the divergence. The divergence
critic has proved very successful; it enables the system SPIKE to prove many theorems from the definitions alone. The divergence critic's success can be largely attributed to the power of the
rippling heuristic. This heuristic was originally developed for proofs using explicit induction but has since found several other applications. Difference matching is used to identify accumulating
term structure which is causing divergence. Lemmas and generalizations are then proposed to ripple this term structure out of the way. There are other types of divergence which could perhaps be
recognized by the divergence critic. Further research is needed to identify such divergence patterns, isolate their causes and propose ways of fixing them. This research may take advantage of the
close links between divergence patterns and particular types of generalization. For instance, it may be possible to identify specific divergence patterns with the need to generalize common subterms
in the theorem being proved.
Acceleration Problem
May 10th 2008, 11:29 AM #1
A particle moves along the curve given by s = sqrt(t+1). Find the acceleration at 2 seconds. A particle moves along a curve... that means it's giving me a position function? And I need to take the
derivative twice to get to acceleration?
Also I am working on a review hand out that my professor gave us, so instead of creating a bunch of threads could I just make one thread and post there any time I come to a problem I need
Well the problem is that the thread gets very long and rather tedious to go through since there will be posts that have solved some of your previous problems. Just post a question per thread and
it'll be easier on us to know what we're dealing with. Also, post any work that you have done too
Well I took the derivative twice.
s = $\sqrt{t+1}$
v = $\frac{1}{2\sqrt{t+1}}$
a = $-\frac{1}{4(t+1)^{3/2}}$
I plug 2 into the acceleration function:
a(2) = $-\frac{1}{4 \cdot 3^{3/2}} = -\frac{1}{12\sqrt{3}}$
My problem is the answer key says it's supposed to come out to -1/32, and I have $-\frac{1}{12\sqrt{3}}$. Where'd I go wrong?
Last edited by Hibijibi; May 10th 2008 at 12:09 PM. Reason: fixed mistypes
I think they made a mistake. Note that
$-\frac{1}{32} = -\frac{1}{4 \cdot 4^{3/2}}$,
which is what $a = -\frac{1}{4(t+1)^{3/2}}$ gives when $t + 1 = 4$. So it looks like the answer key evaluated the acceleration at t = 3 rather than t = 2; the value you computed, $-\frac{1}{4 \cdot 3^{3/2}} = -\frac{1}{12\sqrt{3}}$, is correct for t = 2.
I'll make sure to point that out to my professor, thanks!
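The derivation is easy to sanity-check numerically; a sketch using a central-difference approximation of the second derivative (not part of the original thread):

```python
# s(t) = sqrt(t + 1); the acceleration at t = 2 should be
# a(2) = -1/(4 * 3**1.5) = -1/(12*sqrt(3)) ~= -0.0481, not -1/32.
import math

def s(t):
    return math.sqrt(t + 1.0)

def second_derivative(f, t, h=1e-3):
    # central-difference approximation of f''(t)
    return (f(t + h) - 2.0 * f(t) + f(t - h)) / h**2

a2 = second_derivative(s, 2.0)
exact = -1.0 / (4.0 * 3.0**1.5)
```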
compactness theorem proof
One statement of the compactness theorem for sets of sentences is: let T be a set of sentences in L. Then T has a model iff every finite subset of T has a model.
Could anyone give me some hints on how to prove this?
The first direction is straightforward: every model of T is a model of every subset of T. But what about the opposite direction? Any help is appreciated!
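Two standard routes for the nontrivial direction, offered as hints rather than a full solution (which one applies depends on the machinery your course has developed):

```latex
% Route 1 (via completeness): if $T$ has no model then, by G\"odel's
% completeness theorem, $T$ is inconsistent, i.e. $T \vdash \varphi \wedge
% \neg\varphi$ for some sentence $\varphi$. A formal derivation uses only
% finitely many sentences of $T$, so some finite $T_0 \subseteq T$ is
% already inconsistent and hence has no model; this is the contrapositive
% of what you want.
%
% Route 2 (direct): call $T$ finitely satisfiable if every finite subset
% has a model. Extend $T$ to a maximal finitely satisfiable set containing
% Henkin witnesses (a Lindenbaum-style argument), then build a term model
% from it. Ultraproducts and {\L}o\'s's theorem give a third, purely
% semantic route.
```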
PCMI@MathForum: Rhombic Dodecahedron - Hidden Within or Surrounding the Cube?
Rhombic Dodecahedron - Hidden Within or Surrounding the Cube?
by Joyce Frost and Kris Koch
Files accompanying this lesson: rhombic.doc - MS Word
rhombic.pdf - PDF format
Geometric Solids, Measurement
Making constructions in three-dimensions (of cubes, square pyramids, and rhombic dodecahedrons), dissecting solids, finding lengths, recognizing three-dimensional relationships, problem solving.
Grade Level/Strand:
Secondary geometry
Class Time:
1-2 class periods to build the models, complete the worksheet, and conduct the classroom discussion.
• Student Worksheet
• Teacher classroom set of Zome™ tools (if available)
If Zome™ tools are not available, each group of 4 students could use 96 clear standard sized drinking straws (24 for each person) and blue and yellow curling ribbon to build a total of 12 square
pyramids per group.
Students should be familiar with the Pythagorean Theorem and have some knowledge of irrational numbers. For this student activity, assume in Figure 1 that the cube has edge length of 2 units,
resulting in face diagonals of length 2√2, and interior diagonals of 2√3 units for the cube.
Figure 1: Cube with Interior Diagonals
The edge lengths for the square-based pyramids in Figure 2 are 2 units and √3 units.
Figure 2: Square Pyramid Formed by Cube Face and Parts of Diagonals
Each of the square pyramids formed in Figure 1 are attached to either the original cube or a congruent cube forming the rhombic dodecahedron of Figure 3a. A different view with the original cube
removed is seen in Figure 3b.
Figure 3: (a) Rhombic Dodecahedron (b) Rhombic Dodecahedron
The rhombic dodecahedron, dual of the cuboctahedron, is a fascinating polyhedron, although not one of the Platonic or Archimedean solids. It is best known in nature as the crystalline shape of the
garnet, the January birthstone. In this lesson, students explore the relationship of the rhombic dodecahedron to the cube and the space-filling or tessellating properties of both polyhedra. Students
discover the relationship between the edges and diagonals of a cube. By analyzing the three different lengths found in a cube, students naturally derive the irrational numbers √1, √2, and √3.
Additionally, they explore dissections of three-dimensional objects by comparing the unusual rhombic dodecahedron to the familiar cube. This is a mathematically rich problem to illustrate common
algebraic and geometric concepts in two and three dimensions.
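The lengths and volumes used throughout the lesson can be verified in a few lines; a sketch for the edge-length-2 cube (pure Pythagorean-theorem arithmetic; nothing here depends on Zome tools):

```python
# Key computations for a cube of edge length 2, as in the prerequisite
# discussion above.
import math

edge = 2.0
face_diag = math.sqrt(2 * edge**2)        # 2*sqrt(2): diagonal of a square face
space_diag = math.sqrt(3 * edge**2)       # 2*sqrt(3): interior diagonal
pyramid_lateral = space_diag / 2          # sqrt(3): lateral edge of each pyramid

cube_volume = edge**3                     # 8 cubic units
pyramid_volume = cube_volume / 6          # 4/3: six congruent pyramids fill the cube
rhombic_dodeca_volume = cube_volume + 6 * pyramid_volume   # 16 cubic units

# Rhombus face angles (Extension c): the obtuse angle is 2*atan(sqrt(2)).
obtuse = math.degrees(2 * math.atan(math.sqrt(2)))   # ~109.47 degrees
acute = 180.0 - obtuse                               # ~70.53 degrees
```

The final two lines confirm that the rhombic dodecahedron has exactly twice the cube's volume and that its face angles match the garnet-crystal values quoted in the answers.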
Prepare sample models of a cube (Figure 1), square pyramids (Figure 2), and a rhombic dodecahedron (Figures 3a and 3b) using Zome tools. For the cube, use 12 blue struts for edges and 8 balls for
vertices. Complete with 8 yellow struts and 1 ball for the inside diagonals and center. Also prepare six square-based pyramids using 4 blue edges, 4 yellow diagonal edges, and 5 balls for vertices.
The rhombic dodecahedron is built starting with a cube as in Figure 1 and is completed with square pyramids (Figure 2) placed on each of the six faces as in Figure 3a. Additionally, prepare a rhombic
dodecahedron using 24 yellow struts and 14 white balls as in Figure 3b.
If using clear straws instead of Zome tools, prepare models using blue curling ribbon inside full length straws for the blue struts and yellow curling ribbon inside 7/8 length straws for yellow
struts. For this activity, edge lengths are 2 units; thus yellow lengths would be √3, approximated for this construction as 1.75 units. Therefore, a 1.75-unit straw is 7/8 the length of a 2-unit straw.
Folding a piece of paper equal in length to a straw is an easy place to start. By folding this piece of paper in half three times to create eighths, it is easy to create a template to cut straws that
are 7/8 the length of the original straws.
Arrange students in groups of 3 or 4. Each group needs a cup or Ziploc bag containing 12 blue struts, 8 yellow struts, and 9 white balls, and each student in the group needs a worksheet. If you have
enough Zome tools, each group should also get several sets of 4 blue, 4 yellow, and 5 balls to make square-based pyramids to use as "jackets" to surround their cubes. Have students build their models
and work through the worksheet. Circulate among the groups to answer questions and to check for understanding of key concepts. Make use of the prepared models to question students and probe student
understanding. Allow ten to fifteen minutes at the end of class to bring the groups together as a large group and discuss findings.
Questions that may be asked of students to help achieve the objectives of the lesson follow:
• How can you determine the lengths of the diagonals of a cube using the edge length?
• How are the edges and diagonals of the cube related?
• Why is the cube considered a space-tessellating, or space filling, shape?
• How can another space-tessellating shape, the rhombic dodecahedron, be created using the diagonals of the cube?
• How can one prove that the rhombic dodecahedron is a space-tessellating polyhedron by analyzing its relationship to the cube from which it was constructed?
• How can the volume of the cube and its corresponding rhombic dodecahedron be computed and compared?
• How can the name, rhombic dodecahedron, be explained based on the number and shape of its faces?
• How can one convincingly argue that, when attached to a cube, the two triangular faces of the square pyramids meet in the same plane and create a rhombus?
Extensions: More about Rhombic Dodecahedra
a. Search the Internet for information about rhombic dodecahedra.
b. Take the net below for the square pyramid pieces, print it on cardstock or glue the sheet to cardstock or 1/2 of a manila file folder. Place the card stock/manila folder on contact paper, score
the inside lines of each net, cut them out and tape into (convex) square pyramids. Attach six of these pyramids to one of the eleven hexomino configurations that form a cube. Work in pairs. One
student can build the inside of a cube and the second student can create the outside jacket of the cube making a rhombic dodecahedron.
c. Using simple trigonometric functions, find the angles of the rhombi that create the faces of the rhombic dodecahedron.
d. A rotating ring of non-regular tetrahedra, sometimes called a "kaleidocycle," is a dissection of the rhombic dodecahedron. The kaleideocycle is shown in the interior of the figure below.
1. What's the ratio of the volume of the kaleidocycle to the rhombic dodecahedron?
2. How many tetrahedrons would make a complete rhombic dodecahedron?
3. Find the volume of one of those tetrahedra.
e. Search the Internet for Archimedean solids, Johnson solids, or Kepler solids as related topics.
f. List as many of the two-dimensional polygonal shapes that can be found when slicing through the rhombic dodecahedron with a plane as you can. Are any of the shapes regular?
g. Imagine the rhombic dodecahedron filling space and a plane cutting through it to create tessellations of the plane. Name possible figures that can be found that in the ensuing tessellations.
Cundy, H.M. and A.P. Rollett. Mathematical Models: Third Edition. Norfolk: Tarquin Publications, 1989.
Frost, J. and Cagle, P. An Amazing Space Filling, Non-regular Tetrahedron,
Holden, A. Shapes, Space, and Symmetry. New York: Columbia University Press, 1971.
Pierce, P. and S. Polyhedra Primer. Palo Alto: Dale Seymour Publications, 1978.
Schattschneider, D. and Walker, W. M. C. Escher Kaleidocycles, Petaluma, CA: Pomegranate Communications, 1987.
Additional Geometry Resources:
Bulatov, V. Polyhedra Collection,
Eppstein, David The Geometry Junkyard: Three-dimensional Geometry,
The Math Forum Internet Mathematics Library: Math Topics: Geometry,
Answers to Worksheet:
1. 6 faces; 12 edges; 8 vertices
2. 2 diagonals; 12 face diagonals per cube?
3. 45-45-90 right triangle; all are congruent; the sides of a cube are congruent; the triangles angles are all isosceles right triangles; so the triangles are congruent by Side-Angle-Side.
4. The length of each face diagonal is 2√2 units.
5. The diagonals of the cube are twice as long as a non-base edge of each pyramid.
6. Length of edge of base 2 units
Length of altitude of pyramid 1 unit
Length of altitude of lateral face √2 units
Length of lateral edge of pyramid √3 units
7. 12 faces; the measures of the dihedral angles formed where three faces meet at a vertex on the new polyhedron are 120° (Three fit together inside the cube; all are congruent, and thus the measure
of the angle formed must be 360°/3.).
8. The dihedral angle of the cube is 90°. The base dihedral angle of a square pyramid is 45° so the triangles meet where the surface is 45° + 90°+ 45° = 180° and must be coplanar.
9. Length of edge √3 units
Length of short diagonal 2 units
Length of long diagonal 2√2 units
10. The dihedral angles on the rhombi are 120°, so 3 make 360°. A rhombic dodecahedron has twice the volume of the cube. In space, solid cubes alternate with cubes filled with the 6 square pyramids.
11. 2 x 2 x 2 = 8 cubic units;
8/6 = 4/3 cubic units; by dividing the cube into 6 congruent pieces. The formula for the volume of a pyramid is one-third the base area times the height, or 1/3(2 x 2) = 4/3 cubic units. The
rhombic dodecahedron has twice the volume of the cube or 16 cubic units. (Also, the rhombic dodecahedron is made up of 12 square-based pyramids. 12 x 4/3 = 16 cubic units.)
Answers to Extensions:
c. The angles of the rhombus are approximately 109.47° and 70.53° to the nearest hundredth. (Use basic trigonometric functions on the rhombus when it is separated into right triangles.)
1. 4/12 = 1/3
2. 24
3. 16 x 1/24 = 2/3 cubic unit (16 is the volume of the rhombic dodecahedron built on a cube with sides of 2)
f. Student answers may vary, but among the answers you may find some of the following: square, trapezoid, equilateral, isosceles, and scalene triangles, octagon, heptagon, and hexagon. Squares and
equilateral triangles are always regular. The octagon and hexagon can be regular or non-regular depending on the angle of the slice.
g. Student responses will vary, but some possible responses are: all hexagons, equilateral triangles, squares and hexagons, rhombi and hexagons, octagons and squares, and equilateral triangles and | {"url":"http://mathforum.org/pcmi/hstp/resources/rhombic/paper.html","timestamp":"2014-04-20T21:55:03Z","content_type":null,"content_length":"13225","record_id":"<urn:uuid:7ea43bf0-4c87-4efd-b68b-6cf310d70851>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00599-ip-10-147-4-33.ec2.internal.warc.gz"} |
Wolfram Demonstrations Project
Bead on a Horizontally Rotating Hoop: Variable Initial Displacement and Tilted Axis
This Demonstration illustrates the behavior of a frictionless bead sliding on a circular hoop as the hoop is rotated about a horizontal axis tangent to the bottom of the hoop. Such a device can be
built and used to illustrate the principle of the Paul rf ion trap. In the ideal case, the bead's motion is described exactly by Mathieu's equation. However, the behavior of the bead is very
sensitive to small perturbations of the apparatus. In this Demonstration, you can observe the effect of increasing the initial displacement of the bead, as well as slightly tilting the rotation axis
from the horizontal.
The hoop shown here has a fixed radius; gravity acts downward. The gray line indicates the horizontal axis of rotation. The motion of the bead is independent of its mass. The bead is given a small
initial angular displacement (in radians) from the equilibrium point. Starting the Demonstration causes the hoop to rotate about the axis at a chosen angular frequency, allowing the motion of the
bead along the rotating hoop to be observed.
In the ideal case of small displacements from equilibrium and a perfectly horizontal axis of rotation, the motion of the bead can be described by Mathieu's equation, as discussed in the other
Demonstration based on this model, "Bead on a Horizontally Rotating Hoop with Movable Axis". With the axis of rotation as shown, the behavior of the bead is equivalent to the motion of an ion in an
rf Paul trap. In the ideal case, the bead will remain trapped near the equilibrium point
for rotational frequencies above 4.6 rad/s.
The transition from unstable motion to stable motion can be observed by using the scrollbar to vary the rotational frequency from 3 to 8 radians per second. The higher the rotational frequency above
the minimum of 4.6 radians per second, the more slowly the bead oscillates about equilibrium. This is a characteristic feature of ponderomotive traps.
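The small-oscillation motion described above is governed by Mathieu's equation, x'' + (a − 2q cos 2t)x = 0. A minimal sketch that integrates it with a hand-rolled RK4 stepper (parameter values below are illustrative, not taken from the Demonstration's hoop geometry):

```python
# Integrate Mathieu's equation x'' + (a - 2 q cos 2t) x = 0 with classical
# RK4, and report the largest |x| seen, as a crude stability indicator.
import math

def mathieu_step(x, v, t, h, a, q):
    def acc(x, t):
        return -(a - 2.0 * q * math.cos(2.0 * t)) * x
    k1x, k1v = v, acc(x, t)
    k2x, k2v = v + 0.5 * h * k1v, acc(x + 0.5 * h * k1x, t + 0.5 * h)
    k3x, k3v = v + 0.5 * h * k2v, acc(x + 0.5 * h * k2x, t + 0.5 * h)
    k4x, k4v = v + h * k3v, acc(x + h * k3x, t + h)
    return (x + h * (k1x + 2 * k2x + 2 * k3x + k4x) / 6.0,
            v + h * (k1v + 2 * k2v + 2 * k3v + k4v) / 6.0)

def simulate(a, q, x0=0.1, v0=0.0, t_end=50.0, h=0.001):
    x, v, t = x0, v0, 0.0
    max_abs = abs(x)
    while t < t_end:
        x, v = mathieu_step(x, v, t, h, a, q)
        t += h
        max_abs = max(max_abs, abs(x))
    return max_abs

peak = simulate(1.0, 0.0)   # q = 0, a = 1: simple harmonic motion, bounded
```

For a > 0 and q = 0 the motion is simple harmonic and the amplitude stays at x0; for a < 0 it grows without bound, mirroring the trapped/untrapped transition the Demonstration lets you explore.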
However, this trap is very sensitive to initial conditions. As the initial displacement is increased from the default value of 0.1 radians and the rotational frequency is increased, the bead can
escape the trap. At higher frequencies, the bead experiences a significant "centrifugal force" pushing the bead away from equilibrium that can destabilize the motion.
The motion of the bead is also very sensitive to any tilt in the axis of rotation, corresponding to the case when the apparatus is not perfectly leveled. Tilting causes the bead to experience a
small centrifugal force pushing it along the tilted axis down from the original equilibrium point. For very small tilts, the bead remains trapped at moderate rotational frequencies, but escapes at
higher frequencies. For slightly larger tilts, the bead can no longer be trapped. This behavior can be explored by adjusting the tilt angle from 0 to 0.03 radians (1.7 degrees).
Golf, FL Trigonometry Tutor
Find a Golf, FL Trigonometry Tutor
Hello, my name is Jose. I have taught Mathematics and Physics in High School and College for over 10 years, I have also taught Spanish privately in recent years. I have a degree in Physics and
post-degree studies in Geophysics and Information Systems.
10 Subjects: including trigonometry, Spanish, physics, geometry
...I speak French and am conversant in Spanish and thus can help in these areas. This also gives me comparative language skills to integrate into the teaching of basic English. I have a solid
background in the sciences as I tutor them at a level well above K through 6, but this gives me a great un...
53 Subjects: including trigonometry, chemistry, English, reading
...I won the only teacher superlative this year "most likely to be a teenager." We will have fun but at the same time learn math! I got my passion for teaching while a Naval Instructor teaching
Naval Academy and ROTC graduates. I taught them trigonometry for ship handling.
9 Subjects: including trigonometry, geometry, ASVAB, algebra 1
...Industrial Engineering curriculum is packed with intense math courses. MY EXPERIENCE: I have tutored for UASD University (Dominican Republic, 1998-2001), Palm Beach State College
(Florida, US), Kaplan, and Score at the Top, plus independent tutoring, for a combined total of 5 years. Math is FUN!
16 Subjects: including trigonometry, Spanish, calculus, physics
...I had been working thirty-three years for Motorola Inc. My responsibility was design, calculation, and development of several mechanical packages in electronic industry. My job included also
analytical stress calculation, kinematics and dynamics of mechanical devices.
6 Subjects: including trigonometry, geometry, algebra 1, algebra 2
Related Golf, FL Tutors
Golf, FL Accounting Tutors
Golf, FL ACT Tutors
Golf, FL Algebra Tutors
Golf, FL Algebra 2 Tutors
Golf, FL Calculus Tutors
Golf, FL Geometry Tutors
Golf, FL Math Tutors
Golf, FL Prealgebra Tutors
Golf, FL Precalculus Tutors
Golf, FL SAT Tutors
Golf, FL SAT Math Tutors
Golf, FL Science Tutors
Golf, FL Statistics Tutors
Golf, FL Trigonometry Tutors
Nearby Cities With trigonometry Tutor
Atlantis, FL trigonometry Tutors
Boynton Beach trigonometry Tutors
Briny Breezes, FL trigonometry Tutors
Delray Beach trigonometry Tutors
Glen Ridge, FL trigonometry Tutors
Gulf Stream, FL trigonometry Tutors
Highland Beach, FL trigonometry Tutors
Hypoluxo, FL trigonometry Tutors
Lantana, FL trigonometry Tutors
Manalapan, FL trigonometry Tutors
Ocean Ridge, FL trigonometry Tutors
Palm Beach trigonometry Tutors
Palm Springs, FL trigonometry Tutors
South Palm Beach, FL trigonometry Tutors
West Delray Beach, FL trigonometry Tutors
Hi, I am trying to find the double integral of [(1 + x + y)^-.5, {x, 0, 1}, {y, 0, 2}]. Do you switch to cylindrical coordinates?
I'm not an expert on this. I don't recognize that form of equation. Could you elaborate a bit on what the parameterization is intended to do there? Or perhaps which portion of the class this came from?
It is from the chapter on the change of variables theorem. The full question states: evaluate the double integral dxdy/sqrt(1+x+2y) on the region D, where D=[0,1]*[0,1], by setting T(u,v)=(u,v/2)
and evaluating an integral over D*, where T(D*)=D. I found that the Jacobian is 1/2, so the new integral is (1/2) times 1/sqrt(1+u+v) dudv, where u is between 0 and 1 and v is between 0 and 2. I
just don't remember how to integrate (1+u+v)^(-1/2).
Both my solutions manual and Mathematica say the answer is (2/3)(9 - 2*sqrt(2) - 3*sqrt(3)), and that D* is the region 0≤u≤1 and 0≤v≤2.
Thanks for the extra information. Normally, when changing variables, if one were to use a change of variables involving cylindrical coordinates, we would change immediately from x and y to r and
theta instead of x and y to u and v, and then to r and theta. In this case, integrating 1 over the square root of 1 + u + v is rather straightforward. Consider 1 over the square root of u. It's
almost the same as 1 over the square root of 1 + u. I'll use h instead of u, to avoid confusion. Substitute h = 1+u, and dh = du. Hm, well then we are back to the simple case of 1/sqrt(h). A
better substitution then is h = 1 + v + u and dh = du. And again we are back to 1/sqrt(h). I hope that helps.
This was totally helpful. Thank you!
You're welcome ;)
Numericals on Thermodynamics:
1. Mass enters an open system with one inlet and one exit at a constant rate of 50 kg/min. At the exit, the mass flow rate is 60 kg/min. If the system initially contains 1000 kg of working fluid,
determine the time when the system mass becomes 500 kg.
2. Mass leaves an open system with a mass flow rate of c*m, where c is a constant and m is the system mass. If the mass of the system at t = 0 is m[0], derive an expression for the mass of the system
at time t.
3. Water enters a vertical cylindrical tank of cross-sectional area 0.01 m^2 at a constant mass flow rate of 5 kg/s. It leaves the tank through an exit near the base with a mass flow rate given by
the formula 0.2h kg/s, where h is the instantaneous height in m. If the tank is empty initially, develop an expression for the liquid height h as a function of time t. Assume density of water to
remain constant at 1000 kg/m^3.
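As a sketch of problem 3: the mass balance rho*A*dh/dt = 5 - 0.2h, with rho*A = 1000 * 0.01 = 10 kg/m, separates to h(t) = 25(1 - e^(-0.02t)). A short check that a numerical solution of the same balance reproduces this:

```python
import math

# Problem 3: rho*A*dh/dt = m_in - m_out = 5 - 0.2*h, with rho*A = 10 kg/m,
# so dh/dt = (5 - 0.2*h) / 10.  Separating variables gives the analytic
# solution h(t) = 25 * (1 - exp(-0.02 * t)).

def h_analytic(t):
    return 25.0 * (1.0 - math.exp(-0.02 * t))

def h_numeric(t_end, dt=0.01):
    """Forward-Euler check of the same balance, starting from an empty tank."""
    h, t = 0.0, 0.0
    while t < t_end:
        h += dt * (5.0 - 0.2 * h) / 10.0
        t += dt
    return h

for t in (10, 100, 500):
    print(t, h_analytic(t), h_numeric(t))
# The level approaches the steady state h = 5/0.2 = 25 m as t grows.
```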
4. A conical tank of base diameter D and height H is suspended in an inverted position to hold water. A leak at the apex of the cone causes water to leave with a mass flow rate of c*sqrt(h), where c
is a constant and h is the height of the water level from the leak at the bottom. (a) Determine the rate of change of height h. (b) Express h as a function of time t and other known constants, rho
(constant density of water), D, H, and c if the tank was completely full at t=0.
5. Steam enters a mixing chamber at 100 kPa, 20 m/s, with a specific volume of 0.4 m^3/kg. Liquid water at 100 kPa and 25°C enters the chamber through a separate duct with a flow rate of 50 kg/s and
a velocity of 5 m/s. If liquid water leaves the chamber at 100 kPa and 43°C with a volumetric flow rate of 3.357 m^3/min and a velocity of 5.58 m/s, determine the port areas at the inlets and exit.
Assume liquid water density to be 1000 kg/m^3 and steady state operation.
6. Air is pumped into and withdrawn from a 10 m^3 rigid tank as shown in the accompanying figure. The inlet and exit conditions are as follows. Inlet: v[1]= 2 m^3/kg, V[1]= 10 m/s, A[1]= 0.01 m^2;
Exit: v[2]= 5 m^3/kg, V[2]= 5m/s, A[2]= 0.015 m^2. Assuming the tank to be uniform at all time with the specific volume and pressure related through p*v=9.0 (kPa.m^3), determine the rate of change of
pressure in the tank.
7. A gas flows steadily through a circular duct of varying cross-sectional area with a mass flow rate of 10 kg/s. The inlet and exit conditions are as follows. Inlet: V[1]= 400 m/s, A[1]= 179.36 cm^2;
Exit: V[2]= 584 m/s, v[2]= 1.1827 m^3/kg. (a) Determine the exit area. (b) Do you find the increase in velocity of the gas accompanied by an increase in flow area counterintuitive? Why?
8. Steam enters a turbine with a mass flow rate of 10 kg/s at 10 MPa, 600°C, and 30 m/s; it exits the turbine at 45 kPa and 30 m/s with a quality of 0.9. Assuming steady-state operation, determine (a) the
inlet area, and (b) the exit area.
Answers: (a) 0.01279 m^2 (b) 1.075 m^2
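Problem 8 reduces to continuity, A = (mass flow rate)*v/V. The steam-table values in the sketch below are approximate lookups and should be treated as assumptions (v ≈ 0.03837 m^3/kg at 10 MPa and 600°C; vf ≈ 0.00103 and vg ≈ 3.58 m^3/kg at 45 kPa):

```python
# Problem 8 sketch via continuity: A = m_dot * v / V.
# The steam-table values below are approximate lookups (assumptions):
#   v(10 MPa, 600 C)             ~ 0.03837 m^3/kg
#   at 45 kPa: vf ~ 0.00103 m^3/kg, vg ~ 3.58 m^3/kg
m_dot, V = 10.0, 30.0          # kg/s and m/s (same speed at inlet and exit)

v_in = 0.03837
A_in = m_dot * v_in / V        # ~0.01279 m^2, matching answer (a)

x = 0.9                        # exit quality
v_out = 0.00103 + x * (3.58 - 0.00103)
A_out = m_dot * v_out / V      # ~1.07 m^2, close to answer (b)

print(f"A_in = {A_in:.5f} m^2, A_out = {A_out:.3f} m^2")
```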
Minds on Math Book Study-Chapter 6 "Opening"
If this is your first time joining our book study please click on the button above and it will link to all previous posts. Feel free to go back and add a link to any previous chapters. You can link
up your post on Chapter 6 at the bottom of this post. Please visit the other bloggers who have linked up below and leave them comments. Lots of great ideas are being posted on other blogs!
I enjoyed Chapter 6 on the Opening. I have always had a routine of students completing a warm-up while I circulate around the room stamping homework. I know many times the warm-up might get cut out
because of time or what we were doing that day. I really need to make sure the opening is really setting the stage for what we are doing in class that day. There are so many possibilities of what can
be done those first five minutes. The most important thing is that we want to get students engaged in and thinking about math.
I loved the table on page 95 where the author gave examples of how you can use the seven thinking strategies when working on a problem or when discussing a concept. I really need to start
implementing these thinking strategies as I continue to put greater emphasis on the Standards for Mathematical Practice in my classroom discussions (and workshop opening).
I think this chapter seems so obvious and common sense about the importance of the opening, but it's so easy for a busy teacher to gloss over or skip this part completely. We really have to establish
this integral part of the workshop routine in our classrooms on a daily basis. By carefully planning out those first few minutes of class we are sending a message to students about what we value,
expect, and hope.
Thanks for joining in everyone!
8 comments:
1. It's important to remember that the opener is only 6-8 minutes. I've always used an opener, but really liked the suggestions on page 92. I would usually put up a problem like part d on page 92
and ask the students to solve. I liked building the problem with scaffolds and also asking how or why questions instead of just looking for a solution. The Workshop Model seems to work
beautifully with the Common Core Standards and Practice Standards.
I'd like to hear from all of you about how you handle homework. If it's important enough to assign, I feel it's important enough to spend some time going over it. I'm not certain that the two
examples are what I'm looking for. I would appreciate hearing from all of you as to how you go over homework.
1. I will still start the year checking in homework as I always have done. I will walk around and stamp while students work on the opener. I will then project answers on the SMARTBoard or maybe
have an answer key copy for each table. I will still pick one or two problems to discuss as a class.
2. I agree with Charla; I liked the scaffolding piece as well. I also like what Hoffer said about building stamina.
Our math program has an online homework component. Usually I assign 6-8 questions from there. Students can get examples and helper problems if they get stuck on a question, and they can redo the
homework as many times as they want to get a 100. Since our district only allows hw to count up to 10%, I'm ok with them redoing the sheet to get the grade they want. It is randomized with
questions, so they actually have to do the problem to get the problem correct. I can check the homework and determine who did or did not complete it, see how many attempts were made, and see the
time it took to complete. So I pull any questionable students during a study hall and reteach.
Coffee Cups and Lesson Plans
1. Our HW only counts for 5% of the final grade so each assignment earns a 2 pt completion grade. We have a system called working lunch where students must complete the assignment so zeroes are
not really an option. We are going to SB grading and assessments so I don't know how homework will factor in.
3. I found this more informative instead of something to be reflected upon. I think the ideas are great and I plan to implement the a, b, c, d warm up this year.
5. I assign homework Monday through Thursday. This year I think I'm going to tweak it slightly as I found a good "skills review" at #CAMT13. (You can check it out here: http://www.algebrareadinesseducators.com/) I post the answers online and students check their own work. If they have questions, they can ask on our Edmodo page, or they can ask me in class.
We have a homework quiz every week, so students turn in HW on Friday and then take a quiz over it on Tuesday. 40% of the grade is homework completion (was it complete and did they show their
thinking). 30% based on the homework itself (I choose questions from the homework, and students give me their answers....this holds them accountable for checking their answers). 30% is new
problems similar to those in the homework or to what we've been working on in class. In theory, students can't fail this (although some still manage to). I don't think it's perfect, but it worked
pretty well last year!
5. This chapter made me feel like I'm on the right track with how I open my class. I am always at my door greeting the students as they come in, smiling, and welcoming them to math!
The teacher that was in my room before me left me a lot of resources and for the most part, I use quite a bit of them. I know she was a great teacher so I took anything that was given to me,
especially since this job was my first teaching job. One of the things she left me was her "board work" (warmups to start the day).. They used to be on transparencies, but a teacher a few years
back converted them all to SMARTboard lessons. They are interactive and the students can basically lead the lesson. The students get 3 minutes to complete 10 questions and all the questions are
things that we have already covered or are about to be covered. So it activates their prior knowledge and gets their brains thinking about math. If I was in a hurry, sometimes I would answer
questions that students had at the end of the 3 minutes, but I really tried to let this be all student centered. That is something I will need to continue to work on next year. I would like them
to come up to the board more often and show their work along with describing their thinking (good questioning offered on page 93 and chart on 95). There are 30 or so different board works to
choose from so instead of just going in order, I need to try and make them more intentional and decide which board work will go best with each lesson; that way it can lead right into the mini
lesson. I structure the board work so I only collect papers every other week and it seems to work well in my class; I do also like the ideas represented on pg 96. Anyone can email me if they are
interested in taking a look at these. Jilliancmorris@gmail.com
I like the ideas presented on pages 92 & 93 that talk about the 4 questions that get slightly more detailed and complex. This helps them show what they know instead of just solving.
As mentioned before, a man named Mark Forget has done a few seminars for our school and he focuses a lot on setting purpose. I really made a point to set purpose each day last year and I really
noticed a difference. "A students purpose as a math learner in any given class ought to be more than getting through the lesson, but rather exercising a growth mindset, mastering content, and
also honing her endurance as a problem solver, as well as her skills as a thinker." This line really stood out to me. They need to be able to have metacognition, which is a tough thing to grasp
(they need to establish what they know, what they need to know, their confusion, what questions to ask...).
I loved the homework ideas represented on page 100. I plan on using all of these next year! (Tally check, Share and compare, Clickers, Weekly quiz). I have been thinking about homework solutions
for awhile now (because I spent too much time on it last year and it was not beneficial-I was just calling out answers and if the students wanted me to go over one, I would. This would have been
a great opportunity for the students to share/discuss/show alternative methods) and I feel that after reading this, it gave me the ideas I need.
After reading this chapter, overall I'm pleased with how I open my classroom, I just need to tweak the resources I have to be more student centered and intentional with the day's purpose. I also
need to alter my homework purpose.
-Jillian Morris
6. Great chapter and I love reading all of the other comments about it - you all have great ideas! I read somewhere about a teacher who would hand out 6 tickets to one student. This student would
then give five other students each one ticket and they would answer a certain warm-up question on the board (the ticket passer got to keep the last ticket). This helped with both student
participation with the answers and getting them to complete the warm-up because they might get chosen to go up front! Neat idea. I wonder how many tickets I'd end up going through??
Thank you so much for visiting my blog and leaving a comment. Feel free to email luvbcd@yahoo.com with any questions you have.
Middle School Math Rules!
CONCEPT BY RICHARD TOOGOOD AND RON BLOND
The slider area
The sliders change parameters h, k and p in the quadratic relation.
• Drag a slider to change the value of a parameter.
• The left and right arrow keys can be used to change the selected parameter [light blue].
• The up and down arrow keys can be used to increase or decrease the selected parameter. (Use this feature for fine adjustments.)
• If the arrow keys don't respond, click the cursor in the applet frame then try again.
• Drag a parameter scale to "move" it. (ie. Change the range but maintain the scale "length".)
• Drag towards or away from a parameter thumb to "re-scale" the scale.
• Select the corresponding parameter button to enter a value for a parameter.
The graph area
• Drag the vertex of the parabola to move it in the coordinate system.
• Drag any location along the parabola to change parameter a.
• Click and drag any other location to see the coordinates of the cursor and locations along the graph of the parabola.
• Move the cursor into the equation area to see which type of parabola (horizontal or vertical) is being displayed.
• THE SHIFT KEY : Click and drag in the graph area to move the entire coordinate system.
• THE CONTROL KEY : Click and drag in the graph area towards or away from the origin to change the x and y axis scale.
• Use the "RESET" button to restore the applet to the initial state.
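The page does not state the applet's equations, so the following is a guessed interpretation: with parameters h, k, and p, a common convention for vertical and horizontal parabolas is (x - h)^2 = 4p(y - k) and (y - k)^2 = 4p(x - h), where (h, k) is the vertex and |p| the vertex-to-focus distance:

```python
# Hedged sketch: the page does not state the applet's equations; a common
# convention for a quadratic relation with parameters h, k, p is
#   vertical:   (x - h)^2 = 4p (y - k)
#   horizontal: (y - k)^2 = 4p (x - h)
# where (h, k) is the vertex and |p| the vertex-to-focus distance.

def vertical_y(x, h, k, p):
    return k + (x - h) ** 2 / (4 * p)

def horizontal_x(y, h, k, p):
    return h + (y - k) ** 2 / (4 * p)

h, k, p = 1.0, 2.0, 0.5
assert vertical_y(h, h, k, p) == k          # the vertex lies on the curve
assert horizontal_x(k, h, k, p) == h
print(vertical_y(3.0, h, k, p))             # y = 2 + 4/2 = 4.0
```

Dragging the vertex in the graph area corresponds to changing h and k; dragging along the curve changes the opening of the parabola.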
Learning to Use Arithmetic
Graphing on a number line Question
January 14th 2009, 08:55 AM #1
Jan 2009
I'm sorry Im trying to learn to use Latex while simultaneously mastering precalc so far it's not working.
I'm currently going over, graphing numbers on a number line and my book is doing only a fair job in explaining specific rules.
I understand interval notation like [1, 4], but why do I have to write the answer as $1 \leq x \leq 4$? If I can understand this reasoning, it would make the rest of my homework easier.
Thank You
Hello Carolyn,
I think, maybe, you're talking about "interval notation" and "set builder notation". Let's see if I can explain.
If you want to graph all the elements between and including 1 and 4, we can write the solution set in two ways.
Interval Notation: [1, 4]
The brackets mean the endpoints are included in the graph. Had they not been included, we would have used parentheses.
Set Builder Notation: $\{x | 1 \leq x \leq 4\}$
This is read "The set of all x, such that x is greater than or equal to 1 and less than or equal to 4".
Does this help? If not, post specific examples.
If you mean that you understand that the notation "[1, 4]" indicates the interval from 1 to 4 inclusive, but are wondering why you "have to write the answer" also in the form " $1\, \leq\, x\, \
leq\, 4$", I think the reason is simply that you be able to understand the various notations.
Not all books use the same notation, so you need to be able to read each of them.
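The set-builder reading carries over directly to code. In this hypothetical Python sketch, a chained comparison expresses 1 ≤ x ≤ 4, and the bracket/parenthesis distinction becomes inclusive versus exclusive comparisons:

```python
def in_closed_interval(x, a=1, b=4):
    # "[1, 4]" in interval notation: endpoints included
    return a <= x <= b          # reads just like 1 <= x <= 4

def in_open_interval(x, a=1, b=4):
    # "(1, 4)" in interval notation: endpoints excluded
    return a < x < b

print([x for x in (0, 1, 2.5, 4, 5) if in_closed_interval(x)])  # [1, 2.5, 4]
print([x for x in (0, 1, 2.5, 4, 5) if in_open_interval(x)])    # [2.5]
```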
Borough Park, New York, NY
New York, NY 10016
GRE, GMAT, SAT, NYS Exams, and Math
...I specialize in tutoring math
and English for success in school and on the SAT, GED, GRE, GMAT, and the NYS Regents exams. Whether we are working on high school geometry proofs or GRE vocabulary, one of my goals for each session
is to keep the student challenged,...
Offering 10+ subjects including algebra 1, algebra 2 and geometry
The Science of Sticky Spheres
On the strange attraction of spheres that like to stick together
Take a dozen marbles, all the same size, and squeeze them into a compact, three-dimensional cluster. Now count the number of points where the marbles touch one another. What is the maximum number of
contact points you can possibly achieve with 12 marbles? What geometric arrangement yields this greatest contact number? Is the optimal cluster unique, or are there multiple solutions that all give
the same maximum?
When I first heard these questions asked, they did not seem overly challenging. For a cluster consisting of two, three, four or five equal-size spheres, I was pretty sure I knew the answers. But I
soon learned that the problem gets harder in a hurry as the size of the cluster increases. Over the past three years, the maximum contact number has been determined for clusters of up to 11 spheres.
Finding those answers required a variety of mathematical tools drawn from graph theory and geometry, as well as extensive computations and, at a few crucial junctures, building ball-and-stick models
with a set of toys called Geomags. For clusters of 12 or more spheres, the answers remain unknown.
To be stumped by such simple questions about small clumps of spheres is humbling—but perhaps not too surprising. Sphere-packing problems are notoriously tricky. Some of them have resisted analysis
for centuries.
Kepler and Newton
In 1611 Johannes Kepler declared that the densest possible packing of identical spheres is the arrangement seen in a grocer’s pyramid of oranges. Any sphere in the interior of Kepler’s lattice
touches 12 other spheres, and the fraction of space filled by the spheres is π/√18, or about 0.74. Kepler apparently believed that the superiority of this packing was so obvious that no proof was
needed; as it happens, no proof was forthcoming for nearly 400 years. In 1998 Thomas C. Hales of the University of Pittsburgh finally showed that no other packing that extends throughout
three-dimensional space can have a higher density.
Kepler’s conjecture (and Hales’s proof of it) apply to an infinite lattice of spheres, but another centuries-old puzzle concerns finite clusters. The story begins with a dispute between Isaac Newton
and his disciple David Gregory in the 1690s. According to one telling of the tale, Newton held that a central sphere could touch no more than 12 surrounding spheres of the same size, but Gregory
thought there might be room for a 13th halo sphere. This problem of the “kissing number” was not resolved until 1953, when Kurt Schütte and B. L. van der Waerden proved that Newton was right—but just
barely so. When a 13th satellite sphere is shoehorned into the assemblage, the diameter of the cluster increases by only about 5 percent.
Newton’s kissing-number problem suggests a solar-system model of sphere packing, with a dozen planets all feeling the attraction of a central sun. The contact-counting problem has a more egalitarian
character. There is no designated center of attraction; instead, all the spheres stick to one another, and the goal is to maximize the overall number of contact points throughout the cluster.
Why is there so much interest in cramming spheres together? Kepler was trying to explain the symmetries of snowflakes, and much of the later work on sphere-packing has also been motivated by
questions about the structure of solids and liquids. The recent focus on clusters with many sphere-to-sphere contacts arose from studies of colloids, powders and other physical systems in which
particles are held together by extremely short-range forces.
The Well-Connected Cluster
In building clusters and counting contacts, it’s convenient to work with unit spheres, which have a diameter of 1 and thus a radius of 1/2. When two unit spheres are touching, the center-to-center
distance is 1. The diagrams accompanying this column show a unit-length rod connecting the centers of spheres when they are in contact. Some physical models of clusters (including Geomags) keep this
skeleton of connecting rods and omit the spheres altogether.
In any cluster of n spheres, let C[n] denote the total number of contact points; then max(C[n]) is the highest value of C[n] found among all n-sphere clusters.
For the smallest values of n, finding max(C[n]) is easy. The case of n=1 is trivial: A single, isolated sphere doesn’t touch anything, and so max(C[1])=0. Two spheres can meet at only one point,
which means that max(C[2])=1. For three spheres the best solution puts the sphere centers at the vertices of an equilateral triangle; in this arrangement there are three contact points, and thus max
(C[3])=3. A fourth sphere can be placed atop the triangle to create a regular tetrahedron with six pairwise contacts: max(C[4])=6.
Not only is it easy to construct these small clusters; it’s also easy to prove that no other n-sphere configurations could have a higher C[n]. The reason is simply that in these clusters each sphere
touches every other sphere, and so the number of contacts could not possibly be greater. In the terminology of graph theory, the cluster is a clique. The number of contacts in a cliquish cluster is n
(n–1)/2. The sequence begins 0, 1, 3, 6, 10, 15, 21….
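The divergence between the clique bound n(n-1)/2 and what three-dimensional geometry actually allows can be tabulated from the maximum contact numbers quoted in this article for n = 1 through 7; a small sketch:

```python
def clique_contacts(n):
    # every pair of spheres touching: n choose 2
    return n * (n - 1) // 2

# Maximum contact numbers reported in the article for n = 1..7
max_contacts = {1: 0, 2: 1, 3: 3, 4: 6, 5: 9, 6: 12, 7: 15}

for n, c in max_contacts.items():
    bound = clique_contacts(n)
    print(n, c, bound, bound - c)
# The clique bound is attained only through n = 4; from n = 5 onward,
# geometry forces the achievable maximum below n(n-1)/2.
```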
Going on to n=5, cliquishness is left behind: In three-dimensional space there is no way to arrange five unit spheres so that they all touch one another. If such a five-sphere clique existed, it
would have 10 contact points. The best that can actually be attained is C[5]=9, which is the number of contacts formed when you attach a fifth sphere to any face of a tetrahedral cluster. The
resulting structure is known as a triangular dipyramid.
At Sixes and Sevens
Up to this point, each value of n has had a unique cluster that maximizes C[n]. Furthermore, in each case the best-connected cluster with n+1 spheres can be assembled incrementally by sticking a new
sphere somewhere on the surface of the max(C[n]) cluster. These properties come to an end at n=6. With six spheres, two cluster shapes both yield the same maximum contact number, C[6]=12. (Note that
a hypothetical six-sphere clique would have 15 contacts.) One of the max(C[6]) clusters is built incrementally from the five-sphere triangular dipyramid. But the other max(C[6]) cluster is a “new
seed”—a structure that cannot be created simply by gluing a sphere to the surface of a smaller optimum cluster. The new seed is the octahedron (which might also be described as a square dipyramid).
Beyond n=6, the problem of finding all the maximum-contact clusters becomes more daunting. For n=7, the incremental approach of adding another sphere to the surface of an n=6 cluster yields four
solutions that have 15 contact points. Three of these C[7]=15 clusters consist of four tetrahedra glued together face-to-face in various ways. The remaining product of incremental construction
consists of an octahedron with a tetrahedron erected on one face. (One of the seven-sphere solutions has both left-handed and right-handed forms, but the convention is to count these “chiral” pairs
as variants of a single cluster, not as separate structures.)
Finding this particular set of structures is not especially difficult. If you spend some time playing with Geomags or some other three-dimensional modeling device, you are likely to stumble upon
them. But having identified these four clusters with C[7]=15, how do you know there aren’t more? And how do you prove that no seven-sphere cluster has 16 or more contacts?
As it turns out, 15 is indeed the maximum contact number for seven spheres, but there is another C[7]=15 cluster. It is a new seed, called a pentagonal dipyramid. With its fivefold symmetry, it has
no structural motifs in common with any of the smaller clusters. The novelty of this object again raises the question: How can we ever be sure there aren’t still more arrangements waiting to be discovered?

A successful program for answering such questions was initiated about five years ago by Natalie Arkus, who was then a graduate student at Harvard University. (She is now at Rockefeller University.)
In a series of papers written with her Harvard colleagues Michael P. Brenner and Vinothan N. Manoharan, she enumerated all the max(C[n]) configurations for n=7 through n=10. The results were later
extended to n=11 by Robert S. Hoy, Jared Harwayne-Gidansky and Corey S. O’Hern of Yale University. (Hoy is now on the faculty of the University of South Florida.) All of the results I describe here
come from the work of these two groups.
Sticky Spheres
One way to solve sphere-packing problems is to view the spheres as particles subject to a physical force. Then, through mathematical analysis or computer simulation, you can try to find the
geometric arrangement that minimizes the potential energy of the system. The force is usually defined as a smooth function of distance. When two particles are far apart, the force between them is
negligible; at closer range the force becomes strongly attractive; at even smaller distances a “hard-core repulsion” prevents the spheres from overlapping. Under a force law of this kind, the
particles settle into equilibrium at some small but nonzero separation.
The contact-counting problem can be translated into the language of forces and energy, but the physics of the system is rather peculiar. To begin with, the force law is not a smooth function of
distance. Instead of hills and valleys representing gradual changes in energy, there is a sheer cliff, where the energy jumps abruptly. Imagine two spheres drifting through space. As long as they do
not touch, there is no force acting between them—neither attraction nor repulsion. If the spheres happen to come in contact, however, they stick together; suddenly, the force becomes attractive. Yet
any attempt to push them still closer is met by infinite resistance.
In this world of sticky spheres, the forces at work are not merely short-range but zero-range. (Martin Gardner once suggested a model for such systems: ping-pong balls coated with rubber cement.) Two
spheres lower their total energy when they touch, and energy has to be supplied to pull them apart; but once they are separated, they have no further influence on one another. Minimizing the
potential energy of the whole system means maximizing the number of contacts.
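In code, the sticky-sphere energetics amounts to a step potential. Here is a minimal sketch, assuming an arbitrary well depth EPS and a numerical tolerance of my own choosing (neither comes from the text):

```python
import math
from itertools import combinations

EPS = 1.0  # energy released per contact (arbitrary units)

def pair_energy(r, tol=1e-9):
    """Sticky-sphere pair potential: hard-core repulsion inside
    r = 1, an energy drop exactly at contact, nothing beyond."""
    if r < 1.0 - tol:
        return math.inf        # overlap is forbidden
    if r <= 1.0 + tol:
        return -EPS            # touching spheres stick
    return 0.0                 # separated spheres ignore each other

def cluster_energy(centers):
    # Total energy is -EPS times the number of contacts, so
    # minimizing energy is the same as maximizing contacts.
    return sum(pair_energy(math.dist(p, q))
               for p, q in combinations(centers, 2))
```

For any feasible cluster, cluster_energy equals −EPS·C[n], which is the translation between the physics and the counting problem.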
The discontinuous nature of the force law affects the choice of mathematical tools for solving the sticky-sphere problem. With a smooth force law, sphere-packing problems can be solved by
optimization methods. An algorithm repeatedly attempts to reduce the total energy by making small adjustments to the particles’ positions, continuing until no further progress is made. This scheme
won’t work with sticky spheres because there are no smooth gradients to guide the particles toward lower-energy configurations. For this reason, the sticky-spheres problem has seemed harder than most
other sphere-packing tasks.
On the other hand, the discrete, all-or-nothing character of the sticky-spheres potential also brings an important advantage. Because each pair of spheres is either touching or not, the number of
essentially different configurations is finite. In principle, you can examine all these possibilities and simply choose the one with the most contacts and hence the lowest potential energy. It was
this insight—the idea that the problem can be solved by exhaustive enumeration—that led to the recent results of the Harvard and Yale groups.
From Geometry to Graph Theory
All the essential facts about sphere-to-sphere contacts in a cluster can be captured in a graph—a collection of vertices and edges. Each sphere is represented by a vertex, and two vertices are
connected by an edge if and only if the corresponding spheres are in contact. The same information can be encoded even more abstractly in an adjacency matrix. A cluster of n spheres becomes an n-by-n
matrix of 0s and 1s. The matrix element at the intersection of row i and column j is 1 if sphere i touches sphere j and otherwise is 0.
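As a concrete sketch (0-based indices and helper names are mine, not the article's):

```python
from itertools import combinations

def adjacency_matrix(n, contacts):
    """n-by-n 0/1 matrix with A[i][j] = 1 iff spheres i and j
    touch; contact is symmetric, so A equals its transpose."""
    A = [[0] * n for _ in range(n)]
    for i, j in contacts:
        A[i][j] = A[j][i] = 1
    return A

# The n=4 tetrahedron is a clique: every pair of spheres touches.
tetra = adjacency_matrix(4, combinations(range(4), 2))
upper = sum(tetra[i][j] for i in range(4) for j in range(i + 1, 4))
print(upper)   # 6 contacts, matching max(C[4])
```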
Given a cluster of spheres, it’s easy to construct the corresponding graph or adjacency matrix. But is it possible to go the other way—to start with an adjacency matrix and recover the full geometry
of the cluster? In other words, with nothing more to go on than a table indicating which spheres are in contact, can one determine the coordinates of all the spheres in three-dimensional space? The
answer is: Not always. Consider the all-0s matrix, which reveals nothing about the locations of the spheres except that they’re not touching. And the all-1s matrix describes a cluster that simply
cannot exist when n is greater than 4. But for an important class of clusters the adjacency matrix does supply enough information to allow a full reconstruction. That class includes the clusters that
maximize contact number. These facts suggest a direct problem-solving strategy: Generate all the candidate matrices and check to see which ones produce geometrically feasible clusters.
How many adjacency matrices need to be tested? Because contact is a symmetric relation—if i touches j, then j touches i—all the information in the matrix is confined to the upper triangle, which has
n(n–1)/2 elements. Each element has two possible values, and so the total number of adjacency matrices is 2^(n(n–1)/2).
Sifting through 2^(n(n–1)/2) matrices would be a formidable task; the value of this expression is already beyond two million at n=7 and exceeds 10^16 at n=11. But not all of those matrices are
distinct; many of them represent mere relabelings of the same graph. When redundancies of this kind are eliminated, the number of distinct 7×7 matrices is reduced from 2,097,152 to just 1,044.
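The counts quoted here are easy to verify:

```python
def num_adjacency_matrices(n):
    # Each of the n(n-1)/2 upper-triangle entries is independently
    # 0 or 1, giving 2^(n(n-1)/2) matrices in all.
    return 2 ** (n * (n - 1) // 2)

print(num_adjacency_matrices(7))            # 2097152, "beyond two million"
print(num_adjacency_matrices(11) > 10**16)  # True
```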
Arkus and her group took advantage of a further dramatic winnowing. For reasons that will be explained below, they examined only matrices that meet two criteria: Every column and row has at least
three 1s, and the total number of 1s in the upper triangle is 3n–6. For n=7, imposing these constraints reduces the number of candidate adjacency matrices from 1,044 to just 29. Among those 29
matrices, only five give rise to genuine three-dimensional sphere packings.
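The two screening criteria translate directly into a matrix test. The octahedron below, built with an antipode indexing trick of my own choosing, passes it:

```python
def is_candidate(A):
    """Screening test from the text: every sphere has at least three
    contacts, and the upper triangle holds exactly 3n - 6 ones."""
    n = len(A)
    if any(sum(row) < 3 for row in A):
        return False
    ones = sum(A[i][j] for i in range(n) for j in range(i + 1, n))
    return ones == 3 * n - 6

# Octahedron: antipodal pairs (0,1), (2,3), (4,5) don't touch;
# every other pair does, giving 12 contacts (= 3*6 - 6).
octa = [[1 if i != j and j != (i ^ 1) else 0 for j in range(6)]
        for i in range(6)]
print(is_candidate(octa))   # True
```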
And Back to Geometry
How were these geometric structures determined? The key idea is to transform the adjacency matrix A into a distance matrix D. Whereas each element A[ij] of the adjacency matrix is a binary value,
answering the yes-or-no question “Do spheres i and j touch?,” the element D[ij] is a real number giving the Euclidean distance between i and j.
As it happens, we already know some of those distances. Every 1 in the adjacency matrix designates a pair of unit spheres whose center-to-center distance is exactly 1; thus A[ij]=1 implies D[ij]=1.
We even know something about the rest of the distances: A cluster is feasible only if every element of the distance matrix satisfies the constraint D[ij]≥1. Any distance smaller than 1 would mean
that two spheres were occupying the same volume.
To fully pin down the geometry of a cluster, we need to determine the x, y and z coordinates of all n spheres. A rule of elementary algebra suggests we would need 3n equations to determine these 3n
unknowns, but in fact 3n–6 equations are enough. The energy of the cluster depends only on the relative positions of the n spheres, not on the absolute position or orientation of the cluster as a
whole. In effect, the locations of two spheres come “for free.” We can arbitrarily assume that one sphere is at the origin of the coordinate system and another is exactly one unit away along the
positive x axis. In this way six coordinates become fixed. Then the 3n–6 equations supplied by the 1s in the adjacency matrix are exactly the number needed to locate the rest of the spheres.
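The pinning convention shows up directly in a concrete set of coordinates for the n=4 tetrahedron. The numbers below are one standard unit-edge embedding, written out by hand rather than drawn from the article:

```python
import math

# Sphere 0 at the origin, sphere 1 one unit along the positive x
# axis (the six "free" coordinates fixed); the remaining
# coordinates satisfy the 3n - 6 = 6 distance equations.
centers = [
    (0.0, 0.0, 0.0),
    (1.0, 0.0, 0.0),
    (0.5, math.sqrt(3) / 2, 0.0),
    (0.5, math.sqrt(3) / 6, math.sqrt(6) / 3),
]

# Every pair of spheres in the tetrahedron is at distance exactly 1.
for i in range(4):
    for j in range(i + 1, 4):
        assert abs(math.dist(centers[i], centers[j]) - 1.0) < 1e-12
print("all six distance equations satisfied")
```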
Having just enough constraints to solve the system of equations is more than a convenient coincidence; it’s also a necessary condition for mechanical stability in a cluster. Specifically, having 3n–6
contacts and at least three contacts per sphere gives a cluster a property called minimal rigidity. If any sphere had only one or two contacts, it could flap or wobble freely. Such a cluster cannot
be a max(C[n]) configuration because the unconstrained sphere can always pivot to make contact with at least one more sphere, thereby increasing C[n].
Each of the 3n–6 equations has the form (x[i]–x[j])^2 + (y[i]–y[j])^2 + (z[i]–z[j])^2 = 1, defining the distance between the centers of spheres i and j. To recover the coordinates of all the spheres, this system of equations must be solved. To that end, Arkus first tried a technique called
a Gröbner basis, which in recent years has emerged as a powerful tool of algebraic geometry. The method offers a systematic way to reduce the number of variables until a solution emerges. An
implementation of the Gröbner-basis algorithm built into a computer-algebra system was able to solve the n=7 equations, but it became too slow for n=8.
Another approach relies on numerical methods that converge on a solution by successive approximation. The best-known example is Newton’s method of root-finding by refining an initial guess. Arkus
found that the numerical techniques were successful and efficient, but she was concerned that they are not guaranteed to find all valid solutions. (Whenever the algorithm converges, the result is a
correct solution, but failure to converge does not necessarily mean that no solution exists; it’s also possible that the initial guess was in the wrong neighborhood.)
Setting aside the algebraic and numerical techniques, Arkus chose to rely on geometric reasoning both as a guide to assembling feasible clusters and as a means of excluding unphysical ones. A basic
rule for unit spheres states that if i touches j, then j’s center must lie somewhere on a sphere of radius 1 centered on i—the “neighbor sphere.” If k touches both i and j, then k’s center must be
somewhere on the circular intersection of two neighbor spheres. If l touches all three of i, j and k, the possible locations are confined to a set of two points. With a handful of rules of this
general kind, it’s always possible to solve for the unknown distances in a distance matrix—assuming that the adjacency matrix describes a feasible structure.
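The last rule, that a sphere touching three mutually touching spheres has only two possible positions, is a short computation (the coordinates are my own illustrative choice):

```python
import math

# Three mutually touching unit spheres form a unit equilateral
# triangle; a fourth sphere touching all three must sit at one of
# two mirror-image points, above or below the triangle's centroid.
tri = [(0.0, 0.0), (1.0, 0.0), (0.5, math.sqrt(3) / 2)]
cx = sum(x for x, _ in tri) / 3
cy = sum(y for _, y in tri) / 3
r = math.dist((cx, cy), tri[0])   # centroid-to-vertex distance, 1/sqrt(3)
h = math.sqrt(1.0 - r * r)        # height that puts the apex at distance 1
apexes = [(cx, cy, h), (cx, cy, -h)]

for ax, ay, az in apexes:
    for x, y in tri:
        assert abs(math.dist((ax, ay, az), (x, y, 0.0)) - 1.0) < 1e-12
print(len(apexes), "candidate positions")
```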
Other geometric rules can be applied to prove that certain classes of adjacency matrices cannot possibly yield a physical sphere packing. For example, if spheres i, j and k all touch one another,
they must form an equilateral triangle. If the pattern of 1s in the adjacency matrix shows that more than two other spheres also touch i, j and k, then the cluster cannot exist in three-dimensional
space. The unphysical matrices can be eliminated without even exploring the geometry of the clusters.
Arkus also made use of the Geomags construction set to check the feasibility of certain sphere arrangements. The Geomags set consists of polished steel balls and bar-magnet struts encased in colored
plastic; all the struts are the same length, and so they can readily be assembled into a skeleton of unit-length bonds between sphere centers. Having a three-dimensional model you can hold in your
hand is a great aid to geometric intuition.
Eight, Nine, Ten
The results of the survey of sticky-sphere clusters are summarized in the table at right. Arkus and her colleagues determined max(C[n])—and identified all clusters that exhibit these highest contact
numbers—for all n≤10. Along the way, they discovered quite a few clusters with interesting quirks and personalities.
At n=8 the maximum contact number is 18, and there are 13 distinct ways of achieving this bound. All but one of the clusters can be built up incrementally by attaching a new sphere to the surface of
one of the n=7 clusters.
Clusters of nine spheres have up to 21 contacts; there are 52 varieties, including four new seeds. In this crowd of sphere packings, one stands out from all the rest. It has a property not seen in
any other max(C[n]) cluster up to this point: flexibility. The structure can be twisted around one axis without breaking any bonds between spheres. This ability to wiggle may seem surprising, given
that the adjacency matrices were designed with the explicit aim of ensuring “minimal rigidity.” But there’s a reason this form of rigidity is called minimal. The requirement that every sphere make
contact with at least three others implies that no individual sphere can move relative to the rest of the cluster without breaking at least one bond. But other modes of motion, in which larger groups
of spheres flex or rotate, are not ruled out. In the flexible n=9 cluster, two square faces joined by an edge can twist and deform slightly. The animation at right shows the flexibility.
The n=10 clusters cross another threshold: For the first time the number of contacts exceeds 3n–6 (which is 24 in this case). Some 259 clusters of 10 spheres have exactly 24 contacts, but another
three clusters have 25 each. Again, it’s mildly surprising that these objects find their way onto the list. The search algorithm begins with a list of matrices that specify exactly 3n–6 adjacencies,
so how can the search uncover a cluster with even more contacts? The explanation is that even if two spheres are not required to touch, they are not forbidden to do so. When you solve a system of
equations that specifies 24 adjacencies, it can happen, as if by coincidence, that a 25th pair of spheres is also at a distance of exactly 1.
One of these happy accidents is shown in the animation at right. The structure is derived from the flexible n=9 cluster by adding an octahedral cap to one of the square faces. (The addition
eliminates the flexibility.)
To compile the catalog of 10-sphere clusters, Arkus and her colleagues had to examine more than 750,000 matrices for minimally rigid structures. The challenge of pushing the frontier out to n=11 was
taken up by Hoy, Harwayne-Gidansky and O’Hern at Yale. They relied on many of the same methods but adopted a different approach to streamlining the algorithms. For example, they took advantage of a
curious fact proved by Therese Biedl, Erik Demaine and others: Any valid packing of spheres has a continuous, unbranched path that threads from one sphere to the next throughout the structure, like a
long polymer chain. This fact implies that the rows and columns of the adjacency matrix can always be rearranged so that the superdiagonal (just above and to the right of the main diagonal) consists
entirely of 1s. Confining attention only to these matrices reduces the workload by a factor of 1,000 for n=11.
The Yale group also devised simplified rules for excluding invalid packings. And they formulated more of the geometric rules in such a way that matrices could be tested without ever having to go
through the time-consuming steps of computing sphere-to-sphere distances. For example, one such exclusion rule applies to clusters that have eight spheres arranged at the vertices of a cube. No ninth
sphere can touch more than four of these corner spheres; to do so, the ninth sphere would have to lie somewhere inside the cube, but there’s no room for it there. Violations of this rule can be
detected merely by counting 1s in the adjacency matrix, a much quicker operation than calculating three-dimensional coordinates.
At n=11 the Yale group identified 1,641 distinct clusters with C[11]≥27. The vast majority of these structures have exactly 27 contacts (the 3n–6 value), but there are 20 clusters with 28 contacts
(equal to 3n–5) and a single packing with 29 contacts (3n–4). This last object, shown at right, can be understood as a further elaboration of the “floppy” cluster described above. The flexible
nine-sphere structure has two square faces exposed at the surface. One of those faces is capped to form an octahedron in the sole 10-sphere cluster with 25 contacts. Capping the other square face to
create a second octahedron leads to the unique 11-sphere cluster with 29 contacts.
Incidentally, there is an obvious way to add a 12th sphere to this cluster to produce a structure with 33 contacts, equal to 3n–3. But whether or not 33 is the highest attainable C[12] value remains
a matter of conjecture, because no complete survey of 12-sphere clusters has been attempted.
Even for n=11 it’s possible to quibble over questions of completeness and certainty. Some of the Yale results rely on a numerical algorithm to solve the system of distance equations. As noted above,
when this process fails to converge, it does not unequivocally prove that no solution exists; even after many trials, there’s always a possibility that one more run with a different initial guess
might succeed. In practice, the chance that a valid sphere packing might have been missed is extremely slim; whether the question is worth worrying about is perhaps a matter of differing attitudes
toward rigor in mathematics and the physical sciences.
Sticky Spheres in Action
Both the Harvard and the Yale groups were inspired to undertake this exercise by an interest in aggregations of material spheres rather than mathematical ones. Hoy, Harwayne-Gidansky and O’Hern
discuss the mysteries of crystallization. Bulk materials tend to favor configurations that maximize density, such as the Kepler packing. But crystal growth must start from clusters of just a few
atoms, where the configurations that minimize energy are not the same as those in the bulk. Some high-contact-number clusters exhibit motifs seen in the Kepler packing, but other clusters are
incompatible with space-filling structures. For example, the n=7 pentagonal dipyramid is not a pattern that can tile three-dimensional space.
Arkus is particularly interested in the self-assembly of nanostructures, both natural ones (such as viral capsids) and engineered materials. Guangnan Meng of Harvard, working with Arkus, Brenner and
Manoharan, has developed an experimental system that offers another way to explore small clusters of sticky spheres. The spheres are polystyrene beads one micrometer in diameter, suspended in water
along with a large population of much smaller plastic nanoparticles. When two spheres come into contact, the nanoparticles are excluded from the space between them; this phenomenon creates a
short-range attractive force between the spheres. Hence the system is a good model of idealized sticky spheres.
In Meng’s experiments the microspheres were spread over glass plates with thousands of cylindrical microwells, where they formed clusters with an average of about 10 spheres per well. The wells were
scanned with a microscope to tabulate the relative abundance of various configurations. If potential energy were the sole criterion, then clusters with more contacts would be more common, but entropy
also enters into this calculus: Structures that can be formed in many different ways are more probable. For the most part, results were broadly in accord with theoretical expectations. Entropy
favored clusters with lower symmetry, and also enhanced the representation of nonrigid structures. But because extra contacts lower the potential energy, structures with more than 3n–6 bonds were
also overrepresented.
Apart from these physical applications of sticky spheres, the contact-counting model also evokes a celebrated open problem in pure mathematics. The question was raised by Paul Erdös in 1946: Given n
points in d-dimensional space, how many pairs of points can be separated by the same distance? By scaling all distances appropriately, the repeated distance can always be set equal to 1, and so the
problem is sometimes called the unit-distance problem. In three dimensions, the maximum-contact problem for unit spheres is equivalent to the Erdös unit-distance problem with the additional
constraint that no distance is allowed to be less than 1. Thus the recent results on sticky spheres solve this restricted version of the problem for all n≤11.
What lies beyond n=11? Arkus suggests that the main roadblock to enumerating maximum-contact clusters for higher n is not the geometric problem of solving for coordinates and distances but the
combinatorial one of generating all appropriate adjacency matrices. Because so few of the matrices correspond to valid packings, the process becomes hideously wasteful. Arkus suggests a possible
alternative approach, although it has not yet been successfully implemented. Through n=10 she has shown that every cluster with 3n–6 contacts can be converted into any other 3n–6 cluster by some
chain of simple transformations, in which a single bond is broken and another bond is formed. She conjectures that this property holds true for all n. If it does, the maximum-contact problem might be
solved by generating any one structure with 3n–6 contacts and then systematically traversing the tree of all single-bond-exchange transformations.
At some large enough value of n, the diversity of these curious geometric structures will necessarily begin to diminish, as all larger clusters come to look more and more like pieces of the Kepler
packing. But we’re not there yet, and there may still be oddities to discover.
• Arkus, N., V. N. Manoharan and M. P. Brenner. 2009. Minimal energy clusters of hard spheres with short range attractions. Physical Review Letters 103:118303.
• Arkus, N., V. N. Manoharan and M. P. Brenner. 2011. Deriving finite sphere packings. SIAM Journal on Discrete Mathematics 25(4):1860–1901.
• Aste, T., and D. Weaire. 2008. The Pursuit of Perfect Packing, 2nd ed. New York: Taylor & Francis.
• Biedl, T. E., et al. 2001. Locked and unlocked polygonal chains in three dimensions. Discrete and Computational Geometry 26:269–281.
• Conway, J. H., and N. J. A. Sloane. 1999. Sphere Packings, Lattices, and Groups, 3rd ed. New York: Springer.
• Erdös, P. 1946. On sets of distances of n points. American Mathematical Monthly 53:248–250.
• Hales, T. C. 2005. A proof of the Kepler conjecture. Annals of Mathematics 162:1065–1185.
• Hoare, M. R., and J. McInnes. 1976. Statistical mechanics and morphology of very small atomic clusters. Faraday Discussions of the Chemical Society 61:12–24.
• Hoy, R. S., J. Harwayne-Gidansky and C. S. O’Hern. 2012. Structure of finite sphere packings via exact enumeration: Implications for colloidal crystal nucleation. Physical Review E 85:051403.
• Hoy, R. S., and C. S. O’Hern. 2010. Minimal energy packings and collapse of sticky tangent hard-sphere polymers. Physical Review Letters 105:068001.
• Kepler, J. 2010. The Six-Cornered Snowflake: A New Year’s Gift. Philadelphia: Paul Dry Books.
• Meng, G., N. Arkus, M. P. Brenner and V. N. Manoharan. 2010. The free-energy landscape of clusters of attractive hard spheres. Science 327:560–563.
• Schütte, K., and B. L. van der Waerden. 1953. Das Problem der dreizehn Kugeln. Mathematische Annalen 125:325–334.
• Sloane, N. J. A., R. H. Hardin, T. D. S. Duff and J. H. Conway. 1995. Minimal-energy clusters of hard spheres. Discrete and Computational Geometry 14:237–259. | {"url":"http://www.americanscientist.org/issues/id.15927,y.0,no.,content.true,page.4,css.print/issue.aspx","timestamp":"2014-04-16T08:10:12Z","content_type":null,"content_length":"161979","record_id":"<urn:uuid:fc83ab06-39bf-459a-80b9-70e6559dca1e>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00507-ip-10-147-4-33.ec2.internal.warc.gz"} |
Submitted to:
European Journal of Soil Science
Publication Type:
Peer Reviewed Journal
Publication Acceptance Date:
October 24, 2008
Publication Date:
February 16, 2009
Repository URL: http://hdl.handle.net/10113/29599
Citation:
La Scala, N., Lopes, A., Spokas, K.A., Archer, D.W., Reicosky, D.C. 2009. Short-Term Temporal Changes of Bare Soil CO2 Fluxes Described by First-Order Decay Models. European Journal of Soil Science.
Interpretive Summary:
To gain further insight into the mechanisms of tillage induced carbon dioxide (CO2) losses, we investigated the application of two different mathematical models to simulate the emission of CO2
following tillage. The models were based on the assumption that CO2 emission after tillage is a function of the non-tilled emission plus a correction due to the tillage disturbance. Our hypothesis is
that an additional amount of labile carbon (C) is made available to the soil organisms by tillage, exposing aggregate protected C, and thereby making it accessible to microorganisms. The two models
were both first-order decay models, but they differed in their assumptions: the first model assumed different rates of organic matter decay in the tilled and non-tilled plots, while the second assumed equal rates. Both models performed well. However, the model based on the assumption of equal decay rates fit the observed field data better than the model
with unequal decay rates. The advantage to this modeling is that the amount of CO2 lost can be predicted by utilizing the no-till flux as a surrogate for the tillage emissions. With further
experiments it is anticipated that the effect of various tillage implements will be able to be quantified thereby improving the ability to predict CO2 tillage emission losses as a consequence of
tillage. This information will assist scientists and engineers in developing improved tillage methods to minimize the gaseous loss and to improve soil carbon management and assist farmers to develop
new management techniques for enhancing soil carbon. This research will be of direct benefit to the farmers to enable them to maintain crop production with minimal impact to the environment.
Technical Abstract: To further understand the impact of tillage on carbon dioxide (CO2) emissions, we compare the performance of two conceptual models that describe the CO2 emission after tillage as
a function of the non-tilled emission plus a correction due to the tillage disturbance. Our hypothesis is that an additional amount of labile carbon (C) is made available to the soil organisms by
tillage, exposing aggregate-protected C and thereby making it accessible to microorganisms. The models assume that C in the readily decomposable organic matter follows first-order reaction kinetics, dCsoil/dt = -k*Csoil(t), and that soil C-CO2 emission is proportional to the C decay rate in soil, where Csoil(t) is the available labile soil C (g m-2) at any time t. Emissions are addressed in terms of soil C available to decomposition in the tilled and non-tilled plots. Two possible relationships are derived between the non-tilled (Fnt) and tilled (Ft) fluxes: Ft = Fnt + a1 * exp(-a2*t) (model 1) and Ft = a3 * Fnt * exp(-a4*t) (model 2), where t is time after tillage. The difference between the two models comes from an assumption about the k factor of labile C in the tilled plot and its similarity to the k factor of labile C in the non-tilled plot: in model 1 the k factors are unequal, and in model 2 they are equal. Predicted and observed CO2 fluxes
showed good agreement based on the determination coefficient (R2), index of agreement and model efficiency, with R2 as high as 0.97. Comparisons also reveal that model 2, the model in which all C pools are assigned the same k factor, produces a better statistical fit than the other model. The four parameters included in the models are related to the decay constant (k factor) of the tilled and non-tilled
plots and also to the amount of labile carbon added to the readily decomposable soil organic matter due to tillage. The advantage to this modeling approach is that temporal variability of
tillage-induced emissions can be described by analytical functions that include the non-tilled emission plus an exponential term modulated by tillage and environmentally dependent parameters. | {"url":"http://www.ars.usda.gov/research/publications/publications.htm?SEQ_NO_115=216249&pf=1","timestamp":"2014-04-21T16:23:09Z","content_type":null,"content_length":"23757","record_id":"<urn:uuid:caae32aa-c42b-4fc8-a895-0306d2059b2f>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00067-ip-10-147-4-33.ec2.internal.warc.gz"} |
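As a rough illustration of model 2's structure only, the sketch below recovers its two parameters from synthetic flux values by ordinary least squares in log space. All numbers are made up; none of the paper's actual estimates are reproduced here:

```python
import math

# Model 2: Ft = a3 * Fnt * exp(-a4*t). With the non-tilled flux
# Fnt held constant, the model is linear in log space,
# ln(Ft/Fnt) = ln(a3) - a4*t, so both parameters follow from OLS.
Fnt = 2.0                      # assumed constant non-tilled flux
a3_true, a4_true = 3.0, 0.25   # assumed "true" parameters
days = list(range(10))
Ft = [a3_true * Fnt * math.exp(-a4_true * t) for t in days]

# Ordinary least squares of y = ln(Ft/Fnt) against t.
y = [math.log(f / Fnt) for f in Ft]
n = len(days)
tbar = sum(days) / n
ybar = sum(y) / n
slope = (sum((t - tbar) * (yi - ybar) for t, yi in zip(days, y))
         / sum((t - tbar) ** 2 for t in days))
a4_hat = -slope                          # estimated decay parameter
a3_hat = math.exp(ybar - slope * tbar)   # estimated amplitude
print(round(a3_hat, 6), round(a4_hat, 6))   # 3.0 0.25
```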
My On-Job Training Analytics
May 22, 2012
By Al-Ahmadgaid Asaad
I have been working at the Provincial Statistics Office of Tawi-Tawi (Philippines) as part of my OJT (On-the-Job Training). One of the requirements of the training is at least 80 hours of service, so I decided to work from April 19 to May 18, 2012, making sure to surpass the required hours.
In that office there is a daily record of our attendance, with Sign In and Sign Out columns where we put the time we arrive and the time we leave. The first time I noticed this, I got excited and became very particular about the times I entered, because I knew that at the end of my training I would collect this record and do some analysis.
Here's the first plot below, which shows the Arrival and Dismissal time.
In the plot, notice that there are two groups of arrival and dismissal points. The first group of arrival points falls in the early morning, which reflects the office's schedule of services: it opens at 8:00 am and closes at 12:00 pm. The second group is in the afternoon, from 1:00 pm to 5:00 pm. In between, from 12:00 pm to 1:00 pm, I was home taking my lunch.
Now, the plot clearly shows that I was late on most days, as seen in the arrival points dotted between 12 and 14 (12:00 pm and 2:00 pm). There were even times I arrived at 2:00 pm, which is very late; this happened on May 10. Similarly, I was late eight times in the morning, five of them in the last week of my training.
The next plot below shows the number of hours I've spent in the morning (8:00 am - 12:00 pm).
The trend of the blue lines is not consistent. This is expected, since I was not able to maintain my arrival time; as noted earlier, I was late on most days. By the way, there are no
services during weekends, but notice that on April 28, a Saturday, I spent 4.2 hours. That was a general cleaning day in the office, so we trainees were ordered to report on
that date to help with the cleaning; in exchange, April 30 (a Monday) was a nonworking day. That didn't apply to me, though, since I was given a special task by my boss and was told to
report on Monday morning, which is why I spent 3.8 hours that day. On average, I spent about 4 hours in the morning; the black horizontal line in the plot represents this.
Here's the plot of the hours I spent in the afternoons of my training, that is, from 1:00 pm to 5:00 pm. The blank in the plot is a missing value, because I only reported in the morning
of April 30.
On average, I spent about 3.7 hours in the afternoon, as shown in the plot. The longest afternoon recorded in my entire training is 5 hours, because of
the whole-day general cleaning on April 28. The lowest number of hours was on May 10, because I arrived at 2:00 pm that day.
Overall, the number of hours I spent each day is shown below. On average, I spent about 7.4 hours a day.
There is a big decline on April 30, at only 3.8 hours, because I only reported in the morning. Finally, the largest number of hours recorded in my training was on April
28, since it was a whole-day cleaning.
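The R code behind these figures is linked below; as a rough illustration of the bookkeeping (a Python sketch of mine with made-up times, not the author's code), each session's hours are just the difference between the Sign In and Sign Out entries:

```python
from datetime import datetime

def hours_worked(sign_in, sign_out, fmt="%I:%M %p"):
    """Hours between a Sign In and a Sign Out entry on the same day."""
    delta = datetime.strptime(sign_out, fmt) - datetime.strptime(sign_in, fmt)
    return delta.total_seconds() / 3600.0

# One hypothetical day: a morning and an afternoon session
sessions = [("8:12 AM", "12:00 PM"), ("1:05 PM", "5:00 PM")]
total = sum(hours_worked(a, b) for a, b in sessions)
print(round(total, 2))
```

Summing these per-day totals and averaging gives the horizontal reference lines in the plots above.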
R Codes
To leave a comment for the author, please follow the link and comment on his blog:
Alstat R Blog
| {"url":"http://www.r-bloggers.com/my-on-job-training-analytics/","timestamp":"2014-04-21T09:55:39Z","content_type":null,"content_length":"39063","record_id":"<urn:uuid:860b99ee-9d76-4c73-b3fe-ed331991632a>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00663-ip-10-147-4-33.ec2.internal.warc.gz"} |
Rancho Cordova Precalculus Tutors
I recently graduated from CSU Sacramento with a B.A. in Economics. As a student I worked as a tutor and mentor for students in my department. My skills are in math and English, as well as history
and economics.
27 Subjects: including precalculus, English, reading, writing
...I have been working as a Transportation Engineer since my graduation. I am very passionate about all areas of Mathematics. I am a Civil Engineer by profession. I used to be a Mathematics
Tutor at Butte Community College, Chico, CA, and I tutored all levels of Math all the way up to Calculus and Differential Equations.
18 Subjects: including precalculus, chemistry, physics, geometry
...I am a recent college student, so I understand the need to have information explained in terms that I comprehend and can relate to. I also want to give students the tools to succeed on their
own. After all, I will not be by their side when they have tests and assignments in class.
8 Subjects: including precalculus, calculus, geometry, algebra 1
...I excelled in math in high school, never getting below a B+. I can teach this subject with ease and help find what helps the student understand the material. My father often related it to
things I was doing in my high school, so that the math made more sense. It's helped me immensely, and I think it would also help my students.
17 Subjects: including precalculus, chemistry, English, statistics
...I offer tutoring to high school and college students from Algebra 1 through Calculus 2 as well as test prep help for several math tests including SAT and ACT. As your tutor, I am committed to
your success. It is my goal to not only help you succeed in your current math course, but also to help ...
10 Subjects: including precalculus, calculus, geometry, algebra 1 | {"url":"http://www.algebrahelp.com/Rancho_Cordova_precalculus_tutors.jsp","timestamp":"2014-04-19T01:51:07Z","content_type":null,"content_length":"25413","record_id":"<urn:uuid:e2fef704-5fe3-47fd-94f3-69e428351797>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00308-ip-10-147-4-33.ec2.internal.warc.gz"} |
Archives of the Caml mailing list > Message from Diego Olivier Fernandez Pons
[Caml-list] Extensible graphs
From: Diego Olivier Fernandez Pons <Diego.FERNANDEZ_PONS@e...>
Subject: Re: [Caml-list] Extensible graphs
> I'm writing code which I wish to sell in object form and I'd like it
> to contain a basic representation of a graph which can be extended.
> This basic graph might be something like:
> type leaf = A | B
> type node = Leaf of leaf | Group of node list
> People who use this code are likely to want to make a slightly more
> complicated graph which contains, say, an extra leaf type, an extra
> node type and more functions which act on the new type of graph,
> equivalent to this:
(Please forgive my approximate English and feel free to correct it
whenever needed. Moreover, if some elements seem unclear, do not
hesitate to ask for more explanations.)
The problem is the extensibility of a graph data structure distributed
in compiled form. My answer will be twofold:
- generic advice based on my Caml programming experience
- specific advice based on my graph data structure implementation
I have read a few of your web pages and you seem to be an "imperative
programmer" more used to languages such as C++ or Java rather than
functional ones like ML or Haskell.
In "Objective Caml" there is of course "Objective" which states
clearly the language has an object layer but there is still "Caml" and
its functional core. Relevant elements for data structure
implementation are:
- parametric polymorphism
- functors
- polymorphic variants
- private constructors
> Adding new functions which use the existing data types is easy, but
> I can't see any way to allow them to add new node types without
> requiring them to reimplement everything, or at least explicitly
> call the old routines from any new ones when they are used with the
> old data types.
What do you mean by "new node types"?
If what you need is to allow any type to be a node, then you should
try a polymorphic data structure:
'a graph (where 'a stands for the type of the node)
then you could have
int graph : a graph in which every node contains an integer type
(int * char) graph : a graph in which every node contains an integer
and a char data
(int graph) graph : a graph in which every node contains an
int graph data
MyType graph : your own type data in every node
You will find parametric graph data structures in Baire (see the Hump
in the data structure section) and you can easily build your own
(e.g. with a parametric map data structure).
If the "node type" requires specific accessors (i.e. if it is a
module) then you should try functorial graphs.
The user code should look like:
module MyNode = struct ... end
module MyGraph = Graph.Make (MyNode)
You will find an example of functorial graph library in OCamlGraph
(see the Hump, data structures section) even if in this case it is the
whole graph data structure which is abstracted from the (functorial)
graph algorithms.
You may also want to try "private constructors". It is a kind of
intermediate between the completely open types (e.g. int * int) and
"closed" functors. It is a rather new feature and I am not yet totally
comfortable with it, therefore I won't say much more.
> type leaf = A | B | C
> type node =
> | Leaf of leaf
> | FunkyGroup of node list
> | Group of node list
In the example you give, the "node" is not a node of the graph but
a node of the underlying tree that represents the graph: are you
really sure you need that?
The main problem here is the pattern-matching since the predefined
functions (like count_leaves) based on it do not work any more.
Possible work-arounds are:
i) pattern-matching simulation via functors
I tried that once for binary trees
type 'a tree = E | N of 'a tree * 'a * 'a tree
type 'a tree2 = E | N of 'a tree2 * 'a * 'a tree2 * int
I didn't want to rewrite all functions like insert, fold, etc. which
do not depend on the extra int information
let rec height = function
| E -> 0
| N (l, _, r) -> 1 + max (height l) (height r)
I defined a module TreePatternMatcher
type 'a t
val is_empty : 'a t -> bool
val left_tree : 'a t -> 'a t
val right_tree : 'a t -> 'a t
val value : 'a t -> 'a
val partition : 'a t -> 'a t * 'a * 'a t
Then I wrapped all functions in a functor using this interface:
let rec height = function tree ->
if is_empty tree then 0
else let (l, _, r) = partition tree in
1 + max (height l) (height r)
ii) polymorphic variants
In the previous case, the problem was "inside a constructor"
N (l, v, r) against N (l, v, r, _)
If you only need to add patterns, then polymorphic variants could be
what you are looking for. The Caml manual gives a few examples of the
use of variants.
> I've also tried using inheritance by deriving everything from an ABC
> "node". But this just replaces this problem with another problem. If
> the types of node are all derived from a "node" ABC then you can
> easily add new types but you can't easily add new (method) functions
> to all types.
The object layer of Caml is, in my opinion, rather subtle, and the
only case in which I have needed it is for adaptive programming (when
a data structure changes its representation silently).
Instead of writing
type tree =
| TreeRep1 ...
| TreeRep2 ...
let insert x = function
| TreeRep1 t -> TR1.add x t
| TreeRep2 t -> TR2.add x t
you use the object layer to downcast to a common subtype.
> Is factoring out as much code as possible the best I can do, or is
> there a better way to approach this problem ?
It would be easier if you gave us a more detailed example. Anyway, in
my opinion you should first try simple solutions (polymorphic data
structures).
Diego Olivier
To unsubscribe, mail caml-list-request@inria.fr Archives: http://caml.inria.fr
Bug reports: http://caml.inria.fr/bin/caml-bugs FAQ: http://caml.inria.fr/FAQ/
Beginner's list: http://groups.yahoo.com/group/ocaml_beginners | {"url":"http://caml.inria.fr/pub/ml-archives/caml-list/2004/03/64ee6fa186f51ff6b4ea529875591a84.en.html","timestamp":"2014-04-17T21:37:17Z","content_type":null,"content_length":"11399","record_id":"<urn:uuid:fa295e4a-b22d-4f48-848c-e82a1cec258c>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00070-ip-10-147-4-33.ec2.internal.warc.gz"} |
Spider Fly Distance
A spider, S, is in one corner of a cuboid room, with dimensions a by b by c, and a fly, F, is in the opposite corner.
Find the shortest distance from S to F.
There are three straight line routes from S to F.
Let the distances from S to F[1], F[2], and F[3], be d[1], d[2], and d[3] respectively.
Using the Pythagorean Theorem we get:
d[1]^2 = (a+b)^2 + c^2 = a^2 + b^2 + c^2 + 2ab
d[2]^2 = (a+c)^2 + b^2 = a^2 + b^2 + c^2 + 2ac
d[3]^2 = (b+c)^2 + a^2 = a^2 + b^2 + c^2 + 2bc
Without loss of generality, let us assume that a ≥ b ≥ c.
As b ≥ c, ab ≥ ac, and it follows that d[1] ≥ d[2].
Similarly, as a ≥ c, ab ≥ bc, and d[1] ≥ d[3].
And finally, as a ≥ b, ac ≥ bc, giving d[2] ≥ d[3].
Hence, d[1] ≥ d[2] ≥ d[3] and, of the three routes, the shortest distance would be from S to F[3]; that is, the journey from S to the longest edge.
What is the smallest cuboid for which the shortest route is integer?
What about the smallest cuboid for which all three routes are integer?
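For the first question, a brute-force search is enough. The sketch below (my code, not part of the original problem) scans integer cuboids with a ≥ b ≥ c and reports the first whose shortest route, where d[3]^2 = a^2 + (b+c)^2 from the derivation above, is a perfect square:

```python
import math

def first_integer_route(limit=50):
    """First cuboid a >= b >= c whose shortest route is an integer."""
    for a in range(1, limit + 1):
        for b in range(1, a + 1):
            for c in range(1, b + 1):
                d2 = a * a + (b + c) ** 2    # d[3]^2 from the derivation
                d = math.isqrt(d2)
                if d * d == d2:
                    return (a, b, c), d
    return None

print(first_integer_route())  # -> ((3, 2, 2), 5)
```

Extending the same loop to test all three route lengths would address the second question, though the search range may need to be much larger.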
Problem ID: 201 (10 Jan 2005) Difficulty: 3 Star | {"url":"http://mathschallenge.net/full/spider_fly_distance","timestamp":"2014-04-16T05:00:34Z","content_type":null,"content_length":"7962","record_id":"<urn:uuid:5021f6fb-96c5-4dea-88da-d732a91f40b0>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00426-ip-10-147-4-33.ec2.internal.warc.gz"} |
A202282 - OEIS
A202282 Initial prime in prime decuplets (p+0,2,6,8,12,18,20,26,30,32) preceding the maximal gaps in A202281. 1
11, 33081664151, 83122625471, 294920291201, 730121110331, 1291458592421, 4700094892301, 6218504101541, 7908189600581, 10527733922591, 21939572224301, 23960929422161, 30491978649941, 46950720918371,
84254447788781, 118565337622001, 124788318636251, 235474768767851
OFFSET 1,1
COMMENTS Prime decuplets (p+0,2,6,8,12,18,20,26,30,32) are one of the two types of densest permissible constellations of 10 primes. Maximal gaps between decuplets of this type are listed in
A202281; see more comments there.
REFERENCES Hardy, G. H. and Littlewood, J. E. "Some Problems of 'Partitio Numerorum.' III. On the Expression of a Number as a Sum of Primes." Acta Math. 44, 1-70, 1923.
LINKS Table of n, a(n) for n=1..18.
T. Forbes, Prime k-tuplets
Alexei Kourbatov, Maximal gaps between prime k-tuples
Eric W. Weisstein, k-Tuple Conjecture
EXAMPLE The first four gaps (after the decuplets starting at p=11, 33081664151, 83122625471, 294920291201) form an increasing sequence, with the size of each gap setting a new record. Therefore
these values of p are in the sequence, as a(1), a(2), a(3), a(4). The next gap is not a record, so the respective initial prime is not in the sequence.
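To see the pattern concretely, a(1) = 11 can be checked with a few lines of Python (an illustrative sketch, not part of the OEIS entry):

```python
def is_prime(n):
    """Trial-division primality test; fine for small n."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

OFFSETS = (0, 2, 6, 8, 12, 18, 20, 26, 30, 32)

def is_decuplet_start(p):
    """True if p begins a prime decuplet of this type."""
    return all(is_prime(p + d) for d in OFFSETS)

print(is_decuplet_start(11))  # True: 11,13,17,19,23,29,31,37,41,43 are all prime
```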
CROSSREFS Cf. A027569 (prime decuplets p+0,2,6,8,12,18,20,26,30,32), A202281
Sequence in context: A022545 A086503 A027569 * A131680 A213647 A072218
Adjacent sequences: A202279 A202280 A202281 * A202283 A202284 A202285
KEYWORD nonn
AUTHOR Alexei Kourbatov, Dec 15 2011
STATUS approved | {"url":"http://oeis.org/A202282","timestamp":"2014-04-20T17:50:31Z","content_type":null,"content_length":"15928","record_id":"<urn:uuid:20fefe00-b922-4f5e-8e3d-5d3213bd8d4c>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00153-ip-10-147-4-33.ec2.internal.warc.gz"} |
Experimental study of Rayleigh instability in metallic nanowires using resistance fluctuations measurements from 77K to 375K
Bid, Aveek and Bora, Achyut and Raychaudhuri, Arup K (2005) Experimental study of Rayleigh instability in metallic nanowires using resistance fluctuations measurements from 77K to 375K. In: SPIE:
Fluctuations and Noise in Materials II, 24 May, Austin, TX, USA, Vol.5843, 147 -154.
Nanowires with high aspect ratio can become unstable due to Rayleigh-Plateau instability. The instability sets in below a certain minimum diameter when the force due to surface tension exceeds the
limit that can lead to plastic flow as determined by the yield stress of the material of the wire. This minimum diameter is given by $d_m \approx 2\sigma_S/\sigma_Y$, where $\sigma_S$ is the surface
tension and $\sigma_Y$ the yield stress. For Ag and Cu we estimate that $d_m \approx$ 15nm. The Rayleigh instability (a classical mechanism) is severely modified by electronic shell effect
contributions. It has been predicted recently that quantum-size effects arising from the electron confinement within the cross section of the wire can become an important factor as the wire is scaled
down to atomic dimensions; in fact, the Rayleigh instability could be completely suppressed for certain values of $k_F r_O$. Even for the stable wires, there are pockets of temperature where the wires
are unstable. Low-frequency resistance fluctuation (noise) measurement is a very sensitive probe of such instabilities, which often may not be seen through other measurements. We have studied the
low-frequency resistance fluctuations in the temperature range 77K to 400K in Ag and Cu nanowires of average diameter $\approx$ 15nm to 200nm. We identify a threshold temperature $T^*$ for the nanowires,
below which the power spectral density $S_V(f) \sim 1/f$. As the temperature is raised beyond $T^*$ there is the onset of a new contribution to the power spectra. We link this observation to the onset of
Rayleigh instability expected in such long nanowires. $T^* \sim 220$K for the 15nm Ag wire and $T^* \sim 260$K for the 15nm Cu wire. We compare the results with a simple estimation of the fluctuation based on
Rayleigh instability and find good agreement.
| {"url":"http://eprints.iisc.ernet.in/9819/","timestamp":"2014-04-18T04:13:16Z","content_type":null,"content_length":"22233","record_id":"<urn:uuid:2b951f50-ca25-4823-9c17-e620b65e499a>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00190-ip-10-147-4-33.ec2.internal.warc.gz"} |
Tinley Park Algebra 2 Tutor
Find a Tinley Park Algebra 2 Tutor
...Math is my specialty - including calculus, geometry, precalculus, and statistics! I have a Bachelor's of Science from California Institute of Technology (CIT), an incredibly challenging
university. My teaching style is one of: * LISTENING to see what your student is doing; to learn how he or s...
21 Subjects: including algebra 2, chemistry, calculus, statistics
...Mathematics, Physics and Chemistry are strong areas of expertise. I have taught high school students back in India. I can also teach mechanical engineering and basic electrical engineering
subjects as well.
16 Subjects: including algebra 2, chemistry, physics, calculus
...Mathematics seems to be an area where many young children struggle. By working closely with them, I can see where that area is and work with the student in order to better understand skills on
how to be able to complete the mathematical concepts. While working previously as a tutor, I worked with students in grades K-12 in the areas of Mathematics, Language Arts, and Reading.
30 Subjects: including algebra 2, Spanish, English, reading
...By my senior year, I was named captain of the Women's varsity team and the number 1 singles player. As the oldest member of the team, other girls looked to me as their leader and my coaches
expected me to lead practices and team warm-ups. Although, I no longer play competitively, I am always looking for opportunities to practice, keep up my skills, and play a friendly match.
13 Subjects: including algebra 2, chemistry, calculus, geometry
...I use both analytical as well as graphical methods or a combination of the two as needed to cater to each student. Having both an Engineering and Architecture background, I am able to explain
difficult concepts to either a left or right-brained student, verbally or with visual representations. ...
34 Subjects: including algebra 2, reading, writing, statistics | {"url":"http://www.purplemath.com/Tinley_Park_Algebra_2_tutors.php","timestamp":"2014-04-16T22:41:03Z","content_type":null,"content_length":"24188","record_id":"<urn:uuid:f654b070-d3f6-4e56-8507-9836ffb395f9>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00504-ip-10-147-4-33.ec2.internal.warc.gz"} |
What's the rarest figure eight in the universe?
As many of you have noticed, the planet Earth has not been hurled into space or fallen into the sun. It just keeps orbiting in the solar system. It seems pretty simple, but this fact has baffled
minds like Newton's. That's because when you have more than two bodies in an orbiting system, they are inherently unstable. Unless they're orbiting in a very specific, very cool way.
One of the oldest problems ever calculated, once the laws of motion and the theory of gravitation were set down, was how different objects (or bodies) orbited each other. The issue arose naturally
enough. People began studying the movement of the planets in the solar system — and that made them question the physical laws of the universe. The fact that those laws also applied to objects on the
surface of the Earth was an epiphany. It seemed as though, with a few simple calculations, we could find out how literally everything moved, or would move. Mathematicians and physicists started out
simple, with two bodies in a stable orbit. No problem. Sometimes the bodies drifted apart, and sometimes they collapsed, and sometimes they stayed in orbit — at least for an unimaginably long length
of time. The different outcomes of a two-body system were easy to calculate.
The theorists took a single step up, adding another body, and got stuck. And, to a certain extent, they have remained stuck. Three bodies interacting with each other in space change velocity,
position, and proximity constantly. As these values change, they affect each other in ways that alter the values still further. It was impossible to predict in Newton's day, and despite the advent of
computers, which can calculate the different factors at a speed that no human could, it remains difficult to predict what a three-body system will do. They're unstable.
And so the hunt for answers turned to different ways that the three-body system could be made stable. A few work-arounds have been found, but in the 1980s and 1990s, an entirely new three-body
solution was conceived. Instead of a traditional orbit, like the kind the Earth takes around the sun, the three bodies would move in a perfect figure-eight pattern. They'd all trace the same path
along a perfect plane, but always be at different points. The objects would have to be the right mass and speed, and the orbit just the right size, but it could happen.
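The figure-eight orbit can actually be reproduced numerically. Below is a rough leapfrog sketch (my code, not from the article) using the commonly cited Chenciner-Montgomery initial conditions, with G = 1 and three unit masses:

```python
# Planar equal-mass three-body figure-eight, G = m = 1 (arbitrary units).
# Kick-drift-kick leapfrog; a sketch, not production n-body code.
p = [(-0.97000436, 0.24308753), (0.97000436, -0.24308753), (0.0, 0.0)]
v3 = (-0.93240737, -0.86473146)                  # velocity of the middle body
v = [(-v3[0] / 2, -v3[1] / 2), (-v3[0] / 2, -v3[1] / 2), v3]

def accel(p):
    """Pairwise inverse-square gravitational accelerations."""
    out = []
    for i in range(3):
        ax = ay = 0.0
        for j in range(3):
            if i == j:
                continue
            dx, dy = p[j][0] - p[i][0], p[j][1] - p[i][1]
            r3 = (dx * dx + dy * dy) ** 1.5
            ax += dx / r3
            ay += dy / r3
        out.append((ax, ay))
    return out

dt = 0.001
for _ in range(6320):                            # roughly one period (~6.326)
    a = accel(p)
    v = [(vx + 0.5 * dt * ax, vy + 0.5 * dt * ay)
         for (vx, vy), (ax, ay) in zip(v, a)]
    p = [(x + dt * vx, y + dt * vy) for (x, y), (vx, vy) in zip(p, v)]
    a = accel(p)
    v = [(vx + 0.5 * dt * ax, vy + 0.5 * dt * ay)
         for (vx, vy), (ax, ay) in zip(v, a)]

# All three bodies stay on the same bounded figure-eight track.
print(max(abs(c) for pt in p for c in pt))
```

Plotting the positions over one period traces the figure-eight described above.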
The more mathematicians and physicists look at it, the more they agree it can exist. But does it? In a universe this size, it certainly has a chance, but few are hopeful of finding it. Neil deGrasse
Tyson says, in his book Death By Black Hole, that it's doubtful that any such system exists in the Milky Way galaxy, and probably only a handful of them exist in the universe. It would be amazing to
see one, though. Who knows? Maybe someday we can even construct one.
Image: Samer Abdallah
Via UIUC, Wolfram Research, AMS, and Science Direct.
| {"url":"http://io9.com/5974319/whats-the-rarest-figure-eight-in-the-universe","timestamp":"2014-04-25T02:03:22Z","content_type":null,"content_length":"90423","record_id":"<urn:uuid:7d7b4c12-2a08-43b8-ae18-5dfaa3ad322a>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00451-ip-10-147-4-33.ec2.internal.warc.gz"} |
Bloom Filters
April 15th, 2007
A Bloom filter is a data structure for managing sets of values. Bloom filters provide O(1) lookups and insertion and, perhaps most importantly, provide an extremely compact representation of the set
of values being stored. The trade-off for this compact representation is that the lookup operation can have false positives. In other words, lookup(x) may return true even when x isn’t in the set.
You might be wondering why we’d be willing or able to tolerate false positives in set lookups. There are actually lots of scenarios where this makes sense. For instance, the original application of
Bloom filters–spell checking on limited-memory machines–remains a fine motivating example.
In spell checking, a Bloom filter is used to store a dictionary of correctly-spelled words. If lookup(word) returns false, the spell checker flags word as a misspelling. False positives in this
application, e.g., lookup('notaword') == True, result in some misspellings going unnoticed. That might seem to be a bad thing, but it's all about balancing trade-offs. Bloom filters allow the spell
checking application to load a comprehensive dictionary into a small amount of memory and makes spell checking fast enough that users can run the checker often. The small memory footprint can be
achieved with a false positive rate that results in approximately 1 in 100 misspellings going undetected. Other trade-offs, like using a smaller dictionary or running the spell checker less
frequently, might result in even higher error rates.
Bloom filters were invented by Burton Bloom in 1970 and described in his seminal paper Space/time Trade-offs in Hash Coding With Allowable Errors. Even though they’ve been around for 37 years now,
are straighforward to implement, and have many, many practical uses, you typically don’t find Bloom filters described in data structures textbooks or taught in University undergraduate data
structures courses. That’s something that should probably change IMO.
So, how do Bloom filters work? The concept is relatively simple, assuming that you’re already familiar with hashing and some simple probability. Recall that in traditional hashing you have a function
h(val) that maps val onto an index in a table. Ideally, you want a function h() such that
1. h(val1) == h(val2) when val1 == val2
2. h(val1) != h(val2) when val1 != val2
In practice it’s vary hard to ensure condition 2 given fixed table sizes, so you have to relax the condition to something like “h(val1) is unlikely to equal h(val2) when val1 != val2″. This means
that you occasionally get collisions where distinct values get the same hash value. There’s a vast literature out there about designing hash functions and sizing tables to minimize collisions, and
describing how to handle collisions when they do occur. The salient point for this discussion is that these ‘traditional’ hashing techniques require that the complete value being hashed, or a unique
proxy of that value, be stored in the hash table so that collisions can actually be detected.
This is where Bloom filters differ and the reason why they have false positives on lookup. When adding a value to a Bloom filter you compute k hashes which gives you k indices into an m-bit bit
vector. The k bit vector entries for those indices are set to 1. To look up a value, you also compute k hashes to get k indices. If all k bit vector entries are 1 then you return true indicating that
the value was found, otherwise, you return false. Here’s how you would do this in Python:
import BitVector

class BloomFilter:
    def __init__(self, m, k):
        """Instantiate an m-bit Bloom filter using k hash indices
        per value."""
        self.n = 0
        self.m = m
        self.k = k
        # Tracks total bits of inserted values, for the compression-ratio stat
        self.bits_in_inserted_values = 0
        self.bv = BitVector.BitVector(size = self.m)

    def Insert(self, s):
        for i in self._HashIndices(s):
            self.bv[i] = 1
        self.n += 1
        self.bits_in_inserted_values += 8 * len(s)  # 8 bits per character

    def Lookup(self, s):
        for i in self._HashIndices(s):
            if self.bv[i] != 1:
                return False
        return True
[Note: this snippet uses Avinash Kak's BitVector module.]
Both m and k are user configurable values in this code, and we haven't really said how to choose either. If we're going to store n values in a Bloom filter, the probability of a false positive on any
given lookup is given by pow(1 - exp(-kn/m), k). In other words, the false positive rate increases as n gets bigger and as m gets smaller; for fixed m and n there is an optimal choice of k, and a
reasonable starting point is k = 0.7(m/n). The derivation of these formulas is an exercise in simple probability. However, I'm going to leave it as an exercise given that this post is already running long.
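For readers who want the derivation anyway, here is a brief sketch (the standard Bloom filter analysis, assuming the k hash indices are independent and uniformly distributed over the m bits):

```latex
% Probability that one particular bit is still 0 after n insertions,
% each insertion setting k of the m bits:
\Pr[\text{bit is } 0] = \left(1 - \frac{1}{m}\right)^{kn} \approx e^{-kn/m}

% A false positive requires all k probed bits to be 1:
p \approx \left(1 - e^{-kn/m}\right)^{k}

% Treating k as continuous and minimizing \ln p over k gives
k_{\mathrm{opt}} = \frac{m}{n}\ln 2 \approx 0.693\,\frac{m}{n}
```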
Now, the next big question we have to answer in order to complete our implementation is how exactly to do the hashing. For a given population of n values, we want our k hash functions to distribute
bits uniformly over the m indices in the bit vector. One way to do this is to choose k mutually independent hash functions. Alternately, if you have a hash function that produces a large, uniformly
distributed set of bits for each value, you can chop the hash output into k buckets and use each bucket as a hash value.
When m is reasonably big (more than a few tens of thousands of bits) you can get away with just two independent hash functions, using a technique discovered by Kirsch and Mitzenmacher. You can
compute the k hash values as h_i(val) = (h1(val) + i * h2(val)) % m. Here’s our implementation.
    def _HashIndices(self, s):
        indices = []
        for i in xrange(1, self.k + 1):
            indices.append((hash(s) + i * hashpjw(s)) % self.m)
        return indices
Notice that for h1() we've used Python's built-in hash function. For h2() I've used a simple but very effective string hashing function due to Peter Weinberger. The implementation of that function follows:
def hashpjw(s):
    val = 0
    for c in s:
        val = (val << 4) + ord(c)
        tmp = val & 0xf0000000
        if tmp != 0:
            val = val ^ (tmp >> 24)
            val = val ^ tmp
    return val
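Before the benchmark, a quick sanity check of the two-hash scheme. This is a dependency-free Python 3 sketch of mine (not code from the post) that uses a single Python integer as the bit vector:

```python
def hashpjw(s):
    """Peter Weinberger's string hash, as in the post above."""
    val = 0
    for c in s:
        val = (val << 4) + ord(c)
        tmp = val & 0xf0000000
        if tmp != 0:
            val = val ^ (tmp >> 24)
            val = val ^ tmp
    return val

class TinyBloom:
    """Minimal Bloom filter; an int doubles as the bit vector."""
    def __init__(self, m, k):
        self.m, self.k, self.bits = m, k, 0

    def _indices(self, s):
        h1, h2 = hash(s), hashpjw(s)          # two base hashes
        return [(h1 + i * h2) % self.m for i in range(1, self.k + 1)]

    def insert(self, s):
        for i in self._indices(s):
            self.bits |= 1 << i

    def lookup(self, s):
        return all(self.bits >> i & 1 for i in self._indices(s))

bf = TinyBloom(10007, 3)
for w in ("cat", "dog", "bird"):
    bf.insert(w)
print(all(bf.lookup(w) for w in ("cat", "dog", "bird")))  # True: no false negatives
```

Inserted values are always found (Bloom filters never give false negatives); only absent values can occasionally report true.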
Now all that’s left to do is to measure our performance. I’ve written a little chunk of test code that loads approximately 90% of the words from GNU aspell’s English language dictionary into a Bloom
filter. The remaining 10% of the words are used as a holdback to check the false positive rate of the filter. Here’s the code.
def TestFilter():
    import random
    # holdback will record words not added to the Bloom filter
    holdback = set()
    # Instantiate an ~1Mbit bloom filter with k=8
    bf = BloomFilter(1090177, 8)
    # Open file with one English word per line
    f = open('data/words.dat')
    # Add each line to either holdback or Bloom filter
    for line in f:
        val = line.rstrip()
        if random.random() <= 0.10:
            # Add ~10% of values to holdback
            holdback.add(val)
        else:
            # Add ~90% of values to Bloom filter
            bf.Insert(val)
    # Print information about current state of Bloom filter
    bf.PrintStats()
    # Count false positives -- # holdback items in the Bloom filter
    num_false_positives = 0
    for val in holdback:
        if bf.Lookup(val):
            num_false_positives += 1
    # Compute false positive rate and print
    rate = 100.0 * float(num_false_positives) / float(len(holdback))
    print "Actual false positive rate = %.2f%% (%d of %d)" % (rate,
        num_false_positives, len(holdback))
There’s one more function that I haven’t shown that prints some statistics about the current state of the Bloom filter. That code follows.
    # (assumes "import math" at the top of the module)
    def PrintStats(self):
        k = float(self.k)
        m = float(self.m)
        n = float(self.n)
        p_fp = math.pow(1.0 - math.exp(-(k * n) / m), k) * 100.0
        compression_ratio = float(self.bits_in_inserted_values) / m
        print "Number of filter bits (m) : %d" % self.m
        print "Number of filter elements (n) : %d" % self.n
        print "Number of filter hashes (k) : %d" % self.k
        print "Predicted false positive rate = %.2f%%" % p_fp
        print "Compression ratio = %.2f" % compression_ratio
And finally, here’s the output of a single run of TestBloomFilter(). Notice that the Bloom filter is about 1Mbit, contains ~126k words and achieved a compression ratio of 7.97 (over just storing the
bits of the original ~126k words). The predicted false positive rate (1.81%) was very close to the measured rate (1.87%). If we wanted to reduce this false positive rate further, holding the number
of words in the dictionary constant, we could increase either m or k or both.
Number of filter bits (m) : 1090177
Number of filter elements (n) : 126733
Number of filter hashes (k) : 8
Predicted false positive rate = 1.81%
Compression ratio = 7.97
Actual false positive rate = 1.87% (265 of 14178)
Full code is available here. Don't forget to grab the BitVector module if you don't already have it.
Next time, I’ll talk about some contemporary uses for Bloom filters.
1. February 18th, 2008 at 00:12 | #1
I noticed that you calculate the hash indices on each for-loop iteration in your function _HashIndices(). If you expect larger quantities of words (like >500,000) to be inserted into a big Bloom filter
(600kB bit vector) you'll get an optimum number of 7 hash functions... calculating 2 hash values might then be a little faster than calculating 14 hash values for each word
| {"url":"http://www.coolsnap.net/kevin/?p=13","timestamp":"2014-04-19T19:34:04Z","content_type":null,"content_length":"24208","record_id":"<urn:uuid:01b30601-719d-4af9-adae-ad327b0eeb90>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00147-ip-10-147-4-33.ec2.internal.warc.gz"} |
Mission Viejo Calculus Tutors
...While my work consists of developing mathematical models and software for simulating chemical systems that have uses in solar cells and light-emitting diodes, my real passion lies in sharing my
knowledge through education. I have been teaching in some form since high school including (but not li...
6 Subjects: including calculus, chemistry, algebra 2, precalculus
...They are losing many little points on Algebra skills. They know the Calculus skills, but they have problems with Algebra. These small mistakes are turning their grades to C and under.
11 Subjects: including calculus, statistics, algebra 2, geometry
...I am able to help students grasp algebraic concepts by teaching them methods that I have learned along the way. As a young student, I did not always readily see the significance and relevance
of Algebra. However, over the course of my education, I have come to truly appreciate the subject by seeing its principles in real life.
9 Subjects: including calculus, physics, algebra 1, algebra 2
...Without a strong foundation, subsequent levels of math will become frustratingly difficult. Like the course title implies, Algebra 2 must be done after a firm understanding of Algebra 1. The
reason most students do not do well at Algebra 2 is because they do not have a firm foundation and so they can't connect the dots between Algebra 1 and 2.
36 Subjects: including calculus, Spanish, chemistry, English
...My knowledge is extensive, and I can teach you how to study and understand anatomy in a simplistic manner. I have been studying anatomy and physiology since I was a sophomore in high school and
have my bachelor's degree from UCLA in physiology, which is one of the most difficult and detailed maj...
14 Subjects: including calculus, chemistry, geometry, algebra 1 | {"url":"http://www.algebrahelp.com/Mission_Viejo_calculus_tutors.jsp","timestamp":"2014-04-18T20:49:53Z","content_type":null,"content_length":"25285","record_id":"<urn:uuid:2dd97a2b-66da-460e-b3fa-a7ed0aa37b10>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00445-ip-10-147-4-33.ec2.internal.warc.gz"} |
Volume 36, Issue 7, July 1995
In this paper there are constructed manifestly covariant relativistic coherent states on the entire complex plane which reproduce others previously introduced on a given SL(2,R) representation,
once a change of variables z ∈ C → z_D ∈ unit disk is performed. Also introduced are higher-order, relativistic creation and annihilation operators, â, â†, with canonical commutation relation
[â, â†] = 1 rather than the covariant one [ẑ, ẑ†] ≊ energy, and naturally associated with the SL(2,R) group. The canonical (relativistic) coherent states are then defined as eigenstates of â. Finally,
a canonical, minimal representation is constructed in configuration space by means of eigenstates of a canonical position operator.
The main purpose of this work is to provide a rigorous interpretation of kicked models, with the aid of nonstandard analysis. These models are represented by evolution equations involving Dirac's
delta functions of time, and the method used consists of approximating them by smooth functions on *R.
For a quantum mechanical system, modeled via unitary or antiunitary representations of a symmetry group G with Lie algebra g, the Hamiltonian H is of special interest. If H is an element of g or
of the universal enveloping algebra U(g), a generic time dependence for any element in U(g) is given through the Heisenberg picture. As an example we consider a system with Gal(N) as symmetry
group with H as one of the generators. For N≥3 one gets from ray representations the free Schrödinger equation. For N=1 a peculiarity occurs: the ‘‘free’’ equation has an interaction term, which
results from the construction and parametrization of unitary ray representations. For N=2 there is another special feature: there exist ray representations of the universal covering group of Gal(2)
which induce no ray representations of Gal(2). Furthermore, for these representations it is not possible to construct a Schrödinger equation.
On Z_n symmetric algebraic curves of any genus the Hilbert space of analytic free fields with integer spin is constructed. As an application, an operator formalism for the b–c systems is
developed. The physical states are expressed in terms of creation and annihilation operators as in the complex plane and the correlation functions are evaluated exploiting simple normal ordering
rules. The formalism is very suitable for performing explicit calculations on Riemann surfaces and, moreover, it gives some insight into the nature of two-dimensional field theories on a
manifold. It is proven, in fact, that the b–c systems on a Z_n symmetric algebraic curve are equivalent to a conformal field theory on the complex plane having as primary operators twist
fields and free ghosts. Some consequences of the interplay between topology and statistics are also discussed.
Using the generalized coherent states it is shown that the path integral formulas for SU(2) and SU(1,1) (in the discrete series) are WKB exact, if one starts from the trace of e^(−iTĤ), where
Ĥ is given by a linear combination of generators. In this case, the WKB approximation is achieved by taking a large ‘‘spin’’ limit: J,K→∞, under which it is found that each coefficient vanishes
except the leading term, which indeed gives the exact result. It is further pointed out that the discretized form of the path integral is indispensable; in other words, the continuum path integral
expression sometimes leads to a wrong result. Therefore great care must be taken when some geometrical action is adopted, even if it is as beautiful as the starting ingredient of the path
integral. Discussions on generalized coherent states are also presented both from geometrical and simple oscillator (Schwinger boson) points of view.
It will be shown that the topological Yang–Mills theory of Witten can be induced from the usual Yang–Mills theory of the second rank antisymmetric tensor field when the matrix derivative of
non‐commutative geometry proposed by Connes is incorporated in the superconnection framework. It is done by identifying consistently the antisymmetric tensor field B with the usual field strength
F by B=−F.
In 1948, Feynman showed Dyson how the Lorentz force law and homogeneous Maxwell equations could be derived from commutation relations among Euclidean coordinates and velocities, without reference
to an action or variational principle. When Dyson published the work in 1990, several authors noted that the derived equations have only Galilean symmetry and so are not actually the
Maxwell theory. In particular, Hojman and Shepley proved that the existence of commutation relations is a strong assumption, sufficient to determine the corresponding action, which for Feynman's
derivation is of Newtonian form. In a recent paper, Tanimura generalized Feynman's derivation to a Lorentz covariant form with scalar evolution parameters, and obtained an expression for the
Lorentz force which appears to be consistent with relativistic kinematics and relates the force to the Maxwell field in the usual manner. However, Tanimura's derivation does not lead to the usual
Maxwell theory either, because the force equation depends on a fifth (scalar) electromagnetic potential, and the invariant evolution parameter cannot be consistently identified with the proper
time of the particle motion. Moreover, the derivation cannot be made reparameterization invariant; the scalar potential causes violations of the mass-shell constraint which this invariance should
guarantee. Tanimura's derivation is examined in the framework of the proper time method in relativistic mechanics, and the technique of Hojman and Shepley is used to study the unconstrained
commutation relations. It is shown that Tanimura's result then corresponds to the five-dimensional electromagnetic theory previously derived from a Stueckelberg-type quantum theory in which one
gauges the invariant parameter in the proper time method. This theory provides the final step in Feynman's program of deriving the Maxwell theory from commutation relations; the Maxwell theory
emerges as the ‘‘correlation limit’’ of a more general gauge theory, in which it is properly contained.
Some general formulas are derived for the solutions of a BRST quantization on inner product spaces of finite dimensional bosonic gauge theories invariant under arbitrary Lie groups. A detailed
analysis is then performed of SL(2,R) invariant models and some possible geometries of the Lagrange multipliers are derived together with explicit results for a class of SL(2,R) models. Gauge
models invariant under a nonunimodular gauge group are also studied in some detail.
Using an approach based on the canonical formalism, the Yang–Mills theories on a cylinder are rigorously analyzed. In this way the moduli space A/G can be explicitly described, with A being the
space of connections and G the group of gauge transformations. In particular A/G_0, G_0 being the group of the pointed gauge transformations, is diffeomorphic to the structure group of the
theory G, whereas A/G is G modulo the group of inner automorphisms. It is also proven that A→G is a principal fiber bundle with structure group G_0.
Some quantum integrable systems related with the groups U(1,2) and Sp(1,2) are considered. The motion of particles in the potentials g_1/sinh^2 α + g_2/cosh^2 α and g_1'/sin^2 θ + g_2'/cos^2 θ
is related with the free motion in symmetric spaces of these groups. The integral representations for the Green functions of a free particle on these spaces are given.
In the present paper the problem of a relativistic Dirac electron is analyzed in the presence of a combination of a Coulomb field, a 1/r scalar potential, as well as a Dirac magnetic monopole and
an Aharonov–Bohm potential. Using the algebraic method of separation of variables, the Dirac equation expressed in the local rotating diagonal gauge is completely separated in spherical
coordinates, and exact solutions are obtained. The energy spectrum is computed and its dependence on the intensity of the Aharonov–Bohm and the magnetic monopole strengths is analyzed.
The existence of an internal angular momentum induces nutations and periodic deviations from a mean precession. Motions are classified into three cases. In Case I, the nutation is regular during
precessions so that the motion is a wobbling, the top behaves triaxially. This triaxiality may be involved in the triaxial deformations of nuclear shapes in nuclear physics. Case II is a limiting
case of Case I at an infinite period of nutation. In Case III, the body symmetry axis is over nutated to cross over and to oscillate around the invariable plane. It is an overnutated wobbling.
These three cases can be determined by whether the ratio of the internal angular momentum to the total angular momentum is less than or greater than a critical value.
The massless wave equation on a class of two-dimensional manifolds consisting of an arbitrary number of topological cylinders connected to one or more topological spheres is analyzed herein.
Such manifolds are endowed with a degenerate (nonglobally hyperbolic) metric. Attention is drawn to the topological constraints on solutions describing monochromatic modes on both compact and
noncompact manifolds. Energy and momentum currents are constructed and a new global sum rule discussed. The results offer a rigorous background for the formulation of a field theory of
topologically induced particle production.
The notion of center of mass is reviewed and three natural definitions for the relativistic regime are proposed. This construction can be explicitly calculated by means of an algorithm which is
described below.
Recent developments in optical imaging inspired the model of photon transport discussed below. (Infrared radiation is used to image relatively soft and homogeneous tissue.) The difficulty of
solving Maxwell's equations, or even linear transport equations, led to this ‘‘diffuse tomographic’’ model. A recursive scheme for solving the two-dimensional problem is sketched and the first
recursive step is detailed.
It is shown that any divergence‐free vector field invariant under a group of volume‐preserving transformations can be expressed locally in terms of two scalar potentials which depend on two
variables only. It is also shown that the corresponding field line equations can be written in Hamiltonian form with one of these potentials as the Hamiltonian.
An affine sl(n+1) algebraic construction of the basic constrained KP hierarchy is presented. This hierarchy is analyzed using two approaches, namely linear matrix eigenvalue problem on hermitian
symmetric space and constrained KP Lax formulation and it is shown that these approaches are equivalent. The model is recognized to be the generalized non‐linear Schrödinger (GNLS) hierarchy and
it is used as a building block for a new class of constrained KP hierarchies. These constrained KP hierarchies are connected via similarity‐Bäcklund transformations and interpolate between GNLS
and multi‐boson KP‐Toda hierarchies. Our construction uncovers the origin of the Toda lattice structure behind the latter hierarchy.
An inverse scattering problem for a second order matrix differential equation on the line related to the wave propagation in anisotropic media is studied herein. A reconstruction procedure is
given based on the Riemann–Hilbert problem of analytic factorization of matrix functions and a uniqueness theorem is proven.
In this article the existence of generalized solutions to the modified Korteweg–de Vries equation u_t − 6σu^2 u_x + u_xxx = 0 is studied. The solutions are found in certain algebras of new
generalized functions containing spaces of distributions.
Extension of the standard construction of the Kadomtsev–Petviashvili (KP) hierarchy by the use of the Riemann–Liouville integral is given. In consequence we obtain new classes of integer as well
as fractional graded KP hierarchies, which are further investigated. The fractional calculus leads to a new generalization of the w_∞ algebra.
Proof of the Feuerbach Theorem
Date: 03/14/2000 at 13:52:26
From: Stefa Ben Ari
Subject: Feuerbach theorem
Please submit the proof of the Feuerbach theorem (the nine-point
circle is tangent to the incircle and the excircles of a triangle).
Thank you in advance,
Date: 03/15/2000 at 13:12:57
From: Doctor Floor
Subject: Re: Feuerbach theorem
Hi Stefa,
Thanks for your question.
Before we start, let me say that we will make use of a formula on the
distance between the circumcenter O and incenter I of a triangle. When
we denote by r the radius of the incircle and by R the radius of a
circumcircle of a triangle, then we have:
IO^2 = R(R-2r)
A proof of this formula is given in the Dr. Math archives:
I will show you how the tangency of the incircle and the nine-point
circle can be proven with the use of complex numbers.
To do that, we consider a triangle with vertices A1, A2 and A3, which
we represent by complex numbers z1, z2 and z3, respectively. We can do
this in such a way that z1, z2 and z3 are positioned counterclockwise
on the unit circle with center 0, so R = |z1| = |z2| = |z3| = 1, and
the circumcenter O = 0.
For the measures of the angles of triangle A1A2A3 at A1, A2 and A3 we
will write a1, a2 and a3, respectively.
The centroid G of triangle A1A2A3 is G = (z1+z2+z3)/3. Now let us
consider the orthocenter H. From the fact that OG:GH = 1:2 we see that
H = z1+z2+z3. And since the nine-point center N is halfway between O
and H, we see that N = (z1+z2+z3)/2.
We can choose t1, t2 and t3 with
z1 = e^(t1i)
z2 = e^(t2i)
z3 = e^(t3i)
v1 = e^(i*t1/2)
v2 = e^(i*t2/2)
v3 = e^(i*t3/2)
in such a way that Q1 = v2v3 bisects the arc A2A3 including A1,
Q2 = v1v3 bisects the arc A1A3 including A2 and Q3 = v1v2 bisects arc
A1A2 including A3.
(Note that v1, v2 and v3 are numbers on the unit circle, and that
z1 = v1^2, z2 = v2^2 and z3 = v3^2).
The points P1 = -v2v3, P2 = -v1v3 and P3 = -v1v2 are the points
opposite to Q1, Q2 and Q3, and are the points where the internal angle
bisectors of triangle A1A2A3 meet the circumcircle/unit circle.
We can see that the incenter I of A1A2A3 is the orthocenter of P1P2P3
in the following way:
Let X be the intersection of A1P1 and P2P3. Note that A1P1 passes
through I. Angle A1P1P2 (denoted by <A1P1P2) equals
<A1A2P2 = a2/2
<P3P2P1 = <P3P2A2 + <A2P2P1
= <P3A3A2 + <A2A1P1
= a3/2 + a1/2
This means that
<P1XP2 = 180 degrees - <A1P1P2 - <P3P2P1
= 180 degrees - (a1+a2+a3)/2
= 90 degrees
So IP1 is an altitude in P1P2P3, and by symmetry so are IP2 and IP3,
which proves that I is the orthocenter of P1P2P3.
By this, in the same way that we found H = z1+z2+z3, we see
I = -(v1v2+v1v3+v2v3)
= -v1v2v3(1/v1 + 1/v2 + 1/v3)
= -v1v2v3(v1*+v2*+v3*)
= -v1v2v3(v1+v2+v3)*
where I write z* for the complex conjugate of z. So, for points z on
the unit circle, we know that zz* = 1.
We find that
IO = |O-I|
= |-I|
= |v1v2v3(v1+v2+v3)*|
= |v1||v2||v3||(v1+v2+v3)*|
= 1*1*1*|v1+v2+v3|
= |v1+v2+v3|
IN = |N-I|
= |(z1+z2+z3)/2 - -(v1v2+v1v3+v2v3)|
= |(v1^2+v2^2+v3^2)/2 + (v1v2+v1v3+v2v3)|
= |0.5(v1+v2+v3)^2|
= 0.5*|v1+v2+v3|^2.
We recall the formula IO^2 = R(R-2r), which implies, together with
R = 1, that r = 0.5 - 0.5*IO^2. We derive from this:
r = 0.5 - 0.5*IO^2
= 0.5 - 0.5*|v1+v2+v3|^2
= 0.5 - IN
Now we see that IN = 0.5 - r, and since the radius of the nine-point
circle is half the radius of the circumcircle, and thus 0.5, we see
that the incircle is (internally) tangent to the nine point circle.
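The conclusion can also be spot-checked numerically. The sketch below is my own illustration, not part of Doctor Floor's argument: it puts three points on the unit circle (so O = 0, R = 1, and the nine-point radius is 1/2), computes the incenter and inradius by the standard formulas, and confirms |N - I| = 1/2 - r.

```python
import cmath

# Three vertices on the unit circle; the angles are arbitrary.
t1, t2, t3 = 0.3, 2.1, 4.4
z1, z2, z3 = (cmath.exp(1j * t) for t in (t1, t2, t3))

# Side lengths opposite A1, A2, A3.
a, b, c = abs(z2 - z3), abs(z1 - z3), abs(z1 - z2)

# Inradius r = area / semiperimeter; incenter as the side-length
# weighted average of the vertices.
s = (a + b + c) / 2
area = abs((z2 - z1).real * (z3 - z1).imag
           - (z2 - z1).imag * (z3 - z1).real) / 2
r = area / s
I = (a * z1 + b * z2 + c * z3) / (a + b + c)

# Nine-point center N = (z1 + z2 + z3)/2; tangency means IN = 1/2 - r.
N = (z1 + z2 + z3) / 2
print(abs(N - I), 0.5 - r)  # the two numbers agree
```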
The result for the excircles can be found in a similar way. Let's say
we consider the A1-excircle, having radius r1 and center I1. Now we
have to use the formula I1O^2 = R(R+2r1), and we find I1 as the
orthocenter of triangle P1Q2Q3. I leave it to you to follow the
computations as above.
If you have more questions, just write back.
Best regards,
- Doctor Floor, The Math Forum
Date: 03/26/2000 at 14:11:36
From: Michael Ben Ari
Subject: Re: Feuerbach theorem
Dear Doctor Floor,
Thank you for the proof of the Feuerbach theorem received on
03/14/2000 (Einstein's birthday, and also Pi Day: 3.14)!
There is something I didn't understand in the proof:
In your proof:
> v1 = e^(i*t1/2)
> v2 = e^(i*t2/2)
> v3 = e^(i*t3/2)
>in such a way that Q1 = v2v3 bisects the arc A2A3 including A1,
>Q2 = v1v3 bisects the arc A1A3 including A2 and Q3 = v1v2 bisects arc
>A1A2 including A3.
I understand that the coordinates of the point Q1 that bisects the arc
A2A1A3 are
[-cos(t2/2+t3/2),-sin(t2/2+t3/2)]
it means Q1 = -v2v3, and the coordinates of the point Q2 that bisects
the arc A1A2A3 are
[cos(t1/2+t3/2),sin(t1/2+t3/2)]
it means Q2 = v1v3 and the coordinates of the point Q3 that bisects
the arc A1A3A2 are
[-cos(t1/2+t2/2),-sin(t1/2+t2/2)]
it means Q3 = -v1v2 and it doesn't fit your statement.
I can not find my mistake. Maybe I misunderstood the notation
e^(i*t1/2). I understand it as "cos(t1/2) + i sin(t1/2)."
Please give this part of the proof in more detail. The remainder of
the proof is clear to me.
Thank you,
Stefa Ben Ari
Date: 03/27/2000 at 06:38:09
From: Doctor Floor
Subject: Re: Feuerbach theorem
Dear Michael,
Thanks for your response.
It seems that you have overlooked the fact that t1, t2, and t3 can be
chosen in two ways.
Suppose that z1 = e^(i*t1), and also z1 = e^(i*(t1+2*pi)).
These give v1 = e^(i*t1/2) and v1' = e^(i*(t1/2+pi)) = -v1.
The part of the proof you are quoting is saying that you can choose
these t1, t2, and t3 in such a way that the resulting v1, v2, and v3,
and consequently Q1, Q2, and Q3, are as desired.
It is stated this way to be sure that the right square root of z1 is
chosen (there are always two square roots of course.)
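Concretely (my own illustration, not part of the original reply): the two square roots of z1 square to the same value and differ only by sign.

```python
import cmath

t1 = 0.8                                   # any angle will do
z1 = cmath.exp(1j * t1)
v1 = cmath.exp(1j * t1 / 2)                # one square root of z1
v1p = cmath.exp(1j * (t1 / 2 + cmath.pi))  # the other; equals -v1
print(abs(v1 * v1 - z1), abs(v1p * v1p - z1), abs(v1p + v1))  # all ~0
```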
I hope this clears it up!
If you have more questions, just write back.
Best regards,
- Doctor Floor, The Math Forum | {"url":"http://mathforum.org/library/drmath/view/55218.html","timestamp":"2014-04-21T07:41:58Z","content_type":null,"content_length":"11532","record_id":"<urn:uuid:61c67f77-638b-4286-ad00-f9de7b2ddd64>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00046-ip-10-147-4-33.ec2.internal.warc.gz"} |
value of 'e'
Re: value of 'e'
Maybe I should open a thread inviting stefy to teach me Maxima?
'And fun? If maths is fun, then getting a tooth extraction is fun. A viral infection is fun. Rabies shots are fun.'
'God exists because Mathematics is consistent, and the devil exists because we cannot prove it'
'Who are you to judge everything?' -Alokananda
Re: value of 'e'
I do not know much more about it than you.
The limit operator is just an excuse for doing something you know you can't.
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman
“Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment
Re: value of 'e'
But you can surely guide me through the programming in it
Re: value of 'e'
pappym used to say, "when the student is ready the master will appear." The real quote is, "when the master is ready the student will appear." My quote: "when the mathematician is ready the CAS will appear."
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: value of 'e'
My question is 'How to be ready?'
Is there a Maxima forum?
Re: value of 'e'
My question is 'How to be ready?'
I think I could partially answer that but not without influencing you. That would be wrong.
About a forum? There maybe, I do not know. Maxima is rather poor when it comes to documentation and support.
Re: value of 'e'
Okay, I have sandboxed myself.
How to be ready now?
Re: value of 'e'
You already know how to program in several languages. Using Maxima in the same way you would use Pascal is a mistake. They are not the same, even though you can use a screwdriver as a hammer, it is
not a hammer.
The purpose of a CAS is to get you to think mathematically even about programming problems. In other words to combine programming and math to get results neither can get alone.
Re: value of 'e'
Okay, but how do I stop thinking like a programmer and start thinking like a mathematician?
Re: value of 'e'
That was a long process for me. I originally became a programmer to do math problems I could not do using math. Then I found out that using math I could speed programs up by thousands of times, so I
went back and forth. I am still working on it and it has been 23 years!
Re: value of 'e'
But how will I get it? I cannot buy it because $139.95 is NaN times my monthly earnings($0.0).
Re: value of 'e'
Yes, I know that. But it only comes in at about 1 GB and you should have enough to spare that. Also you can ask for wolfram to send you a CD.
In the meantime until you figure it out on your own I will assist you with maxima as best I can.
Re: value of 'e'
If this is not a very private question, how much data do you consume in 24hrs?
Re: value of 'e'
Probably less than 100MB per day, alot less. When I am downloading big files then it is naturally more.
Re: value of 'e'
How can that be true? Don't you watch a lot of youtube?
Re: value of 'e'
I download it once and watch it many times. I already have tons of them.
Re: value of 'e'
I think I should request that guy who downloaded the Precise Pangolin for me to download this. Maybe, he should like it because he is a Computer Science student.
Is it possible to run Mathematica on Linux?
Re: value of 'e'
Yes, it is possible but I do not know anything about the installation of it.
Re: value of 'e'
http://thepiratebay.sx/torrent/8630916/ … _%28Win%29(view with an ad blocker)
It is double the size you told
Re: value of 'e'
Mine is only about 1.12 GB.
Re: value of 'e'
which version?
Why is Maxima, MATLAB, SciLab, GNU Octave, etc not enough for the math I will do?
Re: value of 'e'
No one else is using them so it is difficult to help you. There will always be problems in translation. Also, M has more documentation and forums than all of those put together! M also knows more mathematics.
I am not trying to convince you, when you make up your own mind then it will happen.
Re: value of 'e'
this is what I got till now.
friendship is tan 90°.
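The attachment in the post above ("this is what I got till now") is not preserved in this copy. As a stand-in illustration only — my own, not what the post actually showed — here is one common way any of the tools discussed in this thread pins down the value of e: sum the series 1/n! with exact rationals, so no rounding error creeps in.

```python
from fractions import Fraction

def e_approx(terms: int) -> Fraction:
    # Partial sum of e = 1/0! + 1/1! + 1/2! + ... as an exact rational.
    total = Fraction(0)
    factorial = 1
    for n in range(terms):
        total += Fraction(1, factorial)
        factorial *= n + 1
    return total

print(float(e_approx(20)))  # 2.718281828459045...
```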
Re: value of 'e'
For which did you do?
Re: value of 'e'
How did you do it?
Catamorphisms and anamorphisms = general or primitive recursion?
The easy way...
Hand waving
Given a function that recurses on itself, do a partial CPS transform so that it only ever recurses on itself with tail calls. Then, convert the recursive calls to codata returns, so that the function
either returns TheAnswer or StillWorking with enough parameters to describe the recursive call / continuation state. This codata can be built with an unfold and can be collapsed back down to the
final answer with a fold.
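A concrete rendering of this recipe (my own sketch, in Python rather than the thread's OCaml; the names follow the comment above): the accumulator-passing factorial is split into a single-step function returning TheAnswer or StillWorking, so the recursion lives in data, and a driver loop plays the role of the unfold-then-fold.

```python
from dataclasses import dataclass

@dataclass
class TheAnswer:
    value: int

@dataclass
class StillWorking:
    n: int
    acc: int

def fact_step(state):
    # one tail call's worth of the accumulator-passing factorial
    if state.n == 0:
        return TheAnswer(state.acc)
    return StillWorking(state.n - 1, state.acc * state.n)

def run(step, seed):
    # drive the codata until it yields an answer
    state = seed
    while not isinstance(state, TheAnswer):
        state = step(state)
    return state.value

print(run(fact_step, StillWorking(5, 1)))  # 120
```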
Matt M
at Thu, 2011-06-16 13:33 |
"Constructive" argument
neelk and Charles, your answers are along a power/termination axis: fold+unfold is more expressive than primitive recursion because they can express bigger functions, or ones slower to terminate.
This is interesting, and an elegant way to answer the original question, but still it doesn't explain, given a recursive function, how to express it with fold+unfold.
I feel a bit lazy asking this question; it seems that with a little effort, it could be figured out. Yet I enjoy seeing this topic discussed here and am confident you'll provide additional, interesting insights.
at Thu, 2011-06-16 07:42 |
Broken link
That link is broken. Did you refer to this paper [1]?
[1] http://www.cs.tau.ac.il/~nachumd/papers/termination.pdf
Anyway, thanks for both answers.
at Wed, 2011-06-15 23:13 |
Provable shrinkings
you need to show that there is some well-order along which f always shrinks its argument.
And for primitive recursiveness, even this is not enough: you need that the well-foundedness of the measure can be proven using a limited notion of induction. Sufficiently tangled unfolds over tree
structures, such as those you get from recursive path orderings, are not primitive recursive.
Cf. Dershowitz, 1987, Termination of rewriting (long scanned journal article).
Postscript Link fixed, thanks Blaisorblade.
Charles Stewart
at Wed, 2011-06-15 11:31
Fold + Unfold
...gives you general recursion. The basic reason is that for inductive types, unfolds can diverge, and for coinductive types, folds can diverge. For a simple example, consider the natural numbers:
type nat = Z | S of nat
let rec fold f = function
| Z -> f None
| S n -> f (Some (fold f n))
let rec unfold f seed =
match f seed with
| None -> Z
| Some seed' -> S(unfold f seed')
(* Creating an infinite loop *)
let loop () =
unfold (fun () -> Some ()) ()
Given a total f, the fold will always terminate. However, the same is not true of unfold, as the loop example demonstrates. To ensure termination of the unfold, you need to show that there is some
well-order along which f always shrinks its argument.
at Tue, 2011-06-14 14:36
...is to use the constructive lift monad, the coinductive type T(A) ≡ να. A + α. The intuition is that this type either tells you a value of type A, or tells you to compute some more and try again.
Nontermination is modelled by the element which never returns a value, and always keeps telling you to compute some more.
Our goal is to construct a general fixed-point combinator μ(f : TA → TA) : 1 → TA, which takes an f and then produces a computation corresponding to the fixed point of f. To fix notation, we'll take
the constructors to be:
roll : A + TA → TA
unroll : TA → A + TA
Since this is a coinductive type, we also have an unfold satisfying the following equation:
unfold(f : X → A + X) : X → TA ≡ roll ○ (id + (unfold f)) ○ f
First, we will explicitly construct the bottom element, corresponding to the computation that runs forever, with the following definition:
⊥ : 1 → TA ≡ unfold inr
This definition just keeps telling us to wait, over and over again. Now, we can define the fixed point operator:
μ(f : TA → TA) : 1 → TA
μ(f) ≡ (unfold (unroll ○ f)) ○ ⊥
What this does is to pass bottom to f. If f returns a value, then we're done. Otherwise, f returns us another thunk, which we can pass back to f again, and repeat.
Of course, this is exactly the intuition behind fixed points in domain theory. Lifting in domains is usually defined directly, and I don't know who invented the idea of defining it as a coinductive
type. I do recall a 1999 paper by Martin Escardo which uses it, and he refers to it as a well-known construction in metric spaces, so probably the papers from the Dutch school of semantics are a good
place to start the search.
This construction has seen a renewed burst of interest in the last few years, since it offers a relatively convenient way to represent nonterminating functions in dependent type theory.
at Thu, 2011-06-16 16:07
Bananas in space
I think this boils down to the same thing, but the way I heard the story (in Meijer and Hutton's Bananas in Space paper) is like this:
fix f = foldr ($) undefined (repeat f)
That is, to construct fix f, first build an infinite stream of fs (an unfold), and then replace each cons with application (a fold).
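The Haskell one-liner leans on laziness; in a strict language the same construction can be approximated by folding over a finite prefix of the stream of f's (a sketch under that truncation assumption, not from the paper):

```python
def fix_approx(f, depth=100):
    # foldr ($) undefined (replicate depth f): start from "bottom"
    # (a function that always fails) and apply f to it depth times.
    def bottom(_):
        raise RecursionError("ran past the finite approximation")
    g = bottom
    for _ in range(depth):
        g = f(g)
    return g

fact = fix_approx(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))
fact(10)  # 3628800
```

Any call that needs fewer than depth unrollings never touches bottom, which mirrors how the lazy foldr never forces undefined for a productive f.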
at Mon, 2011-06-20 22:37
Yet a third way
Any paper that defines an operational semantics using inference rules demonstrates a third way that unfolds can express general recursion, even though it usually isn't apparent.
A reduction relation R(x,y) is a mapping, or a set of pairs, from input terms x to output terms y. To keep it simple, we'll assume a big-step operational semantics, so the output terms are always
final, unreducible values.
Using the inference rules as a guide, define F(R_n) = R_n + { all the new conclusions reachable given the conclusions in R_n }. (A "conclusion" is a pair (x,y), which means "x reduces to y".) The
fixed point R of F is the union of a completed unfold over F, starting at R_0 = {}.
Usually every R_n is computable. (This is certainly the case when R defines a computable operational semantics.) And even though the limit R isn't computable, you could write a very slow interpreter that, for any given term x, computes R_n's until R_n(x,y) is true (equiv. (x,y) is in R_n) for some y. It will halt for precisely the same x's that a recursive interpreter would, and give the same answers.
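A toy instance of this construction (my own illustration, not from the post): big-step evaluation of arithmetic terms, where the reduction relation R is obtained by iterating F from the empty set of conclusions.

```python
def subterms(t):
    # The finite universe of terms an evaluation of t can mention.
    yield t
    if isinstance(t, tuple):          # ("add", a, b)
        _, a, b = t
        yield from subterms(a)
        yield from subterms(b)

def F(R, universe):
    # One inference step: R plus every conclusion now derivable from R.
    new = set(R)
    for t in universe:
        if isinstance(t, int):
            new.add((t, t))                        # axiom: n => n
        else:
            _, a, b = t
            for x, va in R:
                for y, vb in R:
                    if x == a and y == b:
                        new.add((t, va + vb))      # rule for "add"
    return new

def evaluate(term):
    # The "very slow interpreter": compute R_0, R_1, ... until some
    # conclusion (term, v) appears, or until a fixed point with none.
    universe = list(subterms(term))
    R = set()
    while True:
        for x, v in R:
            if x == term:
                return v
        R_next = F(R, universe)
        if R_next == R:
            return None
        R = R_next

evaluate(("add", ("add", 1, 2), 4))  # 7
```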
Neil Toronto
at Fri, 2011-06-17 16:26
MathGroup Archive: April 2008 [00787]
Re: Problems to find the local extrema of an InterpolatingFunction
• To: mathgroup at smc.vnet.net
• Subject: [mg87927] Re: Problems to find the local extrema of an InterpolatingFunction
• From: Szabolcs Horvát <szhorvat at gmail.com>
• Date: Sat, 19 Apr 2008 23:54:30 -0400 (EDT)
• Organization: University of Bergen
• References: <fuc7qh$a0q$1@smc.vnet.net>
Modeler wrote:
> Hi,
> does anyone know how to find all the local extrema of an InterpolatingFunction in a specified interval? The only
> thing that seems to work is Findroot, but it only finds a single root each time. Other rootfinding commands do not seem to work. Thanks for your help.
I would feel uncomfortable using FindMinimum on InterpolatingFunction
objects because (when constructed with Interpolation) the derivative of
an InterpolatingFunction is usually not continuous.
f = Interpolation[{1, 3, 7, 4, 2}]
Plot[{f[x], f'[x]}, {x, 1, 5}]
If the InterpolatingFunction was returned by NDSolve, or the values of
the derivative were supplied to Interpolation, then the derivative will
be usually continuous.
If you have the raw data points, I would suggest working with them
directly, or at least using them to find the two data points that
surround the extremum, and using the FindMinimum[f, {x, x0, xmin, xmax}]
syntax to search for the mimimum in a restricted region.
Here's a tutorial on extracting data points from InterpolatingFunctions
returned by NDSolve:
And some simple ways for finding zero crossings or local minima in
discrete data:
data = Table[Sin[x], {x, 0, 5, .2}]
Select[Partition[data, 2, 1], NonPositive[Times @@ #] &]
minPoint[{a_, b_, c_}] := b <= a && b <= c
Select[Partition[data, 3, 1], minPoint]
Ways to construct numerical derivatives from discrete data:
ListConvolve[{1, -1}, data]
ListConvolve[{1, 0, -1}, data]/2
This should be enough to get you started.
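For readers without Mathematica, the same ideas in plain Python (an added illustration, not part of the original message):

```python
import math

# Table[Sin[x], {x, 0, 5, .2}]
data = [math.sin(i * 0.2) for i in range(26)]

# Partition[data, 2, 1] + sign test: pairs that bracket a zero crossing
crossings = [(a, b) for a, b in zip(data, data[1:]) if a * b <= 0]

# Partition[data, 3, 1] + minPoint: interior local minima
minima = [(a, b, c) for a, b, c in zip(data, data[1:], data[2:])
          if b <= a and b <= c]

# ListConvolve[{1, -1}, data]: forward differences
d1 = [b - a for a, b in zip(data, data[1:])]

# ListConvolve[{1, 0, -1}, data]/2: central differences
d2 = [(c - a) / 2 for a, c in zip(data, data[2:])]
```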
Continuous and Discrete | Guest Blog, Scientific American Blog Network
As far back as the year 2000, a bookstore on Charing Cross Road in central London bore a sign that said “Any Amount of Books.” These days one often hears people conflate not only “amount” and
“number” but also “less” and “fewer,” as in “There were less students in class today.” Alas, the confusion is even more common in North America than in England.
Is it just a simple conversational error that only the grammatically fastidious find grating, or is there something more to it? The truth is that mathematicians recognize the gravity of the error as
well. In fact, far from being a mere linguistic slip, this error does a profound disservice to concepts that are at the very foundation of modern technology.
The fundamental distinction that is glossed over in that usage is the one between the continuous and the discrete. Now “continuous” is a word that is ubiquitous in day-to-day conversation, and its
meaning is well-understood, at least in the sense that the common-sense understanding is consistent with its technical or mathematical meaning. (To understand the full ramifications of continuity,
one has to dig deeper.) Simply put, if someone says, for example, that she has worked continuously for twenty years in a particular office, she means that there were no breaks or gaps in her service
at that office during that twenty-year period.
On the other hand, “discrete” is not a word that occurs often in common parlance, although people seem to understand it well enough. It is difficult to define it precisely – one has to start with the
notions of a set and a one-to-one correspondence between sets and go through the basic ideas put forward by the great nineteenth century mathematician Georg Cantor. (There are many books where they
are discussed, but a beautiful and perspicuous description of them can be found in the book “Satan, Cantor, and Infinity” by Raymond Smullyan. As one might guess from the title, the book is
accessible to anyone with a junior high school mathematics background. It is a delectable read.) The meaning of “discrete” becomes clear, however, when one uses it in an example: one has one child,
two or more children, or none at all. One instinctively understands that it is absurd to talk about 1.2 or 3.5 children. The same thing applies to apples or oranges in a basket.
So, without going into a detailed construction of real numbers, an ordinary person understands that some things, such as children, books, or cars can only be counted, whereas certain other things,
such as water, milk, or the weight of a person have to be measured. Discrete objects are counted, while continuous ones are measured.
Lest one should dismiss these thoughts as the idle ruminations of a disgruntled fusspot, let us observe that the difference between continuity and discreteness is the basis for the profound and
spectacular developments in science and technology that define the 21st century as well as the second half of the 20th. One often hears that ours is the digital age. What does it mean? It means, for
example, that music recorded in the old days was analog, meaning that the signals were continuous.
In contrast, when music is digitized, the signals are sampled at distinct points in time. Yet if the number of sampling points is large enough, and the duration between successive sampling points
very close to, but distinct from, zero, then our ears cannot distinguish between the continuous and discrete signals. In other words, it is beyond our powers of resolution. And this sampling at
discrete time or space intervals is at the heart of digital technology, the hallmark of our times. Thus when we confound the continuous and the discrete and speak of the “amount” of people, for
example, we are in effect saying that digital and analog technologies are the same. Of course, in mathematics itself, there are entirely different sets of ideas and techniques for dealing with
continuous as opposed to discrete problems. Any mathematician worth her salt will tell you that they are very different ways of mathematical thinking. (The two points of view meet, however, when one
considers asymptotics, i.e. what happens in the long run. This is rather like two parallel lines meeting in the far distance, at what mathematicians call the point at infinity.)
As the great Henry Fowler, author of “A Dictionary of Modern English Usage,” said, the ultimate arbiter of correctness of a word or a phrase is usage. So it behooves those of us who care about the
words we use and their meanings to raise alarm bells about the lumping of “amount” and “number”, or “less” and “fewer” as synonyms. Otherwise we will be stuck with them forever and have nobody else
to blame. In that spirit, one only hopes that, in typical English fashion, that sign outside the bookstore in London has spurred many an enraged stickler-for-precision into action.
Acknowledgments: It is a pleasure to thank John Rennie and Keith Johnson for helpful comments and suggestions and Aileen Penner for the illustration.
Note: Continuity is a property of functions. For sets, the corresponding property is connectedness. However, in the interests of keeping the discussion simple and easy to understand, this was not
mentioned in the article.
1. N a g n o s t i c 12:23 pm 06/11/2013
I find continuously irritating the practice of using “impact” in place of “affect”.
2. CDBSB 1:06 pm 06/11/2013
I would guess the people that use “impact” in place of “affect” are people who can’t tell when to use “affect” or “effect” and use “impact” to prevent any possible errors.
3. curmudgeon 1:11 pm 06/11/2013
And yet you do not mind your own misuse of the word ‘continuously’? Very odd! Unless of course you are actually forever in the grip of that irritation first sparked by this usage, in which
case my deepest sympathies.
4. lamorpa 1:47 pm 06/11/2013
How about using ‘literally’ in a figurative sense to mean ‘figuratively’?
5. ultimobo 2:14 pm 06/11/2013
I studied discrete mathematics – usually done alone and quietly, away from and without disturbing other people
6. SteveO 4:14 pm 06/11/2013
Actually, in practical analytical terms it is even more consequential to understand this and a bit more. Whether something is discrete or continuous is only part of what you need to know in
order to actually do anything with the data. Warren S. Sarle (ftp://ftp.sas.com/pub/neural/measurement.html) showed how the “measurement level” is a function of the relationship between the
dependent variable (that which you really want to understand) and the criterion measure (that which you measure in order to understand the dependent variable). That relationship dictates what
statistics you are allowed to calculate and what transforms you are allowed to perform.
To give an example, if you were to measure diameter if a disc, it is continuous, but if you are really interested in understanding disc area, you cannot use the average diameter to find the
average area. The relationship of the diameter (measured continuously) to the area is in fact only ordinal, so you are not allowed to take the average diameter, square it and multiply by pi
and draw conclusions from that to the average area.
That is an obvious example for illustration. (Of course you would probably just calculate the area and take the average in that example.) A more subtle example is in trying to analyze the
common survey question, “Indicate your level of agreement with the following statement,” with the answers Strongly Disagree = 1, Disagree = 2, Neutral = 3, Agree = 4, and Strongly Agree = 5.
The question is testing two or three different constructs (agreement, disagreement and maybe neutrality) and cannot be related back to the numbers in anything but a nominal way. 1 and 2 =
disagreement, 3 = neutrality (or maybe it gets lumped into “not-agreement” with 1 and 2), and 4 and 5 = agreement. A common error is to take an average of such data and make any conclusions
based on that average, or to say, use the t-test to test for differences between groups of respondents. It is just as bad as if we were to add “Not Applicable = 6″ to the scale. We might well
wonder what an average of 5.5 meant!
What I find fascinating is that the level of data, its relationship back to the dependent variable, and thus what you can do with the data, resides solely in the head of the researcher. The
exact same data set could be used correctly to take an average, or could have the average prohibited – nothing in the data itself tells you that!
When I teach applied statistics, this fact blows my students away.
7. dadster 4:37 pm 06/11/2013
“Continuous as the stars that shine on the milky way / They stretched in never ending line” (Wordsworth). Some entity which in itself has no bounds or limits, without a beginning or an end, is continuous. Something which has a beginning and an ending, however brief the space and time within these limits of beginning and ending there be, is a discrete object. We can however make discrete segments of continuity, like drawing a Planck-length segment of a line or cutting time into discrete pieces of nanoseconds or femtoseconds for purposes of “counting”. We do say that the length of a line (or the volume of an enclosed space) or seconds are “measurements”, though both are discrete, but yet don’t say we “count” the time and length. We discretize for quantified measurements. That which cannot be discretized are called “qualities”. Continuities are dissected into discrete “particles”. Continuous radiant energy is digitalized as photons and gravity as gravitons for ease of quantified measurements. It’s still controversial whether the universe is made up of a continuous entity like the “mind” or made up of discrete elements called “particles” of matter. The jury is out on the question. Or, we have to decide whether continuity and discreteness are two sides of the same coin, both inseparably interlinked and intertwined to make up the holistic whole, the web and warp of the fabric of reality complementing each other. It is perhaps these ambiguities in modern scientific thinking that have crept into the use of language, making not much of a distinction between the usage patterns of counting and measuring as they were considered in Newtonian times. If language communicates the ideas involved, figuratively or otherwise, we need not be sticklers, in this highly interconnected Internet age of information, about its “grammar” so much. Language must catch up with the advancement in other fields of human transactions and thinking.
8. N a g n o s t i c 5:18 pm 06/11/2013
CDBSB, that includes the entire US news media.
9. N a g n o s t i c 5:28 pm 06/11/2013
Curmugeon, I stand constructively criticized.
“I find continuously irritating…” should read “I continuously find irritating…”.
My complaint did not concern grammar, though I do consider grammar important. My complaint concerns the use of ‘punchy’ language and catchphrases by all US news media, and the trickle down
effect (impact!) into general usage.
10. bumluck 7:02 pm 06/11/2013
All mere pedantry. As the article itself states: “…the ultimate arbiter of correctness of a word or a phrase is usage.” Else we would all still be speaking Proto-Indo-European.
11. sastric 9:34 am 06/12/2013
There appears to be some confusion as to what the article says. Let’s see if we can clear it up a bit. First of all, while I agree with Nagnostic’s comment about “impact” and “affect”, I
haven’t used either of those words in this post! Certainly, “impact” is an overused word that has lost some of its punch. In my opinion, it should be used sparingly, as in “9/11 had a
devastating impact on New York City, and by extension, on the US and the civilized world as a whole.”
Contrary to what bumluck says, this post is not “mere pedantry.” Let’s consider an example. Suppose you enjoy collecting rocks and have just returned from a visit to a geologically
interesting place with a collection of rocks. The questions that arise naturally are, first, how many did you collect? To find the answer, you COUNT them and come up with some NUMBER. Next
you might pick up a rock and wonder how much it weighs. To find the answer, you MEASURE the weight and come up with a certain AMOUNT. Then if you want to know how much space it occupies, you
MEASURE its volume get some AMOUNT, and so on. The point is, discrete things can only be counted. Things which are not discrete, but continuous-like, such as weight and volume, have to be
measured. The distinction is fundamental.
12. greg_t_laden 10:23 am 06/12/2013
It is possible that the students were actually less, but saying so would be disparaging, certainly not discreet.
13. zehlyi 3:08 pm 06/12/2013
‘Thus when we confound the continuous and the discrete and speak of the “amount” of people, for example, we are in effect saying that digital and analog technologies are the same.’
Whoa whoa whoa!
You seem to be arguing that because people conflate amount/number and less/fewer, it means that they don’t understand the difference between discrete and continuous things (or countable and
uncountable/mass nouns). But people use much/many correctly for the most part, and that’s the same distinction. Similarly, even for people who say “amount of books” or “10 books or less”,
they still treat “book” like a countable/discrete noun. They say “I have 10 books”, they pluralize book into books, and they would never say “I have much book(s)”.
As another example, if you look at a language like Swedish, they frequently use the word mycket (much) for even countable nouns, so you might think they’re confused about continuous vs.
discrete. However, they have two words for more: mer / fler. Mer is used for uncountable nouns and fler for countable. They don’t mix those up. And have you ever felt that your
understanding of continuous vs. discrete quantities was limited by English only having one word for “more” which is the opposite of both “less” and “fewer”?
Language changes. I think we should focus on teaching people basic math and science skills, and not worry too much about linguistic peevery.
14. zoniedude 3:26 pm 06/12/2013
One thing overlooked is the crucial difference between counting and measuring. Counting can be exact, measuring always involves measurement error and a bell curve. Too often I read about
things like test scores that involve counting question answers and then producing a “measured” test score.
In reality any purported “measure” must be considered an “observed” score that approximates a “true” but unobservable score. W. Edwards Deming made this the basis of quality management: two
measures are equivalent even if different when they are within the six standard deviations of normal.
Thus the difference between continuous and discrete is actually the difference between two entirely different worldviews. Continuous measures must be considered in terms of a standard
deviation and are meaningless without it. Not understanding this essentially marks the innumerate.
15. rjplummer 3:30 pm 06/12/2013
Another curmudgeonly comment:
The music example is incorrect. Although digital technology does involve thousands of samples per second, it is not the human ear that is deceived. The playback electronics are not designed
to reproduce the discrete samples but instead produce a continuous waveform that is usually indistinguishable from the original continuous waveform.
On the other hand, video has always always depended on the technique you described from the very first motion pictures: the eye perceives the rapidly updated discrete images as motion.
16. kstagaman 4:37 pm 06/12/2013
This IS a completely ridiculous bit of pedantry. Of course in science and mathematics, precise language is required. “Theory” for example has (multiple) precise meanings in science and math,
but that doesn’t mean to use it as a synonym for “hypothesis” in the vernacular is wrong. This whole bit about distinguishing between discrete and continuous (only in regards to “fewer/less”
and “amount/number” in vernacular language is just silly. Here are two reasons why: (1) Not all languages have such distinctions. Does the author (and those who agree with him) wish to imply
that speakers of romance language have any less ability to comprehend the differences between discrete and continuous measures simply because, in Spanish for example, “menos” is used for both
less and fewer? If so, I’d like him to take this argument to the numerous brilliant mathematicians and scientists that speak Spanish as a primary language. (2) If the distinction were so
important, then why is it not symmetrical? In English we have “fewer” and “less”, but their antonyms are both “more”. There’s no distinction here, yet its use does not confuse people. If I
say “I picked more flowers today than yesterday.” or “I drank more water today than yesterday.” No one is confused as to whether flowers are discrete entities, or water is generally measured
along a continuous metric. To say that rigorously protecting the use of “fewer” and “less” or “amount” and “number” will keep people from forgetting the difference between analog and digital
is just preposterous.
17. jsekhar 11:02 pm 06/13/2013
The dimensionality of the signal for recognition as discrete or continuous is important. Data sets or objects that may seem discrete in certain views may seem connected in other views. Further, any deformation of the sensor by the amplitude of the signal being sensed may cause interactions for the same frequency, leading to fuzziness. The separation of discrete and continuous realities is tied to the energy and momentum of an object and our notion of the reality, especially when approaching the speed of light. In a very fundamental sense, signals that are massless will pose distinctions between discrete and continuous with significant mass and energy variations of the same signal. Clearly there are many possibilities that have to be assessed prior to a rigorous separation of a class of close adjectives…. some that only mathematics is able to clarify.
All languages will offer different shades of preciseness to distinguish between say
Speaking to a point made above, Spanish is yet another language and unlike Math may have a different rigor and groupings for the construction of phrases for describing a reality. Mapping
across language is open to opinions. Within a language the issue is not pedantic however the preciseness should not only be for…. “spurred many an enraged stickler-for-precision into action”
but for a far nobler cause of controlling new entropy generation in our universe.
18. Raghuvanshi1 6:17 am 06/15/2013
There is a vast difference between less and fewer. When anybody serves tea in a hotel and it is not sufficient, we say he served us less tea. On the contrary, when few people attend a meeting, we say that fewer people came to the meeting. Less we use for what we get, and fewer we use for what we expect.
Analysis of tuberculosis prevalence surveys: new guidance on best-practice methods
Background
An unprecedented number of nationwide tuberculosis (TB) prevalence surveys will be implemented between 2010 and 2015, to better estimate the burden of disease caused by TB and assess whether global targets for TB control set for 2015 are achieved. It is crucial that results are analysed using best-practice methods.
Aim
To provide new theoretical and practical guidance on best-practice methods for the analysis of TB prevalence surveys, including analyses at the individual as well as cluster level and correction for biases arising from missing data.
Analytic methods
TB prevalence surveys have a cluster sample survey design; typically 50-100 clusters are selected, with 400-1000 eligible individuals in each cluster. The strategy recommended by the World Health
Organization (WHO) for diagnosing pulmonary TB in a nationwide survey is symptom and chest X-ray screening, followed by smear microscopy and culture examinations for those with an abnormal X-ray and/
or TB symptoms. Three possible methods of analysis are described and explained. Method 1 is restricted to participants, and individuals with missing data on smear and/or culture results are excluded.
Method 2 includes all eligible individuals irrespective of participation, through multiple missing value imputation. Method 3 is restricted to participants, with multiple missing value imputation for
individuals with missing smear and/or culture results, and inverse probability weighting to represent all eligible individuals. The results for each method are then compared and illustrated using
data from the 2007 national TB prevalence survey in the Philippines. Simulation studies are used to investigate the performance of each method.
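To make the contrast between the methods concrete, here is a deliberately simplified sketch on synthetic data (an illustration only, not the survey's actual analysis): Method 1 as a complete-case estimate, and the inverse-probability-weighting step of Method 3 with sex as the only stratum. A real analysis would also multiply-impute missing smear/culture results and account for the cluster design.

```python
import random
random.seed(1)

# Synthetic survey: 50 clusters of 500 eligible adults. TB is more
# common in men, who also participate less -- exactly the combination
# that biases a participants-only analysis downwards.
people = []
for cluster in range(50):
    for _ in range(500):
        male = random.random() < 0.5
        tb = random.random() < (0.010 if male else 0.004)
        participated = random.random() < (0.70 if male else 0.90)
        people.append((cluster, male, tb, participated))

participants = [p for p in people if p[3]]

# Method 1: complete-case prevalence among participants only.
m1 = sum(p[2] for p in participants) / len(participants)

# Method 3 (weighting step): weight each participant by the inverse of
# the estimated participation probability in their stratum.
def participation_rate(male):
    grp = [q for q in people if q[1] == male]
    return sum(q[3] for q in grp) / len(grp)

w = {m: 1 / participation_rate(m) for m in (True, False)}
m3 = (sum(w[p[1]] for p in participants if p[2])
      / sum(w[p[1]] for p in participants))

true_prev = sum(p[2] for p in people) / len(people)
# m1 underestimates true_prev; the weighting pulls m3 back towards it.
```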
Key findings
A cluster-level analysis, and Methods 1 and 2, gave similar prevalence estimates (660 per 100,000 aged ≥ 10 years old), with a higher estimate using Method 3 (680 per 100,000). Simulation studies for
each of 4 plausible scenarios show that Method 3 performs best, with Method 1 systematically underestimating TB prevalence by around 10%.
Both cluster-level and individual-level analyses should be conducted, and individual-level analyses should be conducted both with and without multiple missing value imputation. Method 3 is the safest
approach to correct the bias introduced by missing data and provides the single best estimate of TB prevalence at the population level.
National population-based surveys of the prevalence of pulmonary tuberculosis (TB) disease in adults can be used to measure the burden of disease caused by TB, to measure trends in this burden when
repeat surveys are performed and to understand why people with TB have not been detected or diagnosed by national TB control programmes (NTPs). Surveys are of greatest relevance in countries with a
high burden of TB in which surveillance data capture much less than 100% of cases. Global targets for reductions in disease burden set for 2015 include halving prevalence rates compared with their
level in 1990; the other targets are that mortality rates should be halved between 1990 and 2015, and that TB incidence should be falling by 2015 [1].
The Global Task Force on TB Impact Measurement is hosted by the World Health Organization (WHO) with a mandate to ensure the best-possible assessment of whether 2015 global targets for reductions in
TB disease burden are achieved [2]. The Task Force has strongly recommended national TB prevalence surveys in 22 global focus countries in the years leading up to 2015 [3,4]. Since 2008, there has
been an unprecedented increase in the number of countries either implementing or planning to implement nationwide surveys. Between 2009 and 2015, approximately 23 countries - including 20 of the
global focus countries - are expected to implement a survey, compared with a total of 7 countries in the period 2002–2007 (Figure 1). Only four countries, all in Asia, implemented surveys between
1990 and 2001. The global investment in prevalence surveys will amount to around US$ 50 million between 2010 and 2015. Analysis of results using best-practice methods is crucial.
Figure 1. Global progress with nationwide prevalence surveys of TB disease. Global progress in implementing field operations of nationwide surveys of the prevalence of TB disease, actual (2002–2012)
and expected (2013–2017).
TB prevalence surveys have a cluster sample survey design, in which groups of individuals are sampled, with clusters selected at random from an area sampling frame with probability proportional to
size (PPS). While the classic method of using each survey cluster as the unit of analysis has been carefully and thoroughly described for a TB prevalence survey [5,6], methods to implement an
individual-level analysis, in which each eligible adult enumerated in the survey is the unit of analysis, have not. An individual-level analysis is valuable because it enables adjustment for
differences between participants and non-participants and multiple imputation of missing data, while simultaneously allowing for clustering in the sampling design. Missing data in TB prevalence
surveys can be observed in both the outcome (TB case or not) but also other covariates, for example due to non-participation of eligible individuals, unavailability of screening or diagnostic results
due to human error, and loss of specimens at laboratories for reasons such as contamination. A prevalence estimate based on only individuals with complete data will be biased, except under the strong
assumption that those with and without full information have the same prevalence of TB. Methods that incorporate missing value imputation are thus important for two reasons: to obtain a more valid
estimate of pulmonary TB prevalence, and to assess the bias of simpler analytical approaches [7,8]. Moreover, while participation rates in recent surveys in Asia have been very high, the rates
achieved in other surveys from 2012 onwards may be lower; accounting for missing data will become essential for production of robust results.
Findings from national TB prevalence surveys completed in 2007 in the Philippines and Viet Nam have been published [9,10]. Other national surveys have either not followed the screening strategy now
recommended by WHO [11,12], or the results have been disseminated in a survey report but not in a scientific journal. The analysis of the Philippines survey attempted to account for missing data
using within-cluster mean imputation, stratified on age and sex, but did not include an individual-level analysis. The analysis of the Viet Nam survey used an individual-level analysis but did not
formally account for missing data on smear and culture results, or age and sex differences between participants and non-participants.
This paper (outlined in Figure 2) provides new theoretical and practical guidance on best-practice methods for analysis of data from a TB prevalence survey, notably methods for individual-level
analyses that account for the cluster sample survey design and that allow correction for biases due to missing data. Methods are described and explained, and then illustrated and compared using data
from the 2007 survey in the Philippines. We draw on material previously developed in 2010 by the authors in a WHO handbook [4] but provide much more explanation of the underlying principles and
methods required to implement multiple imputation of missing data. This includes guidance based on insights gained in 2011 and 2012 through the analysis of prevalence surveys conducted in Myanmar
(2010) [13], Ethiopia (2010/11) [14], and Cambodia (2010/11) [15]. We also place the analytical methods within a new conceptual framework [16,17].
Survey design and summary of key data: an overview
For on-going and future TB prevalence surveys, the eligible population is defined as individuals aged ≥15 years old who were already resident in the selected cluster at the time of the survey team’s
first pre-survey visit [4]. Individuals <15 years old are excluded, because of the difficulties in diagnosing pulmonary TB in children. Cluster size is recommended to be between 400 and 1000 eligible
individuals, with the target cluster size constant within a particular survey [4]. Typically 50–100 clusters are selected, depending on the total sample size required. Sample size is calculated with
the aim of estimating the population prevalence of pulmonary TB among eligible individuals with 20-25% relative precision [4]. In most surveys that have already been completed, participation of
eligible individuals has been of the order of 85%-95%, with typically lower participation in urban areas. Most surveys use stratification, to ensure that the number of clusters allocated to each
stratum is in proportion to the population in that stratum. For example, the Philippines 2007 survey had three strata (urban, rural, and the capital city) [9].
There are 2 co-primary outcomes in a TB prevalence survey: (1) smear-positive pulmonary TB and (2) bacteriologically-confirmed pulmonary TB (smear-positive and/or culture-positive). The TB case
definition, and the screening strategy used to identify pulmonary TB, in a national-level prevalence survey are summarised in Figure 3.
Figure 3. TB case definition, and screening strategy for pulmonary TB.
The number of individuals who were enumerated, were eligible to participate, and who participated at various stages of the survey should be summarised, for example as depicted in Figure 4.
Figure 4. Survey participant flow. Schematic of numbers of participants screened for TB in the prevalence survey according to survey protocol.
Before analysis of the two co-primary outcomes is done, it is essential to describe the completeness and internal consistency of the “core” data i.e. the data that it is essential to collect in all
TB prevalence surveys. This is covered in detail in the WHO handbook [4].
Individual-level analysis of pulmonary TB prevalence: description and explanation of three alternative methods
The two outcomes of smear-positive pulmonary TB and bacteriologically-confirmed pulmonary TB should be analysed separately. Here, we illustrate methods for an individual-level analysis using the
outcome of bacteriologically-confirmed pulmonary TB, which we will refer to hereafter as pulmonary TB. It should be noted that the analytical approach would also be the same for other outcomes that
are binary (yes or no), for example TB diagnosed using the recently endorsed molecular test Xpert MTB/RIF [18].
Individual-level analyses of pulmonary TB prevalence are performed using logistic regression, in which the log odds, i.e. log(π[ij]/(1 − π[ij])), is modelled, where π[ij] is the probability of individual i in cluster j being a prevalent pulmonary TB case. The simplest model that can be fitted is log(π[ij]/(1 − π[ij])) = α, in which case α is estimated as log(p/(1 − p)), where p is the observed overall proportion of study participants with pulmonary TB. Correspondingly, the inverse transformation exp(α)/(1 + exp(α)) recovers the prevalence p. Logistic regression is used because the outcome is binary, i.e. for each individual there is a probability that they have pulmonary TB at the time of the cross-sectional survey (in the generalised linear models framework, the logistic link function is the “natural link function” for a binary outcome). The most crucial characteristic of such analyses is that they take into account the clustering of individuals: if this is not done, the calculated 95% confidence interval (CI) for true pulmonary TB prevalence will have less than the nominal 95% coverage, due to underestimation of the standard error of the prevalence estimate.
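The intercept-only model with a cluster-robust standard error can be sketched in a few lines. The following Python function is a simplified illustration (not survey software): it takes per-cluster case counts and sizes, estimates the intercept on the log-odds scale, and computes the robust "sandwich" standard error from observed between-cluster variability, using a 1.96 normal quantile for the 95% CI.

```python
import math

def prevalence_with_robust_ci(clusters):
    """Estimate overall prevalence and a cluster-robust 95% CI.

    `clusters` is a list of (n_cases, n_individuals) tuples, one per
    survey cluster. The model is an intercept-only logistic regression;
    the intercept's standard error uses the sandwich estimator, which
    relies only on observed between-cluster variability.
    """
    total_cases = sum(c for c, _ in clusters)
    total_n = sum(n for _, n in clusters)
    p = total_cases / total_n                  # overall observed prevalence
    alpha = math.log(p / (1 - p))              # intercept on the log-odds scale

    # Sandwich variance: A = Fisher information for the intercept,
    # B = sum over clusters of the squared within-cluster score sums (y_i - p).
    A = total_n * p * (1 - p)
    B = sum((c - n * p) ** 2 for c, n in clusters)
    se_alpha = math.sqrt(B) / A

    # Back-transform the CI limits from the log-odds to the probability scale.
    lo = 1 / (1 + math.exp(-(alpha - 1.96 * se_alpha)))
    hi = 1 / (1 + math.exp(-(alpha + 1.96 * se_alpha)))
    return p, lo, hi
```

Computing the interval on the log-odds scale and back-transforming keeps the limits inside (0, 1), which matters when prevalence is as low as a few hundred per 100,000.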
Two types of logistic regression model are recommended for the analysis of a TB prevalence survey, both of which allow for the clustering in the sampling design. These are: (1) logistic regression,
with robust standard errors calculated from observed between-cluster variability and (2) random-effects logistic regression, in which a parameter for between-cluster variation in pulmonary TB
prevalence is included in the probability model.
Random-effects logistic regression models may be preferred for quantifying the association between risk factors and pulmonary TB prevalence, because they provide a full probability model for the data
including the between-cluster variability in true TB prevalence. However, the estimation process used in these models produces a “shrunken” point estimate of the overall nationwide pulmonary TB
prevalence that is too low because it is calculated as a geometric, and not arithmetic, mean of the observed cluster-specific prevalence values. Therefore, robust standard error logistic regression
models, which are “population-average” models within a generalised estimating equations framework, are preferred for the overall estimation of nationwide pulmonary TB prevalence.
To estimate overall pulmonary TB prevalence, it is recommended to use 3 methods of analysis in total, one of which does not account for missing data and two of which attempt to correct for bias due
to missing data. In Figure 5, we place all three methods within the framework set out in a recent paper that considers the combination of inverse probability weighting (IPW) and multiple imputation
(MI), with the analysis divided into two stages [17]. Method 1 is equivalent to CC/CC (complete-case approach for both Stage 1 and Stage 2), Method 2 is MI/MI, to indicate it relies completely on
multiple missing value imputation, and Method 3 is IPW/MI, to indicate it combines inverse probability weighting (for Stage 1) with multiple imputation (for Stage 2).
Figure 5. Methods 1-3, placed within a conceptual framework for analytical methods that attempt to correct for bias introduced by missing data.
Method 1 (complete-case or CC/CC)
This method uses a logistic regression model with robust standard errors and no missing value imputation; the analysis is restricted to survey participants (=N[2] in Figure 4), and also excludes individuals who were eligible for sputum examination but whose smear and/or culture results are missing. Individuals who were not eligible for sputum examination are assumed not to have pulmonary TB,
unless their chest X-ray was later found to be suggestive of TB based on a reading at “central level” by an experienced radiologist – in which case they are also excluded from the analysis. The model
does not account for variation in the number of individuals per cluster, or correlation among individuals in the same cluster, when estimating the point prevalence of pulmonary TB. Equal weight is
given to each participating individual in the sample. However, the model does correct for clustering (by using the observed between-cluster variation) when estimating the 95% CI, and can control for
stratification in the sampling design. This method corresponds to a classical individual-level analysis of a survey, in the case that one does not need to adjust for sampling weights. TB prevalence
surveys are designed to be “self-weighted”, with each individual in the population having the same probability of selection into the sample [4] and thus the same “weight” in the analysis. Among
participants, this method always underestimates true TB prevalence – because data on pulmonary TB are missing only among individuals who were eligible for sputum examination, who have a relatively
higher probability of being a TB case compared with those not eligible. Differential participation in the survey by cluster, age group, and sex may either exacerbate or reduce this bias.
Method 2 (MI/MI)
This method uses a logistic regression model with robust standard errors, with missing value imputation for survey non-participants as well as participants, and includes all individuals who were
eligible for the survey in the analysis (=N[1] in Figure 4). Multiple missing value imputation (additional details below) is used for all individuals: a) without a field chest X-ray result and/or
symptom screening – which includes all individuals who did not participate in the survey, b) with a field chest X-ray reading that the survey protocol stated should also be read at central level, but
missing the central reading, c) eligible for sputum examination but whose status as a pulmonary TB case is unknown due to missing smear and/or culture results and d) ineligible for sputum
examination, but with a central X-ray reading that was suggestive of TB, whose status as a pulmonary TB case is thus unknown. This method allows for both the clustering in the sampling design and the
uncertainty introduced by imputation of missing values when estimating the 95% CI for the prevalence of pulmonary TB.
Method 3 (IPW/MI)
The third method is also a logistic regression model with robust standard errors, with missing value imputation done among the subset of survey participants who were eligible for sputum examination
but for whom smear and/or culture results were missing, and inverse probability weighting applied to all survey participants. This method aims to represent the whole of the survey eligible population
(=N[1] in Figure 4), but the weights are applied only to individuals who participated in the survey. An individual is considered to have participated in the survey if they were screened by both chest
X-ray and symptoms, or they refused or were exempted from X-ray screening but provided sputum samples for TB diagnosis (=N[5] in Figure 4). Missing value imputation is used for individuals eligible
for sputum examination (=N[6] in Figure 4), plus individuals who were not eligible for sputum examination but whose chest X-ray was read as suggestive of TB at central level, for whom data on one or
more of the central chest X-ray reading, symptom questions, and smear and/or culture results were not available. Inverse probability weighting is then used to correct for differentials in
participation in the survey by age, sex, and cluster. This is considered the “safer” method compared with Method 2 because a smaller amount of missing data is imputed. This means that if the
imputation model is misspecified, the bias in the resulting estimates will be smaller.
Missing value imputation: key concepts in the context of TB prevalence surveys
Three main types of missing data mechanism have been distinguished in the literature [7,8]; we explain them below in the context of data being missing for the primary outcome variable, prevalent
pulmonary TB.
(i) Missing completely at random (MCAR): no adjustment required
Data are MCAR if the probability that an individual has missing data on the outcome, pulmonary TB, is NOT related to either a) the value of the outcome (that is, TB case yes or no) or b) an
individual characteristic that is a risk factor for the outcome (for example age, sex, stratum, cluster, TB symptoms). In this case, analysis can be restricted to individuals who DO participate fully
in the survey, and an unbiased estimate of the true overall prevalence of pulmonary TB in the population will be obtained. In other words, the (probabilistic) sampling design itself automatically
allows for “completely at random” missing data.
(ii) Missing at random (MAR): missing value imputation required
In the context of a TB prevalence survey, data are MAR if two conditions are fulfilled. First, the probability that an individual has missing data for the outcome variable of pulmonary TB (yes or no)
is related to individual characteristics such as age, sex, stratum, TB symptoms, and the field chest X-ray reading. Second, within groups of individuals who are the same for age, sex, stratum, TB
symptoms, and field chest X-ray reading, the probability of data being missing on the outcome variable is not associated with its value (that is, pulmonary TB case yes or no).
If data are MAR, the observed prevalence of pulmonary TB can be used to predict TB (yes or no) for individuals for whom data are missing, provided this is done with stratification on at least an
individual’s age, sex, area of residence, TB symptoms, and field chest X-ray reading. Having done this, an unbiased estimate of the true overall prevalence of pulmonary TB in the population can be obtained.
(iii) Missing not at random (MNAR): missing value imputation and also sensitivity analysis required
Data are MNAR if the probability of an individual having missing data on the outcome variable (that is, TB case yes or no) is different for individuals who have pulmonary TB compared with individuals
who do not have pulmonary TB, even after post-stratification of individuals using characteristics that are known to be risk factors for pulmonary TB (such as area of residence, age, sex). If data are
MNAR, it is not possible to correct the estimate of pulmonary TB prevalence simply by using missing value imputation based on the patterns in the observed data. Instead, a sensitivity analysis is
required (see below), which is an area of on-going research [19].
The observed data themselves cannot be used to distinguish between MAR and MNAR. Missing value imputation is implemented under the assumption that data are MAR.
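The practical consequence of the MAR assumption can be illustrated with a small simulation. All numbers below (prevalence by symptom status, missingness rates) are hypothetical; the point is that a complete-case estimate is biased when missingness depends on a risk factor for TB, while imputing within strata of that risk factor recovers the true prevalence.

```python
import random

random.seed(1)

# Hypothetical population: TB prevalence is higher among symptomatic
# individuals, and symptomatic individuals are also more likely to have
# a missing result (MAR: within each symptom group, missingness is
# unrelated to TB status).
pop = []
for _ in range(200_000):
    symptomatic = random.random() < 0.10
    tb = random.random() < (0.05 if symptomatic else 0.002)
    missing = random.random() < (0.40 if symptomatic else 0.05)
    pop.append((symptomatic, tb, missing))

true_prev = sum(tb for _, tb, _ in pop) / len(pop)

# Complete-case estimate: biased low, because the symptomatic
# (high-prevalence) group is under-represented among observed results.
obs = [(s, tb) for s, tb, m in pop if not m]
cc_prev = sum(tb for _, tb in obs) / len(obs)

# MAR-valid estimate: impute within symptom strata (stratum-mean
# imputation, for illustration only).
est = 0.0
for s in (True, False):
    stratum_n = sum(1 for ss, _, _ in pop if ss == s)
    stratum_obs = [tb for ss, tb in obs if ss == s]
    est += (sum(stratum_obs) / len(stratum_obs)) * stratum_n
est /= len(pop)
# cc_prev underestimates true_prev; est corrects most of the bias
```

The same logic explains why the imputation model must include every variable that predicts both the outcome and missingness: omitting "symptomatic" here would leave the complete-case bias uncorrected.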
Missing value imputation: recommended approach to implementation
Method 2
In a TB prevalence survey, it is usually the case (based on experience to date) that age, sex, stratum, and cluster are known for all (or almost all) eligible individuals, while there will be missing
data on TB symptoms, field and central chest X-ray readings, smear and culture results, and the primary outcome of pulmonary TB.
It is essential to start by exploring the extent to which data are missing, in order to understand the possible biases that may result from an analysis that is restricted to survey participants and
to choose imputation models that make the MAR assumption plausible. The following three variables should be summarized: the proportion of eligible individuals who participated in the symptom and
chest X-ray screening; the proportion of those with two sputum samples among people eligible for sputum examination; and the proportion with smear and culture results from 0, 1 or 2 sputum samples.
These summaries should be done overall, and be broken down by individual risk factors for pulmonary TB such as age group, sex and stratum – in order to know which individual characteristics are
predictors of missingness.
Missing value imputation is done using regression models in a procedure called “imputation by chained equations”, and can be implemented using standard statistical software packages such as Stata,
SAS, and R [20-22]. For example, in the statistical package Stata this is done using the ice (“imputation by chained equations”) command [23]. Additional file 1 explains, step-by-step, how the
imputation is implemented to create a single imputed dataset. As recently set out in a paper that provides general guidance on the use of multiple imputation [16], key principles to observe when
specifying the imputation model are: (1) it must include all explanatory variables to be investigated as risk factors at the analysis stage, and the outcome variable itself; (2) to make the MAR
assumption plausible it “should include every variable that both predicts the incomplete variable and predicts whether the incomplete variable is missing”; (3) including variables that are predictors
of the incomplete variable, whether or not they also predict missingness, will give better imputations; and (4) including variables that are predictors of missingness, whether or not there is
statistical evidence they are predictors of the incomplete variable, helps to limit the potential for bias.
Additional file 1. Multiple missing value imputation for analysis of pulmonary TB prevalence.
Our recommendation, following from this, is as follows. The outcome variable in a TB prevalence survey is pulmonary TB; sputum smear and culture results, the field and central chest X-ray reading,
and TB symptoms are used in combination to define if an individual has pulmonary TB (see Additional file 1 for more detail). Thus all of these variables must be included in the imputation models.
Individual characteristics that are established predictors of pulmonary TB (e.g. age, sex) and/or predictive of data being missing (e.g. age, sex, stratum) should be considered for inclusion in the
imputation models, as illustrated in Additional file 1. The strongest predictors of pulmonary TB and/or missingness (age, sex, stratum) should always be included in the imputation models for TB
symptoms, field X-ray reading, and smear and culture positivity. At the same time, the choice of additional predictors (e.g. smoking and alcohol consumption) may need to be limited so as to avoid
severe collinearity, especially when imputing smear and culture results and the number of positive smear and culture results is small (though because imputation models are being used for predictive
purposes, moderate collinearity is not problematic). Including cluster as an explanatory variable in the imputation model with smear positivity (yes or no) as the outcome variable is not recommended,
because the number of individuals with a positive smear result is low relative to the number of clusters; this is true also for the imputation model with culture positivity (yes or no) as the outcome
variable. For outcomes that are more common, such as abnormal chest X-ray result (yes or no), including cluster as an explanatory variable in the imputation model may be appropriate.
The process described in Additional file 1 is repeated to create, for example, 10–20 imputed datasets (hence the terminology “multiple” missing value imputation). The number of imputed datasets
should be greater than or equal to the percentage of eligible individuals for whom data are missing [16]. To date, this percentage has been in the range 4-15% in TB prevalence surveys, and we
recommend that at least 20 imputed datasets are created.
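As a simplified sketch of this process, the function below creates m imputed copies of a dataset with a binary outcome, drawing each missing value from the observed outcome prevalence within the individual's stratum. Real analyses use regression-based chained equations with many covariates; this stratum-level Bernoulli draw is an illustrative stand-in (and it assumes every stratum has at least one observed outcome).

```python
import random

def impute_binary(records, m=20, seed=0):
    """Create `m` imputed copies of records with a possibly-missing
    binary outcome.

    Each record is (stratum, outcome) with outcome in {0, 1, None}.
    Missing outcomes are drawn from a Bernoulli distribution whose
    probability is the observed outcome prevalence within the record's
    stratum -- a simplified stand-in for regression-based chained
    equations. Assumes each stratum has >= 1 observed outcome.
    """
    rng = random.Random(seed)

    # The "imputation model": observed prevalence per stratum.
    probs = {}
    for stratum in {s for s, _ in records}:
        observed = [y for s, y in records if s == stratum and y is not None]
        probs[stratum] = sum(observed) / len(observed)

    # Drawing (not deterministically filling) the missing values is what
    # lets the between-imputation variance reflect imputation uncertainty.
    datasets = []
    for _ in range(m):
        imputed = [(s, y if y is not None else int(rng.random() < probs[s]))
                   for s, y in records]
        datasets.append(imputed)
    return datasets
```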
The overall prevalence of pulmonary TB is calculated for each imputed dataset. The national-level pulmonary TB prevalence estimate is then calculated as the average of the pulmonary TB prevalence
values from each imputed dataset, with a 95% CI that takes into account both the sampling design and the uncertainty due to missing value imputation. In Stata, this can be done using the mim or mi
commands [23].
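The pooling performed by the mim/mi commands follows Rubin's rules, which can be written out directly. The sketch below uses a normal (1.96) approximation for the interval; production implementations use a t reference distribution with the appropriate degrees of freedom.

```python
import math

def rubin_combine(estimates, variances):
    """Combine per-imputation estimates using Rubin's rules.

    `estimates` and `variances` have one entry per imputed dataset: the
    point estimate of prevalence and its squared standard error (which
    should already reflect the cluster sampling design). Returns the
    pooled estimate and an approximate 95% CI that incorporates both
    within- and between-imputation variability.
    """
    m = len(estimates)
    q_bar = sum(estimates) / m                               # pooled estimate
    w = sum(variances) / m                                   # within-imputation variance
    b = sum((q - q_bar) ** 2 for q in estimates) / (m - 1)   # between-imputation variance
    total_var = w + (1 + 1 / m) * b
    se = math.sqrt(total_var)
    return q_bar, q_bar - 1.96 * se, q_bar + 1.96 * se
```

The (1 + 1/m) factor inflates the between-imputation component to account for using a finite number of imputations, which is one reason why too few imputed datasets yields intervals that are too narrow.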
Method 3
Multiple imputation is an efficient method for accounting for missing data, provided the imputation models are specified appropriately [8,16,24]. An alternative approach is to use a combination of
multiple imputation (MI) and inverse probability weighting (IPW) [17]. With this approach, imputation is used to fill in missing values only among individuals who participated fully in the survey (N[5] in Figure 4).
Survey participants can be divided into two groups, eligible or ineligible for sputum examination. Individuals who were ineligible for sputum examination are assumed not to have pulmonary TB, unless
they had a normal field chest X-ray reading but an abnormal central chest X-ray reading. For those eligible for sputum examination (N[6] in Figure 4, and additionally individuals with a normal field
chest X-ray reading but abnormal central chest X-ray reading), multiple imputation is used to fill in missing data, in exactly the same way as described for Method 2 above (including using the same
variables in the imputation models). Each of the imputed datasets is then combined with the data on individuals who were ineligible for sputum examination, to give (for example) 20 imputed datasets
that include all individuals who participated fully in the survey.
For each imputed dataset, a point estimate and 95% CI for population pulmonary TB prevalence is then calculated, using logistic regression with robust standard errors and weights. Weights are
calculated for each combination of cluster, age group, and sex. This is done by a) counting the number of eligible individuals in each combination of cluster, age group, and sex (N[ijk], for cluster
i, age group j, sex k) and b) counting the number of survey participants in each combination of cluster, age group, and sex (n[ijk]). The weight for each individual is then equal to N[ijk] / n[ijk],
for the particular combination of cluster/age group/sex that they are in, with n[ijk] / N[ijk] being the probability that the sampled individual participates in the survey – hence the name “inverse
probability weighting”. It is essential to include either the weights or the covariates that predict the weights in the imputation model [17]. We include age group, sex, and stratum (area of
residence) in all imputation models. An average of the estimates of pulmonary TB prevalence from each of the imputed datasets is then calculated, together with a 95% CI. In Stata, this can be done
using the mim and svy commands.
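The weight calculation itself is simple bookkeeping. The sketch below (with hypothetical helper names) counts eligible individuals and participants within each cluster/age-group/sex cell and returns the N[ijk]/n[ijk] weight for every participant. A useful check is that the weights sum to the size of the eligible population whenever every cell contains at least one participant.

```python
from collections import Counter

def ipw_weights(eligible, participants):
    """Inverse-probability-of-participation weights.

    `eligible` and `participants` are lists of (cluster, age_group, sex)
    tuples, one per individual. Each participant's weight is
    N[ijk] / n[ijk] for their cluster/age-group/sex cell, so the
    weighted participant sample represents the eligible population.
    """
    N = Counter(eligible)      # eligible individuals per cell
    n = Counter(participants)  # participants per cell
    return [N[key] / n[key] for key in participants]

def weighted_prevalence(outcomes, weights):
    """Weighted prevalence among participants (outcomes are 0/1)."""
    return sum(w * y for w, y in zip(weights, outcomes)) / sum(weights)
```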
An advantage of using IPW combined with MI, rather than just MI, is that it is relatively simple and transparent to calculate the probability of survey participation by cluster, age group and sex,
compared with adjusting for non-participation through the use of a multivariable imputation model [17,24]. However, an important assumption remains, which is that after post-stratifying on cluster,
age, and sex, the prevalence of pulmonary TB is the same in survey participants and non-participants.
Comparing results across Methods 1–3
If point estimates of pulmonary TB prevalence and their confidence intervals vary greatly among Methods 1–3, it is essential to try to understand the reasons for the differences and the results of
the survey should be interpreted within these limitations. Method 1 introduces biases, as explained above, so it is not surprising if it provides a prevalence estimate that is different to the one
obtained from Methods 2 and 3. If the prevalence estimates from Methods 2 and 3 are considerably different, this may be due to misspecification of the imputation models used in Method 2.
Sensitivity analysis: a simple method
A simple way to implement a sensitivity analysis is to use as a starting point the imputed datasets that were created using Method 2.
For an “extreme” situation in which there are 0 pulmonary TB cases among non-participants, the prevalence of pulmonary TB is estimated simply as the observed number of pulmonary TB cases divided by
the total eligible survey population. For an opposite “extreme” in which the risk of pulmonary TB is twice as high among non-participants as in participants (within sub-groups defined by stratum, age
group, sex, and other variables included in the imputation model for pulmonary TB), the number of pulmonary TB cases among non-participants is estimated for each imputed dataset as 2t[i], where t[i]
is the number of pulmonary TB cases that were imputed in the i^th imputed dataset. Then the overall pulmonary TB prevalence is calculated as the average of the 2t[i] values, plus the number of
pulmonary TB cases among survey participants, divided by the total eligible survey population.
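These two extremes can be computed directly from quantities already available after Method 2. The function below (illustrative names) returns the lower and upper sensitivity bounds, where t[i] is the number of TB cases imputed among non-participants in the i-th imputed dataset.

```python
def sensitivity_bounds(cases_participants, n_eligible, imputed_case_counts):
    """Bound prevalence under two extreme assumptions about non-participants.

    Lower bound: zero TB cases among non-participants, so prevalence is
    the observed case count over the full eligible population.
    Upper bound: non-participant risk is twice that of participants,
    i.e. 2 * t_i imputed cases per dataset, averaged across datasets.
    """
    lower = cases_participants / n_eligible
    avg_doubled = (sum(2 * t for t in imputed_case_counts)
                   / len(imputed_case_counts))
    upper = (cases_participants + avg_doubled) / n_eligible
    return lower, upper
```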
Simulation studies to assess the performance of Methods 1–3
Simulation studies were done for 4 plausible scenarios through which missing data could be generated in TB prevalence surveys. We explored missingness of data on the outcome of prevalent TB by age,
sex, stratum and cluster. We chose these four variables on the basis that they are associated both with the outcome and the reason for missingness [16]. Across the 50 clusters in the 2007 Philippines
survey, the minimum number of individuals aged ≥10 years old for whom data on all of age, sex, stratum (urban, rural, the capital city), cluster, field and central chest X-ray readings, TB symptoms, and smear
and culture results, were complete was 190. In order to create a dataset in which the number of individuals in each cluster was the same, all TB cases in each cluster and a random sample of non-TB
cases were selected to create a dataset of 9500 individuals, i.e. 190 in each of 50 clusters. In this dataset, TB prevalence was 1263 per 100,000 (120/9500).
Missing values were then introduced into this dataset to create 1000 datasets with missing data on the field chest X-ray reading and TB symptoms, and smear and culture results, for each of the
following 4 scenarios:
1. Differential participation by age group, sex, and stratum (n=3), with overall participation approximately 90%; 15% of smear and culture results missing completely at random among individuals
eligible for sputum examination; overall, 19% of eligible individuals with missing data on pulmonary TB.
2. Differential participation by age group, sex, and cluster (n=50), with overall participation approximately 90%; 15% of smear and culture results missing completely at random among individuals
eligible for sputum examination; overall, 20% of eligible individuals with missing data on pulmonary TB.
3. As for 2, but among individuals eligible for sputum examination, the probability of missing smear and culture results varied among the 3 strata; overall, 20% of eligible individuals with missing
data on pulmonary TB.
4. As for 2, but among individuals eligible for sputum examination the probability of missing smear and culture results varied among the 50 clusters; overall, 20% of eligible individuals with missing
data on pulmonary TB.
An example analysis using the dataset from the 2007 survey in the Philippines
To illustrate the 3 methods of analysis outlined above, we use the 2007 national TB prevalence survey in the Philippines. In this example, the eligible survey population was individuals aged ≥10
years old, which is different from the current WHO recommendation for the survey population to consist of individuals aged ≥15 years old [4]. However, the analytical approach and presentation of
results remain the same.
Overall, participation was high at 90% of eligible individuals, though it was higher in rural and urban areas than in the capital city, lower among 20–39 year olds than other age groups, and the
age-pattern of survey participation differed between men and women (data not shown). Additional details about the survey are provided elsewhere [9].
Comparison of results across Methods 1–3
Results for the prevalence of pulmonary TB are summarised in Table 1, and the observed distribution of cluster-level pulmonary TB prevalence is shown in Figure 6. From the cluster-level analysis, the
estimate of the prevalence of pulmonary TB is 663 per 100,000 population, with a 95% confidence interval of [516–810], with Method 1 giving an almost identical estimate and 95% confidence interval.
Method 2 gives the same point estimate of pulmonary TB prevalence but with a slightly narrower confidence interval.
Table 1. Prevalence of pulmonary TB (per 100,000 population) in the Philippines 2007 national TB prevalence survey
Figure 6. Distribution of cluster-level prevalence of bacteriologically-confirmed pulmonary TB among 50 clusters, Philippines, 2007.
The point prevalence estimate of pulmonary TB from Method 3, combining multiple imputation with inverse probability weighting, is slightly higher than the estimates from Methods 1 and 2, at 680 per
100,000 and with a slightly wider confidence interval.
Among survey participants, multiple imputation of missing smear and culture results increases the estimate of the prevalence of pulmonary TB from 660 to 670 per 100,000. This is a relatively small
increase, reflecting that among individuals eligible for sputum examination the proportion with missing data on smear and/or culture results was very low. Using inverse probability weighting to
account for differentials in survey participation by cluster, age group, and sex increases the prevalence estimate from 670 to 680 per 100,000.
Overall, the cluster-level analysis and the results from each of Methods 1, 2, and 3 show that the best estimate of pulmonary TB prevalence is of the order of 660 – 680 per 100,000 population among
individuals aged ≥10 years old, with the 95% CIs spanning 516 to 830 per 100,000 population.
Sensitivity analysis
A sensitivity analysis in which pulmonary TB prevalence among non-participants ranges from 0 to being twice as high as among participants, gives a range of the point estimate of pulmonary TB
prevalence from 595 to 731 per 100,000 population, compared with the estimate from Methods 1 and 2 of 660 per 100,000.
Simulation studies to assess the performance of Methods 1–3: results
For all of scenarios 1–4, we analysed each of the 1000 datasets using Methods 1, 2 and 3. For both Methods 2 and 3, 20 imputed datasets were created for each of the 1000 “starting” datasets.
Simulation results showed that for all 4 scenarios, Method 1 underestimated TB prevalence by an average of approximately 9%, with prevalence estimates lower than the true value of 1263 per 100,000
for 97% of the Scenario 4 simulations. Method 2 overestimated TB prevalence by an average of around 1.5%, while Method 3 estimated TB prevalence to an average that was within 1% of the true value.
Details of the results are summarised in Table 2 and Figure 7.
Table 2. Simulation study results, for 4 scenarios of how missing data could arise in a prevalence survey
Figure 7. Density plots of simulated data series. Density plots of the distribution of prevalence estimates calculated from simulation study data series. Dashed vertical line represents the “true”
level of prevalence.
We recommend that the method that uses the cluster as the unit of analysis should remain the first step in the analysis of a TB prevalence survey [6], as it is a simple method of analysis that has
the advantage of being very transparent. It also requires a careful description of the variation in observed cluster-level pulmonary TB prevalence, which is an important feature of the data that
should be described well and summarized graphically. However, it has exactly the same limitations in terms of bias as an individual “complete-case” analysis (Method 1). In our simulation studies,
Method 1 underestimated TB prevalence by an average of about 9%. It is thus essential that a cluster-level analysis is followed by individual-level analyses, initially restricted to individuals for
whom data on the primary outcome of pulmonary TB is complete (a “complete-case analysis”), and then extended through missing value imputation to include all eligible individuals.
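The downward bias of a complete-case analysis can be illustrated with a tiny simulation (invented here, and far cruder than the paper's Scenarios 1–4): if cases are more likely than non-cases to be missing a smear/culture result, restricting the analysis to complete cases shrinks the estimate below the truth. All numbers below other than the true prevalence are made up for illustration.

```python
import random

random.seed(1)
TRUE_PREV = 0.01263          # the simulations' true value, 1263 per 100,000
N = 500_000                  # hypothetical survey-sized population

observed_cases = 0
observed_total = 0
for _ in range(N):
    y = 1 if random.random() < TRUE_PREV else 0
    # Invented missingness mechanism: cases are three times as likely
    # as non-cases to lack a smear/culture result.
    p_missing = 0.30 if y == 1 else 0.10
    if random.random() >= p_missing:
        observed_total += 1
        observed_cases += y

complete_case = 100_000 * observed_cases / observed_total
print(round(complete_case))  # falls well below the true 1,263
```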
Following a general recommendation [7], it is important to present both the complete-case analysis (Method 1) and an analysis that attempts to correct for bias introduced by missing data. Following
recent work on using a combination of IPW and MI [17], it is also recommended to always compare the approach that uses multiple imputation for all eligible individuals (Method 2) with the more
conservative approach that uses multiple imputation only among survey participants and uses IPW to account for differences between participants and non-participants (Method 3). Our simulation studies
show that Methods 2 and 3 both perform well, but that Method 3 is slightly better.
Overall, we recommend Method 3, inverse probability weighting combined with multiple imputation of missing data among individuals eligible for sputum examination, as the method that provides the
safest approach and the single best estimate of population pulmonary TB prevalence.
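To make the weighting step in Method 3 concrete, here is a minimal sketch (invented for illustration; not the authors' code, and the function and variable names are our own) of how an inverse-probability-weighted prevalence estimate is formed: each participant is weighted by the reciprocal of their estimated probability of participation, so groups that participated less count for more.

```python
def ipw_prevalence(outcomes, participation_probs):
    """Inverse-probability-weighted prevalence per 100,000.

    outcomes: 1 if the participant is a pulmonary TB case, else 0.
    participation_probs: estimated probability that a person like this
    participant (e.g. same cluster, age group, and sex) took part.
    """
    weights = [1.0 / p for p in participation_probs]
    weighted_cases = sum(w * y for w, y in zip(weights, outcomes))
    return 100_000 * weighted_cases / sum(weights)

# Toy data: the case in the lowest-participation group pulls the
# weighted estimate above the unweighted ("complete-case") proportion.
y = [1, 0, 0, 0, 1, 0]
p = [0.9, 0.9, 0.8, 0.8, 0.7, 0.7]
estimate = ipw_prevalence(y, p)
```

With equal participation probabilities the weighted estimate reduces to the plain proportion, which is a useful sanity check on any implementation.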
CC: Complete case; CI: Confidence interval; IPW: Inverse probability weighting; MAR: Missing at random; MCAR: Missing completely at random; MI: Multiple imputation; MNAR: Missing not at random; NTP:
National TB Programme; PPS: Probability proportional to size; TB: Tuberculosis; WHO: World Health Organization.
Authors’ contributions
SF, CS, and KF wrote the paper; all co-authors suggested edits and gave comments on drafts of the manuscript, and all approved the final version. SF and CS led the analytical work, with important
contributions from NY, RD, FM, PG, IO, and KF. RD, EB, ET, and FM contributed to the chapter in the WHO handbook on the analysis of TB prevalence surveys (2011). IO is the lead person in WHO for TB
prevalence surveys. JL and RV took key roles in the planning and implementation of the TB prevalence survey that was conducted in the Philippines during 2007.
Charalambos Sismanidis, Katherine Floyd, Ikushi Onozaki, and Philippe Glaziou are staff members of the World Health Organization. The authors alone are responsible for the views expressed in this
publication and they do not necessarily represent the decisions or policies of the World Health Organization.
The findings and conclusions in this manuscript are those of the authors and do not necessarily represent the official position of the Centers for Disease Control and Prevention.
Sign up to receive new article alerts from Emerging Themes in Epidemiology | {"url":"http://www.ete-online.com/content/10/1/10","timestamp":"2014-04-16T13:03:17Z","content_type":null,"content_length":"138045","record_id":"<urn:uuid:0241020a-7f9b-450c-974a-33fea2fb0af2>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00389-ip-10-147-4-33.ec2.internal.warc.gz"} |
Volume and Pi
Date: 11/10/97 at 12:00:19
From: Jake Lail
Subject: Volume and Pi
How do you find the volume of a cylinder that is 7.5mm high and has a
diameter of 4mm?
I haven't been able to figure out any part of this problem.
Thank You,
Jake Lail
Date: 11/12/97 at 23:02:19
From: Doctor Otavia
Subject: Re: Volume and Pi
Hi! I assume you're talking about a cylinder where the two ends are
parallel, resembling a straw. Let's examine what they look like, and
figure out the formula from there.
One way to think of a cylinder is to imagine an awful lot of circles
stacked one on top of another. In this case the base is a circle 4 mm
in diameter. It helps me to visualize problems, so I always imagine a
bunch of coasters in a stack.
We know how to find the area of a circle, which is pi*r^2 (r^2 means
r to the 2nd power, or r squared), and we know that the radius is half
the diameter, so we can find the area of the base.
We now know the area of one of the circles that is in the stack that
makes up the cylinder. This is great, because we know we have a stack
of these circles of area pi*r^2, with a stack (or cylinder) height of
7.5mm, so what you have to do now is multiply the area of the base,
which we know is pi*r^2 times the height, or h. The formula you get is
h * pi * r^2.
All you have to do is substitute the numbers you have in your problem,
that is, a height of 7.5mm and a diameter of 4mm (and don't forget,
the formula asks you for the radius, which is half the diameter), into
the formula, and you have the volume of your cylinder.
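Carrying the substitution out in code (a quick check of the arithmetic, not part of the original answer):

```python
import math

def cylinder_volume(height_mm, diameter_mm):
    """Volume of a right circular cylinder: h * pi * r^2."""
    radius_mm = diameter_mm / 2  # the formula wants the radius, not the diameter
    return height_mm * math.pi * radius_mm ** 2

# The cylinder in the question: 7.5 mm high, 4 mm in diameter.
volume = cylinder_volume(7.5, 4)
print(f"{volume:.2f} mm^3")  # about 94.25 mm^3
```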
Also, make sure you use the proper units, which in this case would be
mm^3, or cubic millimeters, because you're multiplying the height,
which is in mm, times the radius squared, which means mm *mm or mm^2,
so you end up with units of mm*mm^2, which ends up being mm^3. This
makes sense if you think about it, because cubic millimeters are a
unit of volume, and millimeters aren't (they just measure length), and
millimeters squared aren't either (because they just measure area).
I hope this helps. Good luck!
-Doctor Otavia, The Math Forum
Check out our web site! http://mathforum.org/dr.math/ | {"url":"http://mathforum.org/library/drmath/view/54977.html","timestamp":"2014-04-18T09:26:56Z","content_type":null,"content_length":"7281","record_id":"<urn:uuid:a9c2017f-76d7-4b0b-84b2-6f6fff5937bc>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00573-ip-10-147-4-33.ec2.internal.warc.gz"} |
Meeker, WA Algebra Tutor
Find a Meeker, WA Algebra Tutor
...I issue pre-lesson homework and follow-up homework, and I offer unlimited email support on those assignments at no additional cost. Be prepared to work hard and score high! Again, I'm
willing to set up a free trial session so you can see how I work.
16 Subjects: including algebra 2, algebra 1, Chinese, geometry
...It is my goal to make math an interesting, if not enjoyable subject. I am detail oriented and very focused on ensuring that whomever I am working with has a comprehensive, worthwhile and
enjoyable experience. I have worked as a laboratory chemist and as an instructor at Tacoma Community College for several years.
12 Subjects: including algebra 1, algebra 2, chemistry, geometry
...I have a Bachelor's degree in Mathematics and a Master's Degree in Secondary Math Education. I taught Algebra 2 in high school for 2 years and I have tutored many college students in College
Algebra (which typically covers the same topics as Algebra 2 and Pre-Calculus). I taught high school Geom...
16 Subjects: including algebra 2, algebra 1, GRE, geometry
...My past teaching experience includes four years instructing beginning and intermediate college astronomy laboratories, as well as individual student tutoring for those courses. I have found
that the best method for teaching is determined by paying attention to the learning styles and abilities o...
5 Subjects: including algebra 1, physics, precalculus, geometry
...My passion for teaching math stems from the fact that it was never easy for me. I always had to work hard to learn it and so I understand why it is frustrating, and nightmarish for most
people. I am currently a stay at home mom and would like to remain so for a bit longer.
6 Subjects: including algebra 2, algebra 1, geometry, prealgebra
Related Meeker, WA Tutors
Meeker, WA Accounting Tutors
Meeker, WA ACT Tutors
Meeker, WA Algebra Tutors
Meeker, WA Algebra 2 Tutors
Meeker, WA Calculus Tutors
Meeker, WA Geometry Tutors
Meeker, WA Math Tutors
Meeker, WA Prealgebra Tutors
Meeker, WA Precalculus Tutors
Meeker, WA SAT Tutors
Meeker, WA SAT Math Tutors
Meeker, WA Science Tutors
Meeker, WA Statistics Tutors
Meeker, WA Trigonometry Tutors
Nearby Cities With algebra Tutor
Alderton, WA algebra Tutors
Burnett, WA algebra Tutors
Cedarview, WA algebra Tutors
Crocker, WA algebra Tutors
Dieringer, WA algebra Tutors
Elgin, WA algebra Tutors
Firwood, WA algebra Tutors
Lake Tapps, WA algebra Tutors
Osceola, WA algebra Tutors
Ponderosa Estates, WA algebra Tutors
Puy, WA algebra Tutors
Rhododendron Park, WA algebra Tutors
Summit, WA algebra Tutors
Thrift, WA algebra Tutors
Wabash, WA algebra Tutors | {"url":"http://www.purplemath.com/Meeker_WA_Algebra_tutors.php","timestamp":"2014-04-21T13:04:50Z","content_type":null,"content_length":"23828","record_id":"<urn:uuid:f2246c32-7635-4ddb-a4e7-86fcc64e98e3>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00151-ip-10-147-4-33.ec2.internal.warc.gz"} |
Sometimes we have a direct translation from English to math, since the symbol ÷ abbreviates the phrase "divided by." None of that shady "together with" or "product of" business.
Like so: Five divided by three means 5 ÷ 3.
Here are some phrases that aren't quite as direct, but still straightforward. In other words, they may not make eye contact with you, but you still believe what they're saying.
With these phrases, the first number mentioned goes on top of the line, while the second number mentioned goes below the line. (The second number would totally win in a limbo competition.)
• The ratio of six and seven is 6/7.
• The quotient of seventy and thirteen is 70/13.
We can think of the word of as meaning either multiplication or division. How's that for confusing? Or, put another way, how's of that of for of confusing?
Sample Problem
What is one-third of seven?
If we translate of as multiplication, we get (1/3) × 7 = 7/3.
If we translate of as division, we get 7 ÷ 3 = 7/3.
When translating the word of, look at the numbers and other words involved to decide if it's more appropriate to translate as multiplication or division. Yes, you'll need to use some logical
reasoning here; it won't always be spelled out for you. When words like one-half, one-third, or one-fourth are floating around next to the of, you can think of this as multiplication by a fraction,
or you can choose to translate it as a division problem. We'll even spell it out for you: A D-I-V-I-S-I-O-N P-R-O-B-L-E-M.
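A quick numeric check (not part of the lesson) that the two translations of "one-third of seven" really agree:

```python
# "One-third of seven" translated two ways:
as_multiplication = (1 / 3) * 7   # one-third times seven
as_division = 7 / 3               # seven divided by three

# Both give the same value, about 2.33.
assert abs(as_multiplication - as_division) < 1e-12
print(as_multiplication)
```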
Division Practice:
One-half of nine is what?
One-sixth of twenty is what?
Translate the following English phrase into mathematical symbols, using multiplication or division as appropriate:
The product of four and five.
Translate the following English phrase into mathematical symbols, using multiplication or division as appropriate:
Eight times y.
Translate the following English phrase into mathematical symbols, using multiplication or division as appropriate:
Forty-two divided by fourteen.
Translate the following English phrase into mathematical symbols, using multiplication or division as appropriate:
Four-fifths of twelve.
Translate the following English phrase into mathematical symbols, using multiplication or division as appropriate:
Seven doubled.
Translate the following English phrase into mathematical symbols, using multiplication or division as appropriate:
Three by five.
Translate the following English phrase into mathematical symbols, using multiplication or division as appropriate:
The ratio of six and fourteen.
Translate the following English phrase into mathematical symbols, using multiplication or division as appropriate:
One-half of eleven.
Translate the following English phrase into mathematical symbols, using multiplication or division as appropriate:
The quotient of eleven and one hundred.
Translate the following English phrase into mathematical symbols, using multiplication or division as appropriate:
Thirty percent of 150. | {"url":"http://www.shmoop.com/word-problems/division-help.html","timestamp":"2014-04-16T16:08:10Z","content_type":null,"content_length":"45533","record_id":"<urn:uuid:fd7e8497-af08-42c1-986f-fa997bb7c8cd>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00004-ip-10-147-4-33.ec2.internal.warc.gz"} |
Pillai prime
Pillai prime
If for a given prime $p$ we can find an integer $n>0$ such that $n!\equiv-1\mod p$ but $p\not\equiv 1\mod n$, then $p$ is called a Pillai prime. These are listed in A063980 of Sloane’s OEIS. Sarinya
Intaraprasert proved that there are infinitely many Pillai primes. The first few are 23, 29, 59, 61, 67, 71, 79, 83, 109, 137, 139, 149, 193, …
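A brute-force search (a sketch for illustration, not from the entry) recovers the first few terms. Note that $n!\equiv 0\pmod p$ for $n\geq p$, and by Wilson's theorem $n=p-1$ always satisfies the factorial condition but fails the congruence condition, so only $n<p-1$ can make $p$ a Pillai prime.

```python
def is_prime(m):
    """Trial-division primality test, adequate for small m."""
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

def is_pillai(p):
    """True if some 0 < n < p has n! = -1 (mod p) and p != 1 (mod n)."""
    fact = 1
    for n in range(1, p):
        fact = fact * n % p                # fact is now n! mod p
        if fact == p - 1 and p % n != 1 % n:
            return True
    return False

pillai = [p for p in range(2, 100) if is_prime(p) and is_pillai(p)]
print(pillai)  # the terms below 100 listed above: 23, 29, 59, 61, 67, 71, 79, 83
```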
• 1 R. K. Guy, Unsolved Problems in Number Theory New York: Springer-Verlag 2004: A2
Testing Software Can Make it Easier to Prove Correctness
raph mentions Dijkstra's quote that testing can only show the presence of bugs, never their absence.
Implied in this is that if you develop a formal proof that a program is correct, then the testing becomes superfluous. But in fact, some types of testing can reduce the burden of developing the proof.
Instead of proving that a program is always correct, you prove a weaker condition. That is: if the program is correct in one case then it is correct in all cases. Then you write a test to establish
that the one case you did not prove actually works.
One specialized version of this technique is using a proof to establish the induction step of a proof by induction, and then writing a test in order to establish the base case.
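A toy illustration (my own, not from the original post): suppose we have proved on paper that the closed form n*(n+1)/2 is preserved from each n to the next (the induction step). That alone doesn't establish correctness for any concrete input; a one-line unit test discharges the base case, and induction does the rest.

```python
def gauss(n):
    """Iteratively sum 0 + 1 + ... + n."""
    total = 0
    for i in range(n + 1):
        total += i
    return total

# Paper proof of the induction step: IF gauss(k) == k*(k+1)//2 for some k,
# THEN gauss(k+1) == gauss(k) + (k+1) == (k+1)*(k+2)//2, by the loop structure.
# The test below establishes the base case k == 0.
assert gauss(0) == 0
```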
FAQ for STAT-L/SCI.STAT.CONSULT
FAQ for STAT-L/SCI.STAT.CONSULT, last modified on December 13, 2002
Compiled by Steve Simon, ssimon@cmh.edu.
"A further knowledge of facts is necessary before I would venture to give a final and definite opinion." Sherlock Holmes in The Adventure of Wisteria Lodge.
This FAQ is posted once a month to STAT-L/SCI.STAT.CONSULT. The FAQ now has two home pages:
http://www-personal.umich.edu/~dronis/statfaq.htm and
Variations and earlier versions of the FAQ can be found on other sites on the web. You are welcome to post all or part of this FAQ at your web site. Please don't modify it without my permission, and
please let me know where you are posting it.
Note: I use uppercase for certain items like e-mail addresses and listserver commands to help highlight them. You can, however, use upper or lower case (or even mixed case) for any of these items.
Table of contents
1 What is STAT-L/SCI.STAT.CONSULT?
2 What are other related listserv/usenet groups?
3 How do I know that my message got posted?
4 How do I use LISTSERV to...
5 How do I get the archives of STAT-L/SCI.STAT.CONSULT?
6 Why have I stopped seeing messages?
7 How can I contact the ASA, Biometric Society, or IMS?
8 How can I contact the major statistics software vendors?
9 Where can I find free/shareware statistical software?
10 What statistics resources can be found on the web?
11 What should I do about these "Spams"?
12 What are some of the problems with stepwise regression?
13 What is the answer to the Monty Hall, Envelope, or Birthday problem?
14 Can someone provide me with references and/or books about [topic]?
15 Can you recommend a good statistics software package?
16 Acknowledgments
1 What is STAT-L/SCI.STAT.CONSULT?
STAT-L and SCI.STAT.CONSULT are a combined LISTSERV/USENET group for the discussion of statistical consulting issues. Through the magic of Internet, any message posted on SCI.STAT.CONSULT also
appears on STAT-L. Any message posted on STAT-L appears on SCI.STAT.CONSULT. So you can follow all the fascinating questions and answers using either system.
We discuss statistical issues of all levels of difficulty, as well as statistical education, the practice of statistical consulting, and other related topics. We also like to debate some of the more
controversial issues in Statistics like the validity of the statistical models used in the Bell Curve book and the pitfalls of stepwise regression models.
Be sure to put your name and e-mail address at the end of your message. Some people have e-mail systems that strip headers from a message, making it impossible for them to reply directly to you.
If you have a question about a particular statistics package, you will probably get a faster and more accurate answer by posting the question on the list that specializes in a particular package
(e.g., SAS-L/COMP.SOFT-SYS.SAS or S-NEWS). Refer to the section "How can I contact the major statistics software vendors?"
We appreciate questions at a levels from beginner to expert. Sometimes, the beginner questions lead to some interesting discussions as to the subtle nuances in statistical consulting. If you want
advice on how to analyze some data, please include some context as to what your data means and what you are trying to investigate. No one can answer a question well that only says "Listed below is
some data. How do I analyze it?"
Be careful about advice on STAT-L/SCI.STAT.CONSULT. You'll find many people who are glad to help you, but you must realize the serious limitations of e-mail. There is no adequate substitute for
getting advice face-to-face with a professional, especially BEFORE collecting any data and BEFORE performing any experiments. Even the most experienced and wise Statisticians will be unable to make
sense out of a poorly designed study.
There are three types of messages that we discourage. First, try to avoid any overly commercial pitches, including posting your resume. On the other hand, we do like to hear about job openings,
especially ones that list starting salaries so we can bemoan how little we make on our current jobs. Postings of upcoming conferences are also acceptable.
Second, don't post your homework questions on here, even if you have permission to do so from your teacher. If you're looking for help on a thesis or dissertation, make sure that your advisor is aware
that you are seeking outside help.
Third, while we enjoy a spirited debate, please refrain from flaming and personal attacks. Although we have occasional lapses, this list has a generally high level of civility and politeness. Let's
keep it that way.
Here's some additional advice from Richard Ulrich for SCI.STAT.CONSULT folks.
If you are going to CROSS-POST to several groups, PLEASE send just one message in which you LIST THE SEVERAL GROUPS in the header. i) That way, when someone writes a response, it will show up in
EACH group where the question could be read, not just in one. ii) That way, when a person reads with a Threaded-newsreader, he will see your message just ONCE, instead of over and over.
2 What are other related LISTSERV/USENET groups?
There are two nice web resources for Statistics related lists:
http://www.ukc.ac.uk/php/mff/netres/statlist.html and
http://www.stattransfer.com/lists.html is a nice site sponsored by Circle Systems, makers of Stat/Transfer software. It uses web forms to allow you to subscribe and unsubscribe to STAT-L and a lot of
other lists.
SPECIAL WARNING!!! Please, please, please note that subscription requests go to the LISTSERV or MAILBASE address. If you send a subscription request to the list itself, it will be read by hundreds or
thousands of people, none of whom can get you subscribed. Some of these people will be annoyed enough at your naivete that they will introduce you to a concept known as "flaming".
ALBERT-GIFI -- The Albert Gifi mailing list discusses correspondence analysis, multidimensional scaling, nonlinear multivariate analysis, and optimal scaling
Subscriptions to: LISTSERV@JULIA.MATH.UCLA.EDU
How to subscribe: subscribe ALBERT-GIFI First-name Last-name
Post messages to: ALBERT-GIFI@JULIA.MATH.UCLA.EDU
ALLSTAT -- Discussions on this list are similar to STAT-L/SCI.STAT.CONSULT, but there is a decidedly British flavor to ALLSTAT and a more U.S. flavor to STAT-L/SCI.STAT.CONSULT. This is particularly
noticeable in the postings of meetings. ALLSTAT is a Mailbase system so it uses a slightly different syntax than the LISTSERV system.
Subscriptions to: MAILBASE@MAILBASE.AC.UK
How to subscribe: join ALLSTAT First-name Last-name
Post messages to: ALLSTAT@MAILBASE.AC.UK
Web info and FAQ: http://www.stats.gla.ac.uk/allstat/
Note: Contrary to previous information in this FAQ, you must include your name when subscribing. "Subscribe" can be substituted for "join," however. Here are some additional comments from Dr. Stuart
Young, the list owner.
Note also, that while Allstat does indeed have a "UK flavour" it is not a discussion list. It is a "broadcast system" for distributing notices. Discussions are not encouraged on the list -
replies go to the sender, not to the list.
CRSP-L -- Help With Center for Research in Security Prices (CRSP) Data Bases.
Subscriptions to: LISTSERV@TAMVM1.TAMU.EDU
How to subscribe: sub CRSP-L First-name Last-name
Post messages to: CRSP-L@TAMVM1.TAMU.EDU
Web info and FAQ: http://www-leland.stanford.edu/class/gsb/crsp/CRSP-L/
EDSTAT-L/SCI.STAT.EDU -- Statistics training and education issues.
Subscriptions to: LISTSERV@JSE.STAT.NCSU.EDU
How to subscribe: subscribe EDSTAT-L Firstname Lastname
Post messages to: EDSTAT-L@JSE.STAT.NCSU.EDU
MULTILEVEL -- This list is for people using multilevel analysis (multilevel modeling; hierarchical data analysis) and any associated software (e.g. MLn, HLM, VARCL, GENMOD). MULTILEVEL is a MAILBASE
system so it uses a slightly different syntax than the LISTSERV system.
Subscriptions to: MAILBASE@MAILBASE.AC.UK
How to subscribe: subscribe MULTILEVEL first-name last-name
Post messages to: MULTILEVEL@MAILBASE.AC.UK
PSYCHOMETRICS -- A new listserv has been established for graduate students to discuss theoretical and applied issues in psychometrics. Faculty and research scientists are, of course, welcome to
listen and offer insight.
Subscriptions to: majordomo@lists.stanford.edu
How to subscribe: subscribe PSYCHOMETRICS
Post messages to: psychometrics@lists.stanford.edu
SCI.STAT.MATH -- A more mathematical flavor can be found on this newsgroup, which sad to say, is not mirrored to any LISTSERVer.
SEMNET -- SEMNET is an open forum for ideas and questions about the methodology that includes analysis of covariance structures, path analysis, and confirmatory factor analysis.
Subscriptions to: LISTSERV@UA1VM.UA.EDU
How to subscribe: sub SEMNET first-name last-name
Post messages to: SEMNET@UA1VM.UA.EDU
Web info and FAQ: http://www.gsu.edu/~mkteer/semfaq.html
STEPS -- an e-mail discussion list for users of the STEPS (STatistics Education through Problem Solving) statistical software.
Subscriptions to: mailbase@mailbase.ac.uk
How to subscribe: join STEPS first-name last-name
Post messages to: steps@mailbase.ac.uk
Web info and FAQ: http://www.stats.gla.ac.uk/steps/
TEACHING-STATISTICS -- This list is for those concerned with the initial teaching of statistics in all phases of education. It will relate to the objectives of the journal Teaching Statistics and the
associated Trust, and will also enable discussion of how to make teaching and learning statistics more effective.
Subscriptions to: mailbase@mailbase.ac.uk
How to subscribe: join teaching-statistics first-name last-name
Post messages to: teaching-statistics@mailbase.ac.uk
Web info and FAQ: http://www.mailbase.ac.uk/lists/teaching-statistics/
3 How do I know that my message got posted?
First of all, be patient. It takes a while for your message to be posted. Internet is faster than the Post Office, but it isn't always instantaneous. There's nothing more annoying than seeing the
same messages posted again and again in a half hour time period by people who are unsure whether their messages got through. Please wait half a day or more before panicking.
Second, if you are having trouble posting, it is more likely than not a local problem. Check with your help desk or other local resource.
Third, no matter where you post your message from, if the message gets through, it will be added to two very nice USENET archives, AltaVista and DejaNews. Search for your message using the subject
line or a reasonably unique phrase in the message itself. This system is not instantaneous. Wait half a day or more before searching for your message. See the section "How do I get the archives of
STAT-L/SCI.STAT.CONSULT?" for the web address and other details about AltaVista and DejaNews.
Fourth, if you are using SCI.STAT.CONSULT, then you will eventually see a copy of your message, if it got posted. There are special USENET groups where you can practice sending test messages
(MISC.TEST or ALT.TEST). If you are a beginner, don't post to SCI.STAT.CONSULT until after you are comfortable posting to one of these test groups.
You will also see your message if you receive the digest from STAT-L.
If you receive individual messages rather than the digest from STAT-L, you will not see your own message when it is posted. The presumption is that you read it when you wrote it, so why would you
want to see it again?
You can change this default in two ways. Send an e-mail to LISTSERV@VM1.MCGILL.CA with a one line message: SET STAT-L REPRO to inform STAT-L that you wish it to send you back a copy of any message you
send in. Send a one line message: SET STAT-L ACK to inform STAT-L that you wish it to send a brief acknowledgment that your message has been sent to the list. Finally, send a one line message: SET
STAT-L NOREPRO if you want to go back to the default. Please note that all of these commands go to LISTSERV and not to STAT-L.
Finally, please note that not every question posted on STAT-L/SCI.STAT.CONSULT gets an answer. No one is getting paid for their time, so you need to appeal to their curiosity or their altruism. If no
one answered your question, maybe you need to ask the question differently?
4 How do I use LISTSERV to...
Before I discuss subscribing, changing digest options, etc., you should be aware of some resources that can help you with these problems.
There are two good web resources, the first specific to LISTSERV and the second a more general introduction that considers other systems such as mailbase:
If you are intimidated by sending commands to a listserver, check out
mentioned in section 2, which is a nice web resource for subscribing and unsubscribing to STAT-L and a lot of other lists.
Specific information about STAT-L is available at
...subscribe to STAT-L?
If you are using SCI.STAT.CONSULT, your USENET reader software should have a menu pick or a command that will allow you to subscribe to SCI.STAT.CONSULT. Every reader is different, so please consult
your help file or your local computer guru.
To subscribe to STAT-L, send a message to LISTSERV@LISTS.MCGILL.CA with a single line: SUB STAT-L First-name Last-name in the body of the text. Please be sure that you send the message to
LISTSERV@LISTS.MCGILL.CA and not to STAT-L@LISTS.MCGILL.CA. If you send your subscription request to STAT-L, hundreds of people will see your message and none of them will be able to subscribe you to
the list. Some in fact will flame you for not reading these instructions more carefully.
It's sort of like a newspaper which has a circulation desk and a letters-to-the-editor desk. If you want to start delivery of the paper you send it to the circulation desk. If you want to start
delivery of STAT-L, you send the request to LISTSERV. Sending a subscription request to STAT-L is like sending a letter to the editor that reads "Please start delivery of the Sunday paper to 1313
Mockingbird Lane"
...get the digest option turned on/off?
If you have no strong preference, the digest option (multiple messages compiled into a single mailing, usually daily) is less burdensome on Internet and creates fewer bounced messages for the list
administrator to deal with. The default when you sign up is for the digest option.
To cancel digest format and to receive the list as separate mailings, send the command SET STAT-L MAIL to LISTSERV@LISTS.MCGILL.CA.
To receive the list in digest format, send the command SET STAT-L DIGEST in the body of a message to LISTSERV@LISTS.MCGILL.CA. Again, please be sure that you send all of these types of messages to
LISTSERV@LISTS.MCGILL.CA and not to STAT-L@LISTS.MCGILL.CA.
...obtain a list of subscribers to STAT-L?
Send the command REVIEW STAT-L F=MAIL to LISTSERV@LISTS.MCGILL.CA. Send the command REVIEW STAT-L BY NAME F=MAIL to sort by name or REVIEW STAT-L BY COUNTRY F=MAIL to sort by country.
This list does not include subscribers to SCI.STAT.CONSULT, as they do not subscribe to the list the same way. I know of no way to obtain the list of subscribers to SCI.STAT.CONSULT.
...keep my name off of the list of subscribers?
Send a message to LISTSERV@LISTS.MCGILL.CA with a line in the body of the message reading SET STAT-L CONCEAL YES.
To reverse this, send the command SET STAT-L CONCEAL NO in the body of the message.
...stop mail from STAT-L (temporarily or permanently)?
Send a message to LISTSERV@LISTS.MCGILL.CA (again, please don't send the message to STAT-L@LISTS.MCGILL.CA).
To signoff permanently, include the line UNSUBSCRIBE STAT-L in the body of the message.
To temporarily suspend mail, use the line SET STAT-L NOMAIL and when you are ready to resume reading, use the line SET STAT-L MAIL or SET STAT-L DIGEST depending on your preference for individual
messages versus a daily digest.
What if my initial signoff command doesn't work?
This happens sometimes, particularly if your e-mail address changes, even slightly. The key thing to remember here is that only the list owner can help you with this. Sending a message to STAT-L will
not help much unless the list owner happens to be following STAT-L right at that moment.
I would recommend that you get a list of subscribers and see how your e-mail address looks to the system (see above for details). Some mail systems (like ELM) allow you to change the FROM field of a
message. If your mail system supports this, then try sending a message to LISTSERV and change the FROM field so it looks like it came from the original address. You could also ask your system
administrator to create a temporary (or permanent) alias name for you for outbound messages (including the necessary deviant domain part).
If none of the above works, or if it seems too complicated, don't panic. Every list has a human owner who can go in and unsubscribe you manually. You can find the e-mail address of the list owner on
the same list of subscribers that you just got (again, see above). When I last checked in August 1995, the list owner was * OWNER= MICHAEL@LISTS.MCGILL.CA (Michael Walsh, McGill University) *
(514-398-3680) Send a message directly to the list owner, explaining your problem. The list owner will manually unsubscribe you from STAT-L. Most lists now have the convention that
listname-REQUEST@hostname and OWNER-listname@hostname will be sent to the owner of the list. So for our list, you could send a message to STAT-L-REQUEST@LISTS.MCGILL.CA or
OWNER-STAT-L@LISTS.MCGILL.CA to resolve any problem where intervention of the list owner is needed.
5 How do I get the archives of STAT-L/SCI.STAT.CONSULT?
There are three ways to get archives of STAT-L/SCI.STAT.CONSULT. First, the LISTSERV software for STAT-L maintains monthly archive files back to 1994. Send the command INDEX STAT-L to
LISTSERV@VM1.MCGILL.CA to obtain a listing of these file names. Send the command GET filename filetype F=MAIL to receive a specific archive file.
You can also search the archives for keywords, but the syntax is a throwback to mainframe days. Here's an example of how to find statistics humor in previous postings. Send the following message to
LISTSERV@VM1.MCGILL.CA (not to STAT-L!):
// JOB Echo=No Database Search DD=Rules
//Rules DD *
Search jokes in stat-l Index
This will get you the following output:
Database STAT-L, 11 hits.
Index
Item #   Date      Time   Recs  Subject
------   ----      ----   ----  -------
002264 94/05/12 20:47 57 Re: anyone know a good stats joke...
002346 94/05/16 12:42 24 Re: heard any good stats jokes?
002352 94/05/12 16:42 29 Re: anyone know a good stats joke...
002374 94/05/17 00:39 34 Re: anyone know a good stats joke...
002387 94/05/17 17:16 30 Re: anyone know a good stats joke...
004886 94/10/11 09:36 49 Re: The charge of epistemological naivete
005643 94/11/07 17:45 59 Re: Political Correctness vs. Offensive topics of +
005664 94/11/08 11:32 36 Re: Political Correctness vs. Offensive topics of +
008101 95/03/02 14:58 116 us government censorship to the internet?
009133 95/04/18 04:56 90 --NEED HELP WITH EVALUATION--
021605 96/12/23 10:04 48 Re: Farms (STAT-L 21 Dec 1996)
Obviously only some of these are successful hits. For example, any message with the word "epistemological" in the title can't be humorous. To get the text of specific messages, send the following
syntax to LISTSERV@VM1.MCGILL.CA:
// JOB Echo=No Database Search DD=Rules
//Rules DD *
Search jokes in stat-l
Print all of 2264 2346 2352 2374 2387
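Since the job boilerplate is the same in both searches, a small helper can generate it for you. This is just a sketch; the function name `search_job` is my own, and the text it produces mirrors the two example jobs shown above (an Index search, and a Print request for specific item numbers).

```python
# Sketch: generating the mainframe-style LISTSERV database-job text
# shown above. The resulting string goes in the BODY of a message to
# LISTSERV@VM1.MCGILL.CA, not in the subject line.
def search_job(term, list_name="stat-l", item_numbers=None):
    lines = [
        "// JOB Echo=No Database Search DD=Rules",
        "//Rules DD *",
        f"Search {term} in {list_name}",
    ]
    if item_numbers:
        # Request the full text of specific hits
        lines.append("Print all of " + " ".join(str(n) for n in item_numbers))
    else:
        # Just list the matching items
        lines[-1] += " Index"
    return "\n".join(lines)

index_job = search_job("jokes")
print_job = search_job("jokes", item_numbers=[2264, 2346, 2352, 2374, 2387])
```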
Send the command GET LISTDB MEMO F=MAIL to LISTSERV@UGA.CC.UGA.EDU to get a full description of LISTSERV search functions (note that LISTSERV.VM1.MCGILL.CA does not have this file).
gopher://jse.stat.ncsu.edu/11/othergroups/statl/ is a gopher site that contains the archives of STAT-L. If you are still using gopher software, point it to jse.stat.ncsu.edu. This site has archives
going back to 1990. In case you were curious, there were 21 messages posted for the whole month of January 1990. Volume has picked up a bit since then.
http://www.reference.com also maintains an archive of STAT-L, other lists, USENET groups, and web discussion groups. I'm not sure how far back this archive goes.
Finally, archives of USENET messages, including messages for SCI.STAT.CONSULT are maintained at two sites, http://altavista.digital.com which apparently only goes back a month or so, and http://
www.dejanews.com going back to March 19, 1995. Follow the instructions at either site for restricting your search to just one newsgroup.
Some people may wish to prevent their postings from being added to these databases. If your posting contains a header looking like x-no-archive: yes, or if you place x-no-archive: yes as the first
line of the body text of your message, then your message will not be archived.
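The two forms of the marker can be added programmatically. Here is a minimal sketch using Python's standard email library; the function name `no_archive_message` is my own invention, and which form a given archive honors is up to the archive.

```python
# Sketch: adding the x-no-archive marker in either of the two forms
# described above -- as a message header, or as the first line of the
# message body.
from email.message import EmailMessage

def no_archive_message(body, use_header=True):
    msg = EmailMessage()
    if use_header:
        msg["X-No-Archive"] = "yes"  # header form
        msg.set_content(body)
    else:
        # body form: the marker must be the very first line of the text
        msg.set_content("x-no-archive: yes\n" + body)
    return msg

header_form = no_archive_message("Please don't archive this posting.")
body_form = no_archive_message("Please don't archive this posting.",
                               use_header=False)
```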
6 Why have I stopped seeing messages?
Nine times out of ten, the problem is at your site. If you aren't already good friends with the people who administer your Internet connection, now is a good time to start. These people will know
when the connection is running smoothly and when it is erratic.
Posting a test message to STAT-L/SCI.STAT.CONSULT is not likely to help. If you aren't seeing normal traffic, what makes you think that you will see your test message? Also, the people who read your
test message are not in a position to diagnose your problem. Only your new found friends who run your local Internet connection are in a position to diagnose your problem.
Your first step is to check one of the USENET archives described above (Altavista or Dejanews). If you see messages in either archive that are more than 48 hours old and which you have not received
at your local site (via either SCI.STAT.CONSULT or STAT-L), then you have a real problem.
There are some obvious self-diagnostic questions you should ask yourself. For STAT-L readers, ask yourself if you have received mail from other Internet sources. If not, then perhaps the problem is
bigger than STAT-L. Also for STAT-L readers, find out if your site has been bouncing back e-mail recently. The number one cause for not getting STAT-L mail is that the list administrator noticed a
bunch of bounced e-mail error messages and has de-activated your subscription.
To find out if you've been deactivated, send a message to LISTSERV@VM1.MCGILL.CA with QUERY STAT-L in the body of the message. Please make sure you send this to the LISTSERV address and not the
STAT-L address. Within a few hours, you should get a reply showing your status. If you don't get a response, that's a good sign that the listserver is down, which would mean that nobody is getting
messages from STAT-L. If you do get a response, here's what it might look like.
Distribution options for Steve Simon <ssimon@CMH.EDU>, list STAT-L: Ack= No, Mail= Digests, Files= Yes, Repro= No, Header= Short(BSMTP), Renewal= Yes, Conceal= No
If your account was de-activated, the response will be
You are not subscribed to the STAT-L list.
or your distribution option will be set to NOMAIL. In either case, work with your local Internet experts to fix the problem and then either re-subscribe or set the distribution option back to MAIL.
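If you check your status regularly, you can pull the option settings out of the reply mechanically. The sketch below parses the one-line QUERY response shown above into a dictionary; `parse_query_reply` is a name I made up, and the sample line is the one from this FAQ.

```python
# Sketch: parsing the one-line QUERY STAT-L reply into a dictionary,
# so a script can notice when the Mail option has been set to NOMAIL.
def parse_query_reply(line):
    # Everything after the first ": " is the comma-separated option list
    option_text = line.split(": ", 1)[1]
    options = {}
    for pair in option_text.split(", "):
        key, _, value = pair.partition("= ")
        options[key] = value
    return options

reply = ("Distribution options for Steve Simon <ssimon@CMH.EDU>, "
         "list STAT-L: Ack= No, Mail= Digests, Files= Yes, Repro= No, "
         "Header= Short(BSMTP), Renewal= Yes, Conceal= No")
opts = parse_query_reply(reply)
```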
By the way, don't complain to the list owner for de-activating your account. The typical listowner has to sort through hundreds or thousands of bounced message reports weekly, and the only way to
stop these bounced message reports is to de-activate accounts. The people who you need to talk to are your new found friends who maintain your Internet access.
Failure to receive messages is less common for SCI.STAT.CONSULT readers. If you are experiencing problems, the obvious thing to look for is whether any of the newsgroups are getting through. If
nothing is getting through, then you have a local problem. If you get postings from other newsgroups, then perhaps your server has decided not to carry SCI.STAT.CONSULT anymore. Either way, you have
to talk to your local Internet experts.
7 How can I contact the ASA, Biometric Society, or IMS?
American Statistical Association
1429 Duke St.
Alexandria, VA 22314-3402
Tel: 703-684-1221
FAX: 703-684-2036
E-M: asasinfo@amstat.org
Web: http://www.amstat.org
The International Biometric Society
808 17th Street, NW, Suite 200
Washington, DC 20006-3910
Tel: 202-223-9669
FAX: 202-223-9569
E-M: 75703.1407@compuserve.com
Web: http://www.stat.uga.edu/~lynne/symposium/biometric.html
Institute of Mathematical Statistics
3401 Investment Boulevard, Suite 7
Hayward, CA 94545
Tel: 510-783-8141 (Hazel Lowery)
FAX: 510-783-4131
E-M: HLLIMS@stat.berkeley.edu
Web: http://www.imstat.org
8 How can I contact the major statistics software vendors?
http://www.statistics.com/vendors.html, a web site maintained by Resampling Stats, Inc. has a very nice list of statistics software vendor information.
http://www.gsm.uci.edu/~joelwest/MacStats/ is a site for statistics software specific to the Macintosh.
Many of these companies have numerous locations and international distributors. I have only listed corporate headquarters to save space. If you can, check out the web site to get more detailed
information. Also please bear in mind that mergers and other business activity may quickly make parts of this list obsolete.
Finally, I need to repeat my earlier plea about listservers. Please, please, please note that subscription requests go to the LISTSERV or MAILBASE or MAJORDOMO address.
Aptech Systems, Inc. (GAUSS)
23804 SE Kent-Kangley Road
Maple Valley, WA 98038 USA
Tel: 206-432-7855
FAX: 206-432-7832
Web: http://www.aptech.com/
E-M: support@aptech.com (support) info@aptech.com (sales information)
GAUSS mailing list --
Subscriptions to: MAJORDOMO@ECO.UTEXAS.EDU
How to subscribe: subscribe GAUSSIANS
Post messages to: GAUSSIANS@ECO.UTEXAS.EDU
Automatic Forecasting Systems, Inc.(Autobox)
PO Box 563
Hatboro, PA 19040
Tel: 215 675-0652
Fax: 215 672-2534
Web: http://www.autobox.com
Circle Systems, Inc. (Stat/Transfer)
1001 Fourth Avenue, Suite 3200
Seattle, WA 98154
Tel: 206-682-3783
Fax: 206-328-4788
Web: http://www.stattransfer.com
E-M: stat-transfer@circlesys.com (General Information) sales@circlesys.com (Sales) support@circlesys.com (Customer Support)
Civilized Software, Inc. (MLAB)
8120 Woodmont Ave. #250
Bethesda, MD 20815 USA
Tel: 1-301-652-4714
Fax: 1-301-656-1069
Web: http://www.civilized.com
E-M: csi@civilized.com
Conceptual Software Inc. (DBMS/COPY)
9660 Hillcroft # 510
Houston, TX 77096
Tel: 713-721-4200
Fax: 713-721-4298
Web: http://www.conceptual.com/
E-M: eroberts@conceptual.com (General Information) eroberts@conceptual.com (Sales) hfeldman@conceptual.com (Customer Support)
Cytel Software Corporation (StatXact, LogXact, EaSt)
675 Massachusetts Ave.
Cambridge, MA 02139 USA
Tel: (617) 661-2011
Fax: (617) 661-4405
Web: http://www.cytel.com
E-M: sales@cytel.com
Data Description, Inc. (DATADESK)
Box 4555
Ithaca, NY 14853 USA
Tel: (607) 257-1000
FAX: (607) 257-4146
Web: http://www.datadesk.com/datadesk/
E-M: datadesk@datadesk.com
DataMost Corporation (STATMOST)
520 West 9460 South
Sandy, UT 84070 USA
Tel: (801) 255-5008
Fax: (801) 255-5009
Web: http://www.datamost.com
E-M: techsupp@datamost.com
Kovach Computing Services.(SIMSTAT, XLSTAT, MVSP)
Web: http://www.kovcomp.co.uk/
E-M: info@kovcomp.co.uk
Also see Provalis Research
Manugistics, Inc. (Statgraphics)
2115 East Jefferson St.
Rockville, MD 20852
Tel: 800-592-0050
Web: http://www.statgraphics.com/
E-M: sgsales@manu.com (sales) training@manu.com (training)
MathSoft, Inc. (MATHCAD, S-plus)
101 Main Street
Cambridge, MA 02142 USA
Tel: 617 577-1017
Fax: 617 577-8829
Web: http://www.mathsoft.com
E-M: ideas@mathsoft.com (comments and suggestions) support@mathsoft.com (Support, US or Canada) help@mathsoft.com (Support outside US/Canada) sales-info@mathsoft.com (Sales, US or Canada)
int-info@mathsoft.com (Sales outside US/Canada)
S-plus mailing list --
Subscriptions to: s-news-request@wubios.wustl.edu
How to subscribe: subscribe s-news
Post messages to: s-news@wubios.wustl.edu
web site: http://www.biostat.wustl.edu/s-news/
The MathWorks, Inc. (MATLAB)
24 Prime Park Way
Natick, MA 01760-1500 USA
Tel: (508) 653-1415
Fax: (508) 653-2997
Web: http://www.mathworks.com/home.html
E-M: info@mathworks.com (Sales, pricing, information) support@mathworks.com (Technical support) bugs@mathworks.com (Bug reports) suggest@mathworks.com (Product suggestions) service@mathworks.com
Minitab Inc.
3081 Enterprise Drive
State College, PA 16801 USA
Tel: 814 238-3280
Fax: 814 238-4383
Web: http://www.minitab.com
E-M: sales@minitab.com
Modern Microcomputers (MODSTAT)
7302 Kim Shelly Court,
Mechanicsville, VA 23111
Tel: 804 746-3882
Web: http://members.aol.com/rcknodt/pubpage.htm
E-M: RCKnodt@aol.com
NCSS Statistical Software (NCSS, PASS)
329 North 1000 East
Kaysville, Utah 84037 USA
Tel: (800) 898-6109 (801) 546-0445
Fax: (801) 546-3907
Web: http://www.ncss.com
E-M: ncss@ix.netcom.com
Palisade Corporation (@RISK)
31 Decker Road
Newfield, NY 14867 USA
Tel: 607-277-8000 800-432-7475
Fax: 607-277-8001
Web: http://www.palisade.com
Provalis Research (MVSP, SIMSTAT, SIMSTAT-TSF, WORDSTAT)
5000 Adam Street
Montreal, QC
CANADA, H1V 1W5
Tel: 1-800-242-4775 (from overseas: 713-524-6394)
FAX: 713-524-6398
Web: http://www.simstat.com
Also see Kovach Computing Services.
Quantitative Psychology Software (ANOVA MultiMedia, Quantos Power, Central Limit Theorem, Partialing Techniques)
Web: http://psychology.iupui.edu/fb/
E-M: jrasmuss@iupui.edu
Resampling Stats
612 N. Jackson St.
Arlington, VA 22201 USA
Tel: 703-522-2713
Fax: 703-522-5846
Web: http://www.statistics.com
E-M: stats@resample.com learning@statistics.com
SAS Institute Inc. (JMP, SAS, Statview)
SAS Campus Drive
Cary, NC 27513 USA
Tel: 919 677-8000 919 677-8008 (JMP technical support) 919 677-8000, ext 5071 (JMP sales)
Fax: 919 677-8123
Web: http://www.sas.com
ftp: ftp://ftp.sas.com
E-M: corpcom@unx.sas.com (Corporate Communications) sasedu@vm.sas.com (Education) eurwww@mvs.sas.com (European Offices) pubs@unx.sas.com (Publications) software@sas.sas.com (Sales and Marketing)
bussol@unx.sas.com (Business Solutions Division) sasblb2@vm.sas.com (jmp-sales)
On September 26, 1997, SAS Institute purchased Statview software from Abacus, Inc. Information about Statview can be found at the web site, http://www.statview.com.
JMP mailing list --
Subscriptions to: MAJORDOMO@WUBIO.WUSTL.EDU
How to subscribe: subscribe JMP-L
Post messages to: JMP-L@WUBIOS.WUSTL.EDU
SAS mailing list --
Subscriptions to: LISTSERV@UGA.CC.UGA.EDU
How to subscribe: subscribe SAS-L First-name Last-name
Post messages to: SAS-L@UGA.CC.UGA.EDU
SAS Technical Support News --
Subscriptions to: LISTSERV@VM.SAS.COM
How to subscribe: subscribe TSNEWS-L First-name Last-name
Post messages to: Messages posted by SAS Institute only
SCIENTIFIC CONSULTING INC (PCNONLIN)
E-M: 75450.3171@compuserve.com
SPSS Inc. (BMDP, SPSS, Systat)
444 North Michigan Avenue
Chicago IL 60611 USA
Tel: 312 329-3410 800 543-2185 312-494-3283 (SYSTAT Technical Support)
Fax: 312/329-3668
BBS: 312/836-1900 (8/N/1)
ftp: ftp.spss.com
E-M: support@spss.com
Web: http://www.spss.com
BMDP mailing list --
Subscriptions to: LISTSERV@VM1.MCGILL.CA
How to subscribe: sub BMDP-L Firstname Lastname
Post messages to: BMDP-L@VM1.MCGILL.CA
SPSS mailing list --
Subscriptions to: LISTSERV@UGA.CC.UGA.EDU
How to subscribe: sub SPSSX-L Firstname Lastname
Post messages to: SPSSX-L@UGA.CC.UGA.EDU
SYSTAT mailing list --
Subscriptions to: LISTSERV@SPSS.COM
How to subscribe: sub SYSTAT-L Firstname Lastname
Post messages to: SYSTAT-L@SPSS.COM
Stata Corporation
702 University Drive East
College Station, Texas 77840 USA
Tel: 409-696-4600 800-STATA-PC
Fax: 409-696-4601
Web: http://www.stata.com/
E-M: stata@stata.com
STATA mailing list --
Subscriptions to: majordomo@hsphsun2.harvard.edu
How to subscribe: subscribe STATALIST
Post messages to: STATALIST@hsphsun2.HARVARD.EDU
Statistical Sciences (see MathSoft)
Statistics and Epidemiology Research Corporation (EGRET)
Tel: 206-632-3014
FAX: 206-547-4140
E-M: rhm@ms.washington.edu
Apparently, EGRET has been purchased by Cytel Corporation.
StatSoft, Inc. (STATISTICA)
2300 East 14th Street
Tulsa, OK, USA 74104-4442 USA
Tel: (918) 749-1119
Fax: (918) 749-2217
Web: http://www.statsoftinc.com
E-M: info@statsoftinc.com
Product Coordinator Statistical Software Center
Research Triangle Institute
3040 Cornwallis Road
Research Triangle Park NC 27709-2194 USA
Tel: (919) 541-6602
Fax: (919) 541-7431
Web: http://www.rti.org/patents/sudaan/sudaan.html
E-M: sudaan@rti.org
Unistat Ltd. (UNISTAT)
Web: http://www.unistat.com
Here is a list of software for experimental design, collated by Bob Wheeler.
RS/1 software - including RS/Discover (A general purpose statistics package with extensive experimental design and analysis capability.)
BBN Domain Corp.
150 Cambridge Park Dr.
Cambridge, MA 02140
Tel: 617-873-5000
Fax: 617-873-6153
E-M: jtsullivan@bbn.com
Web: http://www.bbndomain.com/
Design Ease & Design Expert software (Experimental design, analysis, and training.)
Stat-Ease, Inc.
2021 E. Hennepin Ave., Ste. 191
Minneapolis, MN 55413
Tel: 612-378-9449
Fax: 612-378-2152
E-M: 72103,1436@compuserve.com
ECHIP software (Experimental design, analysis and training for scientists and engineers.)
ECHIP, Incorporated
724 Yorklyn Road
Hockessin, DE 19707-8733
Tel: 302-239-5429
Fax: 302-239-6227
E-M: support@echip.com
9 Where can I find free/shareware statistical software?
Any search for free/shareware statistical software should start with Statlib. Other software is arranged alphabetically after the description of Statlib.
http://lib.stat.cmu.edu/ Statlib. Link last verified September 14, 2000. "Welcome to StatLib, a system for distributing statistical software, datasets, and information by electronic mail, FTP and
WWW. Starting October 1st [2000], StatLib's URL will just be http://lib.stat.cmu.edu and not http://www.stat.cmu.edu, which will be reserved for the URL of the Statistics Department at Carnegie Mellon."
http://www.mrc-bsu.cam.ac.uk/bugs/welcome.shtml BUGS. Link last verified September 14, 2000. "Bayesian inference Using Gibbs Sampling is a piece of computer software for the Bayesian analysis of complex
statistical models using Markov chain Monte Carlo (MCMC) methods. It grew from a statistical research project at the MRC Biostatistics Unit, but now is developed jointly with the Imperial College
School of Medicine at St Mary's, London. The Classic BUGS program uses text-based model description and a command-line interface, and versions are available for major computer platforms. A Windows
version, WinBUGS, has an option of a graphical user interface and has on-line monitoring and convergence diagnostics. CODA is a suite of S-plus/R functions for convergence diagnostics. The programs
are reasonably easy to use and come with a range of examples. Considerable caution is, however, needed in their use, since the software is not perfect and MCMC is inherently less robust than analytic
statistical methods. There is no in-built protection against misuse."
ftp://plato.la.asu.edu/pub/donlp2 DONLP2. This is the ftp site for DONLP2, one of the few high-quality programs for general nonlinear programming problems available completely free over the net, and
there have been recent updates. There are four different versions (in Fortran 77 or f2c/C, with exact or numerical differentiation), there is a separate file with three papers as postscript
files, and the user's guide (the READMEs and donlp2doc.txt file) was last updated on 6-24-96.
http://www.epidata.dk/ EpiData. Link last verified on March 7, 2002. EpiData is a comprehensive yet simple tool for documented data entry. Overall frequency tables (codebook) and listing of data are
included, but no statistical analysis tools. EpiData is free and is currently developed for Windows 95/98/NT/2000. (Works on PowerMac with an emulator.)
http://www.cdc.gov/publications.htm Epi-Info/Epi-Map. Link last verified September 14, 2000. "Epi Info. Public domain microcomputer programs for handling public health data. Epi Map. Displays data
using geographic or other maps. Epi Meta. Performs meta analysis. DoEpi. A series of interactive exercises for teaching epidemiology computing."
http://GKing.Harvard.Edu Gary King's homepage. Link last verified September 14, 2000. This page includes a wide range of freeware/shareware authored or co-authored by Gary King. "ReLogit: Rare Events
Logistic Regression -- for Stata or for Gauss; AMELIA: A Program for Missing Data -- for Windows or for Gauss ; CLARIFY: Software for Interpreting and Presenting Statistical Results (Stata macros);
Gauss Procedures: A set of utilities and statistical procs, for those who program in Gauss; EI: A Program for Ecological Inference (requires Gauss); EzI: A(n Easy) Program for Ecological Inference;
COUNT: A Program for Estimating Event Count and Duration Regressions; JudgeIt: A Program for Evaluating Electoral Systems and Redistricting Plans; Maxlik: A set of Gauss programs and datasets
(annotated for pedagogical purposes) to implement many of the maximum likelihood-based models I discuss in Unifying Political Methodology: The Likelihood Theory of Statistical Inference; The Virtual
Data Center Project: An operational, open-source, digital library to enable the sharing of quantitative research data, and the development of distributed virtual collections of data and documentation
and the Geospatial Liboratory Project."
http://bevo.che.wisc.edu/octave/ GNU Octave. "GNU Octave is a high-level language, primarily intended for numerical computations. It provides a convenient command line interface for solving linear
and nonlinear problems numerically, and for performing other numerical experiments using a language that is mostly compatible with Matlab. It may also be used as a batch-oriented language. Octave has
extensive tools for solving common numerical linear algebra problems, finding the roots of nonlinear equations, integrating ordinary functions, manipulating polynomials, and integrating ordinary
differential and differential-algebraic equations. It is easily extensible and customizable via user-defined functions written in Octave's own language, or using dynamically loaded modules written in
C++, C, Fortran, or other languages. GNU Octave is also freely redistributable software. You may redistribute it and/or modify it under the terms of the GNU General Public License (GPL) as published
by the Free Software Foundation. Octave was written by John W. Eaton and many others. Because Octave is free software you are encouraged to help make Octave more useful by writing and contributing
additional functions for it, and by reporting any problems you may have."
http://www.psychologie.uni-trier.de:8000/projects/gpower.html G*Power. Link last verified September 14, 2000. "G*Power is a general power analysis program that comes in two essentially equivalent
versions: one runs under the Macintosh OS and the other was designed for MS-DOS. G*Power performs high-precision statistical power analyses for the most common statistical tests in behavioral
research, that is, t-tests (independent samples, correlations, and any other t-test), F-tests (ANOVAS, multiple correlation and regression, and any other F-test), and Chi^2-tests (goodness of fit and
contingency tables). G*Power computes power values for given sample sizes, effect sizes, and alpha levels (post hoc power analyses), sample sizes for given effect sizes, alpha levels, and power
values (a priori power analyses), and alpha and beta values for given sample sizes, effect sizes, and beta/alpha ratios (compromise power analyses). The program may be used to display graphically the
relation between any two of the relevant variables and it offers the opportunity to compute the effect size measures from basic parameters defining the alternative hypothesis."
http://www.kovcomp.com/ Kovach Computing Services. Link last verified September 14, 2000. This company produces and/or distributes the following statistical software: "MVSP - a MultiVariate
Statistical Package, SIMSTAT - General purpose statistical program, WordStat - Textual content analysis add-in for Simstat, Simstat-TSF - Time series add-in for Simstat, XLSTAT - Statistical add-in
for Excel spreadsheets (Windows & Mac), Data Desk - Exploratory Data Analysis (Windows & Mac), Oriana - Circular statistics, Wa-Tor - Population dynamics simulation." Some of this software is
shareware or freeware. Free demos are available for much of the software also.
http://odin.mdacc.tmc.edu/anonftp M.D. Anderson Cancer Center Biomathematics Archive. Link last verified September 14, 2000. "This site contains all code available from the Section of Computer
Science, Department of Biomathematics, University of Texas M. D. Anderson Hospital. The code can be freely copied and used (shareware distribution is encouraged) although the authors retain copyright
for the University of Texas in order to control possible commercial incorporation."
http://www.uic.edu/~hedeker/mix.html The MIXOR/MIXREG Home Page by Don Hedeker. Link last verified October 3, 2000. "MIXOR, MIXREG, MIXNO, and MIXPREG programs A whole family of mixed-up programs!
including mixed-effects linear regression, mixed-effects logistic regression for nominal or ordinal outcomes, mixed-effects probit regression for ordinal outcomes, mixed-effects Poisson regression,
and mixed-effects grouped-time survival analysis. These models are also called multilevel models, hierarchical linear models, random-effects models, and random coefficients models, to name a few."
http://www.ioe.ac.uk/multilevel/ Multilevel Models Project. Link last verified September 14, 2000. "This page introduces some basic information about the Multilevel Models Project at the Institute of
Education, University of London together with details of software, working papers and an introduction to multilevel models. It is updated periodically with links, including information about the
project, macros for the MLwiN multilevel software, collaborations, newsletters etc."
http://www.sagebrushpress.com/pepibook.html PEPI. Link last verified December 31, 2001. Statistical software for Epidemiologists.
http://www-prophet.bbn.com/ PROPHET. Unable to verify link on September 14, 2000. PROPHET is a UNIX-based workstation software package that gives researchers a wide range of computing capabilities.
One of PROPHET's greatest assets is its new graphical user interface. Employing the latest advances in software technology, PROPHET lets you store, analyze and present Data Tables, Graphs,
Statistical Analyses and Mathematical Modeling, and Sequence Analyses with high-resolution graphics and multiple windows. Anyone, from the computer-naive to the computer-sophisticate, can learn to
use it quickly and effectively.
http://www.tugsg.com/qdstat/qdst_fs.htm QDStat. Link last verified September 14, 2000. "The QD of QDStat stands for "Quickly Done" although some irreverent individuals favor the term "Quick and
Dirty." The package, however, remains true to either name and is an analytical tool for rapid, easy evaluation of relatively small uncomplicated data sets using procedures common to basic statistical
textbooks. QDStat does not understand about three dimensional mixed designs in the Lindquist tradition (there now you know how old I am), factorial designs with confounded interactions, balanced
lattice designs, balanced incomplete-block designs and similar things. Those who have need of such techniques will not be well served by QDStat and should direct their efforts to one of several other
more extensive packages such as SAS®. On the other hand, the needs of mere mortals may be met by QDStat."
http://www.cas.lancs.ac.uk/software/sabre3.1/sabre.html SABRE. Link last verified September 14, 2000. "SABRE is a program for the statistical analysis of binary, ordinal and count recurrent events.
Such data are common in many surveys either with recurrent information collected over time or with a clustered sampling scheme. It is particularly appropriate for the analysis of work and life
histories, and has been used intensively on many longitudinal datasets. Its development has been funded by ESRC and Lancaster University. In 1989, SABRE 2.0 was released, written by Jon Barry, Brian
Francis and Richard Davies. SABRE 3.0, developed by Dave Stott, was released as freeware on the WWW in 1996. The current release is version 3.1. SABRE is available as freeware under the GNU general
public licence on the WWW."
http://www.myatt.demon.co.uk/index.htm Some Free Public Health Software. Link last verified October 3, 2000. Mark Myatt has a nicely documented list of free software that he and others have written.
http://forrest.psych.unc.edu/research/index.html ViSta. Link last verified September 14, 2000. "ViSta, the Visual Statistics System, features statistical visualizations that are highly dynamic and
very interactive. Dynamic, High-Interaction, Multi-View Graphics: ViSta constructs very-high-interaction, dynamic graphics that show you multiple views of your data simultaneously. The graphics are
designed to augment your visual intuition so that you can better understand your data. See What Your Data Have To Say: ViSta's visually intuitive and computationally intensive approach to statistical
data analysis is designed to clarify the meaning of data so that you can see what your data have to say. Freeware/Open Software: ViSta is free and open. It can be downloaded from the web. Platforms:
ViSta runs under Windows, on Macintosh, and under Unix."
http://www.westat.com/statsoft.html. Westat Statistical Software. Link last verified September 14, 2000. "Westat supports two classes of software packages for statistics professionals. WesVar is a
software package that computes estimates and replicate variance estimates for data collected using complex sampling and estimation procedures. Westat is the distributor in the U.S. and Canada for the
Blaise family of software, a complete survey processing system."
http://www.stat.umn.edu/ARCHIVES/archives.html U of Minnesota Statistics: Software. Link last verified September 14, 2000. "XLISP-STAT is an object-oriented statistical computing environment based on
a dialect of the Lisp language called XLISP. Macanova is an interactive program for statistical analysis and matrix algebra. On the Macanova home page you will find links for Macintosh, DOS, and
Windows executables, documentation, and program source. Arc is software that accompanies the book, Applied Regression Including Computing and Graphics by R. Dennis Cook and Sanford Weisberg,
published by John Wiley in August 1999. Arc is the successor to R-code. CUSUM Programs and data sets referenced in the book Cumulative Sum Charts and Charting by Douglas M. Hawkins and David H.
Olwell. FIRM (Formal Inference-based Recursive Modeling) fits dendrographic models relating a dependent variable to a set of predictors."
10 What statistics resources can be found on the web?
This section does not include web sites described in the "How can I contact the major statistics software vendors?" section or in other parts of the FAQ. The web is growing and changing rapidly, so
it is impossible for me to compile a comprehensive list. Here are some interesting sites which have been mentioned on STAT-L/SCI.STAT.CONSULT. You are welcome to send me other interesting web sites.
http://www.nottingham.ac.uk/~mhzmd/bonf.html A biography of Carlo Emilio Bonferroni (Michael Dewey).
http://www.research.att.com/~volinsky/bma.html Bayesian Model Averaging
http://members.tripod.com/~Probability/bayes02.htm Bayesians vs. Non-Bayesians
http://www.dartmouth.edu/~chance/chance_news/news.html Chance News
http://www.execpc.com/~helberg/statframes.html Clay Helberg's Statistics on the Web
http://www.indiana.edu/~stigtsts/ Commentaries on Significance Testing.
http://www.stats.gla.ac.uk/cti/ CTI Statistics (Resources for Statistics with an emphasis on teaching)
http://www-leland.stanford.edu/class/gsb/excel2sas.html Excel to SAS and other data translations.
http://noppa5.pc.helsinki.fi/koe/index.html experimental WWW pages for teaching Statistics
http://curriculum.qed.qld.gov.au/kla/eda/ Exploring Data website: curriculum support materials for teachers of introductory statistics.
http://www-stat.ucdavis.edu/stat.html Graduate programs in Statistics
http://members.aol.com/johnp71/javastat.html Interactive Statistics pages (Java/JavaScript).
http://www.rt66.com/~llubet Lloyd's Warehouse of Economic Indicators.
http://www.w3.org/Math/ Math ML, Mathematical Markup Language
ftp://ftp.sas.com/pub/neural/measurement.html Measurement theory FAQ.
http://snipe.ukc.ac.uk/cgi-bin/hpda/mff/ Mike Fuller's homepage, which includes statistics resources on the Internet and the list of statistics email discussion lists.
ftp://ftp.sas.com/pub/neural/FAQ.html Neural networks FAQ.
SAS tips on the web.
http://www.stat.wisc.edu/statistics/consult/ the Section on Statistical Consulting (ASA).
http://www.bioss.sari.ac.uk/smart/unix/moutline.htm SMART, Explorapaedia of Statistical and Mathematical Techniques
http://www.statserv.com/ St@tServ, the central information server for Statistics & Data Analysis on the Internet
http://www.interchg.ubc.ca/cacb/power Statistical power analysis software (Len Thomas).
http://www.xs4all.nl/~jcdverha/scijokes/1_2.html Statistics jokes.
http://www.stat.duke.edu/~box/sis/ Statistics in Sports Section (ASA)
http://www.execpc.com/~helberg/statistics.html Statistics on the Web (Clay Helberg).
http://www.isds.duke.edu/stats-sites.html Statistics servers and other links (The Institute of Statistics and Decision Sciences).
http://www.stat.ucla.edu/textbook/ UCLA Statistics Textbook (interactive pages using JavaScript, Perl, xlisp-stat, etc.)
http://www.statlets.com/ STATLETS: a collection of Java applets designed to assist you in analyzing data over the Internet or local intranets.
http://www.stat.ucla.edu/teach Teaching of statistics
http://faculty.vassar.edu/~lowry/VassarStats.html VassarStats (JavaScript statistics programming)
http://www.stat.ufl.edu/vlib/statistics.html/ Virtual Library of Statistics
http://www.utexas.edu/world/lecture/ World Lecture Hall (Web-based lectures on many academic topics including Statistics).
Web sites for statistics journals (compiled by Tony Corso)
http://www.ams.org/journals American Mathematical Society Journals
http://www.amstat.org/publications/index.html American Statistical Association Publications
http://www.stat.colostate.edu/annappr The Annals of Applied Probability
http://www.stat.berkeley.edu/users/annstat The Annals of Statistics
http://www.nuff.ox.ac.uk/biometrika Biometrika
http://www.wiwi.hu-berlin.de/~sigbert/cs.html Computational Statistics
http://fims-www.massey.ac.nz/~maths/jamds/ Journal of Applied Math and Decision Sciences
http://www.shef.ac.uk/uni/companies/apt/apt2.html Journal of Applied Probability
http://www.o2.net/~jasr/jasr.html Journal of Applied Statistical Reasoning
http://www.carfax.co.uk/jas-ad.htm Journal of Applied Statistics
http://www.pitt.edu/~csna/joc.html Journal of Classification
http://fisher.stat.unipg.it/iasc/Misc-stat-journ-JCGS.html Journal of Computational and Graphical Statistics
http://www.stat.ucla.edu/journals/jebs Journal of Educational and Behavioral Statistics
http://www.apnet.com/www/journal/mv.htm Journal of Multivariate Analysis
http://www.gbhap.com/journals/718/718-top.htm Journal of Nonparametric Statistics
http://jscs.stat.vt.edu/JSCS Journal of Statistical Computation and Simulation
http://www.elsevier.nl/locate/inca/505561 Journal of Statistical Planning and Inference
http://www.stat.ucla.edu/journals/jss Journal of Statistical Software
http://www2.ncsu.edu/ncsu/pams/stat/info/jse/homepage.html Journal of Statistics Education
http://interstat.stat.vt.edu/InterStat Interstat - Statistics on the Internet
http://vision.arc.nasa.gov/publications/Psychometrika Psychometrika
http://www.gbhap.com/journals/604/604-top.htm Statistics - Theoretical and Applied Statistics
http://www.elsevier.nl/inca/publications/store/5/0/5/5/7/3 Statistics & Probability Letters
http://www.stat.ucla.edu/ims/publications/journals/statsci Statistical Science Journal
http://www.maths.uq.oz.au/~gks/webguide/journals.html Guide to the Web for Statisticians: Journals
11 What should I do about these "Spams"?
http://www.cauce.org is a web site for the Coalition Against Unsolicited Commercial E-mails (CAUCE). Visit this site if you want to do something constructive to stop spam. This site is lobbying for
legislation that would make junk e-mail illegal, just like junk FAXes were outlawed recently. In my humble opinion, this seems like the best solution to a problem that is getting worse and worse over time.
A message distributed across multiple newsgroups or list servers, usually for commercial purposes, is known as a Spam. Some examples of Spams that have hit STAT-L/SCI.STAT.CONSULT are the green card
lawyers, information about lonely women in Russia, and blueprints of the original atom bomb. First, keep in mind that often it is not the original spam messages that are so conspicuous and
potentially intrusive, but rather the inevitable threads of discussion which seem to result from them. Please do not complain to STAT-L about a spam. The person who sent the spam is almost certainly
not a subscriber to STAT-L and will not see your complaint. Other victims of the spam will see your complaint though, which multiplies the annoying effect of the spam.
There are constructive steps that you can take to discourage a spam but be assured that hundreds if not thousands of people have probably already done this on your behalf. You can do nothing and
still be assured that others are looking out for everyone's interests. So the best course of action is to shrug off the message. You might want to get in the practice of recognizing a spam by its
subject line and deleting it unread.
Here are some constructive steps you can take to discourage inappropriate use of Internet resources.
http://www.glr.com/nojunk.html and http://kenjen.com/nospam/ are two sites that you can register at to notify bulk e-mailers that you do not wish to receive commercial e-mail. Some of the more
"responsible" bulk e-mailers work with these sites to clean their address lists. Note that while some e-mail advertisements offer a way to remove your e-mail address from their list, there are some
reports that doing this might actually increase the amount of spam that you get (see the CAUCE web site for more details).
net-abuse@nocs.insp.irs.gov is an e-mail address within the United States Internal Revenue Service. Because of the volume of e-mails that this address has been getting, the owner has asked that this
site be restricted to instances of off-shore money laundering, "cheat the IRS" type UCE mailings and anything dealing with "hate mail" directed towards the IRS and its employees. They cannot
investigate spam, unsolicited commercial e-mail, or e-mail pyramid schemes.
http://www.usps.gov/websites/depart/inspect/ is a page from the web site for the U.S. Postal Service. This particular page explains why chain letters (including Internet chain letters) are illegal
and who to notify. The U.S. Postal Service has the power to impound all incoming mail to an address or post office box that is listed on a chain letter.
http://www.fraud.com/ is the web site of the National Fraud Information Center. They investigate reports of fraudulent uses of the Internet. They also have a toll free number 1-800-876-7060.
http://www.clark.net/pub/rolf/mmf/ is a humorous web site that publishes the name, address, phone, and e-mail accounts of people who foolishly participated in Internet chain letters like "Make Money Fast".
http://www.cco.caltech.edu/~cbrown/BL/ is a blacklist of Internet advertisers. Find out how to get someone added to the blacklist and ways that you can show your displeasure to advertisers on the
blacklist. Be cautious, however, of some of the suggestions made at this site which, in my opinion, go beyond a constructive approach. The author, himself, notes that some of his suggestions may not
be legal in some jurisdictions.
http://www.cm.org/nocem.html, http://www.compulink.co.uk/~net-services/spam/, and http://www.mmgco.com/nospam/ offer different software solutions to filter out spams.
news://news.admin.net-abuse.usenet and news://news.admin.net-abuse.email are two USENET newsgroups with information about abuse of the Internet.
12 What are some of the problems with stepwise regression?
All of this material is quoted from various e-mails that appeared on STAT-L/SCI.STAT.CONSULT in 1996. Thanks go to Ira Bernstein, Ronan Conroy, Frank Harrell for their detailed explanations and to
Richard Ulrich who originally compiled these comments. I have done some very minor editing, (mostly adding and changing line breaks) but have tried to avoid any substantive changes to these well
written explanations.
Frank Harrell's comments:
Here are SOME of the problems with stepwise variable selection.
1. It yields R-squared values that are badly biased high.
2. The F and chi-squared tests quoted next to each variable on the printout do not have the claimed distribution.
3. The method yields confidence intervals for effects and predicted values that are falsely narrow (See Altman and Anderson Stat in Med).
4. It yields P-values that do not have the proper meaning and the proper correction for them is a very difficult problem.
5. It gives biased regression coefficients that need shrinkage (the coefficients for remaining variables are too large; see Tibshirani, 1996).
6. It has severe problems in the presence of collinearity.
7. It is based on methods (e.g. F tests for nested models) that were intended to be used to test pre-specified hypotheses.
8. Increasing the sample size doesn't help very much (see Derksen and Keselman).
9. It allows us to not think about the problem.
10. It uses a lot of paper.
Note that 'all possible subsets' regression does not solve any of these problems.
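One way to see points 1 and 8 for yourself is to run forward selection on pure noise. The sketch below is my own illustration (not from the original thread) and assumes only NumPy: it greedily adds whichever candidate predictor most increases R-squared, and even though y is unrelated to every predictor, the "selected" model still reports a sizable R-squared.

```python
import numpy as np

def r_squared(X, y):
    """R^2 of an OLS fit of y on X (with an intercept column)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

def forward_select(X, y, k):
    """Greedy forward selection: repeatedly add the column that most improves R^2."""
    chosen = []
    for _ in range(k):
        best = max((j for j in range(X.shape[1]) if j not in chosen),
                   key=lambda j: r_squared(X[:, chosen + [j]], y))
        chosen.append(best)
    return chosen

rng = np.random.default_rng(0)
n, p = 50, 20
X = rng.standard_normal((n, p))   # 20 candidate predictors...
y = rng.standard_normal(n)        # ...completely unrelated to y

chosen = forward_select(X, y, k=5)
r2 = r_squared(X[:, chosen], y)
print(f"R^2 from 5 'selected' noise variables: {r2:.3f}")  # well above zero despite pure noise
```

This is exactly the optimism bias in point 1: the reported R-squared reflects the search over candidates, not any real relationship in the data.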
@article{alt89,author = "Altman, D. G. and Andersen, P. K.",journal = "Statistics in Medicine",pages = "771-783",title = "Bootstrap investigation of the stability of a {C}ox regression
model",volume = "8",year = "1989" Shows that stepwise methods yield confidence limits that are far too narrow.}
@article{der92bac,author = {Derksen, S. and Keselman, H. J.},journal = {British Journal of Mathematical and Statistical Psychology},pages = {265-282},title = {Backward, forward and stepwise
automated subset selection algorithms: {F}requency of obtaining authentic and noise variables},volume = {45},year = {1992},annote = {variable selection} Conclusions: "The degree of correlation
between the predictor variables affected the frequency with which authentic predictor variables found their way into the final model. The number of candidate predictor variables affected the
number of noise variables that gained entry to the model. The size of the sample was of little practical importance in determining the number of authentic variables contained in the final model.
The population multiple coefficient of determination could be faithfully estimated by adopting a statistic that is adjusted by the total number of candidate predictor variables rather than the
number of variables in the final model."}
@article{roe91pre,author = {Roecker, Ellen B.},journal = {Technometrics},pages = {459-468},title = {Prediction error and its estimation for subset--selected models},volume = {33},year = {1991}
Shows that all-possible regression can yield models that are "too small".}
@article{man70why,author = {Mantel, Nathan},journal = {Technometrics},pages = {621-625},title = {Why stepdown procedures in variable selection},volume = {12},year = {1970},annote = {variable
selection; collinearity}}
@article{hur90,author = "Hurvich, C. M. and Tsai, C. L.",journal = American Statistician,pages = "214-217",title = "The impact of model selection on inference in linear regression",volume =
"44",year = "1990"}
@article{cop83reg,author = {Copas, J. B.},journal = "Journal of the Royal Statistical Society B",pages = {311-354},title = {Regression, prediction and shrinkage (with discussion)},volume =
{45},year = {1983},annote = {shrinkage; validation; logistic model} Shows why the number of CANDIDATE variables and not the number in the final model is the number of d.f. to consider.}
@article{tib96reg,author = {Tibshirani, Robert},journal = "Journal of the Royal Statistical Society B",pages = {267-288},title = {Regression shrinkage and selection via the lasso},volume =
{58},year = {1996},annote = {shrinkage; variable selection; penalized MLE; ridge regression}}
Ira Bernstein's comments:
I think that there are two distinct questions here: (a) _when_ is stepwise selection appropriate and (b) _why_ is it so popular.
Since I have seen some variation in usage of the term "stepwise", I define it as any of a number of _data_ driven variable selection schemes used in regression and discriminant analysis, among
other applications. Some, inappropriately IMHO (since there is no official body to define "appropriate"), use it to describe what I would call hierarchical (_hypothesis_ driven) selection. Like I
would assume many, I would discourage stepwise selection and encourage hierarchical selection. I, of course, assume the researcher does not "cheat" by defining his/her "hierarchy" given the data
but does so by considering alternatives in advance of analysis and, preferably, replicates the study (dream on).
I would probably only argue slightly with "never" as an answer to the use of stepwise selection since I don't know what knowledge we would lose if all papers using stepwise regression were to
vanish from journals at the same time programs providing their use were to become terminally virus-laden. However, I have been in situations that looked like "I have good reason to look at
variables A, B, and C; then look at D, and E, but I have no basis to favor F over G or vice versa past that point." Older versions of SPSS (I haven't used newer versions since switching to SAS a
decade ago) allowed this mixture, and I would personally not object to it as long as the strategy were defined in advance and made clear to readers.
As to part (b), I think that there are two groups that are inclined to favor its usage. One consists of individuals with little formal training in data analysis who confuse knowledge of data
analysis with knowledge of the syntax of SAS, SPSS, etc. They seem to figure that "if its there in a program, its gotta be good and better than actually thinking about what my data might look
like". They are fairly easy to spot and to condemn in a right-thinking group of well-trained data analysts (like ourselves). However, there is also a second group who are often well trained (and
may be here in this group ready to flame me). They believe in statistics uber alles--given any properly obtained data base, a suitable computer program can objectively make substantive inferences
without active consideration of the underlying hypotheses. If stepwise selection is the parent of this line of blind data analysis, then automatic variable respecification in confirmatory factor
analysis is the child.
Ronan Conroy's comments:
I am struck by the fact that Judd and McClelland in their excellent book "Data Analysis: A Model Comparison Approach" (Harcourt Brace Jovanovich, ISBN 0-15-516765-0) devote less than 2 pages to
stepwise methods. What they do say, however, is worth repeating:
1. Stepwise methods will not necessarily produce the best model if there are redundant predictors (common problem).
2. All-possible-subset methods produce the best model for each possible number of terms, but larger models need not necessarily be subsets of smaller ones, causing serious conceptual problems
about the underlying logic of the investigation.
3. Models identified by stepwise methods have an inflated risk of capitalising on chance features of the data. They frequently fail when applied to new datasets. They are rarely tested in this way.
4. Since the interpretation of coefficients in a model depends on the other terms included, "it seems unwise," to quote J and McC, "to let an automatic algorithm determine the questions we do and
do not ask about our data". RC adds that stepwise methods abusers frequently would rather not think about their data, for reasons that are funny to describe over a second Guinness.
5. I quote this last point directly, as it is sane and succinct: "It is our experience and strong belief that better models and a better understanding of one's data result from focussed data
analysis, guided by substantive theory." (p 204)
They end with a quote from Henderson and Velleman's paper "Building multiple regression models interactively". Biometrics 1981;37:391-411 "The data analyst knows more than the computer" and add
"failure to use that knowledge produces inadequate data analysis."
Personally, I would no more let an automatic routine select my model than I would let some best-fit procedure pack my suitcase.
13 What is the answer to the Monty Hall, Envelope, or Birthday problem?
There is a classic probability puzzle, which is called the Monty Hall problem. Here's a nice description from the rec.puzzles FAQ. "The Monty Hall problem can be stated as follows: A gameshow host
displays three closed doors. Behind one of the doors is a car. The other two doors have goats behind them. You are then asked to choose a door. After you have made your choice, one of the remaining
two doors is then opened by the host (who knows what's behind the doors), revealing a goat. Will switching your initial guess to the remaining door increase your chances of guessing the door with the car?"
The general consensus is that the probability of winning the car is 1/3 if you don't switch and 2/3 if you do switch. But there are some implicit assumptions in this problem that cause a raging
debate every time it appears on STAT-L. For example, the host may be perversely trying to goad you into a bad switch and reveals a door only when your current door has a car behind it. There are at
least thirty web sites that discuss this problem. Here are three good sites:
http://www.smartpages.com/faqs/sci-math-faq/montyhall/faq.html SCI.MATH FAQ
http://www.cs.ruu.nl/wais/html/na-dir/puzzles/archive/decision.html REC.PUZZLES FAQ
http://www.ram.org/computing/monty_hall/monty_hall.html has a simulation model based on this problem.
You can also read about this problem in Engel, E. and Venetoulias, A. (1991). Monty Hall's probability puzzle. Chance, Vol 4, # 2, 6-9. and Selvin, S. (1975). A problem in probability, in "Letters to
the Editor," The American Statistician, 29, 67 and 134.
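The 1/3 vs. 2/3 consensus is easy to verify by simulation. Here is a minimal sketch (my own addition, standard library only) of the usual rules, i.e. the host always opens a goat door:

```python
import random

def play(switch, rng):
    """One round of the standard Monty Hall game (host always reveals a goat)."""
    doors = [0, 1, 2]
    car = rng.choice(doors)
    pick = rng.choice(doors)
    # Host opens a door that is neither your pick nor the car.
    opened = rng.choice([d for d in doors if d != pick and d != car])
    if switch:
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == car

rng = random.Random(42)
n = 100_000
stay = sum(play(False, rng) for _ in range(n)) / n
swap = sum(play(True, rng) for _ in range(n)) / n
print(f"stay: {stay:.3f}  switch: {swap:.3f}")  # close to 1/3 and 2/3
```

Note that the simulation encodes the disputed assumption explicitly: the host always opens a goat door. Change that rule and the 2/3 answer can change too, which is exactly why the problem causes debate.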
The envelope exchange problem goes something like this (again from the rec.puzzles FAQ). "Someone has prepared two envelopes containing money. One contains twice as much money as the other. You have
decided to pick one envelope, but then the following argument occurs to you: Suppose my chosen envelope contains $X, then the other envelope either contains $X/2 or $2X. Both cases are equally
likely, so my expectation if I take the other envelope is .5 * $X/2 + .5 * $2X = $1.25X, which is higher than my current $X, so I should change my mind and take the other envelope. But then I can
apply the argument all over again. Something is wrong here! Where did I go wrong? In a variant of this problem, you are allowed to peek into the envelope you chose before finally settling on it.
Suppose that when you peek you see $100. Should you switch now?"
Again, there are some subtle assumptions in this problem that cause a lot of commentary. A good reference to the problem is Christensen, R. and Utts, J. (1992) "Bayesian Resolution of the 'Exchange
Paradox,'" The American Statistician, 46(4), 274-276. Note also comments in the Letters to the Editor column in two separate issues the American Statistician in 1993 (pages 160, 311).
http://www.cs.ruu.nl/wais/html/na-dir/puzzles/archive/decision.html, the rec.puzzles FAQ contains a nice discussion of this problem.
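A direct simulation (my own sketch; the uniform prior on the smaller amount is an arbitrary assumption) shows where the 1.25X argument goes wrong: once a concrete pair of amounts is fixed, switching gains nothing on average.

```python
import random

rng = random.Random(7)
n = 100_000
keep_total = swap_total = 0.0
for _ in range(n):
    x = rng.uniform(1, 100)          # the smaller amount; the prior here is arbitrary
    envelopes = [x, 2 * x]
    i = rng.randrange(2)             # your blind choice of envelope
    keep_total += envelopes[i]
    swap_total += envelopes[1 - i]   # what switching would have given instead

print(keep_total / n, swap_total / n)  # both near 1.5 * E[x]
```

The flaw in the 1.25X argument is treating "$X" as a fixed quantity in both branches; conditioned on the actual pair (x, 2x), the gain and loss from switching cancel exactly.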
The birthday problem goes something like this. There are "r" people in a room. What is the probability that two or more people have the same birthday?
Assuming uniform probabilities for each birthdate, the probability of a match is 1 - n!/((n-r)! * n^r), where n equals the number of days in a year and r equals the number of people in the group. For r=
23, the probability exceeds 0.5. A nice summary of this problem with extensions into non-uniform birthdates is Nunnikhoven, T.S. (1992) "A Birthday Problem Solution for Nonuniform Birth Frequencies,"
The American Statistician, 46(4), 270-274.
http://pascal.dartmouth.edu/~zhu/applets/Birthday/Birthday.java is a Java applet for computing these probabilities.
http://www.mste.uiuc.edu/reese/birthday/intro.html has a simulation of the birthday problem.
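The formula is easiest to evaluate as a running product rather than with raw factorials, which overflow quickly. A small sketch (my own addition, standard library only):

```python
from math import prod

def p_shared_birthday(r, n=365):
    """Probability that at least two of r people share a birthday
    (uniform birthdays over n days)."""
    # 1 - n!/((n-r)! * n^r), computed as a product of survival terms
    return 1 - prod((n - i) / n for i in range(r))

print(p_shared_birthday(22))  # ~0.476
print(p_shared_birthday(23))  # ~0.507 -- first group size to cross 0.5
```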
14 Can someone provide me with references and/or books about [topic]?
Before you post a question like this, it would be nice if you did a little work beforehand. The best resource for finding references about a statistical topic is the Current Index to Statistics
Extended Database (CISED), a CD-ROM with 180,000 references in statistics journals since 1974, with coverage of selected journals dating back as far as 1940. Many university libraries have this
product, and some make it available to their students through a web browser. Licensing agreements, however, prevent libraries from making this product available to the general public. If you want to
purchase an individual license, it is available for as little as $95.
http://www.stat.uchicago.edu/~cis/ is a web site that contains more information about CISED. Two e-mail contacts at IMS and ASA are kmkims@stat.berkeley.edu and cised@amstat.org, respectively.
http://www.stat.wisc.edu/statistics/consult/statbook.html is Glen McPherson's Essential Book List. Back in 1993, Glen McPherson polled the members of STAT-L/SCI.STAT.CONSULT to create a list of books
essential to anyone in the statistical consulting field. The list is organized by major topic areas. Brian Yandell has put this list up on his web site.
http://www.stat.wisc.edu/statistics/consult/book.html is another interesting booklist that can be found at the same web site.
15 Can you recommend a good statistics software package?
If you want a good answer to this question, you need to be specific about your needs. Be sure to tell us which of the following factors are important to you:
1. Ease of learning
2. Quality of help files
3. Extensibility/programmability
4. Data entry and validation
5. Data manipulation
6. Data importing
7. Real time graphics (scatterplot brushing, 3D rotation)
8. Cost
Let us know what statistical procedures you need and what level of user the software is intended for. Tell us what type of computer you plan to run this on.
Also, you can visit some of the web sites listed above to see what the manufacturers have to say about their software packages.
Finally, many statistics journals (e.g., The American Statistician) provide regular software reviews. You might find better answers to this question at your library.
16 Acknowledgments
This list has grown thanks to the small and large contributions of many people. Part of it was shamelessly stolen from well written messages on STAT-L. Here is a partial list of people who you should
thank for directly or indirectly contributing to this FAQ: Gary Ash, Kenneth Benoit, Grant Blank, Jim Box, Benjamin Chan, Ronan Conroy, Tony Corso, Donald Cram, Byron Davis, Barry DeCicco, Joe
Dolgos, Steven Dubnoff, Rick Engberg, Emil Friedman, Mike Fuller, Steve Goodman, Bill Gould, Timothy Green, Duane Griffin, Clay Helberg, Tim Hesterberg, Charles Kincaid, Melvin Klassen, Warren
Kovach, Jan de Leeuw, Lloyd Lubet, Haiko Luepsen, Hans Mittelmann, Brian Monsell, John Nash, Jonathan Newman, Michael Palij, Dennis Roberts, David Ronis, Warren Sarle, Ronald Schoenberg, Russell
Schulz, Karsten Self, Jim Steiger, Len Thomas, Richard Ulrich, Vittorio Viaggi, Michael Walsh, Meredith Warshaw, Mitchell Watnik, Bob Wheeler, Will Wheeler, John Whittington, Forest Young, Sara
Young, Stuart Young, Craig Ziegler.
If there are errors in this FAQ, they are probably my fault; it is difficult to accurately transcribe all of the information I have received, even with cut and paste. Please send any corrections and
additions. Complaints are appreciated also, but please realize that I am doing this on a volunteer basis, mostly during lunch breaks and after work hours.
*** End of FAQ for STAT-L/SCI.STAT.CONSULT ***
The Colony ACT Tutor
Find a The Colony ACT Tutor
...I have done research in the fields of problem solving and the testing effect, and have extensive knowledge of learning and memory. I use my knowledge of human memory research to help students
improve their learning by teaching them specific memory techniques, increasing their processing speed, a...
39 Subjects: including ACT Math, chemistry, reading, English
...I completed advanced math courses in high school, culminating with AP Calculus and AP Statistics. I achieved a 5 on the AP Calculus BC exam and a 4 on the AP Statistics exam. During my time in
college, I have completed various courses in math including Multivariable Calculus, Linear Algebra, Di...
7 Subjects: including ACT Math, algebra 1, algebra 2, SAT math
...By breaking a problem down into smaller manageable steps, I can help identify and resolve issues that need improvement. It's amazing how many students I have heard saying they're "just bad at
math". But I've never met anyone who truly was.
11 Subjects: including ACT Math, geometry, GRE, algebra 1
...The TAKS test was designed in a specific way to teach children how to read actively, learn problem solving techniques, and basically, to learn how to take these types of tests. Texas wants its
students prepared for standardized testing, and is attacking this by giving a TAKS test each year until junior year of high school. These tests are logical, and can be passed easily with
29 Subjects: including ACT Math, reading, Spanish, GRE
...For beginners it is especially difficult, and I try to keep that in mind when teaching. I make a lot of effort to stay relaxed while working with my students through those very tough beginning
stages. If I had to pick out one piece of advice for anyone who's just starting out with the guitar, i...
14 Subjects: including ACT Math, chemistry, physics, calculus
Philosophy of Science Portal
For no particular reason other than to champion an individual for meritorious endeavors in the arena of academics, I will spotlight individuals in the academic realm and wish them unqualified success.
Bruna De Oliveira
Bruna De Oliveira is from Natal, Brazil and currently working on her Ph.D. in "Adaptive Quantum Design" at USC in Los Angeles, California.
"Bruna is working on electron transport through semiconductor heterostructures. Her approach uses a propagation matrix method which she has generalized to include the effects of electron-phonon
scattering in these structures."
We are developing tools for designing nano-scale devices in which quantum degrees of freedom play an important role. Quantum systems with made-to-order properties, such as filters, modulators, and
switches, are constructed from the atomic level up. The guiding idea of our approach is that in order to achieve such desired system responses, symmetries such as the translational invariance of
conventional crystals have to be broken. The core of adaptive quantum design is therefore the search for optimum configurations of the system constituents, such as atomic positions in an artificial
synthetic solid, that enable a desired system response, e.g. optical absorption at certain frequencies.
As an illustrative example, let us consider interacting atoms described by a long-ranged tight-binding model,
Here the overlap integral is determined by the distance between the atoms and by the nature of the atomic orbitals. The dependence on inter-atomic spacing can be parameterized as a power-law,
where α = 1.5 - 3.0. By breaking the translational symmetry of this system, desired responses can be emulated, e.g. a flat spectral function, or quasi-2D or quasi-3D response in a 1D array.
The above left figure shows the resulting spatial configuration of atoms in a two-dimensional system with periodic boundary conditions. The chosen target function in this case is a top-hat density of
states that is flat within a certain energy regime, and zero otherwise (shown in the right figure). Since this specific target is particle-hole symmetric, i.e. there are as many states above and
below zero-energy, the dominant building blocks that are discovered by the numerical search of the best system configuration are dimer molecules. Other target functions that break particle-hole
symmetry require more complicated building blocks, such as trimer and quadrumer molecules.
Another example for adaptive quantum design are photonic structures with dielectric rods of variable size and position. Existing devices based on ordered photonic crystals have only limited
functionality. On the other hand, adaptive design tools enable new optimized broken-symmetry nano-photonic devices with greater sensitivity, such as mux, combiners, splitters, and channel dropping
Effective numerical search algorithms are essential to find such configurations in the large phase space of possible solutions which includes collective many-body resonances. We have explored and
tested several approaches, including guided random walk, simulated annealing, and genetic algorithms. Their convergence properties differ and depend strongly on system parameters. The plot below
shows a comparison of these methods which were benchmarked on the same computer. The convergence criterion Δ in this case is the squared difference between a top-hat target density of states and the
density of states achieved by adjusting the positions of the constituent atoms. Its is observed that some of the simpler search algorithms tend to get stuck in local minima, whereas the more
sophisticated ones find a lower minimum.
Adaptive quantum design is an enabling method for future advances in nano-technology. It relies on realistic physical models and on efficient numerical search algorithms for global minima. It can be
applied to microscopic systems in order to enable and enhance their functionality. The challenge is to solve the inverse problem where a target solution is given and the corresponding system
configuration has to be found. On the way to matching such target, new building blocks of broken-symmetry configurations can be discovered. This approach enables us to design ultra-small devices and
simultaneously to discover novel phenomena that are not found in conventional solids.
Andie Byrnes is from Great Britain and is doing graduate work at UCL, London working towards her Ph.D. whose thesis is "Origins and development of Neolithic farming in the Faiyum and Cairo areas of
Egypt." As she has written: "I trained as an archaeologist in Scotland many years ago, but lack of funds and lack of sun drove me into telecommunications (wireless, interactive TV and Internet) where
I have been working for the last 15 years. However, once archaeology is in your blood, it never goes away, and I am now carrying out post-graduate studies in Egyptian Archaeology (prehistory) at UCL,
London." Visit her Egyptology sites:
Derivation of Formula for Area of Cyclic Quadrilateral
For a cyclic quadrilateral with given sides a, b, c, and d, the formula for the area is given by
Where s = (a + b + c + d)/2 known as the semi-perimeter.
Derivation of Formula
Let the diagonal AC = x and let θ be the interior angle at B. Because opposite angles of a cyclic quadrilateral are supplementary, the angle at D is 180° - θ.

Cosine law for triangle ABC:
x^2 = a^2 + b^2 - 2ab cos θ

Cosine law for triangle ADC:
x^2 = c^2 + d^2 - 2cd cos (180° - θ)

note that cos (180° - θ) = -cos θ, so
x^2 = c^2 + d^2 + 2cd cos θ

Equate the two x^2:
a^2 + b^2 - 2ab cos θ = c^2 + d^2 + 2cd cos θ
cos θ = (a^2 + b^2 - c^2 - d^2) / (2(ab + cd))

Area of ABCD:
A = (1/2)ab sin θ + (1/2)cd sin (180° - θ) = (1/2)(ab + cd) sin θ

Square both sides of the equation:
A^2 = (1/4)(ab + cd)^2 sin^2 θ

Substitute 1 - cos^2 θ = (1 - cos θ)(1 + cos θ) for sin^2 θ in the equation for A^2 above. Note that a + b + c + d = P, the perimeter, so s = P/2. Thus, using the expression for cos θ and factoring each bracket as a difference of squares,

1 - cos θ = [(c + d)^2 - (a - b)^2] / (2(ab + cd)) = 2(s - a)(s - b) / (ab + cd)
1 + cos θ = [(a + b)^2 - (c - d)^2] / (2(ab + cd)) = 2(s - c)(s - d) / (ab + cd)

and therefore

A^2 = (s - a)(s - b)(s - c)(s - d)

which gives the formula above after taking the square root.
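As a quick numerical sanity check (not part of the original derivation), a short Python sketch evaluates the formula on quadrilaterals whose areas are known independently; the function name is my own.

```python
import math

def cyclic_quad_area(a, b, c, d):
    # Brahmagupta's formula: area of a cyclic quadrilateral with
    # side lengths a, b, c, d and semi-perimeter s.
    s = (a + b + c + d) / 2
    return math.sqrt((s - a) * (s - b) * (s - c) * (s - d))

# A rectangle is cyclic, so a 3 x 4 rectangle (sides 3, 4, 3, 4)
# should give area 3 * 4 = 12.
print(cyclic_quad_area(3, 4, 3, 4))  # -> 12.0

# With d = 0 the formula reduces to Heron's formula for a triangle;
# a 3-4-5 right triangle has area 6.
print(cyclic_quad_area(3, 4, 5, 0))  # -> 6.0
```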
inverse function intersections
April 25th 2009, 08:05 PM #1
So theres this question in my textbook:..
Find the point of intersection between the function y = (3/((3x-4)^2)) + 6 and its inverse.
I know you begin with
x = (3/((3x-4)^2)) + 6
x-6 = (3/((3x-4)^2))
(3x-4)√(x-6) = √3
but then i just keep going round in circles and can never get the x on its own.
any help would be greatly appreciated=]
Hi ella85
I'm assuming you want to find the intersection where $f(x) = f^{-1}(x)$.

If so you don't have to actually find the inverse of $f(x)$; you just have to solve $f(x) = x$, since points of intersection of a function and its inverse are always found on the line
$y = x$
Therefore simply solve
$\frac{3}{(3x-4)^2} + 6 = x$
$\frac{3}{(3x-4)^2} = x - 6$
$3 = ( x- 6)(3x-4)^2$
I think the solutions are $x = 9, \frac{\sqrt{3}-4}{3}$
In this case that's true. But in general the points of intersection of a function and its inverse can also lie on lines of the form $y = - x + c$ where $c$ is a constant.
eg. Two of the three intersection points of the function
$f: [0, \, + \infty) \longrightarrow R, ~ f(x) = -x^2 + 1$
and its inverse function lie on the line $y = -x + 1$.
thanks for replying=]
sorry, i dont think i explained what im having trouble with properly.
I understand how to find the intersection of a function and its inverse, but I cant solve this particular equation
3x-4 = √(3/(x-6))
i need to make x the subject, but when i square it to get rid of the square root, it still doesnt work..
If $f(x) = \frac{3}{(3x - 4)^2} + 6$ then an inverse function does not exist. This is because $y = f(x)$ is not one-to-one. So the first thing you have to do is restrict the domain of $y = f(x)$
so that it is one-to-one.

One such restriction might be $\left(- \infty, ~ \frac{4}{3}\right)$. Then $f^{-1}(x) = -\frac{1}{3} \sqrt{\frac{3}{x - 6}} + \frac{4}{3}$.

How I got this:

Solve $x = \frac{3}{(3y - 4)^2} + 6$ for $y$:

$x = \frac{3}{(3y - 4)^2} + 6$

$\Rightarrow x - 6 = \frac{3}{(3y - 4)^2}$

$\Rightarrow \frac{3}{x - 6} = (3y - 4)^2$

$\Rightarrow \pm \sqrt{\frac{3}{x - 6}} = 3y - 4$

$\Rightarrow \pm \sqrt{\frac{3}{x - 6}} + 4 = 3y$.

A simple known point on $y = f(x)$ is (1, 9), and $1 < \frac{4}{3}$, so it lies in this restricted domain. Therefore a point on $y = f^{-1}(x)$ is (9, 1). Substitute (9, 1) into $y = \pm \frac{1}{3} \sqrt{\frac{3}{x - 6}} + \frac{4}{3}$:

$1 = \pm \frac{1}{3} \sqrt{\frac{3}{9 - 6}} + \frac{4}{3}$

and so the negative root solution for the inverse function must be used to get equality.

If the restriction $\left(\frac{4}{3}, ~ + \infty\right)$ on the domain of $y = f(x)$ is used, then $f^{-1}(x) = \frac{1}{3} \sqrt{\frac{3}{x - 6}} + \frac{4}{3}$.
I'm really sorry to keep bothering you...you're explaining it all really well
but what im trying to find is the point of intersection between y = (3/((3x-4)^2)) + 6
and the line y=x
so i know that you use simultaneous equations and ive simplified it to the point where
x-6 = 3/((3x-4)^2)
and it says in my book that the answer is 6.015, but i can't work out how to get this answer.
thanks again.
Are you expected to get an exact answer or is an approximate answer sufficient? From the answer given, it looks like the latter. In which case you should use technology (eg. a graphics or CAS
calculator) to solve $(x - 6)(3x - 4)^2 = 3$.
To get an exact answer will involve solving a cubic equation that has no simple exact real solution (which means you would have to use the Cardano formula).
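The "use technology" suggestion above can also be carried out by hand-rolled bisection. The sketch below (mine, not from the thread) exploits that $g(x) = (x - 6)(3x - 4)^2 - 3$ changes sign between $x = 6$ and $x = 7$:

```python
def g(x):
    # Intersection condition rearranged to g(x) = 0:
    # (x - 6)(3x - 4)^2 = 3
    return (x - 6) * (3 * x - 4) ** 2 - 3

# g(6) = -3 < 0 and g(7) = 286 > 0, so a root lies in [6, 7].
lo, hi = 6.0, 7.0
for _ in range(60):  # bisection halves the bracketing interval each step
    mid = (lo + hi) / 2
    if g(mid) < 0:
        lo = mid
    else:
        hi = mid

root = (lo + hi) / 2
print(round(root, 3))  # -> 6.015, matching the textbook answer
```

Since the cubic $9x^3 - 78x^2 + 160x - 99 = 0$ obtained by expanding has only one real root, this is the only intersection point with $y = x$.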
Probability Measures
It is well known that $C_0^*(\mathbb{R})$ (the continuous dual space of $C_0(\mathbb{R})$, the space of continuous functions on $\mathbb{R}$ that vanish at $\pm \infty$) can be identified with the
space of all regular signed measures. Equip this space with the weak-* topology, i.e. $\mu_n$ converges weakly to $\mu$ if $\int f \, d\mu_n \rightarrow \int f \, d\mu$ for all $f \in C_0(\mathbb{R})$.
I'm looking at the set of all probability measures in this space (positive measures for which $\mu(\mathbb{R}) = 1$). This set is not closed in the weak-* topology, since, for example,
$\delta_n \to 0$ (the zero measure) as $n \to \infty$.
Consider then a convex subset of probability measures $A$ that is closed relative to the set of all probability measures. I'm wondering: for every $\mu \in$ closure($A$), does there exist a positive
measure $\nu$ such that $\mu + \nu \in A$?
2 Answers
This is false. Here's a counterexample.

For $n \geq 2$, let $\mu_n$ be the probability measure $\mu_n = (1/n)\delta_0 + (1/2 - 1/n)\delta_1 + (1/2)\delta_n$. The convex hull of $\{\mu_n: n \geq 2\}$ is contained in the set
$P$ of probability measures; let $A$ be its closure in $P$ for the relative weak* topology. Then I claim that (1) the measure $\mu = (1/2)\delta_1$ is in the weak* closure of $A$, but
(2) there is no positive measure $\nu$ such that $\mu + \nu \in A$.

Well, (1) is clear; $\mu$ is the weak* limit of the sequence $(\mu_n)$. For (2), suppose you had a net $(\mu_\alpha)$ of convex combinations of the $\mu_n$ which converged weak* to a
probability measure of the form $\mu + \nu$ with $\nu$ positive. Then by examining the coefficients of $\delta_1$, we can show that mass has to be escaping to infinity as we take the
limit in $\alpha$, contradicting the assumption that $\mu + \nu$ is a probability measure. That's the idea; I think I'll leave it to you to work out the details.
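As a quick check of claim (1) against the weak-* definition from the question (my own verification, not part of the original answer): for any $f \in C_0(\mathbb{R})$,

```latex
\int f \, d\mu_n
  = \frac{1}{n}\, f(0) + \left(\frac{1}{2} - \frac{1}{n}\right) f(1) + \frac{1}{2}\, f(n)
  \;\xrightarrow[n \to \infty]{}\; \frac{1}{2}\, f(1) = \int f \, d\mu ,
```

since $f(0)/n \to 0$ and $f(n) \to 0$ because $f$ vanishes at $\pm\infty$. The half of the mass sitting at $\delta_n$ escapes to infinity in the limit, which is exactly the mechanism invoked in claim (2).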
Let A be the set of all probability measures supported in (0,1). Then $\delta_1$ (unit mass at 1) is in the weak-* closure of A, but there is no measure in A that is greater than or equal to $\delta_1$.

Comment: I think he wants $A$ to be closed in the set of all probability measures, for the relative weak* topology. – Nik Weaver May 6 '12 at 2:03
Ubiquity symposium 'What is computation?'
No matter how you slice it, computers have changed our lives. When we say this, what immediately comes to mind is the way in which ubiquitous computer technology has transformed our communicating,
working, playing, and (witness online dating sites) even our loving. Another interpretation of "how computers have changed our lives" is the possibility that computation itself is essential to even
defining who we are. This is the main idea behind "strong artificial intelligence," where one argues that human brains are nothing more than computing machines. Or, as the Post-It note I had on my
wall in graduate school pithily read: "Either 1) computers will take over the world or 2) they already have." If strong artificial intelligence is true, then the question "What is computation?" is
important to defining what it means to be intelligent and self-aware. Thus defining computation would seem relevant to a very fundamental question in the study of the mind.
Here, I would like to push this argument one step further and argue that the question "What is computation?" is not mere taxonomy, nor simply an aid in our anthropocentric obsession with explaining
our thinking process, but instead is relevant to some of the deepest questions in the nature of the physics of our universe. Understanding what computation is may be more essential to us than we
might have imagined—in fact, it may be tied to the very fabric of how nature works!
The Reliability of Computation
The starting point for this speculative point of view is the reliability of computation. In 1937, when Alan Turing formalized the notion of computation that we now call the Turing machine (Turing,
1937), one remarkable (but often overlooked) fact was that the feasibility of a machine that acted reliably enough to behave like the infinitely extendable machine Turing conceived was not at all a
technological given. When the first general purpose computer, the ENIAC, was constructed in the mid 1940s, some argued that the machine would not work because the vacuum tubes it was made of would
fail at too high a rate. Indeed the ENIAC did have a tube failure about once every two days (Randall, 2006). Imagining how one could build larger, and more reliable, computers was not at all
obvious. The invention of the transistor in 1948 and the subsequent invention of the integrated circuit in 1958, solved these problems, which then led to an amazing half century of ever increasing
computing power as described by Moore's law. Today, however, you may have noticed that computer clock speeds have stopped increasing and, while transistor sizes continue to shrink, we are fast
approaching the limit where a single switch in a computer is carried out by only a few atoms. And when you get down to such small atomic systems, the problem of reliable computation from unreliable
parts again begins to rear its head. Which leads us to ask, "Why is computation possible at all?" Our generous universe comes equipped with the ability to compute, but how and why is this?
The history of the understanding how unreliable devices can be made reliable is made up of at least two main directions. The first relies on a fundamental theoretical proof while the second, in
contrast, is relevant to how modern computers work in practice. For the first direction, the seminal work is that of John von Neumann (von Neumann, 1956). Von Neumann considered a very simple model
of unreliable computation in which every component of a logical circuit could fail to operate properly with some independent probability. He then asked the question of whether such a logical circuits
could be engineered into a device that failed with vanishingly small probability. Such a device would then be considered as enacting a "robust computation." At first such a result may seem
hopeless—if you are relying on the output of a single bit in the logical circuit then there is always a non-zero probability that the logic gate that outputs this last bit may fail. To circumvent
this seemingly hopeless situation, von Neumann used the simple coding strategy, a la Claude Shannon's information theory (Shannon, 1948), of redundancy. Instead of placing the information into a
single fragile bit, we instead take a bundle of bits and "interpret" them as being either zero or one depending on whether the vast majority of bits in this bundle are zero or one. This is one of the
central tenets of dealing with unreliable bits: don't. Instead deal with lots of unreliable bits and "reinterpret" what you call your information.
The Restorative Organ
Just encoding information is not enough to obtain reliable computation: encoding may prolong the lifetime of a bit, but eventually the encoded information will be destroyed. To overcome this, von
Neumann introduced a "restorative organ" that served to take encoded information with some erred bits and fix as many of those bits as possible. Of course the logic gates he used for this restorative
organ were themselves subject to failure, so it is not at all obvious that this is possible. Further complicating this picture one must be able to compute on the encoded information in a way that
also does not add too many errors to the encoded information. But get around these obstacles von Neumann did, and in the end he produced a theorem out of it: If the rate of error of your logical
gates is below a certain threshold value, then a robust computation can be enacted with a failure of probability as small as you like using only a few more gates (logarithmically more gates as a
function of one over the error probability).
Von Neumann's constructions are all fine and dandy if you are a theoretician, but in the real world, it doesn't seem as if we need these constructions, or at least it doesn't seem that we need them
for devices such as our silicon based integrated circuits. This is the second approach to robust computing: how it occurs in practice. Hard disks, to take an example, use some minimal error
correction, but not so much that it dominates the hardware of the device. How is this possible? The answer to how our real world devices achieve robust computation is perhaps best summed up by a
phrase coined by the late Rolf Landauer: "information is physical" (Landauer, 2002). By this expression, Landauer did not mean that information itself is a basic construct of physics, but instead that
information really only exists when the physics of a device allows it. Another way to put it is that not all physical systems are suitable for computation. This is of course, quite obvious, but
imagine this question from the perspective of a physicist: what are the physical mechanisms that allow for a device to be a computer? How does one take a physical system and decide whether or not it
is capable of robust computation?
Consider, for example, information stored in a hard drive. This information is encoded into the direction of the spins in a magnetic domain: whether the majority of spins in the domain are pointing
up or down, say, gives a bit the value zero or one. Individual spins in these domains, however, are fragile: each individual spin is subject to noise from its environment that can act to change the
direction of this spin. The domain is able to store information, however, by essentially using the two tricks of von Neumann. First, the information is not encoded into a single spin, but instead is
spread out across many spins and the majority vote of these spins represents the information. Second the system essentially performs error correction: spins locally feel the direction of their
neighbors and energetically favor alignment. Thus when the environment flips a spin, this requires some energy for violating neighborly concordance, and the flipped spin will quickly relax back to
its neighbor's direction in order to minimize energy. As long as the environment does not induce too much noise, i.e., the system is not too hot, then the lifetime of the encoded majority of the
spins is thus made much much longer than the lifetime of an individual spin. This is simply von Neumann's error correcting restorative organ enacted by the physics of the device.
Computation has a Lifetime
So what does this all have to do with the question at hand: "What is computation?" The first are the two realizations that computation has a lifetime and that in the wild it can exist in various
states of digitization. For devices truly deserving the moniker computer, the lifetime of the information is longer than the tasks the device is designed to solve. This lifetime is defined as the
time until the stored and computed on information is randomized. Similarly for devices that we want to call computers the digitization of information should be high. In our example of the hard-drive,
the fact that one takes the majority vote of the spins along a particular direction is an example of a digitization resulting in one bit of digital information (note that physically spins can point
along any continuous direction.) However, one could imagine digitizing the signal differently: for example by binning the total number of spins pointing along the up direction into four different
bins one could encode two bits of digital information. But clearly there is a limiting point in which relying on single analog signals will run into the problem first enumerated by von Neumann: such
single systems, if they fail with some fixed probability, fix the overall lifetime of the information.
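The binning described above is easy to make concrete. A hedged sketch of my own (the bin boundaries are an arbitrary choice) that digitizes the fraction of up-spins in a domain into a small number of bits:

```python
def digitize(up_fraction, bits=2):
    # Interpret the fraction of "up" spins in a domain as a digital
    # value by splitting [0, 1] into 2**bits equal bins; bits=1 is the
    # hard-drive-style majority vote, bits=2 gives four bins.
    levels = 2 ** bits
    return min(int(up_fraction * levels), levels - 1)

print(digitize(0.10))  # -> 0  (two-bit value 00)
print(digitize(0.60))  # -> 2  (two-bit value 10)
print(digitize(0.97))  # -> 3  (two-bit value 11)
```

Pushing `bits` up squeezes more information out of the same analog signal, but the bins shrink, and at some point a single noisy spin can move the signal across a bin boundary — the limiting point mentioned above.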
But is there more to the question of "What is computation?" than just the above taxonomic clarifications? Here we will follow a speculative line of thought considered originally in a 1999 paper by
Walter Ogburn and John Preskill (Ogburn & Preskill, 1999). The main subject of this paper was the field of topological quantum computation. Quantum computers are devices that manipulate information
that exploits the counter-intuitive properties of quantum theory. Quantum computers offer an interesting example of the notions of robust computing: quantum information is even more fragile than
classical information. Indeed when quantum computers were first shown to be able to outperform classical computers at certain tasks a critique of the field was that because quantum information is so
fragile, no large-scale quantum computer could, in principle, ever be built. However, just as von Neumann showed that classical computing is robust with faulty devices, quantum computing theorists
realized that a similar result could be obtained for quantum computers (Aharonov & Ben-Or, 1997) (Knill, Laflamme, & Zurek, 1998). Thus, in theory at least, quantum computers can be built assuming
that the basic operations of a quantum computer can be performed with high enough fidelity. What Ogburn and Preskill were considering in their paper was a method for doing fault-tolerant quantum
computing based upon prior work by Alexei Kiteav (Kitaev, 2003) which linked physical theories known as topological quantum field theories to fault-tolerant quantum computing. But what is remarkable
about this paper is the final concluding section where the authors make a bold speculation.
Fundamental Physics
To understand this speculation, a little background from fundamental physics is necessary. Quantum theory is a foundational theory of physics. It sits, along with special relativity, as the base upon
which the different physical theories rest. Thus one takes the physics of electrodynamics and puts it together special relativity and quantum theory to form quantum electrodynamics. So far physicists
have figured out how to merge the physics of three fundamental forces, the electromagnetic force, the weak force, and the strong force with quantum theory and special relativity in what is known as
the standard model of particle physics. But the fourth force we know exists, gravity, has resisted merging with quantum theory. This is the basic problem of quantum gravity. In the mid 1970s Stephen
Hawking made a startling claim (Hawking, 1976) based upon extrapolating quantum theory into a regime where gravity is important. He claimed that if one merged quantum theory and gravity one was
necessarily led to a situation in which information can be destroyed in a black hole. In contrast quantum theory, while allowing information to leak into an environment, does not allow for the
quantum information to be explicitly destroyed. This was a radical suggestion and is the basis of what is now termed the "black hole information paradox."
Ogburn and Preskill noted that this idea, that information could be destroyed by processes at a black hole, might have a novel solution in light of the realization that classical and quantum
computers could be built in spite of noise induced by an environment. In particular they noted that Hawking's result is particular odd because it implies that at very short timescales (faster than
the so-called Planck time, about 5 times 10^-44 seconds), where virtual black hole production is thought to be a dominating process, if Hawking is correct, then information must be destroyed in vast
quantities. Yet at longer time scales we know that quantum theory without information loss appears to be correct. In order to reconcile this, Ogburn and Preskill suggested that perhaps nature itself
is fault-tolerant. At short time scales, information is repeatedly being destroyed, but at longer scales, because of some form of effective encoding and error correction, non-information destroying
quantum theory is a good approximation to describing physics. This was especially relevant to the subject of the paper, topological quantum computing, where a physical theory—that of topological
quantum field theory—gave rise to models for robustly storing quantum information.
This is, in many respects, just another step down the line of argument that the universe is a computer of some sort. This point of view has been advocated by many researchers, most notably by Ed
Fredkin (Fredkin, 1992), Stephen Wolfram (Wolfram, 2002) and Seth Lloyd (Lloyd, 2006). In many respects this point of view may be nothing more than a result of the fact that the notion of computation
is the disease of our age—everywhere we look today we see examples of computers, computation, and information theory and thus we extrapolate this to our laws of physics. Indeed, thinking about
computing as arising from faulty components, it seems as if the abstraction that uses perfectly operating computers is unlikely to exist as anything but a platonic ideal. Another critique of such a
point of view is that there is no evidence for the kind of digitization that characterizes computers, nor are there any predictions made by those who advocate such a view that have been experimentally verified.
The notion that the universe is at its most fundamental level constantly destroying information, but that quantum theory holds at a larger length scales, is a variation on the theme of the universe
as a deterministic digital computer. But it is one that has not been considered nearly as closely. And indeed, it seems possible that it could lead to predictions in a manner that the mere idea of
the universe as a digital computer cannot. To understand how this might be, we note that if there is a digital component to our physics, then there is a timescale or length scale associated with this
discreteness. Unfortunately there are quite restrictive bounds on what such a digitization can look like: many of our theories have been validated to very high precision. Of course one can always
stick the digitization in at short enough timescales or small enough length scales, but there is no way to get at testable predictions beyond simply probing these shorter time scales with higher
energy physics.
Fault Tolerance
But the case for fault-tolerant computing might be different. Indeed one of the important features of the models considered by Ogburn and Preskill is that they are in some way spatially and
temporally local, and yet they provide some protection of quantum information. It is exactly this locality that means that they are easily meshed with our existing understanding of physics. Thus one
can ask: Are there consequences of requiring that information destroying processes get alleviated at large scales by some local naturally fault-tolerant physics? While I don't know the answer, there
is at least some interesting precedent that this could lead to novel predictions about fundamental physics.
One such precedent concerns fault-tolerant cellular automata. A cellular automaton is a construction where information is stored in spatially separated cells and these cells are updated at discrete
time steps depending on the state of the cell and the state of the neighboring cells. As a model of a faulty cellular automaton one can imagine interleaving deterministic perfectly functioning
cellular automaton updates with a process that randomly and independently adds noise to the cells. Another model of faults introduces cells that don't function at all (think "manufacturing defects").
Now here is the interesting point. If one considers a simple cellular automaton with the first kind of faults on a two dimensional square grid, then there is a simple cellular automaton rule, Toom's
rule (Toom, 1980), which can be used to robustly store information. Notably, however, when one adds manufacturing faults to this system, this tolerance to errors disappears (McCann, 2007). Indeed it
has been shown that for any cellular automaton rule from a broad class of such rules, on a square grid there is no way to robustly store information. But, it turns out that if one changes the spatial
layout of the cells, then a fault-tolerant cellular automaton tolerant to both types of errors can be constructed. What is the change necessary to achieve this? It is to work on grids that can be
embedded without distortion into a hyperbolic two-dimensional space (McCann & Pippenger, 2008). In other words, it seems that a consequence of the hypothesis that nature is fault-tolerant in two
spatial dimensions (for classical computation) is that space must be hyperbolically curved! In this way a consequence of the large scale geometry of the universe emerges from a requirement that a
cellular automata robustly store classical information in the presence of probabilistic and manufacturing faults. Indeed, in this case, it seems to get the answer wrong as we currently believe that
our universe is actually curved in the opposite sense: it is believed that we live in a universe dominated by a de Sitter cosmology, which is elliptically and not hyperbolically curved.
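For concreteness, here is a minimal sketch of Toom's rule (my own illustration, with periodic boundaries for simplicity): each cell is replaced by the majority of itself, its northern neighbor, and its eastern neighbor. Under the first fault model — transient flips, no manufacturing defects — a handful of flipped cells in an otherwise uniform grid are quickly erased:

```python
def toom_step(grid):
    # Toom's rule: each cell becomes the majority of itself, its
    # northern neighbor, and its eastern neighbor (periodic boundary).
    n = len(grid)
    return [[1 if grid[r][c] + grid[(r - 1) % n][c] + grid[r][(c + 1) % n] >= 2
             else 0
             for c in range(n)]
            for r in range(n)]

n = 16
grid = [[1] * n for _ in range(n)]      # encoded bit: "all ones"
for r, c in [(2, 3), (2, 4), (7, 7), (12, 1)]:
    grid[r][c] = 0                      # a few transiently flipped cells

for _ in range(10):
    grid = toom_step(grid)

print(sum(map(sum, grid)))  # -> 256: all errors have been erased
```

Repeating the experiment with permanently stuck cells, or on a grid embedded in hyperbolic space, would be the setting of the McCann and Pippenger results discussed above.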
The larger point, however, is that a noisy, faulty, or information-destroying small-scale physics may have consequences for our large-scale physics without our ever having to probe the
time scales at which these processes reveal themselves. Thus it is possible that, in contrast to the notion that the universe is a computer, the notion that the universe is a faulty
computer may make experimentally testable predictions, even if the level at which these faults show themselves is as far down as the Planck scale.
To the question "What is computation?" we have answered that, from the perspective of physics, computation is a property of only some physical systems and that the idealized form of perfectly digital
computation is an extremely useful emergent property, but unlikely to be fundamental. This in turn led us to consider whether nature itself might actually be much more dirty and noisy than we
normally think, and that the existence of computers should give us pause when understanding how quantum theory can exist in a physics dominated by the destruction of information. Provocatively we
have even suggested that proving theorems about fault-tolerant local systems might lead to evidence for nature being fault-tolerant without an appeal to the details of the actual physics. Truly,
then, computation may lie at the heart of our understanding of the workings of the universe.
Dave Bacon is an assistant research professor in the department of computer science and engineering at the University of Washington, where he is also an adjunct professor in the department of

References
Aharonov, D., & Ben-Or, M. (1997). Fault-tolerant quantum computation with constant error rate. Proceedings of the twenty-ninth annual ACM symposium on Theory of computing (pp. 176-188). New York:
ACM Press.
Fredkin, E. (1992). Finite Nature. Proceedings of the XXVIIth Rencontre de Moriond.
Hawking, S. W. (1976). Breakdown of Predictability in Gravitational Collapse. Physical Review D, 14, 2460.
Kitaev, A. (2003). Fault-tolerant quantum computation by anyons. Annals of Physics, 303, 2-30.
Knill, E., Laflamme, R., & Zurek, W. H. (1998). Resilient quantum computation. Science, 279, 342-345.
Landauer, R. (2002). Information is inevitably physical. In A. J. Hey, Feynman and computation: exploring the limits of computers (pp. 77-92). Boulder, CO: Perseus Books.
Lloyd, S. (2006). Programming the Universe: A Quantum Computer Scientist Takes On the Cosmos. Knopf.
McCann, M. (2007). Memory in media with manufacturing faults. PhD thesis, Department of Computer Science, Princeton University.
McCann, M., & Pippenger, N. (2008). Fault tolerance in cellular automata at high fault rates. Journal of Computer and System Sciences, 74, 910-918.
Ogburn, R. W., & Preskill, J. (1999). Topological Quantum Computing. Lecture Notes in Computer Science, 1509, 341-356.
Randall, A. (2006, Feb 14). Q&A: A lost interview with ENIAC co-inventor J. Presper Eckert. Retrieved May 5, 2009, from ComputerWorld: http://www.computerworld.com/s/article/print/108568/
Shannon, C. (1948). A mathematical theory of communication. Bell System Technical Journal, 27, 379-423.
Toom, A. L. (1980). Stable and attractive trajectories in multicomponent systems. In R. L. Dobrushin, Multicomponent Systems (pp. 549-575). New York: Dekker.
Turing, A. M. (1937). On Computable Numbers, with an Application to the Entscheidungsproblem. Proceedings of the London Mathematical Society, 42, 230-265.
von Neumann, J. (1956). Probabilistic logics and the synthesis of reliable organisms from unreliable components. In C. E. Shannon & J. McCarthy (Eds.), Automata Studies (pp. 43-98). Princeton, NJ: Princeton University Press.
Wolfram, S. (2002). A New Kind of Science. Wolfram Media, Inc.
DOI: 10.1145/1895419.1920826
©2010 ACM $10.00
Can't agree more with Tim's comment.
Article seems to be an elaboration on the limiting effects of entropy on the man-made machinery and media on which man-made computation is executed. The cycle of manifestation and deterioration of
man-perceived "information" in the physical world is not without its own transition or action constants. Disappointed the article did not address these non-manifesting, non-deteriorating action
constants, for it is these action constants that also apply to abstraction physics. As an example of the pure abstract and its influence on the physical: the concept of the decimal zero represents
nothing, a void, an absence of anything at all. Zero simply does not exist in physical reality, yet we make use of the abstract concept as a placeholder for value. By using the non-physical concept
of zero we are able to do computations more easily than without it (i.e. calculating with Roman numerals). How is it that something that doesn't physically exist has such powerful influence through
its conceptual use? The article does not express the full physical scope relevant to computation. It only addresses the entropy effect on the physical matter/media used in computation. Nor
does it address the manifestation effect, opposite entropy. I was hoping for a more complete article on fundamental physics as it applies to abstract computation, rather than just the limited media
of our computations.
El Sobrante Precalculus Tutor
...I tutored pre-calculus as an on-call tutor for Diablo Valley Junior College for three years. I taught pre-calculus sections as a TA at UC Santa Cruz for two years. I have taken classes in
teaching literacy at Mills College.
15 Subjects: including precalculus, reading, calculus, writing
...My professor noticed that I was helping students in his class and I was doing extremely well in his class so he recommended me to the tutoring center. I have enjoyed tutoring ever since. After
taking chemistry I began tutoring it also and with these two subjects as well as working in the stockroom I was able to pay for most of my education.
19 Subjects: including precalculus, chemistry, physics, calculus
I have been successfully tutoring for over 10 years with over 1,250 Wyzant hours and the most 5 star ratings of all the tutors in the Bay Area. I have helped students transform their grades from
Fs to As. It has been a very rewarding experience, since my students get to understand the subject a lot better as well as improve their grades.
59 Subjects: including precalculus, chemistry, reading, calculus
...I spent three semesters working with Junior Achievement in the Palo Alto and Mountain View School Districts.I have experience in developing skills for grade school and high school students. I
am raising two boys, one of whom suffers from ADHD and required extensive home coaching on study skills,...
39 Subjects: including precalculus, chemistry, English, calculus
...I can help with general concepts and ideas that constitute U.S. political reality. Federal, state and local. I can help with everything from comprehension to interpretation of the English language.
38 Subjects: including precalculus, English, reading, geometry | {"url":"http://www.purplemath.com/El_Sobrante_Precalculus_tutors.php","timestamp":"2014-04-18T00:37:32Z","content_type":null,"content_length":"24062","record_id":"<urn:uuid:bae6c5e3-6655-40c7-96cf-7b11dd4a797b>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00446-ip-10-147-4-33.ec2.internal.warc.gz"} |
3y-8=8 ?? help me
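In case it helps, isolating y takes two steps: add 8 to both sides, then divide by 3, giving y = 16/3. A quick exact check:

```python
from fractions import Fraction as F

# 3y - 8 = 8  ->  3y = 16  ->  y = 16/3
y = (F(8) + 8) / 3
assert 3 * y - 8 == 8        # y satisfies the original equation exactly
print(y)                     # 16/3
```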
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/updates/50d419c2e4b0d6c1d54151ae","timestamp":"2014-04-17T13:04:46Z","content_type":null,"content_length":"41794","record_id":"<urn:uuid:c67e999c-90d5-4906-aefa-c4c2812d5ea0>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00232-ip-10-147-4-33.ec2.internal.warc.gz"} |
Exact solution of the dynamics of the PASEP with open boundaries
The dynamics of the asymmetric exclusion process is governed by the spectrum of its transition matrix. In particular its lowest excited state describes the approach to stationarity at large times. I
will discuss the exact diagonalisation of the transition matrix of the partially asymmetric exclusion process with the most general open boundary conditions. The resulting Bethe ansatz equations
describe the complete spectrum of the transition matrix. For totally asymmetric diffusion I will present exact results for the spectral gap and derive the dynamical phase diagram. We observe
boundary induced crossovers in and between massive, diffusive and KPZ scaling regimes. | {"url":"http://www.newton.ac.uk/programmes/PDS/Abstract2/degier.html","timestamp":"2014-04-19T10:02:47Z","content_type":null,"content_length":"2816","record_id":"<urn:uuid:01e1e708-c548-47bb-900f-c4bced337862>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00650-ip-10-147-4-33.ec2.internal.warc.gz"} |
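To make the object concrete: for a few sites the transition matrix of the open, totally asymmetric process can be written down and diagonalised directly. The sketch below is my own illustration (the injection and extraction rates alpha = beta = 0.7 are arbitrary, bulk hopping is set to rate 1); it builds the Markov generator and reads off the spectral gap, the slowest relaxation rate that governs the approach to stationarity:

```python
import itertools
import numpy as np

def tasep_generator(n, alpha, beta):
    """Markov generator M of the open TASEP on n sites: particles enter at
    rate alpha at site 1, hop right at rate 1, and leave at rate beta at
    site n. Columns of M sum to zero; eigenvalue 0 is the stationary state."""
    states = list(itertools.product([0, 1], repeat=n))
    index = {s: i for i, s in enumerate(states)}
    M = np.zeros((2 ** n, 2 ** n))
    for s in states:
        j = index[s]
        moves = []
        if s[0] == 0:                       # inject a particle at site 1
            moves.append((alpha, (1,) + s[1:]))
        if s[-1] == 1:                      # extract the particle at site n
            moves.append((beta, s[:-1] + (0,)))
        for k in range(n - 1):              # bulk hops at unit rate
            if s[k] == 1 and s[k + 1] == 0:
                t = list(s)
                t[k], t[k + 1] = 0, 1
                moves.append((1.0, tuple(t)))
        for rate, t in moves:
            M[index[t], j] += rate          # gain for the target state
            M[j, j] -= rate                 # loss for the source state
    return M

M = tasep_generator(4, alpha=0.7, beta=0.7)
ev = np.linalg.eigvals(M)
gap = -sorted(ev.real, reverse=True)[1]     # minus the second-largest real part
print(round(gap, 4))                        # the spectral gap
```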
Fort Hunt, VA Math Tutor
Find a Fort Hunt, VA Math Tutor
...I believe patience is key to working with elementary students who are struggling with their studies, and my experience has taught me how to be patient and meet each student's individual needs.
I have tutored many students in linear algebra at the high school and college levels. I have excelled ...
46 Subjects: including SAT math, linear algebra, probability, algebra 1
...Instead, using diagrams and pictures, I like to ask many questions, helping students figure things out in their own minds. This leads to deep, lasting knowledge, and more importantly,
sharpened critical thinking and reasoning skills. The ultimate goal is that a student no longer needs me, becau...
6 Subjects: including SAT math, statistics, biology, SAT reading
...Thanks to this combination I have an extensive background in science, math, Spanish, and writing. Although I am not a native speaker, I have lived in Spain for 4 months and traveled to Costa
Rica as well. As an undergraduate, I tutored peers in Spanish including grammar, writing, and speaking skills.
17 Subjects: including algebra 2, calculus, geometry, physics
Do you have an elementary-aged child who needs some extra instruction and encouragement? Helping young students to succeed is my passion - learning should be fun, not frustrating! I have earned a
B.S. in Engineering and an M.S. in Business while spending four semesters as a teaching assistant.
6 Subjects: including algebra 1, grammar, prealgebra, spelling
I have always loved math from the start and have a pretty good background since my mom was an excellent math tutor to me. I have enhanced my knowledge in the subject matter by attending the
University of Virginia, where I graduated with a degree in Chemical Engineering and have excelled in all my m...
9 Subjects: including calculus, elementary (k-6th), trigonometry, linear algebra
Related Fort Hunt, VA Tutors
Fort Hunt, VA Accounting Tutors
Fort Hunt, VA ACT Tutors
Fort Hunt, VA Algebra Tutors
Fort Hunt, VA Algebra 2 Tutors
Fort Hunt, VA Calculus Tutors
Fort Hunt, VA Geometry Tutors
Fort Hunt, VA Math Tutors
Fort Hunt, VA Prealgebra Tutors
Fort Hunt, VA Precalculus Tutors
Fort Hunt, VA SAT Tutors
Fort Hunt, VA SAT Math Tutors
Fort Hunt, VA Science Tutors
Fort Hunt, VA Statistics Tutors
Fort Hunt, VA Trigonometry Tutors
Nearby Cities With Math Tutor
Baileys Crossroads, VA Math Tutors
Cameron Station, VA Math Tutors
Forestville, MD Math Tutors
Franconia, VA Math Tutors
Jefferson Manor, VA Math Tutors
Kingstowne, VA Math Tutors
Landover, MD Math Tutors
Lincolnia, VA Math Tutors
Marlow Heights, MD Math Tutors
Montclair, VA Math Tutors
Mount Vernon, VA Math Tutors
North Springfield, VA Math Tutors
Rosslyn, VA Math Tutors
Tysons Corner, VA Math Tutors
West Springfield, VA Math Tutors | {"url":"http://www.purplemath.com/Fort_Hunt_VA_Math_tutors.php","timestamp":"2014-04-21T12:59:05Z","content_type":null,"content_length":"24169","record_id":"<urn:uuid:abf40875-ffc5-44f3-96fd-4c8fb8e7e73c>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00426-ip-10-147-4-33.ec2.internal.warc.gz"} |
complete the square
i have a worked problem with solution, but need to check.
$x^2+\frac{2x}{3}=\frac{8}{3}$
now i must take half the coefficient of x and square it, right?
the next step i have been given is
$x^2+\frac{2x}{3}+\frac{1}{4}=\frac{8}{3}+\frac{1}{4}$
but i cannot see where the 1/4 is coming from?
as half of the coefficient 2/3 is 1/3?
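For reference: half of the x-coefficient 2/3 is 1/3, and squaring gives 1/9, so the constant added to both sides should be 1/9 rather than 1/4. A quick exact check of the corrected computation:

```python
from fractions import Fraction as F

b = F(2, 3)                        # coefficient of x in x^2 + (2/3)x = 8/3
add = (b / 2) ** 2                 # half the coefficient, squared
print(add)                         # 1/9

# (x + 1/3)^2 = 8/3 + 1/9 = 25/9, so x + 1/3 = ±5/3
roots = [F(-1, 3) + F(5, 3), F(-1, 3) - F(5, 3)]
for x in roots:
    assert x * x + b * x == F(8, 3)    # both roots satisfy the original equation
print([str(x) for x in roots])     # ['4/3', '-2']
```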
Semigroups Characterized by Their Generalized Fuzzy Ideals
Journal of Mathematics
Volume 2013 (2013), Article ID 592708, 7 pages
Research Article
Department of Mathematics, COMSATS Institute of Information Technology, Abbottabad 22060, Pakistan
Received 14 January 2013; Accepted 27 February 2013
Academic Editor: Feng Feng
Copyright © 2013 Madad Khan and Saima Anis. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction
in any medium, provided the original work is properly cited.
We have characterized right weakly regular semigroups by the properties of their $(\in ,\in \vee q_k)$-fuzzy ideals.
1. Introduction
The models of real-world problems in almost all disciplines, such as engineering, medical science, mathematics, physics, computer science, management sciences, operations research, and artificial intelligence, are mostly full of complexities, and dealing with them involves several types of uncertainty. To overcome these difficulties, many theories have been developed, such as rough set theory, probability theory, fuzzy set theory, the theory of vague sets, the theory of soft sets, and the theory of intuitionistic fuzzy sets. Zadeh explored the relationship between probability and fuzzy set theory in [1]; the latter provides an appropriate approach for dealing with uncertainty. Many authors have applied fuzzy set theory to generalize basic concepts of algebra. The concept of fuzzy sets in the structure of groups was given by Rosenfeld [2]. The theory of fuzzy semigroups and fuzzy ideals in semigroups was introduced by Kuroki in [3, 4]. The theoretical exposition of fuzzy semigroups and their applications in fuzzy coding, fuzzy finite state machines, and fuzzy languages was considered by Mordeson. The concept of belongingness of a fuzzy point to a fuzzy subset, by using a natural equivalence on fuzzy subsets, was considered by Murali [5]. Using these ideas, Bhakat and Das [6, 7] gave the concept of $(\in ,\in \vee q)$-fuzzy subgroups by using the “belongs to” relation ($\in$) and the “quasi-coincident with” relation ($q$) between a fuzzy point and a fuzzy subgroup, and introduced the concept of $(\alpha ,\beta )$-fuzzy subgroups, where $\alpha ,\beta \in \{\in ,q,\in \vee q,\in \wedge q\}$ and $\alpha \neq \in \wedge q$. In particular, the $(\in ,\in \vee q)$-fuzzy subgroup is an important and useful generalization of Rosenfeld's fuzzy subgroup. These fuzzy subgroups are further studied in [8, 9]. The concept of $(\in ,\in \vee q)$-fuzzy subgroups is a viable generalization of Rosenfeld's fuzzy subgroups. Davvaz defined $(\in ,\in \vee q)$-fuzzy subnear-rings and ideals of a near-ring in [10]. Jun and Song initiated the study of $(\in ,\in \vee q)$-fuzzy interior ideals of a semigroup in [11], which is a generalization of fuzzy interior ideals [12]. In [13], Kazanci and Yamak studied $(\in ,\in \vee q)$-fuzzy bi-ideals of a semigroup.
In this paper we characterize regular and right weakly regular semigroups by the properties of their right ideals, bi-ideals, generalized bi-ideals, and interior ideals. Moreover, we characterize right weakly regular semigroups in terms of their $(\in ,\in \vee q_k)$-fuzzy right ideals, $(\in ,\in \vee q_k)$-fuzzy bi-ideals, $(\in ,\in \vee q_k)$-fuzzy generalized bi-ideals, $(\in ,\in \vee q_k)$-fuzzy quasi-ideals, and $(\in ,\in \vee q_k)$-fuzzy interior ideals.
Throughout this paper $S$ denotes a semigroup. A nonempty subset $A$ of $S$ is called a subsemigroup of $S$ if $A^2 \subseteq A$. A nonempty subset $A$ of $S$ is called a left (right) ideal of $S$ if $SA \subseteq A$ ($AS \subseteq A$). $A$ is called a two-sided ideal, or simply an ideal, of $S$ if it is both a left and a right ideal of $S$. A nonempty subset $A$ of $S$ is called a generalized bi-ideal of $S$ if $ASA \subseteq A$. A nonempty subset $A$ of $S$ is called a bi-ideal of $S$ if it is both a subsemigroup and a generalized bi-ideal of $S$. A subsemigroup $A$ of $S$ is called an interior ideal of $S$ if $SAS \subseteq A$.
A semigroup $S$ is called right weakly regular if for every $a \in S$ there exist $x, y \in S$ such that $a = axay$.
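Concretely, right weak regularity requires, for every element a, some x and y with a = axay; in a finite semigroup this is a brute-force check. The two multiplication rules below are toy examples of my own, not from the paper:

```python
from itertools import product

def is_right_weakly_regular(elems, mul):
    """True if for every a there exist x, y with a = ((a*x)*a)*y."""
    return all(
        any(mul(mul(mul(a, x), a), y) == a for x, y in product(elems, repeat=2))
        for a in elems
    )

# Every group is right weakly regular (take x = a^{-1} and y the identity):
print(is_right_weakly_regular([0, 1, 2], lambda a, b: (a + b) % 3))        # True

# Counterexample: {1,2,3,4} under capped addition a*b = min(a+b, 4);
# for a = 1 the product a*x*a*y is always min(1+x+1+y, 4) = 4, never 1.
print(is_right_weakly_regular([1, 2, 3, 4], lambda a, b: min(a + b, 4)))   # False
```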
Definition 1. For a fuzzy set $\lambda$ of a semigroup $S$ and $t \in (0,1]$, the crisp set $\lambda_t = \{x \in S : \lambda(x) \geq t\}$ is called a level subset of $\lambda$.
Definition 2. A fuzzy subset $\lambda$ of a semigroup $S$ of the form $\lambda(y) = t \in (0,1]$ if $y = x$ and $\lambda(y) = 0$ otherwise is said to be a fuzzy point with support $x$ and value $t$ and is denoted by $x_t$.
A fuzzy point $x_t$ is said to belong to (resp., be quasi-coincident with) a fuzzy set $\lambda$, written as $x_t \in \lambda$ (resp., $x_t\, q\, \lambda$), if $\lambda(x) \geq t$ (resp., $\lambda(x) + t > 1$). If $x_t \in \lambda$ or $x_t\, q\, \lambda$, then we write $x_t \in \vee q\, \lambda$. The symbol $\overline{\in \vee q}$ means $\in \vee q$ does not hold. For any two fuzzy subsets $\lambda$ and $\mu$ of $S$, $\lambda \leq \mu$ means that, for all $x \in S$, $\lambda(x) \leq \mu(x)$.
Generalizing the concept of $x_t\, q\, \lambda$, Jun [12, 14] defined $x_t\, q_k\, \lambda$, where $k \in [0,1)$, as $\lambda(x) + t + k > 1$. $x_t \in \vee q_k\, \lambda$ if $x_t \in \lambda$ or $x_t\, q_k\, \lambda$.
2. $(\in ,\in \vee q_k)$-Fuzzy Ideals in Semigroups
Definition 3. A fuzzy subset $\lambda$ of $S$ is called an $(\in ,\in \vee q_k)$-fuzzy subsemigroup of $S$ if for all $x, y \in S$ and $t, r \in (0,1]$ the following condition holds: $x_t \in \lambda$ and $y_r \in \lambda$ imply $(xy)_{\min\{t,r\}} \in \vee q_k\, \lambda$.
Lemma 4 (see [15]). Let $\lambda$ be a fuzzy subset of $S$. Then $\lambda$ is an $(\in ,\in \vee q_k)$-fuzzy subsemigroup of $S$ if and only if $\lambda(xy) \geq \min\{\lambda(x), \lambda(y), \frac{1-k}{2}\}$.
Definition 5. A fuzzy subset $\lambda$ of $S$ is called an $(\in ,\in \vee q_k)$-fuzzy left ideal of $S$ if for all $x, y \in S$ and $t \in (0,1]$ the following condition holds: $x_t \in \lambda$ implies $(yx)_t \in \vee q_k\, \lambda$.
Lemma 6 (see [15]). Let $\lambda$ be a fuzzy subset of $S$. Then $\lambda$ is an $(\in ,\in \vee q_k)$-fuzzy left ideal of $S$ if and only if $\lambda(xy) \geq \min\{\lambda(y), \frac{1-k}{2}\}$.
Definition 7. A fuzzy subsemigroup $\lambda$ of a semigroup $S$ is called an $(\in ,\in \vee q_k)$-fuzzy bi-ideal of $S$ if for all $x, y, z \in S$ and $t, r \in (0,1]$ the following condition holds: $x_t \in \lambda$ and $z_r \in \lambda$ imply $(xyz)_{\min\{t,r\}} \in \vee q_k\, \lambda$.
Lemma 8 (see [15]). A fuzzy subset $\lambda$ of $S$ is an $(\in ,\in \vee q_k)$-fuzzy bi-ideal of $S$ if and only if it satisfies the following conditions: (i) $\lambda(xy) \geq \min\{\lambda(x), \lambda(y), \frac{1-k}{2}\}$ for all $x, y \in S$; (ii) $\lambda(xyz) \geq \min\{\lambda(x), \lambda(z), \frac{1-k}{2}\}$ for all $x, y, z \in S$.
Definition 9. A fuzzy subset $\lambda$ of a semigroup $S$ is called an $(\in ,\in \vee q_k)$-fuzzy generalized bi-ideal of $S$ if for all $x, y, z \in S$ and $t, r \in (0,1]$ the following condition holds: $x_t \in \lambda$ and $z_r \in \lambda$ imply $(xyz)_{\min\{t,r\}} \in \vee q_k\, \lambda$.
Lemma 10 (see [15]). A fuzzy subset $\lambda$ of $S$ is an $(\in ,\in \vee q_k)$-fuzzy generalized bi-ideal of $S$ if and only if $\lambda(xyz) \geq \min\{\lambda(x), \lambda(z), \frac{1-k}{2}\}$ for all $x, y, z \in S$.
Definition 11. A fuzzy subsemigroup $\lambda$ of a semigroup $S$ is called an $(\in ,\in \vee q_k)$-fuzzy interior ideal of $S$ if for all $x, y, z \in S$ and $t \in (0,1]$ the following condition holds: $y_t \in \lambda$ implies $(xyz)_t \in \vee q_k\, \lambda$.
Lemma 12 (see [15]). A fuzzy subset $\lambda$ of $S$ is an $(\in ,\in \vee q_k)$-fuzzy interior ideal of $S$ if and only if it satisfies the following conditions: (i) $\lambda(xy) \geq \min\{\lambda(x), \lambda(y), \frac{1-k}{2}\}$ for all $x, y \in S$; (ii) $\lambda(xyz) \geq \min\{\lambda(y), \frac{1-k}{2}\}$ for all $x, y, z \in S$.
Example 13. Let be a semigroup with binary operation “,” as defined in the following Cayley table:
Clearly is regular semigroup and , , and are left ideals of . Let us define a fuzzy subset of as
Then clearly is an -fuzzy ideal of .
Lemma 14 (see [15]). A nonempty subset $A$ of a semigroup $S$ is a right (left) ideal if and only if its characteristic function $\chi_A$ is an $(\in ,\in \vee q_k)$-fuzzy right (left) ideal of $S$.
Lemma 15. A nonempty subset $A$ of a semigroup $S$ is an interior ideal if and only if $\chi_A$ is an $(\in ,\in \vee q_k)$-fuzzy interior ideal of $S$.
Lemma 16. A nonempty subset $A$ of a semigroup $S$ is a bi-ideal if and only if $\chi_A$ is an $(\in ,\in \vee q_k)$-fuzzy bi-ideal of $S$.
Lemma 17. Let and be any fuzzy subsets of semigroup . Then following properties hold:(i),(ii).
Proof. It is straightforward.
Lemma 18. Let $A$ and $B$ be any nonempty subsets of a semigroup $S$. Then the following properties hold: (i) $\chi_A \wedge \chi_B = \chi_{A \cap B}$; (ii) $\chi_A \circ \chi_B = \chi_{AB}$.
Proof. It is straightforward.
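The sup-min product of fuzzy subsets, $(\lambda \circ \mu)(x) = \sup_{x=yz} \min\{\lambda(y), \mu(z)\}$, and the classical identity $\chi_A \circ \chi_B = \chi_{AB}$ behind these lemmas can be checked on a small finite model; the semigroup below (multiplication mod 4) is a toy example of my own:

```python
from itertools import product

S = [0, 1, 2, 3]
mul = lambda a, b: (a * b) % 4     # multiplication mod 4 is associative

def circ(lam, mu):
    """Sup-min product: (lam∘mu)(x) = max over x = y*z of min(lam[y], mu[z])."""
    out = {x: 0.0 for x in S}
    for y, z in product(S, repeat=2):
        out[mul(y, z)] = max(out[mul(y, z)], min(lam[y], mu[z]))
    return out

def chi(A):
    """Characteristic function of A, viewed as a fuzzy subset of S."""
    return {x: 1.0 if x in A else 0.0 for x in S}

A, B = {1, 3}, {2}
AB = {mul(a, b) for a, b in product(A, B)}
print(circ(chi(A), chi(B)) == chi(AB))   # True
```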
3. Characterizations of Regular Semigroups
Theorem 19. For a semigroup , the following conditions are equivalent:(i) is regular;(ii) for left ideals , , and bi-ideal of a semigroup .(iii), for some in ;
Proof. : Let be regular semigroup, then for an element there exists such that . Let , where is a bi-ideal and and are left ideals of . So , , and .
As . Thus .
is obvious.
: As and are left ideal and bi-ideal of generated by , respectively, thus by assumption we have
Thus or or , for some in . Hence is regular semigroup.
Theorem 20. For a semigroup , the following conditions are equivalent:(i) is regular;(ii) for every right ideal and bi-ideal of a semigroup ;(iii), for some in .
Proof. : Let be regular semigroup, then for an element there exists such that . Let , where is right ideal and , and are left ideals of . So , and . As . Thus .
is obvious.
: As is right ideal and is left ideal of generated by , respectively, thus by assumption we have
Thus or , for some in . Hence is regular semigroup.
Theorem 21. For a semigroup , the following conditions are equivalent:(i) is regular;(ii) for every -fuzzy right ideal , -fuzzy left ideals , and of a semigroup .
Proof. : Let be -fuzzy right ideal, and any -fuzzy left ideals of . Since is regular, therefore for each there exists such that
: Let be right ideal, and let and be any two left ideals of generated by , respectively.
Then is any -fuzzy right ideal, and and are any -fuzzy left ideals of semigroup , respectively. Let and . Then , , and . Now
Thus . Therefore .
So by Theorem 20, is regular.
4. Characterizations of Right Weakly Regular Semigroups in Terms of $(\in ,\in \vee q_k)$-Fuzzy Ideals
Theorem 22. For a semigroup , the following conditions are equivalent:(i) is right weakly regular;(ii) for every right ideal, left ideal, and interior ideal of , respectively;(iii).
Proof. : Let be right weakly regular semigroup, and let , , and be right ideal, left ideal, and interior ideal of , respectively. Let then , , and . Since is right weakly regular semigroup so for
there exist such that
Therefore . So .
is obvious.
: As , , and are right ideal, left ideal, and interior ideal of generated by an element of , respectively, thus by assumption, we have
Thus or or , for some in . Hence is right weakly regular semigroup.
Theorem 23. For a semigroup , the following conditions are equivalent:(i) is right weakly regular;(ii) for every fuzzy right ideal, fuzzy left ideal, and fuzzy interior ideal of , respectively.
Proof. : Let ,, and be any -fuzzy right ideal, -fuzzy generalized bi-ideal, and -fuzzy interior ideal of . Since is right weakly regular therefore for each there exist such that
Therefore .
: Let , , and be right ideal, left ideal, and interior ideal of generated by , respectively.
Then , , and are -fuzzy right ideal, -fuzzy left ideal, and -fuzzy interior ideal of semigroup . Let and . Then , , and . Now
Thus . Therefore . Hence by Theorem 22, is right weakly regular semigroup.
Theorem 24. For a semigroup , the following conditions are equivalent:(i) is right weakly regular;(ii) for every bi-ideal, left ideal, and interior ideal of , respectively;(iii).
Proof. : Let be right weakly regular semigroup, and , , and be bi-ideal, left ideal, and interior ideal of , respectively. Let then , , and . Since is right weakly regular semigroup so for there
exist such that
Therefore .So .
is obvious.
: As , , and are bi-ideal, left ideal, and interior ideal of generated by an element of , respectively, thus by assumption we have
Thus or or or , for some in . Hence is right weakly regular semigroup.
Theorem 25. For a semigroup , the following conditions are equivalent:(i) is right weakly regular;(ii) for every fuzzy bi-ideal, fuzzy left ideal and fuzzy interior ideal of , respectively;(iii) for
every fuzzy generalized bi-ideal, fuzzy left ideal, and fuzzy interior ideal of , respectively.
Proof. : Let ,, and be any -fuzzy generalized bi-ideal, -fuzzy left ideal, and -fuzzy interior ideal of . Since is right weakly regular for each there exist such that
Therefore .
is obvious.
: Let , , and be bi-ideal, left ideal, and interior ideal of generated by , respectively.
Then , , and are -fuzzy bi-ideal, -fuzzy left ideal, and -fuzzy interior ideal of semigroup . Let and . Then , , and . Now
Thus . Therefore . Hence by Theorem 24, is right weakly regular semigroup.
Theorem 26. For a semigroup , the following conditions are equivalent:(i) is right weakly regular;(ii) for every quasi-ideal , left ideal , and interior ideal of , respectively;(iii).
Proof. : Let be right weakly regular semigroup, and let , , and be quasi-ideal, left ideal, and interior ideal of , respectively. Let then , , and . Since is right weakly regular semigroup so for
there exist such that
Therefore . So .
is obvious.
: As , , and are quasi-ideal, left ideal, and interior ideal of generated by an element of , respectively, thus by assumption we have
Thus or or or , for some in . Hence is right weakly regular semigroup.
Theorem 27. For a semigroup , the following conditions are equivalent:(i) is right weakly regular;(ii) for every fuzzy quasi-ideal, fuzzy left ideal, and fuzzy interior ideal of , respectively.
Proof. : Let ,, and be any -fuzzy quasi-ideal, -fuzzy left ideal, and -fuzzy interior ideal of . Since is right weakly regular therefore for each there exist such that
Therefore .
is obvious.
: Let , , and be quasi-ideal, left ideal and interior ideal of generated by , respectively.
Then , , and are -fuzzy quasi-ideal, -fuzzy left ideal, and -fuzzy interior ideal of semigroup . Let and let . Then , , and . Now
Thus . Therefore . Hence by Theorem 26, is right weakly regular semigroup.
1. L. A. Zadeh, “Fuzzy sets,” Information and Control, vol. 8, pp. 338–353, 1965.
2. A. Rosenfeld, “Fuzzy groups,” Journal of Mathematical Analysis and Applications, vol. 35, pp. 512–517, 1971.
3. N. Kuroki, “Fuzzy bi-ideals in semigroups,” Commentarii Mathematici Universitatis Sancti Pauli, vol. 28, no. 1, pp. 17–21, 1980.
4. N. Kuroki, “On fuzzy ideals and fuzzy bi-ideals in semigroups,” Fuzzy Sets and Systems, vol. 5, no. 2, pp. 203–215, 1981.
5. V. Murali, “Fuzzy points of equivalent fuzzy subsets,” Information Sciences, vol. 158, pp. 277–288, 2004.
6. S. K. Bhakat and P. Das, “On the definition of a fuzzy subgroup,” Fuzzy Sets and Systems, vol. 51, no. 2, pp. 235–241, 1992.
7. S. K. Bhakat and P. Das, “$(\in ,\in \vee q)$-fuzzy subgroup,” Fuzzy Sets and Systems, vol. 80, no. 3, pp. 359–368, 1996.
8. S. K. Bhakat and P. Das, “Fuzzy subrings and ideals redefined,” Fuzzy Sets and Systems, vol. 81, no. 3, pp. 383–393, 1996.
9. S. K. Bhakat, “$(\in \vee q)$-level subset,” Fuzzy Sets and Systems, vol. 103, no. 3, pp. 529–533, 1999.
10. B. Davvaz, “$(\in ,\in \vee q)$-fuzzy subnear-rings and ideals,” Soft Computing, vol. 10, no. 3, pp. 206–211, 2006.
11. Y. B. Jun and S. Z. Song, “Generalized fuzzy interior ideals in semigroups,” Information Sciences, vol. 176, no. 20, pp. 3079–3093, 2006.
12. Y. B. Jun, “New types of fuzzy subgroups,” submitted.
13. O. Kazanci and S. Yamak, “Generalized fuzzy bi-ideals of semigroup,” Soft Computing, vol. 12, pp. 1119–1124, 2008.
14. Y. B. Jun, “Generalizations of $(\in ,\in \vee q)$-fuzzy subalgebras in BCK/BCI-algebras,” Computers & Mathematics with Applications, vol. 58, no. 7, pp. 1383–1390, 2009.
15. M. Shabir, Y. B. Jun, and Y. Nawaz, “Semigroups characterized by $(\in ,\in \vee q_k)$-fuzzy ideals,” Computers & Mathematics with Applications, vol. 60, no. 5, pp. 1473–1493, 2010. | {"url":"http://www.hindawi.com/journals/jmath/2013/592708/","timestamp":"2014-04-16T22:35:19Z","content_type":null,"content_length":"670303","record_id":"<urn:uuid:f60a20cd-b3d1-42db-844c-7af20014a23a>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00279-ip-10-147-4-33.ec2.internal.warc.gz"}
Transfer Residual Error to Backward Error.
It can be proved that there are Hermitian matrices $E$ and $F$ such that
$(A+E)\tilde{x} = \tilde{\lambda}(B+F)\tilde{x};$
see [256,431,473]. In fact, the norms of $E$ and $F$ are bounded by the residual norm, as in (5.40). Therefore $(\tilde{\lambda}, \tilde{x})$ is an exact eigenpair of nearby matrices. Error analysis of this kind is called backward error analysis, and the matrices $E$ and $F$ are called backward errors.
We say an algorithm that delivers an approximate eigenpair $(\tilde{\lambda}, \tilde{x})$ for the pair $(A,B)$ is backward stable with respect to the norm $\|\cdot\|$ if the backward errors $\|E\|$ and $\|F\|$ are small relative to $\|A\|$ and $\|B\|$, respectively.
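One common normwise backward-error estimate for an approximate eigenpair of a pencil $(A,B)$ (a sketch of my own, not the precise bound from the cited references) is the residual norm scaled by $(\|A\| + |\tilde{\lambda}|\,\|B\|)\,\|\tilde{x}\|$; a tiny value means the pair is exact for nearby matrices:

```python
import numpy as np

def backward_error(A, B, lam, x):
    """Normwise backward-error estimate for the approximate eigenpair
    (lam, x) of the pencil (A, B): the residual A x - lam B x, scaled by
    (||A|| + |lam| ||B||) ||x||."""
    r = A @ x - lam * (B @ x)
    scale = (np.linalg.norm(A, 2) + abs(lam) * np.linalg.norm(B, 2)) * np.linalg.norm(x)
    return np.linalg.norm(r) / scale

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
A = (M + M.T) / 2                 # Hermitian A
B = np.eye(4)                     # B = I: an ordinary Hermitian eigenproblem
w, V = np.linalg.eigh(A)
lam, x = w[0], V[:, 0]
print(backward_error(A, B, lam, x))   # rounding-level: the pair is nearly exact
```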
Next: Error Bound for Computed Up: Some Combination of and Previous: Residual Vector.   Contents   Index Susan Blackford 2000-11-20 | {"url":"http://web.eecs.utk.edu/~dongarra/etemplates/node186.html","timestamp":"2014-04-17T18:40:48Z","content_type":null,"content_length":"9251","record_id":"<urn:uuid:ae9c75cf-d50f-432a-925c-caf78c42728b>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00028-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Puzzle: One Boy, Born on Tuesday . . .
· Wednesday, May 26, 2010 ·
The gang at The Browser tease Alex Bellos‘ New Scientist essay “Magic Numbers, Mathemagical Tricksters” thusly:
If the Monty Hall puzzle drove you mad, here’s another one sure to do the same: “I have two children. One is a boy born on a Tuesday. What is the probability I have two boys?”
The piece doesn’t disappoint in this regard.
Gary Foshee, a collector and designer of puzzles from Issaquah near Seattle walked to the lectern to present his talk. It consisted of the following three sentences: “I have two children. One is
a boy born on a Tuesday. What is the probability I have two boys?”
The event was the Gathering for Gardner earlier this year, a convention held every two years in Atlanta, Georgia, uniting mathematicians, magicians and puzzle enthusiasts. The audience was silent
as they pondered the question.
“The first thing you think is ‘What has Tuesday got to do with it?’” said Foshee, deadpan. “Well, it has everything to do with it.” And then he stepped down from the stage.
The gathering is the world’s premier celebration of recreational mathematics. Foshee’s “boy born on a Tuesday” problem is a gem of the genre: easy to state, understandable to the layperson, yet
with a completely counter-intuitive answer that can leave you with a smile on your face for days. If you have two children, and one is a boy, then the probability of having two boys is
significantly different if you supply the extra information that the boy was born on a Tuesday. Don’t believe me?
Seems like nonsense to me, too. I figured, it’s 50/50 and the Tuesday was a distractor. After all, there are only two sexes and it’s 50/50 the second will be a boy. Not so much.
After the gathering ended, Foshee’s Tuesday boy problem became a hotly discussed topic on blogs around the world. The main bone of contention was how to properly interpret the question. The way
Foshee meant it is, of all the families with one boy and exactly one other child, what proportion of those families have two boys?
To answer the question you need to first look at all the equally likely combinations of two children it is possible to have: BG, GB, BB or GG. The question states that one child is a boy. So we
can eliminate the GG, leaving us with just three options: BG, GB and BB. One out of these three scenarios is BB, so the probability of the two boys is 1/3.
Now we can repeat this technique for the original question. Let’s list the equally likely possibilities of children, together with the days of the week they are born in. Let’s call a boy born on
a Tuesday a BTu. Our possible situations are:
□ When the first child is a BTu and the second is a girl born on any day of the week: there are seven different possibilities.
□ When the first child is a girl born on any day of the week and the second is a BTu: again, there are seven different possibilities.
□ When the first child is a BTu and the second is a boy born on any day of the week: again there are seven different possibilities.
□ Finally, there is the situation in which the first child is a boy born on any day of the week and the second child is a BTu — and this is where it gets interesting. There are seven different
possibilities here too, but one of them — when both boys are born on a Tuesday — has already been counted when we considered the first to be a BTu and the second on any day of the week. So,
since we are counting equally likely possibilities, we can only find an extra six possibilities here.
Summing up the totals, there are 7 + 7 + 7 + 6 = 27 different equally likely combinations of children with specified gender and birth day, and 13 of these combinations are two boys. So the answer
is 13/27, which is very different from 1/3.
It seems remarkable that the probability of having two boys changes from 1/3 to 13/27 when the birth day of one boy is stated — yet it does, and it’s quite a generous difference at that. In fact,
if you repeat the question but specify a trait rarer than 1/7 (the chance of being born on a Tuesday), the closer the probability will approach 1/2.
The bold emphasis is mine, indicating what I consider to be the key to the problem.
Of course, 13/27 is pretty damned close to 1/2. But my reasoning was wrong.
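The 13/27 figure can be confirmed by exhaustively enumerating the 196 equally likely (sex, weekday) combinations for two children:

```python
from itertools import product

# All equally likely (sex, weekday) pairs for one child: 2 * 7 = 14 of them.
kids = list(product("BG", range(7)))
# All equally likely two-child families: 14 * 14 = 196.
families = list(product(kids, repeat=2))

tuesday_boy = ("B", 1)                       # call Tuesday "day 1"
with_btu = [f for f in families if tuesday_boy in f]
both_boys = [f for f in with_btu if f[0][0] == "B" and f[1][0] == "B"]
print(len(both_boys), "/", len(with_btu))    # 13 / 27
```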
UPDATE: This is actually a more sophisticated variant of Martin Gardner’s “Boy or Girl Paradox,” which dates to 1959.
Related Posts:
1. john personna says:
It seems remarkable that the probability of having two boys changes from 1/3 to 13/27 when the birth day of one boy is stated — yet it does, and it’s quite a generous difference at that.
That’s just evidence that this is all just numeric idiocy.
2. john personna says:
Seems like nonsense to me, too. I figured, it’s 50/50 and the Tuesday was a distractor. After all, there are only two sexes and it’s 50/50 the second will be a boy. Not so much.
You were right the first time, as a first approximation. For more assurance you wouldn’t look at BG, GB, or even BTu. You’d need actual population data on whether women who carry boys to term are
more likely or not to carry another. Or if Tuesday births were not 50/50 with a margin of confidence. ;-)
3. Drew says:
I see it differently (I can be that way). The issue is understanding the question that was really asked.
If the questions was: I have one boy, he was born on Tuesday, now tell me the probabilities of having two boys your instincts (with two trivial technicalities) would be correct. The
technicalities: a) if I remember correctly, the odds of girl vs boy are not 50/50, but more like 51/49. b) genetics matter; if you have a boy, you are more likely to have another boy. But your
basic thrust would have been correct. Aside from my trivial adjustments, a dice is a dice. If fair, despite that 6 6′s in a row may have come up, the odds of another 6 on the next roll is 1/6th.
The issue is the question that was posed, which was a slight of hand. It was not PREDICTIVE in nature, but after the fact. That is, potentially what are the probabilities of what could have
happened? And so now you just get into raw numericy about what might have happened, as they did, with no notion of correlation or randomness, or prediction, which was your reaction.
Seems to me it was a cheesy, gotcha, trick question, and just numerical masturbation after that.
4. rodney dill says:
All probabilities are 50%: something either happens or it doesn't.
5. Franklin says:
Note to JJ: Second to last sentence should read 13/27 not 12/27.
That’s just evidence that this is all just numeric idiocy.
Wow … somebody must’ve flunked math and is still bitter about it.
Yeah, I suppose he could have stated, “assuming the chance of having a boy or girl is always 50/50 and the chance of having a child on any day of the week is equal …” but that would have been
assumed at a math conference, particularly one for Martin Gardner. Speaking of whom, this guy was awesome and his type of writing made math enjoyable – look him up if you don’t know anything
about him. Better yet, buy one of his books for your kid.
6. john personna says:
That’s just evidence that this is all just numeric idiocy.
Wow … somebody must’ve flunked math and is still bitter about it.
Made it through the Cal State system’s junior level calculus, just as much as I needed for a Chem BS. But I’ll admit I didn’t enjoy all those matrix maths in 5 function calculator days.
Gardner initially gave the answers 1/2 and 1/3, respectively; but later acknowledged[1] that the second question was ambiguous. Its answer could be 1/2, depending on how you found out that
one child was a boy.
Maybe my thinking is shaped by the physical sciences. "How you found out" is supposed to be about the accuracy and the precision of the measurement, not the phrasing of the text.
7. john personna says:
8. The Tuesday factoid is irrelevant. The order of the coin flips is irrelevant. This is basic probability stuff and frankly silly.
This is identical to saying I flipped a coin twice and when I flipped it on Tuesday it was heads. What are the odds that the coin came up heads both times I flipped it? The events are independent, so the probability of both coming up heads given that one has come up heads is 50-50.
Now the only exception to this is the fact that, strictly speaking, genetically, if one child was a boy then your odds of the other child being a boy are not in fact 50-50, but we don't have enough information to make a determination as to what the number should be. For instance, were this experiment conducted in China…
9. Sorry, meant to say births rather than coin flips in the previous post. Got ahead of myself.
10. Also, the 1/3 answer based upon options of BG, GB, or BB is sheer nonsense. BG and GB are the same option in this scenario unless you specify the boy born on Tuesday was born first or second, at
which time one of these options is eliminated, leaving you with BB or the other one. Sorry but I forget the proper term for this.
11. Grewgills says:
To answer the question you need to first look at all the equally likely combinations of two children it is possible to have: BG, GB, BB or GG. The question states that one child is a boy. So
we can eliminate the GG, leaving us with just three options: BG, GB and BB. One out of these three scenarios is BB, so the probability of the two boys is 1/3.
If we are going to delve into the silliness then both GG and GB are out since the first child was a boy. That leaves us with the intuitively correct 50%.
12. Grewgills says:
To be clear the options BG and GB indicate the order of births (or dice rolls etc), for GB to be a possible outcome the first child would have to be a girl.
13. Franklin says:
BG and GB are the same option in this scenario unless you specify the boy born on Tuesday was born first or second
Incorrect. This has been demonstrated many times, but I’ll try my hand at explaining it. Or you could try it yourself – do a thousand sets of two coin flips. You will get roughly 25% HH, 25% HT,
25% TH, and 25% TT. HT and TH are different cases, but if you want to consider them as one case, then you have to admit the probability of getting one head and one tail is 50%. And it’s easy
enough to show this: flip a coin once. What’s the chance that a second coin flip will be different than the first? 50%, obviously.
So BB, BG, GB, and GG are all equally probable, at 25%. If you consider BG and GB as "the same", then you must admit the probability of that case is 50%.
Now you’re told that one is a boy. If it helps, it could also be stated as “at least one is a boy.” Either way, this only eliminates one case: GG. Period. The remaining cases are BB, BG, and GB,
all still of equal probability. And only 1/3 of them include a second boy.
Or if you choose to consider BG and GB the “same case”, they were originally 50% possible while BB was only 25% possible, so again, BB is only 25/(50+25) = 1/3 possible.
If you had, as in your example, been told that the *first* child was a boy, then you’ve just eliminated two cases: GB and GG. In that case, the chance of another boy is in fact exactly 1/2. But
that’s not what was stated.
Hope this helps. If not, I can’t help ya.
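Franklin's argument above is easy to check with a quick Monte Carlo sketch (my code, assuming independent 50/50 births; the trial count is arbitrary):

```python
import random

random.seed(0)
trials = 100_000
at_least_one_boy = 0
both_boys = 0
for _ in range(trials):
    # Each child is an independent 50/50 boy-or-girl draw
    kids = (random.choice("BG"), random.choice("BG"))
    if "B" in kids:                 # the condition "at least one is a boy"
        at_least_one_boy += 1
        if kids == ("B", "B"):
            both_boys += 1

# Conditional probability of two boys given at least one boy: about 1/3, not 1/2
print(both_boys / at_least_one_boy)
```

Restricting instead to families whose *first* child is a boy gives roughly 1/2, matching Franklin's last paragraph.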
14. Franklin says:
If we are going to delve into the silliness then both GG and GB are out since the first child was a boy.
The puzzle does NOT state that the first child was a boy. It states one of the children is a boy. It’s an important distinction – words matter.
15. Franklin, well, no.
Either one of two cases is possible. Either the boy specified was born first in which case the only options are then BB or BG, or the boy specified was born second, in which case the only options
are GB or BB. In either case, the odds of having two boys are 50-50. There are no other options.
BG and GB are invariant (that’s the word I think I was looking for) in this problem statement unless and until you specify the boy was born first or second. To say otherwise means that you must
also have a B’B and a BB’ as your options to go with BG and GB, where B’ is the unspecified child, leaving you with, wait for it, a 50-50 chance once again.
16. Drew says:
We seem to be going in circles, and I come back to my original point. JP, Charles and I seem to be on the same page. (Wow. What a Motley crew there?!)
Franklin, you can assert that at a math conference it would be assumed that the issue was to assess not predictive probability, but possible probabilities after the fact. But I think that
devolves into a very simple and mechanical exercise. If that’s all this guy has, I’m really bored. Any dope can do that math.
I maintain, it was an intentional trick question and the essence of the debate they wanted to provoke is one that capitalizes on interpretation of the question’s intent, not the just raw math.
Did I just come to the defense of odo? Heh.
17. john personna says:
So I’ve read the wikipeida page, and Franklin, I’m going to say this isn’t a math problem at all. It hinges on expectation, human nature, and psychology:
Thus, if it is assumed that both children were considered, the answer to question 2 is 1/3. In this case the critical assumption is how Mr. Smith’s family was selected and how the statement
was formed. One possibility is that families with two girls were excluded in which case the answer is 1/3. The other possibility is that the family was selected randomly and THEN a true
statement was made about the family and IF there HAD BEEN two girls in the Smith family, the statement would have been made that “at least one is a girl”. If the Smith family were selected as
in the latter case, the answer to question 2 is 1/2.
My expectation, based not on math but on the framing of the Boy/Tuesday problem, was that it was a random selection of a family, and that Boy/Tuesday were observations about that random sample.
What I’ve learned here isn’t about math, it’s about mathematicians ;-), for some reason they assume that someone has stacked the deck, and given them a non-random sample. They require that, to
get the 1/3 answer.
18. john personna says:
(Yes, this is one of those things Drew and I agree on. We normally, my mutual silent agreement, tend to keep quiet about such things.)
19. Drew says:
“Yes, this is one of those things Drew and I agree on. We normally, [by] mutual silent agreement, tend to keep quiet about such things.”
In the upcoming reality TV show we scream and bitch slap each other.
20. TangoMan says:
If you want to be a real stickler the problem doesn’t tell you to assume a fair coin and if you substitute real birth data then you need to contend with the fact that the coin is slightly biased
towards males, in that 104-105 males are born for every 100 females born.
21. Robert Prather says:
I believe Franklin is correct. The stuff about psychology, a 51-49 split, etc. is off base in this example.
22. Grewgills says:
Franklin et al
I realized that I had read into the problem that the boy was child one shortly after I posted. Doh.
23. Franklin says:
Either one of two cases is possible. Either the boy specified was born first in which case the only options are then BB or BG, or the boy specified was born second, in which case the only
options are GB or BB.
I think personna’s Wikipedia excerpt (I haven’t read the whole page yet) may be identifying the assumption that I am making and that you aren’t (or perhaps the opposite – you are assuming that
the person is describing the children in some order). Going back to one of my previous posts, would it change anything for you if the wording was, “at least one is a boy”? I think that more
clearly eliminates only the GG case.
I would still classify this as a math problem; we used to call these “story problems” when I was a lad. But you were forced to interpret words to identify the underlying mathematical equations.
That’s what this problem is doing.
24. “I have two children. One is a boy born on a Tuesday. What is the probability I have two boys?”
Robert, that is the complete statement of the problem. Put into mathematical or logical terms (my college background, FWIW) the problem and the solution are exactly as I have stated. Everything
else being argued depends upon some additional, fanciful information that is beyond the statement of the problem, as well as changing the ground rules midstream — which is the primary mistake
made in the “solution” offered as well as by Franklin.
If you look carefully at the coin toss analogy I made you will understand why. The bottom line here is you can only get to a 1/3 answer by assuming that the births are sometimes independent
events and at other times dependent events. Needless to say, or so I thought, it doesn’t work that way. I’ll admit it has been many years since I hung out at Altgeld Hall worrying about such
things, so please don't make me pull out my Probability and Statistical Inference by Hogg and Tanis to provide a more detailed exposition on conditional probability, independent events and
dependent events.
25. And further….
The 7+7+7+6 formulation is wrong, because the first three situations resulting in 7 outcomes each treat the two births as an ordered pair, while in the last situation the ordered pair is
conceptually maintained but the BB pair is discarded because a BB pair appeared in the third situation as well. Now if you label the births with a subscript of 1 and 2 respectively it isn’t hard
to see that B1B2 is not the same ordered pair as B2B1, so you cannot throw it out to end up with only 6 outcomes after all.
26. By the way, Marilyn vos Savant agrees with me: … and conclude that her answer is correct from a mathematical perspective, given the assumptions that the likelihood of a child being a boy or girl
is equal, and that the gender of the second child is independent of the first
To get any other answer requires an ambiguity to be exploited in the statement of the problem, as well as confusing dependence and independence of the two births as discrete events.
27. And Bayesians can bite me.
28. Actually, the only thing to be learned from this is that poor problem statements can lead to ambiguous results. Well, that and a little knowledge can be a dangerous thing.
29. Franklin says:
You are incorrect about Marilyn Vos Savant agreeing with you. She did not and does not. I was reading her column in Parade at the time.
In fact the link you give includes her first version, where it uses the “at least one of” phrase that I’ve mentioned twice and that you haven’t responded to. This is a completely unambiguous
question, and clearly leads to the 1/3 answer.
By the way, I really dislike the 7+7+7+6 formulation as well, although it does work out to the correct answer. A more brute force way would be to label the possibilities of each child, using the
gender and day-of-birth, BSu, BMo, BTu, … GSu, GMo, GTu, etc. There are 14×14 combinations, of which 27 include a boy born on Tuesday, blah blah blah.
I’m honestly not sure where you’re getting the “sometimes independent and sometimes dependent” argument from. This is more straightforward than that, and you’re hardly the only one around here
who has taken classes in logic.
30. No doubt. I think the application of Bayes theorem is incorrect because I don’t think the two events satisfy the definition of being independent, but I’ll have to check the textbook tomorrow to
be sure.
31. One last analogy before bed. Imagine there were two NFL games today and you don’t know how they ended, but that I tell you that the first game was won by the team that won the coin toss. Are the
odds that the next game will be won by the team that wins the coin toss less than 50-50?
32. Sorry let me rephrase that….
Imagine there were two NFL games today and you don’t know how they ended, but that I tell you that one of the games was won by the team that won the coin toss. Are the odds that the other game
will be won by the team that wins the coin toss less than 50-50?
33. Wayne says:
It is a little smoke and mirrors.
Their numbers are correct but it leads people to think it says something it doesn’t.
The probability of having two boys where one is a boy born on a non-specific day of the week is still 1/3. It is only after actually specifying a day that it changes to 13/27. Once you specify a date you get a subset of instances of the original 1/3.
If you add the instances where you have two boys with at least one boy being born on Tuesday (13) to the instances where you have two boys with neither being born on Tuesday (36) (total of BB = 49) and compare it to all the instances where you have one girl and one boy (98), you get back to your 1/3 ratio.
A clearer wording would be "What is the probability I have two boys with one boy being born on a Tuesday?". Yes, it is the same information but IMO puts it in better context.
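Wayne's counts (13, 36, 49, 98) can be verified by brute-force enumeration over the 196 ordered (sex, weekday) pairs. This is a sketch under the usual equal-probability assumptions; day index 2 stands for Tuesday and the variable names are mine:

```python
from itertools import product

children = list(product("BG", range(7)))        # 14 possible (sex, day) children
families = list(product(children, repeat=2))    # 196 ordered two-child families

bb = [f for f in families if f[0][0] == "B" and f[1][0] == "B"]
bb_with_tue = [f for f in bb if ("B", 2) in f]          # two boys, at least one Tuesday boy
bb_no_tue = [f for f in bb if ("B", 2) not in f]        # two boys, no Tuesday boy
mixed = [f for f in families if f[0][0] != f[1][0]]     # one boy, one girl

print(len(bb_with_tue), len(bb_no_tue), len(bb), len(mixed))  # 13 36 49 98
print(len(bb) / (len(bb) + len(mixed)))                       # 1/3, as Wayne says
```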
34. There’s another thing to consider. Twins. If you say the first child was born on a Tuesday, then the probability of the second also being born on a Tuesday are greater than 1/7. Even if they
aren’t twins, one might suggest that the sexual activities of the parents may have led the two children to be conceived on the same day of the week, slightly increasing the probability that they
are both born on the same day too.
Also, if they are twins, is the probability of them being the same gender increased?
This puzzle reminds me of the game where you have to find the ball hidden under one of three cups. You pick one and the game vendor eliminates one empty cup and asks if you would change your
selection to the cup she didn't remove. There are 2 cups left, but the probability that your first choice is right remains 1/3. So the probability for the other cup is 2/3 and you should swap. This puzzle is a little bit
like that one, maybe.
35. Wayne says:
Why would the probability of the second also being born on a Tuesday be greater than 1/7?
As with most probability exercises there is an assumption of all else being constant.
36. Because twins are usually born on the same day.
37. Ben Collier says:
I found this site after listening to a piece about Martin Gardner on "More or Less" on BBC Radio 4.
The way I thought of this is as follows:
Assuming equal probability of gender and birth day
1. There are 196 equally probable distinct families: (2 genders * 7 days) * (2 genders * 7 days)
2. In 14 families the older child is a boy born on a Tuesday
2a. Of these 14, half (7) will have a sister
3. In 14 families the younger child is a boy born on a Tuesday
3a. Of these 14, half (7) will have a sister
4. In 1 family both children are boys born on a Tuesday
5. So there are 27 families including one boy born on a Tuesday (14+14 – 1 for the overlap).
6. Of these 27 families 14 include 1 girl (7 older sisters, 7 younger).
7. So 13 of the 27 families consist of 2 boys where (at least) one was born on a Tuesday
So still (7+6)/(7+7+7+6)
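Ben's count can be confirmed mechanically. A sketch under the same equal-probability assumptions, with weekday 2 standing in for Tuesday:

```python
from itertools import product

# All 196 equally likely ordered families of two (sex, weekday) children
families = list(product(product("BG", range(7)), repeat=2))
tuesday_boy = [f for f in families if ("B", 2) in f]
two_boys = [f for f in tuesday_boy if f[0][0] == "B" and f[1][0] == "B"]

print(len(tuesday_boy), len(two_boys))   # 27 13, i.e. P(two boys) = 13/27
```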
38. Ben Collier says:
Oh and using the same logic, but posing, “I have two children. One is a boy born in the first half of the day. What is the probability I have two boys?” I get 3/7 – Is this analogous to Charles
Austin’s NFL games and coin tosses?
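Ben's half-day variant checks out the same way (again a sketch with equal probabilities assumed; here 0 stands for morning and 1 for afternoon):

```python
from itertools import product

# 16 ordered families of two (sex, half-of-day) children
families = list(product(product("BG", range(2)), repeat=2))
morning_boy = [f for f in families if ("B", 0) in f]
two_boys = [f for f in morning_boy if f[0][0] == "B" and f[1][0] == "B"]

print(len(two_boys), len(morning_boy))   # 3 7, i.e. probability 3/7
```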
39. Wayne says:
I didn’t articulate myself well. Yes if they are twins the probability of both being born on the same day is much higher. However the facts that twins exist (3% of time) does not increase the
probability of it being another boy significantly, especially considering that of that 3% some of them would be mixed sexes.
Regardless throwing in more criteria than the problem stated is going outside of the problem. Remember the “assumption of all else being constant†. That fact that the parents may be ones who
abort all fetuses that are not male does not apply to this math problem since those facts are unknown an is not stated. It is a good idea to keep those circumstances and others in mind when
trying to predict real life outcome but it doesn’t apply on how to properly do the math of the known situation.
40. Wayne says:
As for your 3/7, once again you are taking a subset of the BB, GB, BG problem. You are taking a larger % subset of the 1/3 BB instances than you are of the GB, BG instances. One should take into account what instances they are excluding as much as what they are including.
For example as in the above problem. 13 is a larger % of 49 than 14 is of 98.
41. Wayne says:
Ba = Boy afternoon Ga = Girl afternoon
Bm = Boy morning Gm = Girl morning
In your example you are only excluding from the original set (4 instances) Ba,Ba (1), while excluding from the bg,gb set (8 instances) Ba,Gm + Ba,Ga + Gm,Ba + Ga,Ba (4). So you are excluding 25% of the first set and 50% of the second set. Or inversely, you are including 75% (3 out of 4) of the first set and 50% (4 out of 8) of the second set. Naturally doing so changes the probability.
42. Tony says:
It is a question of semantics, like most ‘clever’ maths problems. We misinterpret the original question as “What are the chances of me having another boy”.
The answer given is the answer to the question “What are the chances of having two boys, one of whom was born on a day other than a Tuesday, given that one of them is a boy who was born
(irrelevantly) on a Tuesday”.
There are then 27 (rather than 28) allowable combinations of boy / girl / day, 13 of which (rather than 14) are boys.
I should know. Have two boys, one of whom was born on a Tuesday.
43. Lloyd says:
It seems to me that if we truly believe this ‘numerical masturbation’, as Drew so eloquently put it, then we have to accept that even if the day of birth of the given boy is NOT stated, there is
STILL a probability of 13/27 that there are two boys.
What I mean by that is this:
If the question were stated as “I have two children. One is a boy. What is the probability I have two boys?”, then surely it would be a fair assumption that the given boy was indeed born on *a* day (be it Monday or Tuesday or Wednesday…). From there on, we can use the same mathematical proof that gave us 13/27 (since regardless of which of the seven days it is, the fraction 1/7 for each day still applies). Am I missing something major here?
44. I think you’re right, Lloyd. I’ve been coming back to the original conclusion that Tuesday doesn’t matter. Somewhere it says we are to discount one option of BTu-BTu, but I don’t think we should.
Count it twice, giving 14/28 probability.
45. Of the 196 permutations Sex1Day1:Sex2Day2, how many include at least one BTu? How many of those have a second boy?
46. 27 and 13. But, of the 27 permutations we had a double chance of picking the BTu:Btu one. So, 14/28.
47. leo says:
The people who are confused and can't see why it's not just 50/50: you are assuming that the FIRST child is a boy born on a Tuesday, but the question says that ONE OF THE TWO (you don't know which) is a boy born on a Tuesday.
This is the major difference that changes the answer from 50/50 to 13/27.
48. Lloyd says:
Yeah, you may be right leo… but I still agree with Tomid. I just think that in this case, BTu-Btu is different to Btu-BTu (if you get what I mean). And that even though you have no timeline to
refer to the children as different people, they still can be counted as distinct entities
49. Geoff says:
Once you know one child is a boy, then there are only four equally likely possibilities for the second child–a younger boy, an older boy, a younger girl or an older girl. In two of these cases we
get two boys, so the probability of having two boys is 2/4, i.e. 0.5. Forget about twins – one is always a little older than the other. Tuesday is a distractor. The essential info is that one child
is known to be a boy.
The 13/27 answer is the right answer to a different question, which is:
Given that I have two children and that they are not both girls, what is the probability that I have two boys with at least one boy born on a Tuesday?
50. Rob M says:
1/3 is the answer to:
I have randomly chosen a parent who has a boy and just one other child.
What is the chance they have two boys.
13/27 is the answer to:
I have randomly chosen a parent who has a boy born on Tuesday and just one other child. What is the chance they have two boys.
1/2 is the answer to:
I have randomly chosen a parent who has two children.
They have been asked to describe the sex of one of their children, and have replied “boy”.
What is the chance they have two boys.
1/2 is the answer to:
I have randomly chosen a parent who has two children.
They have been asked to describe the sex and birth date of one of their children, and have replied “boy Tuesday”.
What is the chance they have two boys.
Do we think that Monty Hall was specially chosen because he happened to have two children, one a boy born on Tuesday, or was Monty Hall always going to be the speaker and is just describing his
particular circumstances.
The answer to the above question will answer the 1/3 13/27 1/2 part.
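Rob M's distinction, how the family was selected, is the crux, and it can be simulated. This sketch (my own procedure names; fair 50/50 sexes and uniform weekdays assumed) contrasts filtering families on a property with describing a randomly chosen child:

```python
import random

random.seed(1)

def random_family():
    return [(random.choice("BG"), random.randrange(7)) for _ in range(2)]

N = 300_000
filt_n = filt_bb = 0     # procedure A: keep families containing a Tuesday boy
desc_n = desc_bb = 0     # procedure B: a random child happens to be described as "boy, Tuesday"
for _ in range(N):
    fam = random_family()
    is_bb = fam[0][0] == "B" and fam[1][0] == "B"
    if any(child == ("B", 2) for child in fam):
        filt_n += 1
        filt_bb += is_bb
    if random.choice(fam) == ("B", 2):
        desc_n += 1
        desc_bb += is_bb

print(filt_bb / filt_n)   # about 13/27 = 0.481...
print(desc_bb / desc_n)   # about 1/2
```

Same data, different answers: the 13/27 figure is a fact about the filtering procedure, not about the family itself.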
51. Wayne says:
You have four “possibilities” but only three instances/events: Older Boy/Younger Boy or Older Girl/Younger Boy or Older Boy/Younger Girl. Therefore the “probability” of having BB is still 1/3. Sort of like saying there are two possibilities of rolling the number 5 on a die three times in a row: either you do or you don't. That is true, but the probability of doing so is not 50%.
Rob M
How do you figure ½ for the last two examples?
If you would ask them to describe specifically their first or second child that would be true, but to have them choose either child at random, then no.
52. xavier says:
I don’t agree with this analysis.
The Tuesday/Tuesday scenario should be double counted – each boy could be be born on either “this” or “that” Tuesday (ie a bit like there are two ways of getting heads and tails with 2 coins) –
therefore probablitiy is 14/28 + 50%. The 1/3 hypothesis is clearly wrong – this is the Monty Hall trap.
I am not a mathematician so cannot express this in mathematical terms, but I am still sure it is right!
Math does Matter
Math: What Is It Good For?
Why do you need High School math? If you don't see yourself in a career in accounting or science, you may wonder why formulas and graphs are relevant to you.
In fact, studying math is important for most careers, even if they do not directly involve math.
It’s also important for getting into college. If you want to be prepared for life, college and finding a good job, you need math in your high school schedule.
Getting into College
Some students go to college and some don't. Believe it or not, math is a big part of the equation.
Students who take Algebra I in high school and then take geometry have about an 80 percent chance of ending up in college. That 80 percent remains the same no matter what your race, religion or
family income.
Students who successfully complete a math course higher than Algebra II double their odds of completing their bachelor’s degree.
Choose the Right Classes
Algebra and geometry put you on the road to college. One reason: both are college prerequisites — colleges require these courses because they prepare you for college-level work.
Take at least three or four years of math in high school.
Discuss your options with your counselor. Consider taking these classes to be prepared for college:
- Algebra I
- Geometry
- Algebra II
- Trigonometry and Calculus
Challenge Yourself
Push yourself by signing up for higher-level math classes; they’ll help prepare you for college entrance exams and math courses.
Don't worry about getting a C in a college prep course rather than an A in a less challenging class.
College admission officers who read your high school transcript know the level of difficulty of the classes you take.
Be Prepared
If you think you'll never use math again after high school, think again.
Most careers require math skills.
Colleges usually have basic courses that you are required to take during your first or second year — and chances are one or more of them require math skills. The major you choose may also require
some math classes.
Many students avoid higher-level math classes because they don’t think they can handle them.
Believe in yourself; if you work hard, you can succeed in a more challenging class.
Math After College Does Matter
You may end up in a career that doesn't require much math. It's true, your boss may never walk into your office and say, "Quick, what's the Pythagorean Theorem?" but the math you're learning now is
more than the sum of its parts. Math trains and disciplines your mind.
Just as the point of reading books is not to memorize vocabulary words, the point of math is not to memorize formulas.
Math helps you learn to:
- Identify and analyze patterns
- Develop logic and critical thinking
- See relationships
- Solve real-world problems
Will You Need It?
If you're not sure what you want to do after college, keep in mind that you might need math for your future job. If you already have a good idea of what you want to do, and it doesn't require much
math, consider this:
Most students switch majors after starting college. You might, too.
Be prepared with the basics, and keep your options open for whatever path you may follow.
The above info. is from: http://www.collegeboard.com/student/plan/college-success/955.html
The graph of y = ax + b, the prototypical linear relationship
The shortest distance between any two points in Euclidean space. A line is implicitly a straight line. It has only one dimension, length, may be infinite in extent, and is the shortest distance
between any two points.
Although in common parlance a straight line is not curved, it is a curve in the mathematical sense (i.e., a mapping of an infinite set of points into a space). Mathematically, a line may be
determined by the presence of any two points in an n-dimensional space (where n is two or more). A line segment is a piece of a line with definite endpoints.
In two-dimensional Cartesian coordinates, the equation of a line has the form y = ax + b, where a and b are constants.
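For illustration (my notation, not part of the original entry), the constants a and b for the line through two given points $(x_1, y_1)$ and $(x_2, y_2)$ with $x_1 \neq x_2$ are:

```latex
a = \frac{y_2 - y_1}{x_2 - x_1}, \qquad b = y_1 - a\,x_1
```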
HsOpenSSL-0.1: (Part of) OpenSSL binding for Haskell
Asymmetric cipher decryption using an encrypted symmetric key. This is the opposite of OpenSSL.EVP.Seal.
open
  :: Cipher -- symmetric cipher algorithm to use
  -> String -- encrypted symmetric key to decrypt the input string
  -> String -- IV
  -> PKey   -- private key to decrypt the symmetric key
  -> String -- input string to decrypt
  -> String -- decrypted string

open lazily decrypts a stream of data. The input string doesn't necessarily have to be finite.
openBS
  :: Cipher     -- symmetric cipher algorithm to use
  -> String     -- encrypted symmetric key to decrypt the input string
  -> String     -- IV
  -> PKey       -- private key to decrypt the symmetric key
  -> ByteString -- input string to decrypt
  -> ByteString -- decrypted string
openBS decrypts a chunk of data.
openLBS
  :: Cipher         -- symmetric cipher algorithm to use
  -> String         -- encrypted symmetric key to decrypt the input string
  -> String         -- IV
  -> PKey           -- private key to decrypt the symmetric key
  -> LazyByteString -- input string to decrypt
  -> LazyByteString -- decrypted string

openLBS lazily decrypts a stream of data. The input string doesn't necessarily have to be finite.
Produced by Haddock version 0.8
Metropolis Algorithm - Why does it work?
If I'm not mistaken, the stationary distribution X with transition matrix P is given by [itex]XP = X[/itex].
Is that correct?
Yes. (My preference is to multiply a column vector on the left by the matrix, so I would say [itex] TX = X [/itex] where [itex] T [/itex] is the transition matrix.)
A metaphor for teaching Markov chains involves frogs and lily pads. When a frog is on a particular lily pad at time t = n, he has a probability of staying on the pad at time t = n+1 and other probabilities of jumping to various other lily pads at that time. The probabilities depend only on which lily pad the frog is currently sitting on. Instead of one frog, imagine a horde of tiny frogs that obey this process. If you begin at time t = 0 by tossing frogs on lily pads at random and let them all hop independently, then the population of frogs on each lily pad might roughly stabilize after a long time. Intuitively, the observed fraction of frogs on each pad would approximate the stationary distribution of the process. This is believable if the lily pads are arranged so a frog on one can eventually hop his way to any other pad, perhaps indirectly - i.e. we aren't talking about a set of lily pads containing pads from two different ponds.
It may be a stretch to apply this metaphor to the Metropolis algorithm, but let's try. In that situation, there is some unknown probability distribution of frogs on pads. This doesn't involve letting them jump; perhaps someone has created this distribution by placing them on the pads. Furthermore, the pond is too large for you to see all the lily pads at once. Your job is to define a set of jump probabilities for each pad so that when the frogs are allowed to jump according to that Markov process, their distribution will approximately preserve the original distribution of the frogs (i.e. the original distribution of the frogs will be the stationary distribution of the process). You can count the number of frogs on a given pad P_a, and you can count the number of frogs on the pads P_b, P_c, ... around it, to which the frogs may jump. But you don't know the total number of frogs in the whole pond. How will you determine a set of jump probabilities involving pad P_a?
It may or may not be intuitive that you can determine the probabilities by computations using ratios of frogs, such as (number of frogs on pad P_b) / (number of frogs on pad P_a), without knowing the original distribution of frogs (e.g. not knowing (number of frogs on pad P_b) / (total number of frogs in the pond)). If this is intuitive to you then, since you're doing physics, it may be sufficient explanation! If not, we must do some algebra.
Pi: the ratio of the circumference to the diameter of a circle. In decimal form the number has no end and never repeats; it can only be approximated.
One can approximate PI to four decimal places (PI ~= 3.1416) or to eleven (PI ~= 3.14159265359). The first approximation uses a denominator of 10,000 and the second a denominator of 100,000,000,000:
PI ~= 31416 / 10000 = 3.1416
PI ~= 314159265359 / 100000000000 = 3.14159265359
The denominator for the approximation does not have to be a power of 10. 22/7 is a good approximation; the denominator is 7. 355/113 is another good one.
PI ~= 22 / 7 = 3.142857142857
PI ~= 355 / 113 = 3.141592920354
355 / 113 is correct to six decimal places. PI can actually be approximated using any number in the denominator, and in general a larger denominator allows a better approximation.
For a computer, it is easiest to work in powers of two. Using a denominator of 256 or 65536, the size of byte or word variables, simplifies the calculations required. The best approximations to PI with 65536 as the denominator are:
PI ~= 205888 / 65536 = 3 + 9280 / 65536 = 3.1416015625
which is 8.90891e-06 too high, or
PI ~= 205887/65536 = 3 + 9279/65536 = 3.141586303711
which is 6.349879e-06 too low, and therefore a little bit better. These are not as good an approximation as 355/113, but still good to several decimal places.
With a denominator of 256 the best we can hope for is:
PI ~= 804 / 256=3 + 36 / 256 = 3.140625
The point here is that we can divide by 256 by simply dropping the last byte of our value. We can divide by 65536 by dropping the last two bytes. So, if we have a math library that can multiply two 16 bit integer numbers and return a 32 bit integer result, we can find X*PI by simply finding (X*3) + (X*9279) / 65536. Trying to find (X*205887) / 65536 will not work because 205887 is larger than 16 bits. Trying to find X * (205887/65536) will not work because only integer math is involved; we would effectively be asking for X * 3.
Of course, you can do the same thing with many other fractions or values having a fractional part.
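The trick is easy to demonstrate. A minimal sketch (Python standing in for the 16-bit integer math described above, using the 205887 = 3*65536 + 9279 approximation):

```python
import math

def mul_pi_fixed(x):
    """Approximate x * PI for 16-bit unsigned x, using PI ~= 3 + 9279/65536.
    Needs only a 16x16 -> 32 bit multiply and a two-byte right shift."""
    assert 0 <= x < 65536
    return 3 * x + ((x * 9279) >> 16)   # >> 16 is "drop the last two bytes"

x = 1000
print(mul_pi_fixed(x), round(x * math.pi, 2))  # 3141 vs 3141.59
```

The result is always a little low, by the 6.3e-06 approximation error plus up to one unit of truncation from the shift, which is fine for the kinds of small-microcontroller work the article has in mind.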
The (Not So) Simple Pendulum
Posted by:
Ron D.
Pendulums are the defining feature of pendulum clocks, of course, but today they don’t elicit much thought. Most modern “pendulum” clocks simply drive the pendulum to provide a historical look, but a
great deal of ingenuity originally went into their design in order to produce highly accurate clocks. This essay explores horologic design efforts that were so important at one time—not gearwork,
winding mechanisms, crutches or escapements (which may appear as later essays), but the surprising inventiveness found in the “simple” pendulum itself.
It is commonly known that Galileo (1564-1642) discovered that a swinging weight exhibits isochronism, purportedly by noticing that chandeliers in the Pisa cathedral had identical periods despite the
amplitudes of their swings. The advantage here is that the driving force for the pendulum, which is difficult to regulate, could vary without affecting its period. Galileo was a medical student in
Pisa at the time and began using it to check patients’ pulse rates.
Galileo later established that the period of a pendulum varies as the square root of its length and is independent of the material of the pendulum bob (the mass at the end). One thing that surprised
me when I encountered it is that the escapement preceded the pendulum—the verge escapement was used with hanging weights and possibly water clocks from at least the 14th century and probably much
earlier. The pendulum provided a means of regulating such an escapement, and in fact Galileo invented the pin-wheel escapement to use in a pendulum clock he designed but never built. But it took the
work of others to design pendulums for truly accurate clocks, and here we consider the contributions of three of these: Christiaan Huygens, George Graham and John Harrison.
It was Christiaan Huygens (1629-1695) who built the first pendulum clock as we know it on Christmas, 1656. His pendulum swung in a wide arc of about 30° and consisted of a metal ball suspended by
silk threads. There are a few design aspects of pendulums that may appear obvious in retrospect but which were novel enough at the time. First, there is the matter of air and gear friction. To
minimize these effects there must be sufficient mass to make frictional forces irrelevant, the rod of the pendulum should be thin, and the pendulum must be enclosed to avoid drafts. It is also true,
unlike in Huygens’ pendulum, that the bob itself should be thin—later bobs were made to slice through the air, and this feature along with the requirement for significant mass results in the tapered
lens-shaped disk that we see today on pendulums.
Mahoney points out that Huygens’ invention of the pendulum clock contained an important feature: the independent suspension of the pendulum and the crutch linked the clock mechanism and the pendulum,
but in a way that allowed separate, one-way adjustments. The driving force of the escapement could be adjusted without affecting the operation of the pendulum, and varying the characteristics of the
bob (such as increasing the bob mass to overcome variations in crutch coupling or streamlining it to decrease air resistance) did not affect the operation of the escapement. This allowed a practical
means of calibrating the individual components. Also, the silk threads of the pendulum rod in this design were extremely light and strong with little stretch and high resistance to rot, and they also
minimized friction at the pivot point. They were ideal for Huygens.
A pendulum swinging in a circular arc is not truly isochronous: its restoring force depends on sin x, and sin x deviates from x as the angle x increases. This error is called the circular error or circular deviation. The wrapping of the silk threads over the “cheeks” seen in the clock diagram effectively shortens the pendulum length as it moves along its arc, constraining the pendulum to a cycloidal path and providing true isochronism. Huygens revolutionized the design of pendulums through such mathematical analysis of this and other characteristics of pendulums.
The required path is a tautochrone, a curve for which a frictionless particle sliding on it under gravity to its lowest point will take the same amount of time regardless of its starting position on the curve. By definition an isochronous pendulum needs to follow a tautochronic path.
But let’s back up a bit to see how Huygens found this curve. In deriving the relation for the period of a pendulum in 1659 equivalent to T = 2π(L/g)^1/2, Huygens found he had to make an approximation
that was only negligible for small amplitudes of oscillation, one that defined a curved path of a cycloid with a vertical axis of half the pendulum length L. In one of those fortuitous circumstances
that occur so frequently in history, he had studied precisely that curve for a mathematical challenge issued by Blaise Pascal in 1654. Following this lead Huygens found that a body falling from any
point along the cycloid will reach the bottom in the same amount of time, and the ratio of this time to the time for free fall from rest along the axis of the cycloid is π : 2. In 1673 he published
his masterpiece, the Horologium Oscillatorium,
To derive this, Huygens begins by presenting the Galilean properties of free-fall, i.e., that the distance fallen is proportional to both the time squared and the velocity squared. In addition, the
distance fallen in a given time is equal to the distance traversed in the same time with a constant velocity half that of the velocity at the end of the fall. After proving 20 more propositions too
detailed to present here, he arrives at the figure to the right. Here the arc ABC is a cycloid created by point A as the circle AVD rolls along the top line DC. As translated from the Latin by
Blackwell, Huygens states and then proves that:
The time in which a body crosses [spans] the line MN, with the uniform velocity acquired after it has fallen through the arc BG of the cycloid, will be related [proportional] to the time in which
it would cross the line OP, with half of the uniform velocity which it would acquire by falling through the whole tangent BI, as the tangent ST is related to the part QR of the axis.
After one more proposition in which he considers infinitesimal arcs of travel along the cycloid (we’ll touch on that later), Huygens arrives at the culmination of Part II of his work:
On a cycloid whose axis is erected on the perpendicular and whose vertex is located at the bottom, the times of descent, in which a body arrives at the lowest point at the vertex after having
departed from any point on the cycloid, are equal to each other; and these times are related to the time of a perpendicular fall through the whole axis of the cycloid with the same ratio by which
the semicircumference of a circle is related to its diameter.
where the last clause provides the π : 2 ratio mentioned earlier.
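Both claims in this proposition can be checked numerically. The sketch below (Python; the parameterization, step count, and value of g are my own choices, not Huygens') integrates the descent time along the cycloid x = r(phi - sin phi), y = r(1 + cos phi) from several starting angles and compares the result with pi*sqrt(r/g) and with the free-fall time through the axis of length 2r:

```python
import math

R, G = 1.0, 9.81  # generating-circle radius (m) and gravity (m/s^2), assumed

def descent_time(phi0, steps=200_000):
    """Time for a bead to slide from angle phi0 down to the vertex (phi = pi)
    of the cycloid x = R*(phi - sin phi), y = R*(1 + cos phi).
    Energy conservation reduces this to
    t = sqrt(R/G) * integral of sin(phi/2)/sqrt(sin^2(phi/2) - sin^2(phi0/2)),
    evaluated here by the midpoint rule (which tolerates the integrable
    1/sqrt singularity at phi0)."""
    s0 = math.sin(phi0 / 2.0)
    h = (math.pi - phi0) / steps
    total = 0.0
    for i in range(steps):
        s = math.sin((phi0 + (i + 0.5) * h) / 2.0)
        total += s / math.sqrt(s * s - s0 * s0)
    return math.sqrt(R / G) * total * h

t_expected = math.pi * math.sqrt(R / G)   # Huygens: same for every start
t_fall = math.sqrt(2.0 * (2.0 * R) / G)   # free fall through the axis (2R)
for phi0 in (0.5, 1.5, 2.5):
    print(round(descent_time(phi0), 3))   # all ~= t_expected
print(round(t_expected / t_fall, 4))      # pi : 2 ratio, i.e. 1.5708
```

Every starting angle gives the same descent time to within the quadrature error, and the ratio to the free-fall time is exactly pi/2, independent of r and g.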
Huygens then had to determine the shape of the cheeks: the curve (the evolute) from which the cord would “unwind” to form this cycloid (the evolution or evolvent), and he created a new branch of mathematics, the theory of evolutes, to do it. The problem reduced to finding a curve such that
• Each leaf is tangent to the centerline.
• Each leaf is perpendicular to the cycloidal arc of the pendulum at its point of contact.
• The leaf length to the point of contact with the bob equals the pendulum length, so it must have an arc length of twice the diameter of the circle generating the cycloidal path of the pendulum,
which is half the cycloid measured from its base to vertex.
This leads to the fact that the evolute must be a curve having the same base, height and length as the cycloidal path of the pendulum, and Huygens came to the startling realization that the evolute
is a cycloid generated by the same circle as the cycloid derived for the pendulum path, or in other words, the cycloid is its own evolute! (In 1692 Jacob Bernoulli showed that a logarthmic spiral
also is its own evolute.)
But for a practical pendulum Huygens further proposed that it is necessary to know its “center of oscillation,” and using an axiom equivalent to the conservation of energy he defined this center in terms of the modern concept of the moment of inertia. Taking the limits of infinitesimal points of mass, he calculated the centers of oscillation of many types of pendulums; for example, his spherical bob of radius r on a weightless string produced a center of oscillation 2r^2/(5L) below the center of the sphere. This provides the analysis of the effect of sliding weights on pendulums to adjust for (or measure) geographical differences. He derives the practical technique of using the period of a known pendulum to find g, the acceleration due to gravity of free-falling bodies.
Finally, Huygens describes the conical pendulum and produces theorems on centrifugal force equivalent to (but preceding) Newton’s F=mv^2/r. This is the only place where force appears, as his work is
based on the concept of conservation of energy, still a popular approach to physical problems involving complicated motions.
The Horologium Oscillatorium was written in the style of geometric physics in which quantities are related by proportions demonstrated with geometric constructions, a method soon superseded by the
analysis tools of mathematical physics. But Huygens used infinitesimal time intervals and distances and extrapolated them to limiting cases, prescient in his anticipation of the development of the
calculus. Blackwell points out that while later physicists relied on the extensive foundations of calculus and mechanics to build arguments, Huygen’s work “may be enjoyed as a beautiful specimen of
[his] explicit handling of physical concepts and argument.” It is a self-contained jewel with a brilliant clarity seen in great works of all fields.
In places Huygens even settles for an approximate solution, something I wouldn't expect from a geometric construction and surely an indication of how closely he aligned his mathematics with the practical construction of mechanical clocks.
And maybe that’s what makes the Horologium Oscillatorium such a fascinating piece of work. Mahoney points out something I hadn’t noticed, that there are three layers of meaning in the diagrams and
sketches of Huygens. In this work we see the overlay of physical shapes (the pendulum cord and cheeks) onto geometric constructions proving theorems about those shapes. In notes from 1659 in which
Huygens first finds the cycloid as the isochronous curve, he also overlays an auxiliary curve, a parabola that describes the velocity of the bob as it moves along the cycloid. He created “a curve in
physical space, the properties of whose normal and ordinate could be mapped by way of a mathematical curve so as to generate another mathematical curve congruent to a graph of velocity against
distance” [Mahoney]. Later we will see that Huygens created a mathematical relation that defines isochronous systems, thereby lifting the mathematics out of the geometrical physics and anticipating
analysis as the new physics.
Huygens later invented the ingenious tri-cordal pendulum, a ring suspended at three points by threads and made to oscillate around its center as shown in his sketches below. Radial placement of
weights could be used to calibrate the pendulum. From his analysis of conical pendulums he discovered that this mechanism would be isochronous if any point of the ring moves along a parabola curved
around the cylinder defined by the ring. To fine-tune the tri-cordal pendulum to this constraint he considered adding cheeks but eventually just went with longer threads.
After the publication of his Horologium Oscillatorium, Huygens found that in a cycloidal pendulum the force on the bob is proportional to the distance or angle from the neutral position, and he
deduced that any mechanical system that met this constraint would be isochronous. He came up with a number of mechanisms of this type. In 1675 this led him to invent the horizontal balance spring as
a clock oscillator, in which the force varies directly with angle in the same way that force varies directly with distance in ordinary springs, although Hooke did not publish his law on this until
1678. (There is some debate today on whether Hooke actually invented the balance spring.)
Meanwhile, in 1670 the anchor escapement was invented, possibly (and certainly claimed) by Robert Hooke. Some authors attribute its discovery to Thomas Tompion (1639-1713), but a more correct
attribution may be to William Clement (1643-1710) [Heldman]. The workings of this escapement are outside the scope of this essay, but its effect on pendulum design was significant because it was used
to reduce the pendulum swing to 4-5°. (It is worth noting here that the verge escapement can have as small an angle of escape as desired by designing very long pallet arms and a large distance
between the horizontal escape wheel and the pivot arbor for the pallets and crutch—there are provincial French pendulum clocks, mostly of the 19th century, with this arrangement [Heldman]. But
certainly the anchor escapement triggered clock designs at the time that had small-angle swings.)
A reduced pendulum swing makes possible much longer pendulums for a given horizontal space. Clocks with 14ft. pendulums were built, for example, and Tompion produced a clock with a 13ft. pendulum
hung above the movement. Longer periods are more directly geared to clock time, but the small swing provided by the anchor escapement also significantly reduced friction at the pivot point. And now
that the swing was small, the silk cords that rolled over the cycloidal cheeks were replaced with a short strip of flat metal (a brass suspension spring) that simply flexed around the shorter arc of
the cheeks, decreasing the friction even more. When a metal rod and bob were connected to the strip, the entire pendulum manifested a permanent, all-metal construction. It might be noted here that
Huygens and others looked to long, slow pendulums for stability, but in fact more success in pendulum clocks was ultimately had with short, fast-moving pendulums.
The other major problem with pendulums was the change in length, and therefore the center of gravity, with temperature. Huygens never fully realized the effect of temperature on his clocks. On hot
days a pendulum lengthens slightly and the clock slows, and the opposite happens on cold days. George Graham (1673/4-1751) attempted to devise a pendulum using the varying expansion rate of metals to
remain isochronous over temperature ranges. In these designs the expansion in temperature of one metal is offset by the expansion of the other, designed so that the net length of the pendulum remains
constant. Failing to arrive at a suitable design, Graham settled on mercury-compensated pendulums as his solution. Here the pendulum is designed to hold mercury in a glass cylinder in much the same
way as a mercury thermometer. When the pendulum length increased with temperature, the mercury expanded as well, and vice-versa. When properly designed, the net center of gravity of the pendulum
remained unchanged regardless of temperature variations. Ingenious! Graham also invented the deadbeat escapement that made for quite small pendulum arcs.
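A back-of-the-envelope sketch shows why this mattered so much (Python; the expansion coefficient is a typical handbook value for steel, an assumption rather than a figure from the essay):

```python
# Order-of-magnitude estimate of the thermal-expansion error that Graham's
# mercury pendulum was designed to cancel.
ALPHA_STEEL = 12e-6   # fractional length change per degree C (assumed)
DT = 10.0             # seasonal temperature swing, degrees C

# T ~ sqrt(L), so the fractional period change is half the fractional
# length change, independent of the pendulum's actual length.
fractional_period_change = ALPHA_STEEL * DT / 2.0
seconds_per_day = fractional_period_change * 86400.0
print(round(seconds_per_day, 2))  # ~5.18 s/day slower when 10 C warmer
```

Roughly five seconds per day from a ten-degree swing dwarfs the second-per-month accuracy claimed later for Harrison's clocks, which is why temperature compensation became such a central design problem.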
(As an interesting notion Matthys mentions that if a pendulum is not temperature compensated, one might support the bob at the bottom edge. In this way the upward expansion of bob partially
compensates for downward expansion of pendulum rod.)
John Harrison (1693-1776) attacked the temperature problem differently with his gridiron pendulum, which alternated brass and steel rods so that their opposing expansions cancelled and the effective length stayed constant. Well, just about. Harrison was also the first to realize the effect of atmospheric density on the period of a pendulum. Colder temperatures produce higher air densities, which alter the buoyancy of the bob and therefore the restoring torque. Another factor, absolute humidity, affects the density and viscosity and thence the rate of energy loss, equilibrium amplitude and period of a pendulum
[Emmerson]. From experiments performed with evacuated bell jars, Harrison adjusted his gridiron pendulum to account for the effect of temperature-induced density changes as well as for thermal
expansion! Harrison also first confronted the effect of air resistance after he invented the grasshopper escapement—the frictional losses were now so low in his clock that the pendulum swung wildly
until he attached small vanes to the pendulum. Air resistance is important, as over 90% of the drive energy imparted to a pendulum is lost through air drag. (Actually, some amount of air resistance
can provide stability to the amplitude of the pendulum swing.) Through this and many other innovations Harrison claimed a pendulum clock accuracy of 1 second per month, an achievement still very much
envied (and challenged).
There are other aspects of pendulums that are not considered here. For example, two or more pendulums that are lightly coupled (such as in clocks sitting on the same mantelpiece) will synchronize
their swings, but in the opposite direction. Huygens made the first observation of a coupled oscillator in just this way in 1665 while recovering from an illness. For unconstrained simple pendulums
with the same natural period, such a loose coupling results in a modal phenomenon in which the total swinging motion moves back and forth between them. Highly coupled oscillators, such as a compound
pendulum where one pendulum is hung from the bob of another, exhibit chaotic motion. Coulomb used a torsion pendulum in 1784 to quantify the electrostatic force, and Cavendish determined the density
of the Earth in 1798 using a pendulum. Foucault also famously used a pendulum in 1851 to directly demonstrate the rotation of the Earth. But in the end my fascination lies with the creative,
technical pursuits seen in the early designs of pendulum clocks.
Andrewes, W. J. H. (Ed.). The Quest for Longitude: The Proceedings of the Longitude Symposium Harvard University, Cambridge, Massachusetts, November 4-6, 1993. Cambridge: Collection of Historical
Scientific Instruments, Harvard University (1996). Wow, is this a neat book, a lavishly illustrated collection of fascinating essays by experts in horology on the pursuit to determine the longitude
of a person at sea, a huge historical problem. Essays that provided information for the present essay include The Longitude Timekeeper of Christiaan Huygens, by J.H. Leopold; ‘John Harrison,
Clockmaker & Barrow; Near Barton upon Humber; Lincolnshire’: The Wooden Clocks, 1713-1730, by Andrew L. King; and The Scandalous Neglect of Harrison’s Regulator Science, by Martin Burgess.
Emmerson, Alan. The papers in Horological Science by Mr. Emmerson presented here are a pedagogical treat, presenting clear mathematical explanations of pendulum physics such as the non-isochronous
behavior of a rigid pendulum suspended between cheeks as mentioned in this essay.
Heldman, Alan W. Personal Communications. Mr. Heldman’s horological knowledge led to several corrections and improvements to this essay, which is much appreciated.
Huygens, Christiaan. The Pendulum Clock or Geometrical Demonstration Concerning the Motion of Pendula as Applied to Clocks, Translated with Notes by Richard J. Blackwell. Ames: Iowa State University
Press (1986 translation of 1673 Horologium Oscillatorium). Surprisingly, this is the first English translation of Huygens’ book, and it’s a really interesting read. This is also the culminating
scientific work presented as geometrical physics (i.e., using geometric constructions as derivations and proofs, with relations between quantities of different dimensions expressed as proportions
rather than equations). Later works by others trended toward analytical approaches, particularly following the invention of calculus. Interestingly, Blackwell also notes that this book is based on an
axiom equivalent to the conservation of energy rather than the concept of forces developed later by Newton. An actual scan of the 1673 book from which the manuscript figures of this essay were drawn
is found at http://kinematic.library.cornell.edu:8190/kmoddl/toc_huygens1.html.
King, Henry C. Geared to the Stars: The Evolution of Planetariums, Orreries, and Astronomical Clocks. Toronto: University of Toronto Press (1978). An encyclopedic, out-of-print work on a niche
subject that is quite an expensive volume to buy on the used market. I was able to borrow it from a local library.
Mahoney, Michael S. Various fascinating papers, most of which involve Huygens, can be found at http://www.princeton.edu/~mike/17thcent.html. In particular, details of Huygens’ original cycloidal
derivations from 1659 can be found in Christiaan Huygens: The Measurement of Time and Longitude at Sea, and an interesting discussion of the physical and mathematical layers within Huygens’ drawings
is presented in Drawing Mechanics.
Matthys, Robert J. Accurate Clock Pendulums. Oxford University Press (2004). Lots of practical advice can be found in this book.
Printer-friendly PDF file of this post.
Almost Scientific › Submission #1 says:
December 10th, 2007 at 3:36 am
[…] Tick-Tok […]
Thanks, my first pingback—Ron D.
Gregg says:
January 10th, 2008 at 12:09 pm
I have been trying to set my pendulum clock for a couple of months now and for the life of me can not figure out whether moving the bob up speeds up the clock or slows it down. I realize the speed of
the clock is somewhat dependent on the distance of the bob. Does the bob move faster when farther away or slower? Any help would be great.
Hi Gregg. All other things being equal, the period of a pendulum increases as the bob moves further from the pivot point, so the clock slows down. The approximate equation for the period T of a
pendulum in terms of the acceleration g due to earth’s gravity, the constant pi, and the length L (measured from the pivot point to the center of mass) is T = 2π(L/g)^1/2, so the period increases as
the square root of the length. In other words, if we assume all the mass is located in the bob, doubling its distance will increase its period by a factor of 2^1/2=1.414. The exact equation for the
period of a pendulum is not expressible as a finite equation like this, but it turns out that the period calculated from it also varies directly with the square root of L. Perhaps your inconsistent
results are due to changes happening in the amplitude of the pendulum swing when you move the bob—larger swings of a pendulum increase its period, and here the exact equation can be approximated as T
= T[0](1 + θ^2/16), where T[0] is the small-angle period calculated from the earlier formula and θ is the amplitude (angle) of the swing. So if the swing increases as a side effect of moving the bob
closer to the pivot point, then they will have opposing effects on the period that can make understanding what is going on much more difficult. — Ron D.
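These formulas are easy to try out. A minimal sketch (Python, assuming g = 9.81 m/s^2):

```python
import math

G = 9.81  # m/s^2, assumed value for local gravity

def period(L, theta_deg=0.0):
    """Pendulum period: T0 = 2*pi*sqrt(L/G), times the first
    amplitude correction (1 + theta^2/16) for a swing of theta_deg."""
    T0 = 2.0 * math.pi * math.sqrt(L / G)
    theta = math.radians(theta_deg)
    return T0 * (1.0 + theta * theta / 16.0)

print(round(period(1.0), 3))                # ~2.006 s for a 1 m pendulum
print(round(period(2.0) / period(1.0), 3))  # 1.414: doubling L scales T by sqrt(2)
print(period(1.0, 30.0) > period(1.0))      # True: wider swings run slower
```

This confirms the two effects Ron describes: lowering the bob (larger L) slows the clock, and so does a larger swing amplitude, so the two can mask each other during adjustment.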
Jim says:
January 16th, 2008 at 10:07 pm
I find it interesting that it has been demonstrated that it is impossible to make a perfect pendulum on Earth. In the ’50’s, a Russian (I have his name somewhere) developed the “perfect” pendulum
using counter operating suspension springs to achieve true cycloidal motion without friction. The pendulum was detached to avoid interference and incorporated every conceivable refinement for
temperature, pressure, etc. It worked great, except it was so precise it was affected by the tides and gained or lost time depending on the relative position of the moon.
I hadn’t heard of this. Alan Emmerson also refers to it in a later comment below. — Ron
FMS_Lima says:
February 26th, 2008 at 8:37 pm
A comment I have to make on the response of Ron D. to Gregg (question #2, above) is that the pendulum period is indeed affected by the angular amplitude (which Ron named "θ"), but the formula T = T[0](1 + θ^2/16) suggested by Ron is poor in comparison to a logarithmic one I introduced in one of my recent works [American Journal of Physics vol. 74 (10), p. 892 (2006)]. Take a look and help me to disseminate this interesting approach to the pendulum period (if you wish I can send you a PDF copy by e-mail). Thanks.
Thanks for the pointer! I’ve read through your paper (which can be downloaded as http://arxiv.org/vc/physics/papers/0510/0510206v1.pdf) and you’re absolutely correct. For the information of others
here, the approximation I provided (from the truncation of a series by Bernoulli) is calculated in the paper as having an error of 0.1% and 0.5% for amplitudes of 41° and 60°, respectively. Dr. Lima
derived a logarithmic approximation by linear interpolation of the denominator in the elliptic integral of the exact solution, yielding T = -T[0] ln(a)/(1-a), where T[0] is the small-angle formula
and a = cos(θ/2). This formula exhibits an error of 0.1% and 0.2% for amplitudes of 74° and 86°, respectively. This is a significant improvement for such a simple formula, and as the paper points
out, this is increasingly important as today’s electronic timers and detectors are available to students in physics labs. Thanks again for taking the time to comment on this. — Ron
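For readers who want to put numbers on these error figures, here is a quick sketch of mine (not from the thread) comparing both approximations with the exact factor, computed from the arithmetic-geometric mean (AGM) form of the complete elliptic integral:

```python
import math

def exact_factor(theta_deg):
    # Exact large-amplitude factor T/T0 = 1/AGM(1, cos(theta/2)),
    # a standard identity for the complete elliptic integral.
    a, b = 1.0, math.cos(math.radians(theta_deg) / 2)
    while abs(a - b) > 1e-15:
        a, b = (a + b) / 2, math.sqrt(a * b)
    return 1.0 / a

def bernoulli_factor(theta_deg):
    # Truncated-series approximation T/T0 = 1 + theta^2/16 (theta in radians)
    return 1 + math.radians(theta_deg) ** 2 / 16

def lima_factor(theta_deg):
    # Logarithmic approximation T/T0 = -ln(a)/(1 - a), with a = cos(theta/2)
    a = math.cos(math.radians(theta_deg) / 2)
    return -math.log(a) / (1 - a)

for theta in (30, 60, 90):
    e = exact_factor(theta)
    print(theta, round(e, 5), round(bernoulli_factor(theta), 5),
          round(lima_factor(theta), 5))
```

At 60° the truncated series is off by roughly 0.4% while the logarithmic formula stays within about 0.05%, consistent with the error figures quoted above.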
FMS_Lima says:
April 14th, 2008 at 3:29 am
Dear R. Doerfler,
I’ve just submitted that other paper on the large-angle pendulum period to Am. J. Phys. This complete version will soon be posted on the Cornell “arxiv” and then you can post a copy on your WebSite.
You will surely be glad to know that I included you in the Acknowledgments at the end of the paper.
After some days of rest I’ll try to solve some (several) century-old problems in Mathematics, namely the determination of a closed-form for zeta(3) = sum(1/(n^3), n=1..infinity) and also the Catalan
constant [=sum((-1)^(n-1)/((2n-1)^2), n=1..infinity)]. I think this will demand at least one year. In the meantime I intend to treat some other problems in quantum physics.
Fabio M. S. Lima
Alan Emmerson says:
December 23rd, 2008 at 8:42 pm
Liked your pendulum paper. The so-called barometric effects are not actually caused by pressure variation. Buoyancy change due to density change due to temperature change alters the restoring torque;
absolute humidity change alters density and viscosity, and thence the rate of energy loss, equilibrium amplitude and period.
The name of the Russian was Feodsii Michailovich Fedchenko. The path followed by the centre of mass of his pendulum has not been determined, but it probably was not cycloidal.
All pendulum clocks can sense the lunar-solar effect, i.e. tides. It’s just that the variation is swamped by other sources of instability.
You will see some papers on these subjects on my website.
Incidentally, I reckon the term used to be deduced reckoning. Dead reckoning is a corruption.
Thanks for the correction on the barometric effects, Alan. I’ve updated the essay to incorporate your changes as well as some I received from others via email. I’ve also read that “dead reckoning”
comes from “deduced reckoning” and here I’m corrupting it even more with wordplay by implying dead forms of math reckoning.
Matt Healy says:
February 27th, 2009 at 11:49 pm
Perhaps the most accurate pendulum clock ever built was constructed by Prof. E.T. Hall in the 1990s.
It used an elaborate computerized control system that monitored the motions of the pendulum with an LED and photocell, so that nothing needed to physically contact the pendulum, and an electromagnet
gave it precisely enough energy to keep its swing at a constant amplitude. Temperature was controlled to a fraction of a degree Celsius. To reduce the effect of vibrations, he built an elaborate
support structure with about 12 tonnes of concrete. He managed accuracy about an order of magnitude better than the Shortt free pendulum clock of the 1920s.
Wow, his clock is simply amazing to read about. At the moment I’m reading “My Own Right Time: An Exploration of Clockwork Design” by Philip Woodward, from a recommendation of an horologist who contacted
me. What a great book–it’s his story of how he started out knowing very little about pendulum clocks and his failures and successes in trying to design the most accurate one he could. It’s extremely
readable with a light air, while delivering a great deal of technical information. (It’s also extremely expensive new at Amazon, so mine’s through interlibrary loan!) — Ron
Christian Gomez says:
March 14th, 2009 at 7:20 pm
this site is awesome
Donald J. Ziriax says:
December 6th, 2009 at 1:42 pm
I am looking for a pendulum for my Japanese gravity clock. Can you help?
Hi Donald. I’m sorry, but I don’t have any suggestions for you. If anyone else here who is familiar with this clock has a suggestion, please leave a comment here. Thanks. — Ron
Peter Goodwin says:
January 3rd, 2010 at 8:21 pm
Fascinating, even if a lot of the maths is beyond me. But one thing puzzles me, re the temperature compensation - this sentence: “If the ratio is 2:1, two rods can be used to expand downward and one
rod upward, and so forth for different ratios.” Isn’t it the relative lengths, not the number of rods? ie if the ratio of expansion coefficients is 1:2 then one rod of length x will require another
with length x/2 to compensate, and the doubling up of rods either side was just for reasons of mechanical construction or avoiding a bending moment?
Benji says:
October 20th, 2010 at 9:00 pm
I came across your blog from Make Magazine and I thoroughly enjoy it. The works of John Harrison have captivated me since I read Longitude by Dava Sobel in high school. It was at that point that I
realized all the science, math, and engineering I take for granted. A pair of 12 dollar wristwatches would have been a godsend in the 1700s. But I digress, I enjoy the site and look forward to future posts.
Thanks, Benji. I haven’t posted lately but I’ve been very busy working on these types of things so there are essays in the pipeline. — Ron
Fred Thomas says:
January 3rd, 2011 at 11:54 am
Very interesting article. Pendulums are a very rich topic. I google “pendulum” every once in a while and came across your site. I wrote my Master Degree Thesis on the chaotic response of forced
pendulums. I find them of interest, obviously. I appreciate your article! Nice work. A summary of my work on the topic follows for those interested in pendulums. My university recently sent me a pdf
file of the many-year-ago effort. Posted it on Scribd. Some here might find it of interest. Who would think an oscillating mass would have such fidelity and breadth of application and theory. A link
to the full work is: http://www.scribd.com/doc/26783252/Fred-C-Thomas-III-BU-MSME-Thesis-Chaos-Theory-Demo-Machine-1990
Forced Chaotic Pendulum Paper Description:
A sinusoidally-forced, large-amplitude pendulum was designed and built to demonstrate the chaotic behavior that can arise in a simple nonlinear system. Following a brief review of the terminology
associated with the study of chaos, the design of the forced pendulum, dubbed The Chaos Machine, is presented. Special attention is given to the electronic control system used to produce the
sinusoidally varying torque that drives the pendulum. Standard frequency response techniques and time-domain simulations are used in the design of the control system. Finally, typical responses of
the pendulum are presented that demonstrate phase-locked periodic and chaotic nonperiodic motion. The Chaos Machine promises to be a useful tool for teaching undergraduate students about nonlinear
system dynamics.
Thanks, Fred! I read through your entire thesis and found it to be the clearest explanation of chaos theory that I have encountered! I’ve never had detailed knowledge of chaos theory, so it was
refreshing to learn about it from such a well-written (and apparently letter-perfect!) source. The reason that a simple pendulum has a linear response at small angles is very apparent from your
explanation of the difference between the approximate linear and exact non-linear differential equations governing it (the use of theta rather than the actual sin(theta) in the equation).
I hope you are a science or electrical engineering teacher now, because you have a real flair for explaining things. It took me a long time (years, in fact) to realize how important starting from basic principles is to engaging the reader, even when those principles are already known by the reader. Even today I struggle to write clearly, and my attempts on this blog represent many false starts, stops and edits.
Hey, I just went to your website link–I’ve been there before to your page on charts and nomographs! I see you have pages on many educational topics.
Thanks again, Fred, for putting your thesis online. I enjoyed reading it, and I found a very old friend in the LM741 op-amp! I used them back in the early 80’s to make (among other things) a
closed-loop control system for the electromagnetic field of a large spectrometer by measuring the energy loss in an electrical circuit from the magnetic resonance of a glycerin tablet rotating in the
field. Cheers! — Ron
Fred Thomas says:
January 9th, 2011 at 5:23 pm
Thanks for your kind comments, Ron! I still do enjoy writing a bit, but I am an engineer and spend most of my time working on developing new products. I work in the PC business now, after working on data
storage devices for 15 years (Zip drive etc.); prior to that I had an instrument business, and prior to that worked in the defense electro-optics business. Pretty chaotic-pendulum-like career, but
there is a strange attractor central to it all.
Sam Addington says:
February 28th, 2011 at 10:57 am
I have adjusted the screw on my pendulum as far down as it will go, but the clock still gains 5 minutes every 12 hours. I saw the suggestion of adding weight to the pendulum (say some paper clips?)
Will that work?
Hi Sam. It would be worth a try. Sometimes there is friction in the pendulum or gearwork mechanism and a bob with more mass would have more momentum that might overcome the frictional effects that
are not considered, or insufficiently accounted for, in theoretical results. Good luck, I hope it helps. Let me know either way. — Ron
Per Mark says:
February 12th, 2012 at 3:45 pm
Dear Sir,
The factor to the small-amplitude period for larger amplitudes can be written as a rational a/b where a = 6303 + c, b = 6303 - 11c and c = (w/10)^2, with the amplitude w in degrees! The formula is found by
writing the factor as a continued fraction and then evaluating it.
Really! I didn’t know that. I know that rational expressions are very powerful tools in analysis, much more capable of fitting curves than polynomials, but I hadn’t encountered this factor for large
pendulum amplitudes. This is very interesting to me—thank you very much! — Ron
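Checking Per Mark's rational approximation numerically against the exact AGM-based factor (a quick sketch of mine, not from the thread) shows it is remarkably accurate at moderate amplitudes:

```python
import math

def exact_factor(theta_deg):
    # exact large-amplitude factor T/T0 = 1/AGM(1, cos(theta/2))
    a, b = 1.0, math.cos(math.radians(theta_deg) / 2)
    while abs(a - b) > 1e-15:
        a, b = (a + b) / 2, math.sqrt(a * b)
    return 1.0 / a

def rational_factor(theta_deg):
    # Per Mark's continued-fraction-derived factor, amplitude w in degrees
    c = (theta_deg / 10.0) ** 2
    return (6303 + c) / (6303 - 11 * c)

for theta in (10, 30, 60, 90):
    print(theta, round(exact_factor(theta), 6), round(rational_factor(theta), 6))
```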
How to Calculate an Installment Loan Payment
Edited by Lojjik Braughler, Oliver, Jmuddy95, KommaH and 3 others
An installment payment, such as that paid monthly on a loan, is paid out to the lender with interest charges and, possibly, finance fees included. It is convenient to be able to make monthly
installments when borrowing a large sum of money as long as the monthly payment fits within your budget. It is important to know exactly what your installment payment will be for a specific amount of
money borrowed or "financed" so that there are no surprises when you receive your monthly bill or when the payment is automatically deducted from your bank account. To calculate an installment loan
payment, you will need some information and a home calculator or a calculator available on the Internet specifically for loan payments.
1. Locate your loan paperwork.
2. Find the annual interest rate on your loan.
3. Find the loan amount.
4. Find the number of payments required on the loan.
5. Calculate the monthly interest rate by dividing the annual interest rate by 12, keeping it as a decimal (for example, a 10% annual rate gives 0.10/12 ≈ 0.008333).
6. Add 1 to the monthly interest rate just calculated (for example, if the rate was 0.008333 you would add 1, making it 1.008333).
7. Raise the above sum (such as 1.008333) to a negative exponent equal to the number of loan payments required (for example, if you have to make 36 loan payments, you raise 1.008333 to the -36 power).
8. Subtract your answer from 1 (for example, 1 - [1.008333 to the -36]).
9. Multiply your monthly interest rate by the loan amount.
10. Divide the product from step 9 by the result of step 8 (1 - [1.008333 to the -36]) to get your monthly installment.
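Collapsed into a single formula, steps 5-10 compute payment = P*r / (1 - (1 + r)^(-n)), with P the amount financed, r the monthly rate as a decimal, and n the number of payments. A minimal sketch (the function name is mine):

```python
def installment_payment(principal, annual_rate, n_payments):
    # annual_rate is a decimal, e.g. 0.10 for 10% per year (steps 2 and 5)
    r = annual_rate / 12                       # step 5: monthly rate
    if r == 0:
        return principal / n_payments          # no-interest edge case
    factor = 1 - (1 + r) ** -n_payments        # steps 6-8
    return principal * r / factor              # steps 9-10

# $10,000 borrowed at 10% annual interest over 36 monthly payments:
print(round(installment_payment(10000, 0.10, 36), 2))   # -> 322.67
```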
• If you do not want to go through the calculations of this formula by hand or the "long way," you can easily find a loan calculator on the Internet for all types of loans. Mortgage loans are
calculated differently from car loans, for example, but different calculators are available. If you are interested in doing your own calculations, you can do them long hand or purchase an
inexpensive loan calculator that will help you perform the various functions.
• Raising a number to a negative exponent is the same as dividing 1 by that number raised to the positive power (for example, 1.008333 to the -36 power equals 1/(1.008333 to the 36th power)). To raise the number to the specified power, you multiply the number by itself repeatedly (36 factors of 1.008333 in this example).
• You can also use common software programs, such as a spreadsheet program, to set up the calculations. The program will act as a "loan calculator" for you. Check the Internet and the sources
listed here for information on how to do this.
Things You'll Need
• Loan paperwork
• Paper, pen or pencil
• Pocket calculator or loan calculator (home version or Internet site)
• Spreadsheet software (optional)
Article Info
Categories: Mortgages and Loans
Thanks to all authors for creating a page that has been read 25,090 times.
Solving for side of square given diagonal
August 16th 2011, 10:58 AM #1
Solving for side of square given diagonal
Can someone correct me?
Problem: The diagonal of a square quilt is 4 times the square root of 2. What is the area of the quilt in square feet?
The answer is 4. My book suggests that we use the property of a 45 45 90 triangle to solve this. I understand that method. However, I initially attempted to solve it by using Pythagorean theorem
with the diagonal being the hypotenuse. I came up with 4 x 2^(1/2) = 2 x (a^2) I proceeded to divide the left side by 2, and I did not come up with 4 for the answer. Why is this?
Re: Solving for side of square given diagonal
If you want to use Pythagoras then let $a$ be the side of the square quilt, therefore:
$a^2 + a^2 = \left(4\sqrt{2}\right)^2$
Solve this equation for $a$.
Re: Solving for side of square given diagonal
Can someone correct me?
Problem: The diagonal of a square quilt is 4 times the square root of 2. What is the area of the quilt in square feet?
The answer is 4. <--- That is the side-length and not the area
My book suggests that we use the property of a 45 45 90 triangle to solve this. I understand that method. However, I initially attempted to solve it by using Pythagorean theorem with the diagonal
being the hypotenuse. I came up with 4 x 2^(1/2) = 2 x (a^2) I proceeded to divide the left side by 2, and I did not come up with 4 for the answer. Why is this?
1. Draw a sketch.
2. The quilt is placed inside a square of the side-length d.
3. You can prove (by congruent triangles) that the area of the quilt is
$A = \frac12 \cdot d^2$
4. Plug in the value for d and you'll get the given result.
5. If you want to calculate the side-length then $a = \sqrt{A}=d \cdot \sqrt{\frac12}$
Last edited by earboth; August 16th 2011 at 11:54 PM.
Re: Solving for side of square given diagonal
Siron has calculated the sides of the square when what was asked for was the area, s^2.
Re: Solving for side of square given diagonal
no kidding ...
Siron most probably assumed the OP was capable of determining the area of a square once he/she determined the side length.
Effective tutoring sometimes requires providing just enough information to let the OP solve the problem (or screw it up) on their own.
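For completeness, a quick numerical check of the thread's conclusion (side 4, area 16) under the Pythagorean setup:

```python
import math

d = 4 * math.sqrt(2)            # the given diagonal
side = math.sqrt(d ** 2 / 2)    # Pythagoras: d^2 = a^2 + a^2
area = d ** 2 / 2               # equivalently, area = d^2 / 2

print(round(side, 10), round(area, 10))   # 4.0 16.0
```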
Physics Help Lenses (new problem post 1 help please) [Archive] - Giant in the Playground Forums
2009-10-22, 12:42 PM
A radar tower sends out a signal of wavelength lambda. It is x meters tall, and it stands on the edge of the ocean. A weather balloon is released from a boat that is a distance d out to sea. The
balloon floats up to an altitude h. In this problem, assume that the boat and balloon are so far away from the radar tower that the small angle approximation holds.
Due to interference with reflections off the water, certain wavelengths will be weak when they reach the balloon. What is the maximum wavelength that will interfere destructively?
Express your answer in terms of x, h, and d.
where the orange line is the direct ray and the blue line is the ray which reflects off the water (though part of the signal also passes through toward the balloon).
The black square is the radar tower, and the circle the weather balloon.
my work:
I extended the second ray through the water surface (equivalently, using the image of the source a distance x below the water), so its vertical rise is (h+x) while the direct ray's is (h-x).
From there I proceeded to find the path-length difference and set it equal to lambda/2.
For L1 (the orange, direct ray) I got
L1^2 = h^2 - 2*h*x + x^2 + d^2
and for L2 (the blue, reflected ray)
L2^2 = h^2 + 2*h*x + x^2 + d^2
Take the square root of each, and subtracting L1 from L2 gives:
L2 - L1 = sqrt(h^2 + 2*h*x + x^2 + d^2) - sqrt(h^2 - 2*h*x + x^2 + d^2) = lambda/2
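Carrying this through with the small-angle approximation (d much greater than h and x), a binomial expansion of the square roots gives L2 - L1 ≈ 2hx/d, so the lambda/2 condition above yields lambda = 4hx/d. One caveat not addressed in the post: if the reflection off the water flips the phase by 180° (as it does for radio waves at grazing incidence), destructive interference instead occurs when the path difference is a whole number of wavelengths, giving a maximum of lambda = 2hx/d. A quick numerical sketch with made-up values checks the expansion:

```python
import math

# Made-up example values: tower height x, balloon altitude h, distance d,
# chosen so that d >> h, x (the small-angle regime).
x, h, d = 50.0, 300.0, 20000.0

L2 = math.sqrt(d ** 2 + (h + x) ** 2)   # reflected path (via the image source)
L1 = math.sqrt(d ** 2 + (h - x) ** 2)   # direct path
delta_exact = L2 - L1
delta_approx = 2 * h * x / d            # first-order binomial expansion

print(delta_exact, delta_approx)
```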
Addition Chains Using Continued Fractions
- Theoretical Informatics and Applications , 1990
Cited by 100 (4 self)
We show how to compute x k using multiplications and divisions. We use this method in the context of elliptic curves for which a law exists with the property that division has the same cost as
multiplication. Our best algorithm is 11.11% faster than the ordinary binary algorithm and speeds up accordingly the factorization and primality testing algorithms using elliptic curves. 1.
Introduction. Recent algorithms used in primality testing and integer factorization make use of elliptic curves defined over finite fields or Artinian rings (cf. Section 2). One can define over these
sets an abelian law. As a consequence, one can transpose over the corresponding groups all the classical algorithms that were designed over Z/NZ. In particular, one has the analogue of the p \Gamma 1
factorization algorithm of Pollard [29, 5, 20, 22], the Fermat-like primality testing algorithms [1, 14, 21, 26] and the public key cryptosystems based on RSA [30, 17, 19]. The basic operation
performed on an elli...
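The multiply-and-divide idea in this abstract can be illustrated with a signed-digit recoding; the sketch below uses the standard non-adjacent form (NAF), which exploits cheap division (point negation on elliptic curves) in the same spirit, but is not the continued-fraction method of the paper:

```python
def naf(n):
    # Non-adjacent form of n: digits in {-1, 0, 1}, least significant first.
    digits = []
    while n > 0:
        if n % 2:
            d = 2 - (n % 4)      # choose +1 or -1 so the next bit is 0
            n -= d
        else:
            d = 0
        digits.append(d)
        n //= 2
    return digits

def pow_signed(x, n):
    # Compute x**n with square/multiply/divide driven by the NAF digits.
    result = 1.0
    for d in reversed(naf(n)):
        result *= result
        if d == 1:
            result *= x
        elif d == -1:
            result /= x
    return result

print(pow_signed(2.0, 23))   # 8388608.0, i.e., 2**23
```

Because the NAF has fewer nonzero digits than the plain binary expansion, fewer multiply/divide steps are needed on average, which is the source of the speedups such papers report.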
- SIAM Journal on Computing , 1999
Cited by 10 (0 self)
Abstract. An addition chain for a positive integer n is a set 1 = a0 <a1 < ·· · <ar = n of integers such that for each i ≥ 1, ai = aj + ak for some k ≤ j<i. This paper is concerned with some of the
computational aspects of generating minimal length addition chains for an integer n. Particular attention is paid to various pruning techniques that cut down the search time for such chains. Certain
of these techniques are influenced by the multiplicative structure of n. Later sections of the paper present some results that have been uncovered by searching for minimal length addition chains.
- Proceedings of ACISP 05, LNCS 3574 , 2005
"... Abstract. In this paper we introduce so-called redundant trinomials to represent elements of nite elds of characteristic 2. The concept is in fact similar to almost irreducible trinomials
introduced by Brent and Zimmermann in the context of random numbers generators in [BZ 2003]. See also [BZ]. In f ..."
Cited by 7 (0 self)
Add to MetaCart
Abstract. In this paper we introduce so-called redundant trinomials to represent elements of finite fields of characteristic 2. The concept is in fact similar to almost irreducible trinomials introduced by Brent and Zimmermann in the context of random number generators in [BZ 2003]. See also [BZ]. In fact, Blake et al. [BGL 1994, BGL 1996] and Tromp et al. [TZZ 1997] also explored similar ideas some years ago. However redundant trinomials have been discovered independently and this paper develops applications to cryptography, especially based on elliptic curves. After recalling well known techniques to perform efficient arithmetic in extensions of F2, we describe redundant trinomial bases and discuss how to implement them efficiently. They are well suited to build F2n when no irreducible trinomial of degree n exists. Depending on n ∈ [2, 10000], tests with NTL show that improvements for squaring and exponentiation are respectively up to 45% and 25%. More attention is given to relevant extension degrees for doing elliptic and hyperelliptic curve cryptography. For this range, a scalar multiplication can be sped up by a factor up to 15%. 1.
- , 2003
Cited by 1 (0 self)
Abstract. We study exponentiation in nonprime finite fields with very special exponents such as they occur, for example, in inversion, primitivity tests, and polynomial factorization. Our algorithmic
approach improves the corresponding exponentiation problem from about quadratic to about linear time. 1.
, 1996
We investigate three approaches to VLSI implementation of wavelet filters. The direct form structure, the lattice form structure, and an algebraic structure are used to derive different architectures
for wavelet filters. The algebraic structure exploits conjugacy properties in number fields. All approaches are explained in detail for the Daubechies 4-tab filters. We outline the philosophy of a
design method for integrated circuits. Keywords: Wavelet filter, Daubechies wavelets, integrated circuits, VLSI. 1 INTRODUCTION We investigate different methods to implement orthonormal wavelet
filters as integrated circuits. Many applications of these filters, e. g. in video coding, require high performance and cost effective implementations, which can be achieved by full custom VLSI
implementations. Our main interest is to investigate the relation between the mathematical structure of the filters and their physical implementations. Wavelet filters can be realized in various
ways. Basically,...
[Gmath-devel] Re: [Numpy-discussion] Derivatives
Hassan Aurag aurag at crm.umontreal.ca
Wed Mar 1 13:46:42 CST 2000
I have to agree with you on most counts that Numerical Recipes in C
is not a full-blown encyclopedia on all subtleties of doing numerical computation.
However it does its job greatly for a big class of stuff I am
interested in: minimization, non-linear system of equations solving
(the Newton routine given there is good, accurate and fast).
There are errors and problems as in most other numerical books. In
truth, I don't think there is anything fully correct out there.
When trying to make computations you have to do a lot of testing and
a lot of thought even if the algorithm seems perfect. That applies to
all books, recipes et al.
I have just discovered that tan(pi/2) = 1.63317787284e+16 in
numerical python. And we all know this is bad. It should be infinity,
period. We should define a symbol called infinity and put the correct
definition of tan, sin, etc. for all known angles, then interpolate if
needed for the rest, or whatever is used to actually compute those things.
------- End of Forwarded Message
I believe you're actually talking about the math module, not the numeric
module (I'm not aware of tan or pi definitions in numeric, but I haven't
bothered to double check that). Nevertheless, I think it has relevance here
as Numeric is all about doing serious number crunching. This problem is caused
by the lack of infinite precision in Python. Of course, how is it even
possible to perform infinite-precision arithmetic on irrational numbers?
The obvious solution is to allow the routine (tan() in this case) to recognize
named constants that have relevance in their domain (pi in this case). This
would fix the problem:
math.tan(math.pi) = -1.22460635382e-16
but it still doesn't solve your problem because the named constant would have
the mathematical operation performed on it before it's passed into the
function, ruining whatever intimate knowledge of the given named constant that
routine has.
Perhaps you could get the routine to recognize rational math on named
constants (the problem with that solution is how do you not burden other
routines with the knowledge of how to process that expression). Assuming you
had that, even for your example, should the answer be positive or negative infinity?
Another obvious solution is just to assume that any floating point overflow is
infinity and any underflow is zero. This obviously won't work because some
asymptotic functions (say 1/x^3) will overflow or underflow at values for x
for which the correct answer is not properly infinity or zero.
It is interesting to note that Matlab's behaviour is the same as Python's,
which would indicate to me that there's not some easy solution to this problem
that Guido et al. overlooked. I haven't really researched the problem at all
(although now I'm interested), but I'd be interested if anyone has a proposal
for (or reference to) how this problem can be solved in a general purpose
programming language at all (as there exists the distinct possibility that it
can not be done in Python without breaking backwards compatibility).
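One nuance worth spelling out: pi/2 is irrational, so it is not representable as a double at all; math.tan() is evaluated at the nearest representable number, where the tangent genuinely is about 1.6e16. A short illustration (math.nextafter needs Python 3.9 or later):

```python
import math

x = math.pi / 2                  # nearest double to pi/2, slightly BELOW it
print(math.tan(x))               # ~1.633e16: finite, and correct for x itself

# The true pole lies strictly between two adjacent doubles, so tan()
# can never be asked for the value exactly at pi/2:
print(math.tan(math.nextafter(x, math.inf)))   # large and negative
```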
The grammar of graphics (L. Wilkinson)
October 30, 2009
By Vincent Zoonekynd's Blog
Though this book is supposed to be a description of the graphics infrastructure a statistical system could provide, you can and should also see it as a (huge, colourful) book of statistical plots.
The author suggests describing a statistical plot in several consecutive steps: data, transformation, scale, coordinates, elements, guides, display. The "data" part performs the actual statistical
computations -- it has to be part of the graphics pipeline if you want to be able to interactively control those computations, say, with a slider widget. The transformation, scale and coordinate
steps, which I personnally view as a single step, is where most of the imagination of the plot designer operates: you can naively plot the data in cartesian coordinates, but you can also transform it
in endless ways, some of which will shed light on your data (more examples below). The elements are what is actually plotted (points, lines, but also shapes). The guides are the axes, legends and
other elements that help read the plot -- for instance, you may have more than two axes, or plot a priori meaningful lines (say, the first bissectrix), or complement the title with a picture (a
"thumbnail"). The last step, the display, actually produces the picture, but should also provide interactivity (brushing, drill down, zooming, linking, and changes in the various parameters used in
the previous steps).
In the course of the book, the author introduces many notions linked to actual statistical practice but too often rejected as being IT problems, such as data mining, KDD (Knowledge Discovery in
Databases); OLAP, ROLAP, MOLAP, data cube, drill-down, drill-up; data streams; object-oriented design; design patterns (dynamic plots are a straightforward example of the "observer pattern"); eXtreme
Programming (XP); Geographical Information Systems (GIS); XML; perception (e.g., you will learn that people do not judge quantities and relationships in the same way after a glance and after lengthy
considerations), etc. -- but they are only superficially touched upon, just enough to whet your appetite.
If you only remember a couple of the topics developed in the book, these should be: the use of non-cartesian coordinates and, more generally, data transformations; scagnostics; data patterns, i.e., the meaningful reordering of variables and/or observations.
Probability Concepts -- Combinations
Let’s start once again with a deck of 52 cards. But this time, let’s deal out a poker hand (5 cards). How many possible poker hands are there?
At first glance, this seems like a minor variation on the Solitaire question above. The only real difference is that there are five cards instead of six. But in fact, there is a more important
difference: order does not matter. We do not want to count “Ace-King-Queen-Jack-Ten of spades” and “Ten-Jack-Queen-King-Ace of spades” separately; they are the same poker hand.
To approach such a question, we begin with the permutations question: how many possible poker hands are there, if order does matter? The answer is 52 × 51 × 50 × 49 × 48, or 52!/47!. But we know that we are counting every possible hand many different times in this calculation. How many times?
The key insight is that this second question—“How many different times are we counting, for instance, Ace-King-Queen-Jack-Ten of spades?”—is itself a permutations question! It is the same as the
question “How many different ways can these five cards be rearranged in a hand?” There are five possibilities for the first card; for each of these, four for the second; and so on. The answer is 5!,
which is 120. So, since we have counted every possible hand 120 times, we divide our earlier result by 120 to find that there are 52!/(47! × 5!), or about 2.6 million possibilities.
This question—“how many different 5-card hands can be made from 52 cards?”—turns out to have a surprisingly large number of applications. Consider the following questions:
• A school offers 50 classes. Each student must choose 6 of them to fill out a schedule. How many possible schedules can be made?
• A basketball team has 12 players, but only 5 will start. How many possible starting teams can they field?
• Your computer contains 300 videos, but you can only fit 10 of them on your iPod. How many possible ways can you load your iPod?
Each of these is a combinations question, and can be answered exactly like our card scenario. Because this type of question comes up in so many different contexts, it is given a special name and
symbol. The last question would be referred to as “300 choose 10” and written $\binom{300}{10}$. It is calculated, of course, as 300!/(290! × 10!) for reasons explained above.
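These counts are easy to check with Python's standard library (`math.comb` and `math.perm`, available since Python 3.8):

```python
from math import comb, factorial, perm

# Ordered 5-card deals: 52 × 51 × 50 × 49 × 48 = 52!/47!
assert perm(52, 5) == factorial(52) // factorial(47) == 311_875_200

# Each unordered hand was counted 5! = 120 times, so divide:
assert perm(52, 5) // factorial(5) == comb(52, 5) == 2_598_960

# "300 choose 10": ways to load 10 of your 300 videos
assert comb(300, 10) == factorial(300) // (factorial(290) * factorial(10))
```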
Re: st: -sampsi- command and exact tests
From Joseph Coveney <jcoveney@bigplanet.com>
To Statalist <statalist@hsphsun2.harvard.edu>
Subject Re: st: -sampsi- command and exact tests
Date Wed, 02 Feb 2005 17:29:29 +0900
David Miller wrote:
I understand from several previous posts that the command -sampsi- uses
an approximate large-sample test in proportion, power, and sample size
calculations. Specifically, it uses the normal approximation (with
correction) as opposed to an exact test. The advice in a previous post
was that the following inequalities must hold in order for sampsi to work
(see the post from ymarchenko@stata.com entitled "st: RE: calculation of
sample size", dated 8 Oct 2004).
I am trying to use sampsi to estimate the required number of samples as
sampsi 0.4 0.46, alpha(0.05) power(0.90) onesample
Stata indicates that 711 samples are required as indicated in the Stata
output below:
sampsi 0.4 0.46, alpha(0.05) power(0.90) onesample
Estimated sample size for one-sample comparison of proportion
to hypothesized value
Test Ho: p = 0.4000, where p is the proportion in the population
alpha = 0.0500 (two-sided)
power = 0.9000
alternative p = 0.4600
Estimated required sample size:
n = 711
This seems to meet the n1p1>=10 etc. requirements listed above to use
the -sampsi- command. However, I am told that the right answer using
NQuery Advisor and its exact test for single proportions is 610
observations. S-Plus gives 613 as an answer. StatXact also gives a
similar answer to NQuery Advisor and S-Plus.
Does anyone know if Stata's use of the normal approximation (with
continuity correction) is indeed what is causing the 100+ discrepancy
here? Is there is an exact test in Stata that can be used instead of
-sampsi-? And are there additional criteria beyond the n1p1>=10 etc.
criteria listed in the referenced previous post that should be checked
before using -sampsi- ?
I have used the -findit- command to see if there is an exact test
available and looked at both Roger Newson's -powercal- command described
in the most recent Stata Journal (4th Quarter 2004) as well as Al
Feiveson's article entitled "Power by Simulation" (Stata Journal, 2nd
Quarter 2002) and wasn't able to find the answer to this question. Is
the -sampncti- command appropriate here?
In simulations, using both Wilson's score method in -ciw- and the exact
binomial test in -bitest- (if I understand its output correctly), it seems
that Stata is on the mark and NQuery Advisor, S-Plus or StatXact are not.
(See do-file below.) A sample size of 600 gives about 80 to 85% power for a
two-sided test; 700 gives around 90%. Output from the do-file below:
Sample size: 600, Power: Wilson = 0.85 Exact = 0.83
Sample size: 650, Power: Wilson = 0.88 Exact = 0.88
Sample size: 700, Power: Wilson = 0.90 Exact = 0.90
Sample size: 750, Power: Wilson = 0.91 Exact = 0.91
1. It doesn't appear that Stata's use of the normal approximation accounts
for the discrepancy.
2. Yes; there is an exact test available in Stata.
3. I don't know of any additional criteria to check first, other than
whether the hypothesis pair is directional.
4. -sampncti- wouldn't be appropriate here. (But I've read where power
calculations for binomial data have been done with the Student's t test
statistic in simulations, and it appears to be pretty good for that purpose.)
Joseph Coveney
set more off
set seed `=date("2005-02-02", "ymd")'
set seed0 `=date("2005-02-02", "ymd")'
local reps = 10000
set obs `reps'
generate byte Wilson = .
generate byte Exact = .
generate float pi1 = 0.46
forvalues den = 600(50)750 {
    generate int den = `den'
    quietly rndbinx pi1 den
    quietly compress
    forvalues i = 1/`reps' {
        local successes = bnlx[`i']
        quietly ciwi `den' `successes'
        quietly replace Wilson = (r(lb) > 0.4) in `i'
        quietly bitesti `den' `successes' 0.4
        quietly replace Exact = (r(p) < 0.05) in `i'
    }
    summarize Wilson, meanonly
    local Wilson = r(mean)
    summarize Exact, meanonly
    local Exact = r(mean)
    display in smcl as result "Sample size: `den', Power: Wilson = " ///
        %4.2f `Wilson', "Exact = " %4.2f `Exact'
    drop den bnlx
}
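As a cross-check outside Stata, here is a small Python sketch (standard library only; the function names and the minimum-likelihood two-sided p-value convention are my choices, not anything from the thread) that computes the power of the exact binomial test directly, by summing the alternative-hypothesis probability over the rejection region instead of simulating:

```python
from math import comb

def binom_pmf(k, n, p):
    return comb(n, k) * p**k * (1 - p) ** (n - k)

def exact_power(n, p0, p1, alpha=0.05):
    """Power of the exact two-sided binomial test of H0: p = p0 when truly p = p1.

    The two-sided p-value for an observed count k is taken as the total
    H0-probability of all outcomes no more likely than k ("minlike" rule).
    """
    pmf0 = [binom_pmf(k, n, p0) for k in range(n + 1)]
    power = 0.0
    for k in range(n + 1):
        pval = sum(q for q in pmf0 if q <= pmf0[k] * (1 + 1e-9))
        if pval < alpha:
            power += binom_pmf(k, n, p1)  # k is in the rejection region
    return power
```

For n around 700-711 with p0 = 0.4 and p1 = 0.46 this lands near the 0.90 power reported above, supporting the conclusion that the exact test does not account for the discrepancy.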
* For searches and help try:
* http://www.stata.com/support/faqs/res/findit.html
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
Problem 71b
Problem 71b. Sorting a subarray.
Given an array a[1] .. a[n] of names and ages, so that each element a[k] is a pair,
a[k] = (string "Name", int age)
Given indexes L and R such that
1 <= L < R <= n.
Sort the subarray a[L] .. a[R] in increasing order of names (or nondecreasing order if some names are equal), but do not change the position of names outside of this sub-array. The sort should be
stable, namely, if two names are equal, then you may not interchange their positions.
Input. The value of n, L, and R on the first line, followed by one pair of values per line: name comma age. For example.
Chua Sandra Jean, 20
Vidal Eric, 21
Sarmenta Luis, 35
Sarmenta Luis, 37
Tan-Rodrigo Mercedes, 36
Output. The entire array, printed out one name-age pair per line, with the subarray sorted as required and the rest of the array left untouched.
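A minimal Python sketch of the required routine. The sample input's first line (n, L, R) is not shown, so L = 2, R = 4 below is a hypothetical choice; Python's built-in sort is stable, which gives exactly the required behavior for equal names:

```python
def sort_subarray(a, L, R):
    """Stably sort the 1-based inclusive slice a[L..R] by name, leaving the rest untouched."""
    a[L - 1:R] = sorted(a[L - 1:R], key=lambda rec: rec[0])
    return a

people = [("Chua Sandra Jean", 20), ("Vidal Eric", 21),
          ("Sarmenta Luis", 35), ("Sarmenta Luis", 37),
          ("Tan-Rodrigo Mercedes", 36)]
sort_subarray(people, 2, 4)
# people is now: Chua, Sarmenta (35), Sarmenta (37), Vidal, Tan-Rodrigo
```

Note that the two "Sarmenta Luis" entries keep their original relative order (35 before 37), as the stability requirement demands.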
The Geometric Random Variable
Program downloads: GEOMET.83p | GEOMET.86p | geomet.89p
Suppose you have probability p of succeeding on any one try. If you make independent attempts over and over, then the geometric random variable, denoted by X ~ geo(p), counts the number of attempts
needed to obtain the first success.
We often let q = 1 - p be the probability of failure on any one attempt. Then the probability of having the first success on the kth attempt, for k >= 1, is given by
P(X = k) = q^(k - 1) * p
(from k-1 failures followed by a success).
The average (or mean) number of attempts needed to succeed is given by E[X] = 1 / p.
The variance of the number of attempts needed is given by Var(X) = (1 - p) / p^2.
The most likely number of attempts needed, or mode, is given by the integer k such that P(X = k) is maximized. In this case, it will always be k = 1. Thus, if you are going to succeed, then you are
most likely to succeed on the first try.
The cumulative distribution function (or cdf) is given by P(X <= k) = 1 - q^k. This function can be interpreted as the probability of succeeding within k attempts.
We can use the cdf for computing probabilities such as P(j <= X <= k), which is the probability that it will take from j attempts to k attempts to succeed. This value is given by
P(j <= X <= k)
= P(X <= k) - P(X <= j-1)
= (1 - q^k) - (1 - q^(j-1))
= q^(j-1) - q^k .
Lastly, by solving the equation P(X <= k) >= r (and rounding up to the nearest integer), we can find the number of attempts needed to have at least probability r of success within this number of
attempts. Solving 1 - q^k >= r, we obtain k >= ln(1 - r) / ln(q).
Using the GEOMET Program
The GEOMET program can be used to compute probabilites such as P(j <= X <= k), P(X = k), and P(X <= k). To execute the program, we enter the value of p and the lower and upper bounds of j and k, for
1 <= j <= k. (Enter the same value k for both the lower and upper bound to compute a pdf value P(X = k).) The program also asks if you want a complete distribution to be entered into the STAT Edit
screen. If so, then enter 1. If not, then enter 0. The program then displays P(j <= X <= k) along with the average number of attempts needed to succeed and the standard deviation.
If you enter 1, then most of the distribution will be entered into the STAT Edit screen. Under L1, the possible number of attempts 1, . . . , n are listed. Under L2, the pdf values of P(X = k), for 1
<= k <= n, are listed. Under L3, the cdf values P(X <= k), for 1 <= k <= n, are listed. The upper bound n is chosen so that P(X <= n) exceeds 0.975.
Click here for info on the TI-86 and TI-89 Stat Edit displays.
Example. Suppose one die is rolled over and over until a Two is rolled. What is the probability that it takes
(a) from 3 to 6 rolls?
(b) exactly 4 rolls?
(c) at most 5 rolls?
(d) How many rolls are needed so that there is at least 0.95 probability of rolling a Two within this number of rolls?
Solution.After calling up the GEOMET program, enter 1 / 6 for PROBABILITY, enter 3 for LOWER BOUND, and enter 6 for UPPER BOUND. Also enter 1 for a complete distribution.
(a) We find that P(3 <= X <= 6) = 0.359546, and that the average number of rolls needed is 6 with a standard deviation of 5.47723.
(b) and (c) From looking at the complete distribution in the list editor, we see that P(X = 4) = 0.09645 and P(X <= 5) = 0.59812.
(If you did not enter 1 for a complete distribution, then you can execute the program again with 4 for both LOWER BOUND and UPPER BOUND to find P(X = 4), and you can re-execute the program with bounds
of 1 and 5 to find P(X <= 5).)
(d) The number of rolls needed must satisfy k >= ln(.05) / ln(5/6) = 16.431; thus 17 rolls are needed to have a 95% chance of rolling a Two.
1. If two dice are rolled until a Double Six is rolled, what is the probability that it will take (a) at most 24 rolls (b) at least 10 rolls?
2. When dealing 5 cards from a shuffled deck, the probability of dealing a hand with exactly one pair is about 0.42257. If you want the probability of dealing a pair within n independent tries to be
at least 0.99, then how many deals n should be made?
1. Here, X ~ geo(1 / 36). Since there must always be at least one attempt, we first wish to find P(1 <= X <= 24). (a) In the GEOMET program, enter 1 / 36 for PROBABILITY, enter 1 for LOWER BOUND, and
enter 24 for UPPER BOUND. We see that P(1 <= X <= 24) = 0.4914. Alternately, P(X <= 24) = 1 - (35/36)^24.
(b) P(X >= 10) = 1 - P(1<= X <= 9) = 1 - (1 - (35/36)^9) = (35/36)^9 = 0.77605 .
2. Here X ~ geo(0.42257), and we wish to find k so that P(X <= k) >= 0.99. Thus, we must solve for k in the inequality 1 - q^k >= 0.99. We obtain, k >= ln(0.01) / ln(q) = ln(.01) / ln(.57743) =
8.3857. Thus, k = 9 deals are needed.
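The closed-form formulas above are easy to check without the calculator; here is a short Python sketch (the function names are mine, not the TI program's):

```python
import math

def geom_pmf(k, p):
    """P(X = k): first success on attempt k, i.e. q^(k-1) * p."""
    return (1 - p) ** (k - 1) * p

def geom_cdf(k, p):
    """P(X <= k) = 1 - q^k: probability of succeeding within k attempts."""
    return 1 - (1 - p) ** k

def attempts_needed(p, r):
    """Smallest k with P(X <= k) >= r, i.e. ceil(ln(1 - r) / ln(1 - p))."""
    return math.ceil(math.log(1 - r) / math.log(1 - p))

p = 1 / 6                                           # rolling a Two
print(round(geom_cdf(6, p) - geom_cdf(2, p), 6))    # P(3 <= X <= 6) = 0.359546
print(round(geom_pmf(4, p), 5))                     # P(X = 4) = 0.09645
print(round(geom_cdf(5, p), 5))                     # P(X <= 5) = 0.59812
print(attempts_needed(p, 0.95))                     # 17 rolls
```

These values match the GEOMET output in the worked example and the answer to part (d).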
What is Teaching with Spreadsheets?
Do you ever refer to numbers when teaching in your discipline? Do you ever use mathematical models to describe phenomena? If so, then a spreadsheet program might be a useful tool to enhance learning.
Any discipline that uses numbers can make use of spreadsheets.
Spreadsheets allow students to "get their hands dirty" by working with real-world data. Spreadsheets make abstract or complex models accessible by providing concrete examples and allowing "what if"
analyses. Charts on a printed page are "dead" while spreadsheets representations are "live" in that students can interact with the concepts underlying them.
Spreadsheets promote learning in a variety of ways from helping to prepare lectures to creating laboratory sessions, and they can be integrated with a number of other teaching techniques. See more
about how spreadsheets can be used with different pedagogies.
As with any tool, a teacher must consider how much time should be devoted to learning spreadsheets. The case for teaching spreadsheet skills may be more compelling than for other specialized
software, because students are likely to use spreadsheet programs in other classes, careers, or in personal life.
However, there is still a trade-off in taking class time to instruct spreadsheet programming. In addition, when considering any quantitative assignment, teachers must also consider the level of
mathematics required. Spreadsheets may be used flexibly by either requiring spreadsheet construction or sophisticated mathematics or masking the technical skills used to solve a problem. In this way,
spreadsheets may be used to either facilitate student understanding of spreadsheet and quantitative skills or allow students to explore the concepts underlying models without understanding the
mathematical or spreadsheet construction details. Any of these approaches allow students to build an intuitive understanding of quantitative approaches. See more about varying technical content.
Learn more about Excel basics or more advanced spreadsheet tools.
The Teaching with Data module contains more information on how to teach with data. Spreadsheet programs contain a number of powerful tools, some well-known, some less so. The online journal Spreadsheets in Education is a good resource for scholarly articles on teaching and learning using spreadsheets.
Moment generating function
April 26th 2009, 12:11 PM
Moment generating function
Consider a hierarchical model that defines the joint distribution of (X,Z). Let
X|Z=z ~ N(0,z^2) where Z ~ N(0,1). Determine the moment generating function of X.
My solution so far is attached. Am I going in the right direction? If so, how do I solve the integral for the marginal PDF of X? Or am I doing completely useless stuff?
April 26th 2009, 09:14 PM
1. Your attachment has errors in it.
2. I also started working on this IGNORING the MGF comment.
3. THEN I realized that z is your standard deviation, hence it cannot be a N(0,1) rv; try a chi-square maybe. Or U(0,1)?
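For what it's worth, the conditioning route does work if one conditions inside the MGF rather than computing the marginal pdf first. A sketch (note that only $z^2$ enters the conditional variance, so negative draws of $Z$ cause no trouble):

$$M_X(t) = E\left[e^{tX}\right] = E\!\left[E\left(e^{tX}\mid Z\right)\right] = E\!\left[e^{t^2 Z^2/2}\right],$$

since given $Z = z$, $X \sim N(0, z^2)$ has MGF $e^{t^2 z^2/2}$. With $Z \sim N(0,1)$, $Z^2 \sim \chi^2_1$, whose MGF is $(1 - 2s)^{-1/2}$ for $s < 1/2$; substituting $s = t^2/2$ gives

$$M_X(t) = \left(1 - t^2\right)^{-1/2}, \qquad |t| < 1.$$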
Text A Tutor
Text A Tutor - Live Math Help
Welcome to "Text A Tutor: Live Math Help" for SMARTPHONES, which gives you access to a math tutor via smartphone whenever you need it. Send texts or pictures of your problems 365 days per year! New users get a
free credit!
After your free trial you can get additional help with math homework for as little as $1 per question/credit! We respond in less than 10 minutes (between noon-midnight EST) or we'll give your money/
credits back! We also offer help outside those times by appointment.
Simply submit your math problems via the app by typing in a question or using your smartphone's camera. Within minutes our tutors will respond with explanations, worked out solutions, or even video
right to your phone. If you don't understand simply reply back to the tutor and they can give additional help!
This service will change the way you get help with your math! Text A Tutor is not automated! All responses come from real qualified people with math degrees! We started this app so we can help you
with your math!
We currently offer help in most math topics taught in high school and entry level college classes including: pre-algebra, algebra, geometry, algebra 2, pre-calculus, trigonometry, statistics, and
calculus 1 and calculus 2.
Users can get a free text if they like us on Facebook. Go to "Text A Tutor" on Facebook for more info!***
Text A Tutor Policies:
***You MUST have texting through your phone provider to use this service!
***We answer math questions from Pre-Algebra to Calculus 2! Statistics too!
***You must first purchase texts/credits to use this service and then enter in the code! Click the purchase texts button to go to our website! It's super fast and easy to do using Google Checkout!
Just like buying an app! You can get packs as low as $1 per text.
***We answer questions within 10 minutes during open hours. Worked out solutions and videos may take longer!
***Please know our service is not automated. We have real live tutors to help!
***All tutors have degrees in mathematics and are currently employed as highly trained teachers.
***Winter Hours: We will answer everyday from 11 am to midnight (EST)! You may send texts at other times but response time may be longer depending on our tutors locations during these hours!
***At times our tutors may be busy! Please be patient! Our goal is to answer within 10 minutes during our open hours ... however times may vary depending on the type of response desired (ie: work,
video, etc)!
***Worked solutions and video responses are not always available. It depends on our tutor's locations and available technology.
***We do have the ability to block customers and will do so if a customer uses offensive language, tries to hack our app, or break our policies.
***We only tutor in the USA!
***We do not condone cheating. Please use our service responsibly.
Text A Tutor respects your confidentiality of your personal information. We will only send you messages about your questions and our service. We will never send you spam or advertisements nor will we
sell your personal information to a 3rd party.
If you have questions about our service please contact us at textmathtutor@gmail.com.
Key words: math help, math tutor, statistics tutor, statistics help, calculus tutor, calculus help, hwpic (R), math tutoring, tutor.com (R), motuto (R), myHomework (R), hw pic (R), live math help,
real math help, help with math, math tutors, math tutoring, texting a tutor, text tutor, tutor texting, math help, math help, get math help now, math help. Our service is similar to "cha cha" but
dealing specifically with math topics.
We have no association/relation with any of these products: tutor.com, motuto, TutaPoint.com, hwpic, hw pic, or myHomework!
TextATutor (C) 2011. All rights reserved.
Awesome! Extremely helpful? Fast and efficient. Clear and concise! Thank you for saving what could have been a night of freshman frustration.
Great App! This is a great app, I so recommend it, the only thing bad is that it start you off with one credit.
Cool This app is very helpful i recommed this to anyone who's struggling or don't understand.
Really good it's really helpful apps it really works good really helpful app it works really well
Excellent Very good app, excellent service and support. They make it very easy to understand some problems that I have difficulty with.
the best ever! Really fast! Also the best is if one can't help they transfer to another one. Really fast! Good!I will be buying more credits when I need more help Thanks!
What's New
TextATutor now has support for Calculus 2!
Please know .... you get a free credit/trial when you download the app ... but after that we do require you to purchase credits for our service!
Like us on Facebook for an additional free credit!
We'll respond in less than 10 minutes (between noon-midnight EST) or we'll give your money/credits back! After your free credit you will need to purchase credits for additional math help.
Math Helper Free solves math problems and shows step-by-step solution.
Math Helper is an universal assistant app for solving mathematical problems for Algebra I, Algebra II, Calculus and Math for secondary and college students, which allows you not only to see the
answer or result of a problem, but also a detailed solution (in full version).
[✔] Linear Algebra - Operations with matrices
[✔] Linear algebra - Solving systems of linear equations
[✔] Vector algebra - Vectors
[✔] Vector algebra - Shapes
[✔] Calculus - Derivatives
[✔] Calculus - Indefinite Integrals (integrals solver) - Only in Full version
[✔] Calculus - Limits - Only in Full version
[✔] The theory of probability
[✔] The number and sequence
[✔] Function plotter
Derivatives, limits, geometric shapes, the task of statistics, matrices, systems of equations and vectors – this and more in Math Helper!
✧ 10 topics and 43+ sub-section.
✧ Localization for Russian, English, Italian, French, German and Portuguese
✧ Intel ® Learning Series Alliance quality mark
✧ EAS ® approved
✧ More than 10'000 customers all over the world supported development of Math Helper by doing purchase
✧ The application is equipped with a convenient multi-function calculator and extensive theoretical guide
✪ Thank you all helping us to reach 800'000+ downloads of Math Helper Lite
✪ You could also support us with good feedback at Google Play or by links below
✪ Our Facebook page: https://www.facebook.com/DDdev.MathHelper
✪ Or you could reach us directly by email
We have plans to implement
● Numbers and polynomial division and multiplication
● Implement new design and add 50+ new problems
● New applications, like Formulae reference for college and university and symbolic calculator
MathHelper is a universal assistant for anyone who has to deal with higher mathematics, calculus, algebra. You can be a student of a college or graduate, but if you suddenly need emergency assistance
in mathematics – a tool is right under your fingertips! You could also use it to prepare for SAT, ACT or any other tests.
Don't know how to do algebra? Stuck during calculus practice, need to solve algebra problems, just need integral solver or limits calculus calculator? Math Helper - not only math calculator, but
step-by-step algebra tutor, will help to solve derivative and integral, maths algebra, contains algebra textbooks - derivative and antiderivative rules (differentiation and integration), basic
algebra, algebra 1 and 2, etc.
Good calculus app and algebra for college students! Better than any algebra calculator with x, algebra solving calculator or algebra graphing calculator - this is calculus solver, algebra 1 and 2
solver, with step-by-step help, calculator, integrated algebra textbooks and formulas for calculus. This is not just a math answers app, but math problem solver! This mathematics solver can solve any
math problem - from basic math problems to integral and derivative, matrix, vectors, geometry and much more. For everyone into math learning. This is not just math ref app - this is ultimate math
problem solver.
Discover a new shape of mathematics with MathHelper! Real math treasure!
- large display, tablet and landscape support
- multiple formula calculations
- plot with multiple functions
- modulo calculation with big integer
- differential and integral calculus
- curve sketching, limes, minima, maxima
- linear algebra, vectors, matrices
- unit conversion
- number systems, binary/oct/decimal/hex calculator
- complex number calculation
Supported languages:
English, German, Francais, Espanol, Italian, Portuguese
Mathematics is a powerful calculation software for your android smartphone.
Calculate any formula you want and show them in a 2d or 3d plot. The natural display shows fractions, roots and exponents as you would expect it from mathematics.
In a few seconds you can differentiate or integrate your desired function, calculate the zero points of your function and show them in the function plot. See all maxima, minima or inflection points in one view.
Its ease of use allows you to solve linear equations in just a moment, or transform your mathematical, physical or chemical equation to any unknown variable.
Do you often need to calculate with binary, octal or hexadecimal number systems? No problem! You can mix them together in one calculation, even using decimal digits. But that's not enough! You can
also calculate with any other number system with base 2 to 18.
From time to time you may need to convert units to another one, like Celsius to Fahrenheit, miles to kilometre, inches to foot and so on.
You will also be able to calculate with vectors, matrices and determinants.
All this features are combined in this app and will make your mathematical life a lot easier.
We have a new ICON
Solve Math problems and plot functions.
A full-featured scientific calculator that can help you with everything from basic calculations to college calculus.
With "Maths Solver" You can solve complex Math problems or plot multiple functions with accuracy and speed.
No network access required! which means you don't need internet connected to use its features.
You can plot functions and zoom in / zoom out (press the back button to make the graph full-screen by hiding the keyboard).
It covers following areas of Mathematics.
Basic Algebra
Solve equation and system of linear equations
Indefinite and Definite Integration
Set operations i.e Union, Intersection, Mean, Median, Max, Min
Matrix : Determinant, Eigenvalues, EnigenVectors, Transpose, Power
Curl and Divergence
2D function plots
For full list of features see Catalog and Examples in app.
Keywords: Math Calculator, Maths, Equations, Scientific Calculator, Graph Plot, Functions Plot, Maths Calculator
Want to learn algebra? Algebra is used every day in any profession and chances are, you’ll run into algebra problems throughout your life! Any of the following sound familiar?
- Trouble paying attention in class
- Improper instruction
- General disinterest in math
Don’t worry! Come finals day, you’ll be prepared with our comprehensive algebra app. Our app features everything you’ll need to learn algebra, from an all-inclusive algebra textbook to a number of
procedural problem generators to cement your knowledge.
- Abridged algebra textbook, covering many aspects of the subject
- Procedural problem generators, ensuring you will not encounter the same problem twice
- Problem explanations, demonstrating step-by-step how to do each problem
- Quicknotes, reviewing entire chapters in minutes
- Intuitive graphing interface, teaching proper graphing methods
- Statistics tracking, helping you identify your weaknesses
Plus, new chapters are added all the time!
Subjects covered:
Ch. 1: Basics
1.1 Basics of Algebra
1.2 Solving Equations
1.3 Solving Inequalities
1.4 Ratios and Proportions
1.5 Exponents
1.6 Negative Exponents
1.7 Scientific Notation
Ch. 2: Graphing
2.1 Rectangular Coordinate System
2.2 Graphing by Points
2.3 Graphing by Slope_Intercept Form
2.4 Graphing by Point_Slope Form
2.5 Parallel and Perpendicular Lines
2.6 Introduction to Functions
Ch. 3: Systems
3.1 Systems of Equations by Substitution
3.2 Systems of Equations by Elimination
3.3 Systems of Equations by Graphing
Ch. 4: Polynomials
4.1 Introduction to Polynomials
4.2 Adding and Subtracting Polynomials
4.3 Multiplying Polynomials
4.4 Dividing Polynomials
Ch. 5: Rationals
5.1 Simplifying Rational Expressions
5.2 Multiplying and Dividing Rational Expressions
5.3 Adding and Subtracting Rational Expressions
5.4 Complex Rational Expressions
5.5 Solving Rational Expressions
Ch. 6: Factoring
6.1 Introduction to Factoring
6.2 Factoring Trinomials
6.3 Factoring Binomials
6.4 Solving Equations by Factoring
Ch. 7: Radicals
7.1 Introduction To Radicals
7.2 Simplifying Radical Expressions
7.3 Adding and Subtracting Radical Expressions
7.4 Multiplying and Dividing Radical Expressions
7.5 Rational Exponents
7.6 Solving Radical Expressions
Ch. 8: Quadratics
8.1 Extracting Square Roots
8.2 Completing the Square
8.3 Quadratic Formula
8.4 Graphing Parabolas
Keywords: learn algebra, algebra, math, free, graphing, algebra textbook, teach algebra, algebra tutor, algebra practice, algebra problems, review algebra, study algebra, algebra prep, algebra cheat
sheet, algebra formulas, algebra notes, algebra quicknotes, pre-algebra, algebra 2, common core, high school math, high school algebra, ged, sat, gmat
Math Helper solves math problems and shows step-by-step solution.
Math Helper is an universal assistant app for solving mathematical problems for Algebra I, Algebra II, Calculus and Math for secondary and college students, which allows you not only to see the
answer or result of a problem, but also a detailed solution.
[✔] Linear Algebra - Operations with matrices
[✔] Linear algebra - Solving systems of linear equations
[✔] Vector algebra - Vectors
[✔] Vector algebra - Shapes
[✔] Calculus - Derivatives
[✔] Calculus - Indefinite Integrals (integrals solver)
[✔] Calculus - Limits
[✔] The theory of probability
[✔] The number and sequence
[✔] Function plotter
Derivatives, limits, geometric shapes, the task of statistics, matrices, systems of equations and vectors – this and more in Math Helper!
✧ 10 topics and 43+ sub-section.
✧ Localization for Russian, English, Italian, French, German and Portuguese
✧ Intel ® Learning Series Alliance quality mark
✧ EAS ® approved
✧ More than 10'000 customers all over the world supported development of Math Helper by doing purchase
✧ The application is equipped with a convenient multi-function calculator and extensive theoretical guide
✪ Thank you all for helping us reach 800,000+ downloads of Math Helper Lite
✪ You could also support us with good feedback on Google Play or via the links below
✪ Our Facebook page: https://www.facebook.com/DDdev.MathHelper
✪ Or you could reach us directly by email
We have plans to implement
● Numbers and polynomial division and multiplication
● A new design and 50+ new problems
● New applications, like a formula reference for college and university and a symbolic calculator
MathHelper is a universal assistant for anyone who has to deal with higher mathematics, calculus, or algebra. Whether you are a college student or a graduate, if you suddenly need emergency assistance with mathematics, the tool is right under your fingertips! You can also use it to prepare for the SAT, ACT, or any other tests.
Don't know how to do algebra? Stuck during calculus practice, need to solve algebra problems, or just need an integral solver or a limits calculator? Math Helper is not only a math calculator but a step-by-step algebra tutor: it helps you solve derivatives and integrals, covers basic algebra and Algebra 1 and 2, and contains algebra textbook material such as the rules of differentiation and integration.
A good calculus and algebra app for college students! Better than a plain algebra calculator or algebra graphing calculator - this is a calculus solver and an Algebra 1 and 2 solver, with step-by-step help, a calculator, and integrated algebra textbooks and formulas for calculus.
This is not just a math answers app, but math problem solver! This mathematics solver can solve any math problem - from basic math problems to integral and derivative, matrix, vectors, geometry and
much more. For everyone into math learning. This is not just math ref app - this is ultimate math problem solver.
Discover a new shape of mathematics with MathHelper!
Full Math Tutor now on sale for a dollar! Download it! Three games available with this demo!
Check out the new Ultimate Math Tutor for kids and adults. The math tutor app contains Addition, Subtraction, Comparisons, Multiplication, Division, Fractions, Money, Percentages, and Powers. This
tutor app also has grades for each subject. Check up on your kid's progress - get all their scores sent directly to your email!
The Ultimate Math Tutor isn't only for kids. It makes a great brain game for adults. Test your mind every day and see how high your score can go; you might even make the leader board! There are 9 different brain game modes. Have fun with it!
The Ultimate Math Tutor isn't just one or two different types of math. Most math apps and tutor apps contain a very limited number of problems for the student to work on. The Ultimate Math Tutor contains nine different types of math problems: Division, Fractions, Comparisons, Subtraction, Money, Multiplication, Percentages, Addition, and Powers. A student can pick and choose which they want to use; they are not required to do any specific one. The student is graded on each individual subject. For example, if the student only uses addition, only the addition grade changes.
The money section of this app is definitely worth checking out. Real money is displayed for your kid to count and add up. Money is something that everyone uses constantly, no matter what line of work you are in, and it would be great for your kid to master American money at an early age. The money page shows all the common U.S. coins and bills and is adjustable for how many items are counted at a time. You can add more money or take some off to count.
Every section has a leader board scoring system. If you submit your score, you may set a high score! The leader boards are simple and show the top 20 scores of students. The leader board is a great way to give your kid a goal in this math tutor app. If your kid hates doing fractions, maybe you can convince them to try for a high score, and everyone wins.
So check out this app. Lots of categories. Addition, Subtraction, Comparisons, Multiplication, Division, Fractions, Money, Percentages, and Powers. Grades. Emailing grades to help track your child's
progress. You will know if your kid is actually using the app if you are getting an email with the grades and scores changing. Leader boards to help students have a goal and earn a high score. Simple
and not distracting game play style that allows your kid to focus and do the math. No loud obnoxious noises to bother you. A math tutor app at its finest and simplest to help your student get as much
math experience as possible.
Makes a great brain game for adults. Brush up on some math. It's great for you to just do actual math. Use the money section to practice counting money. As a brain game you have a lot of choices; try doing a little of each section every day! It's often said that 15 minutes a day of brain games helps keep you sharp.
Need more than free videos to learn math? YourTeacher's Pre-Algebra app is like having a personal math tutor in your pocket.
“It’s like a private school math classroom, but you are the only student.”
"I just love YourTeacher and the way you explain things. I felt like I was in a classroom instead of just looking at examples."
“Prior to using your Pre-Algebra program, my daughter had 6 test grades of 45-56. Once she was on the program, she had 4 test grades of 85-100. My 13 year old student actually sits down with the
program on her own and completes the work without me nagging her to do it. She does not have to be driven to another location, she is not losing school time to attend tutoring, nor am I losing time
from work. I know the work she does is correct. I'm not praying that the tutor knows what he/she is doing. AND YOU CAN NOT BEAT THE EXCELLENT PRICE. This program works and is such a blessing. Thank
you very, very much for this program."
Need more than videos to learn Pre-Algebra…
YourTeacher’s Pre-Algebra app replicates the entire math classroom experience with your own personal math teacher.
Our lessons include:
-Multiple video example problems
(similar to how a teacher starts class at the board by explaining the examples from the textbook)
-Interactive practice problems with built-in support
(similar to how a teacher assigns practice and walks around the class providing help)
-A Challenge Problem
(similar to how a teacher assigns a higher level problem which students must work on their own to prove mastery)
-Extra problem worksheets
(similar to how a teacher assigns additional problems for homework)
-Review notes
(similar to how a teacher provides summary handouts or refers you to your textbook)
Scope and Sequence
CHAPTER 1: WHOLE NUMBERS
Place Value
Comparing Numbers
Estimating Sums and Differences
Addition and Subtraction Word Problems
Estimating Products and Quotients
Multiplication and Division Word Problems
Order of Operations
Grouping Symbols
Addition Properties
Multiplication Properties
CHAPTER 2: INTEGERS
Graphing and Writing Integers
Comparing Integers
Opposites and Absolute Value
Order of Operations
Word Problems
Addition and Subtraction
Multiplication and Division
Divisibility Rules
Factors and Primes
Prime Factorization
Multiples and Least Common Multiples
Greatest Common Factor
Introduction to Fractions
Equivalent Fractions (Part I)
Lowest Terms
Equivalent Fractions (Part II)
Improper Fractions and Mixed Numbers
Comparing Proper Fractions
Comparing Mixed Numbers and Improper Fractions
Word Problems
Adding and Subtracting Like Fractions
Adding and Subtracting Unlike Fractions
Adding Mixed Numbers
Subtracting Mixed Numbers
Multiplying Fractions
Multiplying Mixed Numbers
Dividing Fractions
Dividing Mixed Numbers
CHAPTER 4: DECIMALS
CHAPTER 6: RATIO, PROPORTION, & PERCENT
CHAPTER 7: GEOMETRY
(Wifi or 3G connection required)
prealgebra pre algebra tutoring tutor online help math prep preparation test practice testing your teacher course review tutor tests
This version of the application has been discontinued. To download the new version, search for "Math Helper 2".
Train your brain, learn math tricks and amaze others!
With Math Tricks, it will be easy to solve math problems in just a few seconds!
11 * 86 = 946
YOU can calculate this in LESS than two seconds!
You can learn how to convert temperatures from Celsius to Fahrenheit in just a few seconds!
You can learn to multiply any number by 125!
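The examples above correspond to well-known mental-math shortcuts. The app does not spell out its methods, so the tricks below are the standard ones (an assumption, not necessarily the app's own):

```python
# Standard mental-math shortcuts matching the examples above.

def times_11(n):
    """Two-digit n times 11: write the digit sum between the digits.
    86 * 11 -> 8 _(8+6)_ 6 -> 8,14,6 -> carry -> 946."""
    tens, ones = divmod(n, 10)
    middle = tens + ones
    return (tens + middle // 10) * 100 + (middle % 10) * 10 + ones

def times_125(n):
    """125 = 1000 / 8, so append three zeros and divide by 8."""
    return n * 1000 // 8  # exact for any integer n, since 8 divides 1000

def c_to_f(c):
    """Exact conversion; the quick mental estimate is 'double and add 30'."""
    return c * 9 / 5 + 32

print(times_11(86))   # 946, matching the example above
print(times_125(8))   # 1000
print(c_to_f(100))    # 212.0
```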
Useful Features to become a math genius:
- Multiplication Tricks
- Division Tricks
- Number Tricks
- Square Tricks
- Remainder Tricks
- Date Tricks
We constantly update the app with new tricks.
Check out the screenshots and see for yourself!
*** Issue with our app? - Please send us an email: support@jmtapps.com ***
HwPic is an online tutoring application. You can use it to send a picture of a problem you are having difficulties with to a knowledgeable tutor, who will solve the problem in detail and email you an answer with a complete explanation. HwPic is created to tutor students in Chemistry, Biology, Physics, Anatomy, History, Geography, Political Science, Economics, and all levels of math including Algebra,
Algebra 2, College Algebra, Geometry, Trigonometry, Pre-Calculus, Calculus and many more Math & Science subjects. HwPic’s sole purpose is to help students with individual questions that they might
have while working on their homework. This application is free of charge to download however it requires Knowledge points to continue to work. Knowledge points can be purchased or earned. Most
answers are sent within 5-20 minutes by a professional tutor.
Please note that we answer only one question per picture and only one email account can be registered per device. You can always change your email address in the settings. We do send a text message with each email notifying you that your answer is ready in your email. If you wish to stop the text messages, simply contact us at HwPic@HwPic.com.
Key words: HwPic, Hw Pic, Homework Picture, Tutoring, Tutor, Homework Tutor, Algebra, Chemistry, Physics, Biology, math tutor, math help, Algebra tutor, Algebra help, ,statistics Tutor, statistics
help, calculus tutor, calculus help, text tutor (R), wolframalpha (R), wolfram alpha (R), math tutoring, tutor.com (R), motuto (R), myHomework (R), texttutor (R), live math help, real math help, help
with math, math tutors, math tutoring, texting a tutor, tutor texting, math help, math help, get math help now, math help. Our service is similar to "cha cha" but dealing specifically with math &
science topics.
We have no affiliation with Wolframalpha, tutor.com, motuto, TutaPoint.com, Text tutor, Texttutor, or myHomework!
Want to solve math problems for free and view a detailed step-by-step solution instantly? Want to plot simple equations for free? Want to learn Algebra the FUN way? Look no further. iKaes presents the Math Solver app to help students and parents – an optimized, on-the-go app for your mobile and tablet.
Types of Math problems and Algebra equations it can solve:
- Simple Quadratic equations >>> e.g. x^2 + 5x + 6 = 0
- Simple Linear equations >>> e.g. 2x + 6 = 22
- Some complex Quadratic equations >>>> e.g. x^2 + 2x + 4x – 5 + 11 = -x
- Applying PEMDAS >>> e.g. 2 + 4/8 * 3 - 2
- Find GCF, LFM
- Work with Fractions
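For context, solving the listed example types comes down to the quadratic formula and basic algebra. A generic sketch (not iKaes's actual solver):

```python
import math

def solve_quadratic(a, b, c):
    """Real roots of a*x^2 + b*x + c = 0 via the quadratic formula."""
    disc = b * b - 4 * a * c
    if disc < 0:
        return []                          # no real roots
    root = math.sqrt(disc)
    return sorted({(-b - root) / (2 * a), (-b + root) / (2 * a)})

# x^2 + 5x + 6 = 0  ->  (x + 2)(x + 3) = 0  ->  x = -3 or x = -2
print(solve_quadratic(1, 5, 6))            # [-3.0, -2.0]

# 2x + 6 = 22  ->  x = (22 - 6) / 2
print((22 - 6) / 2)                        # 8.0

# PEMDAS: division/multiplication bind tighter than addition/subtraction
print(2 + 4 / 8 * 3 - 2)                   # 1.5
```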
Future release plans (the app doesn’t solve these yet!):
- Solve complex Quadratic equations >>>> e.g. x^2 + 2x + 4x – 5 + 11 = -x
- Solve complex Linear equations >>>> e.g. (2x + 5)/(x - 4) = 4x + 3
- Solve problems involving decimal points >>>> e.g. 2.5 * 4.6 + 2/4
- Solve Geometry questions
It's fun to learn Algebra with its very easy-to-understand Algebra learning resources. It covers these Algebra topics:
------- What is a Polynomial?
------- Adding and Subtracting Polynomials
------- Multiplying and Dividing Polynomials
------- Rational Expressions
------- Conjugates and Rationalizing the Denominator
------- Special Binomial Products
Introduction to Algebra
------- One step Equations
------- Two step Equations
------- Algebraic Multiplication
------- Order of Operations - PEMDAS
------- Solving by Substitution
------- Linear Inequalities
------- What is an Exponent?
------- Negative Exponents
------- Reciprocal of Numbers and Variables
------- Squares and Square roots
------- Fractional Exponents
------- Laws of Exponents
------- Multiplying and Dividing Variables with Exponents
Simplifying Algebraic Expressions
------- Expanding and FOIL method
------- Multiplying Signed Numbers
------- Commutative, Associative and Distributive Laws
------- Cross Multiply
------- Adding, Subtracting, Multiplying and Dividing Fractions
Linear Equations
------- Equation of a Straight Line
------- Coordinates and Quadrants
Quadratic Equations
------- What is a Quadratic Equation?
------- Solving by Factoring
------- Completing the Square
------- Solving using Quadratic Formula
Have any feedback, facing any issues, or see a wrong answer? Send us feedback on: www.iKaes.com/feedback.html or support@iKaes.com
We constantly review and act upon user feedback, and update our App to increase customer satisfaction.
Also visit us at www.iKaes.com
ON SALE FOR A LIMITED TIME!!! GET TODDLER TUTOR NOW FOR ONLY $1.69 USD!
Toddler Tutor is an app that turns your Android-powered touchscreen smartphone into a tutor for your toddler.
Toddler Tutor was designed for and tested by toddlers, and delivers an educational and interactive experience to teach your toddler colors, shapes, letters, numbers, and animals.
All lessons and games are audio based, so your toddler will not only be able to see the name of each object, but they will hear the name as well. Additionally, in the animal lesson and game, your
toddler will be able to hear the name of each animal and a sound that the animal either makes or is related to.
Need more than free videos to learn math? YourTeacher's Algebra app is like having a personal math tutor in your pocket.
“It’s like a private school math classroom, but you are the only student.”
"I just love YourTeacher and the way you explain things. I felt like I was in a classroom instead of just looking at examples."
"My daughter is doing Algebra 1 in 8th Grade. She had been getting really low grades because they are moving through the material so quickly. She had a test 3 days after we bought your program and she got 94% (the highest score in the class) because we had her work through the modules over and over. She really enjoys the program and her motivation is good again."
Need more than videos to learn Algebra…
YourTeacher’s Algebra app replicates the entire math classroom experience with your own personal math teacher.
Our lessons include:
-Multiple video example problems
(similar to how a teacher starts class at the board by explaining the examples from the textbook)
-Interactive practice problems with built-in support
(similar to how a teacher assigns practice and walks around the class providing help)
-A Challenge Problem
(similar to how a teacher assigns a higher level problem which students must work on their own to prove mastery)
-Extra problem worksheets
(similar to how a teacher assigns additional problems for homework)
-Review notes
(similar to how a teacher provides summary handouts or refers you to your textbook)
Scope and Sequence
YourTeacher’s Algebra app covers an entire year of Algebra 1.
Addition and Subtraction
Multiplication and Division
Order of Operations
Least Common Multiple
Addition and Subtraction
Multiplication and Division
Order of Operations
Combining Like Terms
Distributive Property
Distributive / Like Terms
One-Step Equations
Two-Step Equations
Equations with Fractions
Equations Involving Distributive
Variable on Both Sides
Variable on Both Sides / Fractions
Variable on Both Sides / Distributive
Integer Solutions
Decimal Solutions
Fractional Solutions
Beginning Formulas
CHAPTER 3: WORD PROBLEMS
Number Problems
Consecutive Integer Problems
Geometry Problems
Percent Problems
Age Problems
Value Problems
Interest Problems
Introductory Motion Problems
Solving and Graphing Inequalities
Combined Inequalities
The Coordinate System
Domain and Range
Definition of a Function
Function and Arrow Notation
Graphing within a Given Domain
Graphing Lines
The Intercept Method
Graphing Inequalities in Two Variables
Patterns and Table Building
Word Problems and Table Building
Slope as a Rate of Change
Using the Graph of a Line to Find Slope
Using Slope to Graph a Line
Using Coordinates to Find Slope (Graphs and Tables)
Using Coordinates to Find Slope
Using Slope to Find Missing Coordinates
Using Slope-Intercept Form to Graph a Line
Converting to Slope-Intercept Form and Graphing
Linear Parent Graph and Transformations
Using Graphs and Slope-Intercept Form
Using Tables and Slope-Intercept Form
Direct Variation
Applications of Direct Variation and Linear Functions
CHAPTER 7: EXPONENTS & POLYNOMIALS
CHAPTER 10: RADICALS
CHAPTER 11: QUADRATICS
CHAPTER 13: QUADRATIC EQUATIONS & FUNCTIONS
(Wifi or 3G connection required)
algebra, algebra tutoring, algebra tutor, algebra help
Having trouble with algebra?
Solved the problem but you're not sure you got it right?
Meet yHomework - the algebra calculator that gives you a full step-by-step solution!
yHomework is an easy to use Math solver, designed for humans. Enter your expression or equation, and get the full step-by-step solution! Just the same as your teacher would write on the board, and
just the same as you would solve it in your notebook.
Say you have: 3(x+5)=6
Not sure what to do next? Just type it in and get the step-by-step solution with just one tap - no excuses, no hassle - get the full deal, for free!
Current material covers:
Simplification, single unknown equation, 2 equation sets, quadratic equations.
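As an illustration, a step-by-step trace for the 3(x+5)=6 example could look like the following. This is a hypothetical sketch (the function name and step format are made up here), not yHomework's actual engine:

```python
def solve_scaled_linear(a, b, c):
    """Solve a*(x + b) = c, recording each step like a notebook would show."""
    steps = [f"{a}(x + {b}) = {c}"]
    rhs = c / a
    steps.append(f"x + {b} = {rhs}    (divide both sides by {a})")
    x = rhs - b
    steps.append(f"x = {x}    (subtract {b} from both sides)")
    return steps, x

steps, x = solve_scaled_linear(3, 5, 6)
print("\n".join(steps))
# 3(x + 5) = 6
# x + 5 = 2.0    (divide both sides by 3)
# x = -3.0    (subtract 5 from both sides)
```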
Questions? Comments? Contact us at: support@yhomework.com
An application made to perform mathematical calculations that are considered laborious and exhausting when done by hand, making life easier for engineers and mathematicians. Solve 2nd-degree equations and systems of linear equations, and make conversions between rectangular and polar formats...
You can also work with matrices: calculate the determinant, multiply matrices, and compute the inverse and the adjoint.
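For the 2x2 case, the matrix operations listed above reduce to a few formulas; the inverse is the adjoint (adjugate) divided by the determinant. A minimal pure-Python sketch, not the app's code:

```python
# 2x2 versions of the matrix operations mentioned above.

def det2(m):
    (a, b), (c, d) = m
    return a * d - b * c

def adjoint2(m):
    """Classical adjoint (adjugate) of a 2x2 matrix."""
    (a, b), (c, d) = m
    return [[d, -b], [-c, a]]

def inverse2(m):
    d = det2(m)                      # must be nonzero for an inverse to exist
    adj = adjoint2(m)
    return [[adj[i][j] / d for j in range(2)] for i in range(2)]

def matmul2(p, q):
    return [[sum(p[i][k] * q[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[2, 1], [1, 1]]
print(det2(A))                       # 1
print(inverse2(A))                   # [[1.0, -1.0], [-1.0, 2.0]]
print(matmul2(A, inverse2(A)))       # identity: [[1.0, 0.0], [0.0, 1.0]]
```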
Tutor.com To Go™ is the mobile companion for Tutor.com, and the only education app that connects you to an expert tutor for real-time help. Here's what a few students had to say about their
experience with Tutor.com To Go:
"I cannot believe how much my math is improving just because of this app! Thank you tutor.com!" - Middle Grades Math Student
"I love how you can draw on the whiteboard" – Middle Grades Math Student
"this makes all my classes so much easier! :) THANK YOU! :)" - Biology Student
With Tutor.com To Go, you can:
- Connect to a live tutor for one-to-one help in math, science, social studies, and English
- Save and review past one-to-one tutoring sessions
- Store essays, assignments, or photos of homework problems in your Tutor.com Locker
- Share items in your Locker with a tutor
- Access thousands of educational resources from the SkillsCenter™ Resource Library
Your saved sessions and all the items in your Locker are also available from your computer.
If you have any problems, please contact us at help@tutor.com.
More from developer
"LDS Prophets Audio Quotes Lite" is a free trial version of an app that gives users inspirational audio quotes from latter-day prophets and apostles from the Church of Jesus Christ of Latter-Day Saints. All quotes are heard in the prophet's or apostle's own voice and are accompanied by the quote text and sourcing. This lite version contains 3 quotes ... for more quotes (Full version 1.1 has nearly 60) please buy the full version.
This app is perfect for lessons, family home evenings, missionary work, presidency meetings, seminary, your own personal study, or for anywhere else you might need a quick thought or lesson on a
specific church topic.
The app is easy to use! Simply choose your topic, then the quote, and then enjoy hearing it and reading it at the same time. There is also a random button for those just wanting to hear a random
inspired quote from our leaders.
This app is an exciting leap into how the mobile revolution can change our world for the better. Not only will you have a powerful quote, but you'll hear it in the voice of the prophet or apostle who said it, making it even more relevant and spiritual.
Quote topics include: agency, atonement of Jesus Christ, Book of Mormon, choices, family, financial management, Joseph Smith, missionary work, prayer, priesthood,repentance, revelation, scripture,
service, temples, temptation, testimony, and a just for fun category.
I would love input on this app and would love to make it better. If you see anything let me know: acadecadroid@gmail.com
The full version 1.1 contains nearly 60 quotes. (This lite version contains just 3 quotes.) An update is scheduled for July which will contain even more quotes. If there is a quote that you really love and would like to see in our next update, please e-mail me to let me know: acadecadroid@gmail.com
Please know I do not own any rights to these quotes, audio, or images. The charge for this app is strictly for the development of it and the time it takes to create an app. "The Church of Jesus
Christ of Latter-day Saints," "Book of Mormon," and "Mormon" are trademarks of Intellectual Reserve, Inc. With that said I wish to share my testimony that Jesus is the Christ and that he restored the
church through the prophet Joseph Smith. Joseph Smith translated the Book of Mormon which I know is true and through him Heavenly Father restored all keys to the Earth including the power to seal
families forever. I also testify that there is a true and living prophet on Earth, Thomas S. Monson and that he leads and guides the church in these latter days!
Thanks and enjoy the app! :)
tags: lds, quotes, Mormon, prophet, apostle, Jesus Christ, Thomas S. Monson, 12 apostles, priesthood, relief society, Mormons, eternal life, Sunday school, family home evening,
Welcome to "LDS Radio Collection," an app that gives you one-click access to more than 10 LDS radio stations on your Android device. Live stream and listen to LDS hymns, music, talks, and devotionals anytime and anywhere. This is a must-have app for Mormons everywhere! Internet/3G access required. Some stations may require Flash Player!
This app provides a great selection of stations to choose from. If one of the stations isn't playing what you want to hear ... just push the back button on your device and switch to another station!
The current radio stations that can be accessed are:
***Mormon Music Channel
***BYU-Idaho Inspirational Station
***BYU Radio
***Sunday Sounds
***Music and The Spoken Word
***and others.
The app costs $1. There are no subscription fees or any additional costs. The $1 is for maintenance of the app. Works best with RealPlayer! Download it for free on the market!
If you have any questions or suggestions let us know via e-mail. This app is not associated with The Church of Jesus Christ of Latter Day Saints nor any radio station or Mormon music producers.
Please note that some of these radio stations do have commercials/advertisements which are not associated with our app.
We hope you enjoy this app! :)
Tags: Mormon music, lds music, LDS music, Latter Day Saint music, Mormon radio, music.
"LDS Prophets Audio Quotes" is a new app that gives users nearly 60 inspirational audio quotes from latter-day prophets and apostles from the Church of Jesus Christ of Latter-Day Saints. All quotes are heard in the prophet's or apostle's own voice and are accompanied by the quote text and sourcing.
This app is perfect for lessons, family home evenings, missionary work, presidency meetings, seminary, your own personal study, or for anywhere else you might need a quick thought or lesson on a
specific church topic.
The app is easy to use! Simply choose your topic, then the quote, and then enjoy hearing it and reading it at the same time. There is also a random button for those just wanting to hear a random
inspired quote from our leaders.
This app is an exciting leap into how the mobile revolution can change our world for the better. Not only will you have a powerful quote but you'll hear it in the voice of the prophet or apostle who
said it, making it even more relevant and spiritual.
Check out the lite version before buying to see if this is something that would benefit you! The lite version has 3 quotes!
Quote topics include: agency, atonement of Jesus Christ, Book of Mormon, choices, family, financial management, Joseph Smith, missionary work, prayer, priesthood,repentance, revelation, scripture,
service, temples, temptation, testimony, and a just for fun category.
I would love input on this app and would love to make it better. If you see anything let me know: acadecadroid@gmail.com
Version 1.1 now contains nearly 60 quotes and has a play/pause button. An update is scheduled for the end of July which will contain even more quotes. If there is a quote that you really love and
would like to see on our next update please e-mail me to let me know: acadecadroid@gmail.com
Please know I do not own any rights to these quotes, audio, or images. The charge for this app is strictly for the development of it and the time it takes to create an app. "The Church of Jesus
Christ of Latter-day Saints," "Book of Mormon," and "Mormon" are trademarks of Intellectual Reserve, Inc. With that said I wish to share my testimony that Jesus is the Christ and that he restored the
church through the prophet Joseph Smith. Joseph Smith translated the Book of Mormon which I know is true and through him Heavenly Father restored all keys to the Earth including the power to seal
families forever. I also testify that there is a true and living prophet on Earth, Thomas S. Monson and that he leads and guides the church in these latter days!
Thanks and enjoy the app! :)
tags: lds, quotes, Mormon, prophet, apostle, Jesus Christ, Thomas S. Monson, 12 apostles, priesthood, relief society, Mormons, eternal life, Sunday school, family home evening,
"Just Say It" is a party game app in which players compete by reciting various phrases into their mobile phones. Players score points according to how fast they say the phrase and whether the Android device understands the phrase that was supposed to be said.
This app supports and keeps score for up to 4 players. It also has three levels of difficulty. The easy category includes various quotes from movies and songs, whereas the medium and hard categories include various tongue twisters and harder sentences. The fun part of the game is seeing whether the Android device understands what each player says and who will win the game! :)
Version 1.0.6: fixed some of the first market day issues with sound/image.
The real purpose for the creation of this app was to create a fun way for students to learn a variety of simple facts in a classroom setting. The facts can be turned into the sentences the students
recite while playing this game. It is a powerful way of learning because players not only read but also say the information in a fun setting. The playing of the game helps instill repetition needed
for learning the concepts. This can be a great educational tool and learning game!
We envision this app being played in many places to increase learning! Topics for the app could include vocabulary, quotes from a book, key concepts in a chapter of a textbook, a bible study game,
family reunion games, etc.
DecaDroid would love to alter this game for your own needs. Starting at $5.99 DecaDroid will recreate this app and personalize it to fit your needs and include your own topics. Simply create your own
sentences/phrases to be used in the game and send it to us via e-mail. Visit our website for more information and for payment options. We can even post it in the market if you'd like for everyone to
have access to it!
You don't need our app to use our service; just do the following:
DOWNLOAD THE TEXT+ APP AND SEND QUESTIONS TO 208- 557-8676
Welcome to "Text A Tutor: Live Math Help" for SMARTPHONES, which gives you access to a math tutor via smartphone when you need it. Type in the free code of the week for a free trial!
Get help with math homework NOW for as little as $1 per question/credit! We respond in less than 10 minutes or we'll give your money/credits back!
This service will change the way you get help with your math! Text A Tutor is not automated! All responses come from real qualified people with math degrees! We started this app so we can help you
with your math!
It's easy to use! Simply purchase credits through our website/app for as low as $1 per question. Then submit your math problems via email or using your device's camera.
Responses come with explanations and answers. We also offer options for getting worked out solutions or even having a video explanation done just for your problem for a small extra fee. (Video
Requires Adobe Flash player on device!)
We currently offer help in most math topics taught in high school and entry level college classes including: pre-algebra, algebra, geometry, algebra 2, pre-calculus, trigonometry, statistics, and
calculus 1.
***We now have a free trial offer for the first 1000 new users! Users can get a free text if they like us on Facebook. Go to "Text A Tutor" on Facebook for more info!***
Text A Tutor Policies:
***You MUST have internet access and a Gmail account!
***We answer math questions from Pre-Algebra to Calculus 1! Statistics too!
***You must first purchase texts/credits to use this service and then enter in the code! Click the purchase texts button to go to our website! It's super fast and easy to do using Google Checkout!
Just like buying an app! You can get packs as low as $1 per text.
***We answer questions within 10 minutes during open hours. Worked out solutions and videos may take longer!
***Please know our service is not automated. We have real live tutors to help!
***All tutors have degrees in mathematics and are currently employed as highly trained teachers.
***Fall Hours (Starting August 15th): We will answer every day from 2 pm to Midnight (EST)! You may send texts at other times, but response time may be longer depending on our tutors' locations during these hours!
***Questions can only be sent from this app and only from the same device where code was entered!
***Please do not reply to the answer text unless asked to do so!
***At times our tutors may be busy! Please be patient! Our goal is to answer within 10 minutes during our open hours ... however times may vary depending on the type of response desired (ie: work,
video, etc)!
***Worked solutions and video responses are not always available. It depends on our tutor's locations and available technology.
At this time we are only releasing Text A Tutor: Math. Depending on its success we will expand the service to other subjects. USA only!
Text A Tutor respects the confidentiality of your personal information. We will only send you messages about your questions and our service. We will never send you spam or advertisements, nor will we sell your personal information to a 3rd party.
If you have questions about our service please contact us at textmathtutor@gmail.com.
Key words: math help, math tutor, statistics tutor, statistics help, calculus tutor, calculus help,(R), math tutoring, hwpic(R), hw pic, tutor.com (R), motuto (R), myHomework (R), live math help,
real math help, help with math, math tutors, math tutoring, texting a tutor, text tutor, tutor texting, math help, math help, get math help now, math help. Our service is similar to "cha cha" but
dealing specifically with math topics.
We have no association/relation with, hw pic, hwpic, tutor.com, motuto, TutaPoint.com, or myHomework!
Text A Tutor (C) 2011. All rights reserved.
"Do You See A Temple" is a fun LDS family game app in which players try to be first to press a button when a temple appears. While playing the game, players learn more about the importance of the
holy temple! The game supports up to 4 players going head to head at one time and is a blast for members of all ages!
This app is perfect for a family home evening lesson about temples. During the game players will get acquainted with different temples of this dispensation and learn their locations. The app also
provides many interesting facts about temples and has links to various movies/resources for learning more about the importance of temples after the game! While playing the game players also get the
opportunity to hear songs about the temple!
The app even has four levels of difficulty to make it more challenging: easy, normal, hard, impossible ... oh and the impossible level really is nearly impossible! :)
So get the family together and play "Do You See A Temple." It's a family home evening lesson and game all in one and prepared just for you. (Although the dessert is not included!)
Sale Price: $1.00 (until next update!)
New update planned for first/second week of July ... will include more pictures.
If you have any questions or have suggestions on how to make this better please e-mail me at: acadecadroid@gmail.com.
key words: temple, mormon, temples, latter day saint, the Church of Jesus Christ of Latter Day Saints, prophet, temple game, temples game, family home evening, conference, Thomas S. Monson, apostles,
eternal life, lds apps, mormon apps
What is new? Version 1.1.6 brings the following:
*** Opens in full screen and because of this images look better!
*** Better button functionality
*** Changed times in levels. Easy level is now easier!
*** Added instruction notification to "press temple to continue"
*** Installs directly to SD card (if available)!
Turn your Android phone into a polling response system. This is the perfect app for teachers or presenters who want to ask questions or poll their students/audience and collect the data. Poll Anywhere collects polling responses via text message and displays results live on your screen.
Poll Anywhere is easy to use! Simply have the participants text your phone number or your Google Voice number while the app is engaged and it will collect and display results for you as messages
arrive. For a long time cell phones have been an issue to teachers and presenters but now you can use cell phones to your advantage!
The evidence of the effects of polling students during class is overwhelming. It's a great way to check for understanding and to provide other learning opportunities. Simply ask the class a question, have them text you, then adjust teaching accordingly. Or, even better, allow students to discuss which answer they think is right.
Some teachers and presenters purchase expensive equipment, batteries, and software to conduct polls. With this app you can avoid all those costs. Simply collect answers with your phone.
The presenter / teacher is the only one that needs the app. Your participants simply text your number with their own cell phones. The app works with any phone as long as it has texting ... even iPhones. The app collects their responses and presents them on your screen.
***This app does collect text messages ... make sure you have an unlimited texting plan through your service provider or use your Google Voice Number!
***This is the LITE/FREE version. It only allows you to collect 5 responses. If you like the app please purchase the full version to be able to collect unlimited data.
Key words: poll everywhere, poll anywhere, clickers, ResponseWare, response ware, presenter ware, presenterware, einstruction, clicker, classroom polls, grade book, polling service, wiffiti, SMART,
classroom response system, iclicker, eclicker, class clicker.
Do you know your famous Mormons and some of their star moments? "Mormon or Not?" is the app that will challenge your skills while at the same time giving you some inspiring quotes and videos on many
famous Mormons.
"Mormon or Not" includes famous sports stars, politicians, and even Hollywood stars, but be careful: not all are Mormon, and some may even surprise you!
Go even further by pressing the "See them in Action" button to see their star moments, hear them tell inspiring stories, and even share testimonies.
"Mormon or Not?" is a great idea for Family Home Evening. We hope you enjoy! Future updates will include even more famous people and their stories and interactions with the Church of Jesus Christ of Latter Day Saints.
Key Words: Mormon, Latter Day Saint, LDS, The Church of Jesus Christ of Latter Day Saints, famous people, famous Mormons, Family Home Evening.
Having issues or want to give us a suggestion? Send us an email at acadecadroid@gmail.com.
This app has no association with the Church of Jesus Christ of Latter Day Saints, nor does it have access to church membership records. It is possible that people who were once members of the church are no longer members, which might not be reflected in this app.
Based on the hit improvisation game show Whose Line Is It Anyway comes "Improv Howdown". This app gives everyone the opportunity to get in on the fun of singing in an ol' fashioned improv hoedown. The app is great for individuals wanting to improve their on-the-spot skills and even better in party settings, where up to four people can sing and create together.
The app provides random improv topics that participants must use to create lyrics for the hoedown. Once you are ready, press the record button and sing to the hoedown music. After singing ... listen to the hilarious recording over and over again, move on to the next topic, or even share the recording with friends.
The app plays hoedown music very similar to that found on the hit show Whose Line Is It Anyway and also provides video links to professional improv actors showing off their skills. No two hoedowns are ever alike ... so let's get singing! This app is guaranteed to make you laugh.
Tags: Whose Line Is It Anyway, karaoke, improv, improvisation games, improv games, party games, singing party games, Drew Carey, Wayne Brady, Brad Sherwood, Colin Mochrie, Ryan Stiles,
Improv-A-Ganza, game show,
Upper Marlboro Algebra 2 Tutor
Find an Upper Marlboro Algebra 2 Tutor
...I have taught at all three levels, as a Physical Education teacher, third grade teacher, fourth grade teacher, and now I teach math to sixth, seventh, and eighth grade students. I have been tutoring for several years now and I really enjoy when a student has that "Ah Hah" moment. Witnessing a student become successful is the ultimate prize.
15 Subjects: including algebra 2, reading, algebra 1, geometry
...I have been teaching elementary and secondary math for 21 years. In addition, I have been tutoring areas in basic math through college level math for 25 years. I enjoy math and I really enjoy
helping students gain a better understanding of mathematics.
15 Subjects: including algebra 2, geometry, ASVAB, algebra 1
...I am extremely good with basic algebra. Linear Algebra is more advanced, but can be approached easily if you understand the basics of matrices. I achieved an A in Linear Algebra when I took it
in college.
32 Subjects: including algebra 2, reading, algebra 1, calculus
...If you need help, in any branch of high school math, and you're willing to work at it, I can turn you into an A student in short order.I can help with Terms of algebra, Algebraic addition,
subtraction, multiplication, division, Factors and factoring, Linear equations and their solutions and Quadr...
17 Subjects: including algebra 2, English, calculus, geometry
...I specialize in tutoring math (from pre-algebra to differential equations!) and statistics. I completed a B.S. degree in Applied Mathematics from GWU, graduating summa cum laude, and also
received the Ruggles Prize, an award given annually since 1866 for excellence in mathematics. I minored in economics and went on to study it further in graduate school.
16 Subjects: including algebra 2, calculus, statistics, geometry
Laurel, MD Algebra 2 Tutor
Find a Laurel, MD Algebra 2 Tutor
...I always break the harder math into small, simpler parts so that the students can handle the problem without much difficulty. I have been teaching in high schools and colleges for several years. I have a teacher's license certificate from the District of Columbia state department of education.
8 Subjects: including algebra 2, geometry, algebra 1, SAT math
...My name is Alexis, and I am eager to share my love of math and science with you or your child. I have always loved math and science, attending science and biology camps at CMU and Johns
Hopkins University during my childhood and completing college level calculus with straight A's at the age of 1...
34 Subjects: including algebra 2, chemistry, calculus, geometry
...Since graduating I have begun to tutor again and being new to the area I am currently working to expand my number of students. I believe that everyone can learn math and value it as a problem
solving tool. I expect my students to be open about their needs, concerns and communicate what works well for them.
22 Subjects: including algebra 2, calculus, geometry, GRE
I am currently an 8th grade math teacher for Anne Arundel County Public Schools. I have previously taught a wide variety of math subjects from 7th grade through entry level college classes. My
previous clients have gone on to significantly increase their score on their standardized tests as well as raise their class grades by an average of 1.5 letter grades.
12 Subjects: including algebra 2, reading, algebra 1, geometry
...I definitely can be a great help in this subject! I received a Bachelor of Science and I have a huge passion for biology! I will be continuing my graduate degree focusing on Microbiology as
13 Subjects: including algebra 2, chemistry, calculus, biology
RE: Re: st: Principal Components Analysis with count data
From "Nick Cox" <n.j.cox@durham.ac.uk>
To <statalist@hsphsun2.harvard.edu>
Subject RE: Re: st: Principal Components Analysis with count data
Date Thu, 13 Aug 2009 17:53:20 +0100
I think that's a good approximation, at least informally -- which is
presumably why Evans Jadotte suggested that technique earlier in the
thread (see below).
But I'm distinguishing, as others may not be, between count data and
categorical data. The number of Statalist postings in a unit time is a
count variable; gender is a categorical variable. Naturally, I'm aware
that every count corresponds to a category and vice versa. But there is
a variation in what models are most appropriate.
Lachenbruch, Peter
A short question on PCA for categorical variables: wouldn't
correspondence analysis be useful here? Or is my interpretation of CA
as the categorical analog of PCA way off base?
Nick Cox
There are various unstated assumptions and criteria that need to be
spelled out for a fruitful discussion.
1. Continuous versus discrete. I don't know any reason why PCA might not
be as helpful, or as useless, on discrete data (e.g. counts) as compared
with continuous data. I wouldn't think it useful for categorical
variables, which I take to be a quite different issue.
2. Skewed versus symmetric. In principle, PCA might work very well even
if some of the variables were highly skewed. In practice, skewness quite
often goes together with nonlinearities, and a transformation might help
in either case.
3. Whether PCA will work well does depend on what you expect it to do
ideally, which is not clear in the question.
Evans Jadotte <evans.jadotte@uab.es>
I think a straightforward way to deal with this issue is to apply a
Multiple Correspondence Analysis (MCA) to your data. See Asselin (2002)
for an application, and also reference therein.
Cameron McIntosh
> You should also check out chapters 8 and 9 of:
> Basilevsky, A. (1994). Statistical Factor Analysis and Related
Methods: Theory and Applications. New York: Wiley.
>> I don't know much about this but a while ago I was looking for
something similar and I came across this paper which helped me:
>> http://cosco.hiit.fi/search/MPCA/buntineDPCA.pdf
>> If that's not useful to you, it has a bunch of references in the
back. Maybe those can help.
Jason Ferris
>>> As PCA is appropriate for continuous data, I am wondering if it is
>>> appropriate for count data (i.e., highly skewed)? Can someone
>>> advise, or offer guidance or a resource on using PCA with count data?
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
The Combination of Polymer, Compliant Wall, and Microbubble Drag Reduction Schemes
Advances in Mechanical Engineering
Volume 2011 (2011), Article ID 743975, 10 pages
Review Article
Boris N. Semenov
Institute of Thermophysics, Siberian Branch of Russian Academy of Sciences, Prospekt Ac. Lavrentyev, 1, Novosibirsk 630090, Russia
Received 9 March 2011; Accepted 30 April 2011
Academic Editor: Jinjia Wei
Copyright © 2011 Boris N. Semenov. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any
medium, provided the original work is properly cited.
A promising study of turbulence management by the joint use of compliant coatings with other drag-reduction means is proposed. Its prospects follow from the several factors considered here and are confirmed by the first experimental and theoretical results.
1. Introduction
The combined use of different means is one of the main principles of nature's development. The study of the hydrodynamic problems of bionics (Aleyev [1], Bushnell and Moore [2]) also convinces us of the correctness of this statement. Bionics is the path from observation and astonishment, through the first estimations (and the conclusion that a paradox exists), to the explanation of the paradox.
The characteristic "nature" example of the study of low-drag bodies is the investigation of dolphins, driven by the search for the causes of the well-known paradox of Gray [3]. These investigations showed that, as a consequence of long evolution, dolphins possess different variants of adaptation to the diverse, rapidly changing conditions of their life in the sea (Woodcock [4], Focke [5], Semenov [6], Alekseeva and Semenov [7], Wu and Chwang [8]). Excellent variants of economical swimming by dolphins were discovered and described. For example, Woodcock [4] described the "motionless" swimming of dolphins near a ship's bow. Focke [5] investigated this fact and showed by calculation that dolphins, using the pressure distribution near the bow, can swim at any ship velocity without essential energy losses (as "external passengers travelling without tickets"). Another example: Wu and Chwang [8] showed by theoretical calculation that dolphins can obtain energy for their swimming from a wavy stream, so they can swim in sea waves with minimal energy losses (the cited work also explains the physical essence of surfboards). These results required introducing new, additional criteria for the selection of the dolphin-speed observations used in the analysis of Gray's paradox. Note, however, that they cannot explain Gray's paradox for observations of high-speed swimming of dolphins under conditions of absolute calm, far from ships. Here another conclusion is important: as a result of long evolution, dolphins have acquired different adaptations to very different and frequently changing living conditions in the sea, so our aim is to search for and study the dolphins' many "secrets". The analysis of the dolphin body shape (Young [9], Hertel [10]) was an important step in explaining the observed low drag. Another important step was made by Kramer [11–13], who simulated the compliance of dolphin skin in delaying the transition to turbulence. Semenov [14] gave an additional explanation for the low drag (of the dolphin Tursiops Tursio Ponticus), taking into account also the possibility of joint use of the compliant dolphin skin, drag-reducing water-soluble secretions, and the gas microbubbles observed in experiments.
Technical progress is also connected with this main principle of nature's development, the combined use of different means. There are many possible variants of the combined use of the different (and numerous) methods of drag reduction under different hydrodynamic conditions. Two passive means (compliant coatings and riblets) and two active means (polymeric additives and gas microbubbles) are considered here in order to assess the prospects of investigating their joint action.
2. Some Notes on Investigation Outlooks
These notes may interest both researchers of near-wall turbulence and industrial users of scientific advances. First of all, it is important to note that all the methods of turbulence management considered here (compliant coatings, riblets, air microbubbles, and PEO additives) satisfy ecological requirements.
The motivations for the promising outlook on the joint use of the considered methods of drag reduction can be divided into four groups.
2.1. Initial Approach
The initial approach to the joint use of different drag-reducing means took into account only a simplified dependence of the possible drag-reduction efficiency DR_Σ of their joint action on the individual efficiencies DR_i:

DR_Σ = 1 − (1 − DR_1)(1 − DR_2) ⋯ (1 − DR_n).

This expression is correct if all the considered drag-reducing means act independently and do not change the action conditions for the others (here and further, the drag-reduction efficiency is defined relative to the turbulent friction coefficient c_f0 of a smooth hard surface: DR = 1 − c_f/c_f0).
In this case, the possible drag-reduction efficiency for the joint action of different drag-reducing means must be less than the sum of their individual drag-reducing possibilities:

DR_Σ < DR_1 + DR_2 + ⋯ + DR_n.

The prognosticated negative deviation from the sum of the individual efficiencies depends on their values and on the number n of means used jointly for turbulence management.
These dependences are easily analysed for the variant of equal individual efficiencies, DR_i = DR. The deviation from the sum of the individual efficiencies is then calculated as

Δ = n·DR − DR_Σ = n·DR − 1 + (1 − DR)^n.

This deviation increases with increasing n, and for n → ∞ (at a fixed sum n·DR) it has the limit

Δ → n·DR − 1 + exp(−n·DR).
Results of this prognosis are shown in Figure 1. The prognosticated negative deviations are small when the sum of the individual efficiencies is less than 20%, but they are very considerable for an 80% sum: for example, Δ = 16% for two combined drag-reducing means and Δ ≈ 25% in the limit of many means.
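A quick numerical check of this prognosis, under the stated independence assumption that the combined efficiency obeys DR_Σ = 1 − Π(1 − DR_i); the code is an illustrative sketch, not from the paper:

```python
def combined_efficiency(individual):
    """Combined drag-reduction efficiency of independent means:
    DR_total = 1 - prod(1 - DR_i)."""
    remaining = 1.0
    for dr in individual:
        remaining *= (1.0 - dr)
    return 1.0 - remaining

# Equal individual efficiencies summing to 80% (the Figure 1 case):
for n in (2, 4, 8, 100):
    dr_each = 0.8 / n
    dr_total = combined_efficiency([dr_each] * n)
    deviation = 0.8 - dr_total  # shortfall relative to the naive sum
    print(n, round(dr_total, 3), round(deviation, 3))
```

For n = 2 the shortfall is 0.16, and as n grows it approaches 0.8 − 1 + exp(−0.8) ≈ 0.25, matching the limit above.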
This approach was used for our initial estimations. Viscoelastic coatings, riblets, gas bubbles, and polymer additives are four well-known means of acting on near-wall turbulence. Their actions in decreasing turbulence production are very different.
A compliant surface reacts to long-wave disturbances. According to the estimate of the interference theory of Semenov [15] and the experimental data of Kulik et al. [16], a real viscoelastic coating is deformed by pressure waves with lengths of more than one thousand viscous scales. The viscous scale is ν/u_*, where the friction velocity is u_* = (τ_w/ρ)^{1/2}, ρ and ν are the density and kinematic viscosity of the flow, and τ_w is the friction stress on the wall (Hinze [17]). Small additives in a flow suppress the microeddy turbulence at turbulence linear scales of less than one hundred viscous scales (Greshilov et al. [18]). Riblets also manage microeddy structures (Choi [19]). The flowing screen of gas bubbles can destroy the powerful long-wave fluctuations travelling to the wall from the turbulent core and background flow (Bogdevich et al. [20]).
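These scale estimates can be made concrete for a representative water flow; all numerical values below are illustrative assumptions and are not taken from the paper.

```python
import math

# Representative water-flow parameters (assumed for illustration).
rho = 998.0      # water density, kg/m^3
nu = 1.0e-6      # kinematic viscosity of water, m^2/s
tau_w = 25.0     # wall friction stress, Pa (assumed)

u_star = math.sqrt(tau_w / rho)   # friction velocity, m/s
l_visc = nu / u_star              # viscous (wall) scale, m

# Length scales the text associates with each means of control:
coating_wave = 1000 * l_visc   # coatings respond to pressure waves longer than this
polymer_eddy = 100 * l_visc    # polymer additives suppress eddies smaller than this
print(u_star, l_visc, coating_wave, polymer_eddy)
```

With these assumed values the viscous scale is a few micrometres, so the coating responds to waves of millimetre length while the polymers act on sub-millimetre eddies, an order of magnitude apart.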
It is known (Hinze [17], Cantwell [21]) that, in the main, both the microeddies of the viscous sublayer and the long waves of the turbulent core generate new turbulence. The joint use of the considered methods of drag reduction therefore promises qualitatively new possibilities for decreasing turbulence production, and the combined use of these four methods permits better results in turbulent drag reduction than the prognosis described above.
2.2. Association of Useful Qualities
A study of the joint use of different methods of drag reduction is promising for a number of other reasons too. It is attractive as a possible source of further useful properties (in addition to drag reduction) that are inherent in the separate methods.
For example, drag-reducing compliant coatings can have high anticorrosion properties. One-layer coatings created at the Institute of Thermophysics of the Russian Academy of Sciences (Kulik et al. [22]) have excellent immunity to damage by acids and alkalis.
Another example: tests carried out by Russian and Bulgarian scientists (Malyuga et al. [23]) show that creation of an air-bubble layer in the near-wall region is a sufficiently effective method for reducing the amplitudes of the propeller-induced pressures and plate vibrations of ships.
Thirdly, the joint use of compliant coatings, air microbubbles, and polymeric additives can suppress turbulent wall-pressure fluctuations over a very wide frequency band, which is impossible for any method used separately. It is therefore reasonable to expect that these combinations will also lead to a strong decrease of hydrodynamic noise over a very wide frequency band.
2.3. Taking the Economic Factor into Account
Turbulence management by compliant coatings and riblets is particularly attractive because of their passive nature: no additional energy is required for turbulence control. The injection of gas microbubbles and polymer additives, by contrast, consumes some energy and material. Although drag reduction by high-molecular polymer additives is realized at very small concentrations in the flow, the expense of their use may exceed the resulting savings (for example, in fuel costs). Therefore, Berman [24] suggested estimating a specific efficiency, the drag reduction obtained per unit polymer consumption, to determine the expediency of drag reduction. He showed for pipe flow with a constant concentration of polymer additives that this specific efficiency decreased as the concentration increased, and at friction minimization it was significantly less than at moderate values of drag reduction. This is connected with the nonlinear dependence of the drag reduction on concentration and its asymptotic approach to a maximum value. Semenov [25, 26] carried out an analogous analysis for a flow with variable polymer concentration (the turbulent boundary layer on a plate) and showed that, from the point of view of profit, it is worthwhile not to strive for drag minimization but to restrict the drag reduction to roughly half of the achievable maximum. So, the combined investigations must also be carried out for variants with small consumption of PEO, and only the joint use of the considered methods can achieve maximum, profitable drag-reduction efficiency.
A similar situation holds for drag reduction using gas bubbles; in this case, however, drag reduction can even be achieved "free of charge" by using engine exhaust.
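Berman's economic argument can be sketched numerically: for any saturating drag-reduction curve, the specific efficiency (drag reduction per unit concentration) falls as consumption grows. The functional form and all constants below are hypothetical illustrations, not Berman's actual model.

```python
def drag_reduction(c, dr_max=0.7, k=10.0):
    """Hypothetical saturating dependence of drag reduction on polymer
    concentration c (ppm). dr_max and k are illustrative assumptions."""
    return dr_max * c / (c + k)

concentrations = [1.0, 5.0, 25.0, 125.0]
specific = [drag_reduction(c) / c for c in concentrations]
print(specific)  # monotonically decreasing: more polymer buys ever less DR per unit
```

Because DR/c here equals dr_max/(c + k), it decreases strictly with c, which is why restricting drag reduction to moderate values rather than the maximum can be the more profitable operating point.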
2.4. “Mutual Aid” of Different Drag Reducing Means
Finally, it is necessary to enumerate some further factors of interaction between jointly used methods of turbulence management. They are to be studied as proposed factors of "mutual aid" promoting the appearance of new qualities.
The flowing screen of gas bubbles destroys the powerful fluctuations travelling to the wall from the turbulent core and the background flow. The bubble screen thus shields the polymer additives, which act with high efficiency precisely in the near-wall region, and reduces their expulsion from that region.
The drag-reducing polymers (polyethylene oxide, polyacrylamide, etc.) are surface-active substances which decrease the surface tension and hence the separation diameter of a bubble generated at the porous injection insert. Besides, polymer additives in the flow prevent bubble coalescence and also impede bubble rising. Note that for drag reduction it is very important to have microbubbles with diameters of less than 0.2 mm. A decrease of the microbubble diameter improves the screening properties of the bubble layer, displaces the peak concentration of gas bubbles in the water flow towards the wall, and decreases the bubble buoyancy velocity. Hence, one can expect that a flow of high-polymer solutions aerated by gas bubbles will produce a mutual enhancement of the drag-reduction effects on a streamlined surface (Malyuga et al. [27, 28]).
Waves and eddies are responsible for the near-wall turbulence production near a smooth surface. The role of wave action decreases as the surface roughness increases. Compliant coatings respond to the pressure-fluctuation waves, so the action of a viscoelastic boundary loses its physical sense on a highly rough surface (Semenov and Semenova [29, 30]). The increase of the viscous-sublayer thickness by polymer additives increases the permissible roughness of a compliant surface, which simplifies and cheapens the coating preparation technology.
Semenov and Semenova [29–31] have carried out the first calculations of the joint action of a compliant boundary and polymer additives in the turbulent boundary layer in order to explain the experimental results obtained (Semenov et al. [32, 33] and Kulik et al. [34, 35]). One possible factor of interaction between the two considered methods of turbulence management is the action of the compliant boundary on mass transfer in the near-wall region. The calculations show that a decrease (increase) of mass transfer, achieved by the use of a viscoelastic coating, slightly decreases (increases) the polymer consumption. The other factor is the influence of polymer additives in the flow on the interference action of the viscoelastic boundary on near-wall turbulence. The calculations show that injected polymer additives extend the phase-frequency region of positive action of the compliant boundary; that is, they extend the possibilities of drag (and noise) reduction by compliant coatings. These two problems are described in detail in Section 4.
Semenov and Semenova [29, 30] also considered the action of drag-reducing riblets used jointly with a compliant coating and concluded that this, too, extends the phase-frequency region of positive action of the compliant boundary.
A viscoelastic coating for drag reduction is a mechanical vibrational system whose amplitude-phase-frequency characteristic is chosen to act on the band of the near-wall turbulence spectrum responsible for the main production of new turbulence. This choice must, of course, take into account the natural background turbulence conditions. However, both in ordinary experimental hydrodynamic installations and on practical objects (ships and pipelines), additional strong pressure fluctuations in the flow are quite possible. These additional pressure fluctuations can excite the compliant coating very substantially in the frequency region of its negative action, so the total production of new turbulence (over all frequency regions) can even be increased. An important aspect of the gas-bubble layer is its defence of the near-wall region of the turbulent boundary layer: the injection of gas bubbles into the near-wall flow will ensure the stable drag-reducing action of a viscoelastic coating under different operating conditions.
In what follows, subscripts are used to denote the drag-reducing means: compliant surface, polymer additives, air microbubbles, and riblets, with combined subscripts denoting their joint use.
3. Experimental Investigations
The number of experimental investigations is still small; only some variants of the joint use of different drag-reducing means have been considered.
Already the first experiments on the joint use of compliant coatings and polymer additives (carried out at the Institute of Thermophysics RAS; Semenov et al. [32, 33]) showed the promise of this study: the total effectiveness of turbulent drag reduction was found to equal the algebraic sum of the individual small efficiencies of these two methods of turbulence management. These successes initiated new investigations.
3.1. Experimental Conditions
The experiments were carried out in the saline lake Issyk-Kool, where a 2.1-m-long, 0.175-m-diameter streamlined body of revolution was towed by a tow boat.
This model (see Figure 2) was described in detail earlier by Kulik et al. [16, 36]. In the middle of its length it was equipped with a 0.66-m-long "floating" surface element for measuring the skin-friction drag. Different variants of these cylindrical elements were tested: one had a solid smooth surface, and the others were fitted with compliant coatings. Careful measurements of the friction coefficient for the hard polished surface in water flow were used as the standard for comparison.
The model nose had a ring slot for the injection of polymer solution. The model was also equipped with a 35-mm-long insert made of porous metal for air injection. The sizes of the injected microbubbles varied from 0.07mm to 0.2mm.
All experiments were carried out under low background turbulence conditions. Spectrum analysis of the measured wall-pressure fluctuations (see the example in Figure 3) in the frequency band from 10Hz to 10kHz revealed strong peaked deviations from a smooth frequency distribution only at low frequencies (below 20Hz), which is inessential for these investigations.
All experimental conditions were described in detail by Semenov et al. [37].
3.2. Joint Action of Compliant Coatings and Polymer Additives
New results of these investigations were described by Kulik et al. [34, 35] and Semenov et al. [37], where the mass consumption of polyethylene oxide (PEO, of different molecular mass ) was varied. The corresponding dimensionless parameter is , where is the diameter of the measured “floating” element, is the density of PEO, and is the thickness of the turbulent boundary layer calculated for a water flow (with temperature ) without polymer additives at the middle abscissa of the “floating” element (with solid smooth surface). According to Kutateladze and Leontyev [38], the thicknesses of the diffusion and dynamic turbulent layers near this “floating” element are approximately equal, so is analogous to the near-wall concentration of PEO at the middle abscissa of the “floating” element.
The first experimental results of Semenov et al. [32, 33] showed that is shifted with respect to so that ; that is, a summation property was discovered for the joint use of compliant coating and polymer additives, which confirmed our initial prognosis for small individual efficiencies. However, contrary to the initial estimates, when the separate effects increased, the magnitude of the combined drag reduction exceeded their sum. Therefore, in what follows, the deviation of the joint drag-reduction efficiency from the sum of the separate drag-reduction efficiencies is considered in order to investigate this summation property.
Shown in Figure 4 are data (from Semenov et al. [37]) for the joint use of different compliant coatings (both decreasing and increasing the turbulent friction) and polymer additives (over a wide variation of polymer consumption and, accordingly, ). These results indicate the existence of three zones:
(1) the zone of the exact sum of individual efficiencies ;
(2) the zone of positive deviation ;
(3) the zone of negative deviation .
Here, the zone of the exact sum is observed for all tested variants up to . The zones of positive and negative deviations follow the zone of the exact sum as the polymer consumption increases, but considerable differences are seen among the tested variants.
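The three zones can be summarized in a small sketch (a Python illustration; the tolerance for counting a result as the "exact sum" is my assumption, since the text does not state one):

```python
def interaction_zone(dr_joint, dr_separate, tol=0.5):
    """Classify a joint-action result into one of the three zones.

    dr_joint: drag reduction (%) measured for the joint use.
    dr_separate: individual drag reductions (%) of each method alone.
    tol: tolerance in percentage points (hypothetical, not from the paper).
    """
    deviation = dr_joint - sum(dr_separate)
    if abs(deviation) <= tol:
        return "exact sum"
    return "positive interaction" if deviation > 0 else "negative interaction"
```

For example, a coating giving 10% and a polymer giving 20% with a measured joint reduction of 36% would fall in the positive-interaction zone.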
The experimental results from Figure 4 are shown again in Figure 5 for comparison with the initial prognosis. Here, these results are considered as a function of the drag reduction of the hard surface by polymer additives, that is, of the individual efficiency of the polymer additives .
According to (1), . The predicted deviation must be . So, in this case, the deviation must be negative for a “positive coating” () and positive for a “negative coating” ().
The deviations predicted according to (5) (shown in Figure 5 by lines) are contrary to the experimental data for the second and third zones. Thus, these results show the presence of an interaction between the compliant coating and the polymeric additives, so the above-mentioned zones can be termed:
(2) the zone of positive interaction of the two considered drag-reduction methods (with );
(3) the zone of negative interaction of the two considered drag-reduction methods (with ).
3.3. Joint Action of Air-Microbubbles and Polymer Additives
Malyuga et al. [27, 28] carried out the first experiments on drag reduction using the injection of PEO (WSR-301) solutions aerated by air bubbles. They measured the friction at 3 points of a hard flat plate, at distances 0.25m (N1), 0.99m (N2), and 2.23m (N3) behind the slot, for –10m/s. They determined that aeration of the injected PEO solutions can increase their drag-reduction efficiency. The maximum additional increase of efficiency measured was 36% at point N2 and 16% at point N3, but at point N2 both an increase and a decrease of the drag-reduction efficiency were measured, and the results worsened as the PEO consumption increased. It is important to note that very large consumptions of injected air and polymer were used in this experiment. The corresponding dimensionless parameters were
Here, is the surface area of the studied part of the plate, and is the volumetric consumption of injected air.
Some of the above-mentioned results and new data (Semenov et al. [37]) obtained in the experiments (described in Section “Experimental Conditions”) for very small consumptions of air and polymer are shown in Figure 6. Here, we can see the same three zones: the zone of the exact sum and the zones of positive and negative interaction.
Note that the negative-interaction zone corresponds to very high consumptions of PEO and air.
3.4. Joint Action of Compliant Coating and Air Microbubbles
The first experiment is described by Semenov et al. [37]. One compliant coating was tested at a very small consumption of injected air . m/s, °C. The drag reduction of the hard surface by air microbubbles varied from 7% to 14%. It was found that the total efficiency of turbulent drag reduction equals the sum of the individual efficiencies .
3.5. Joint Action of Riblets and Surface Compliance
According to the theoretical estimates of Semenov and Semenova [29], this combination should be a promising variant among passive (i.e., without energy expenditure) methods of turbulent drag reduction.
However, experimental data are still absent.
3.6. Joint Action of Riblets and Polymer Additives
The first experimental results were described by Reidy and Anderson [39] and Choi et al. [40]. They found that the individual efficiencies of the two drag-reduction methods are summed for their joint use. Note that they considered very small polymer consumptions.
Koury and Virk [41] and Virk and Koury [42] investigated this problem in detail for two polyethylene oxides ( and ) and one polyacrylamide (), in two hydraulically smooth pipes of 7.82mm and 10.2mm i.d. and in four riblet pipes formed by lining each of the smooth pipes with, respectively, 0.11mm and 0.15mm V-groove riblets of equal height and spacing. Within the polymeric regime, at moderate drag reductions of order 50%, the drag reduction in the riblet-walled pipe significantly exceeded that in the smooth pipe, by as much as 15%, whereas the greatest drag reduction by riblets in water alone was ~10%. So, a positive deviation from the exact sum of the individual efficiencies is observed here. At conditions of asymptotic maximum drag reduction, of order 80%, the friction factors in the riblet-walled pipes were identical to the smooth case for but departed from the smooth asymptote in the direction of lesser drag reduction for . Here, a negative interaction is observed.
3.7. Joint Action of Riblets and Air Microbubbles
The expectation that this combination is promising is based on the fact that riblets and air microbubbles act on very different structures of turbulence. However, neither experimental nor theoretical investigations have been carried out yet.
3.8. Joint Action of Compliant Coating, Air Microbubbles, and Polymeric Additives
The first experiment is described by Semenov et al. [37]. The friction of a floating cylindrical element (see “Experimental Conditions” here) was measured in tests with very small consumptions of air and PEO, using the one-layer compliant coating also tested by Choi et al. [43] after this experiment. The results are shown in Figure 7. Here, the positive deviation increases monotonically with increasing consumption of air and PEO, which shows the presence of an interaction of the compliant coating, air microbubbles, and polymer additives in the whole investigated region.
Note that the drag-reduction effectiveness for the joint use of compliant coating, air microbubbles, and PEO additives exceeded the sum of the individual efficiencies by as much as 11% (for ).
4. Theoretical Analysis of Interaction between Compliant Boundary and Polymer Additives
The peculiarities of drag reduction discovered for a complex of different turbulence-management methods require theoretical explanation.
Compliant coatings and polymer additives act on very different structures of near-wall turbulence, so from this point of view the two drag-reduction methods are independent.
However, another factor of interaction between a compliant boundary and polymer additives is a possible reason for the observed contradictions between the experimental data and the initial prognosis: a change of the operating conditions of one method by the other method of drag reduction.
4.1. The Considered Influence of the Viscoelastic Boundary on the Turbulent Diffusion of Polymer Additives
One possible factor of interaction between the two considered turbulence-management methods is the action of the compliant boundary on mass transfer in the near-wall region. Here, an integral approach was used. The calculation analysis was carried out on the basis of the approximate model [26] for a flat plate analogous to the construction scheme tested in the quoted experiments [32–35] and described here in Section “Experimental Conditions”.
It is supposed that the slot injection of PEO solutions at satisfies the conditions of pulseless injection of polymeric additives into the near-wall flow [25]. Here, a constant efficiency of drag variation by the compliant coating (independent of the polymer additives in the flow) is considered from to , where is the body length. For this part of the body, it was calculated
The local friction reduction by PEO additives is determined according to the formula derived in [26]
The near-wall concentration of PEO may be determined according to the experimental data of Fabula and Burns [44] as
The thickness of the turbulent boundary layer is determined as where , is the kinematic viscosity coefficient of water, for and . Here, the existence of a laminar boundary layer from to is assumed. At the point of transition from laminar to turbulent flow (at ), the condition of continuity of the momentum thickness is imposed; on this basis, the initial thickness of the turbulent boundary layer at is determined. Here, a power-law velocity profile with exponent 1/11 was adopted.
So, the friction coefficient (without polymer injection) is calculated according to Falkner's formula [45]
The system of (8), (9), and (10) is solved for a given molecular mass , dimensionless coefficient of PEO consumption , and Reynolds number . After its solution, the drag variation (for ) and the drag reduction (for ) are calculated according to (7). On the basis of these calculations, the deviation of the drag reduction for the joint use of compliant surface and polymer additives from the sum of the efficiencies for the separate actions is determined.
The calculations carried out show that a decrease (increase) of mass transfer due to the viscoelastic coating decreases (increases) the polymer consumption only slightly. So, it is unlikely to be the main factor of the interaction between these two turbulence-management methods. However, this effect can and should be taken into account in future investigations and accurate analyses.
One example is shown in Figure 8. We see that in both considered cases ( and ), the calculated deviations (points) differ only insignificantly from the initial prognosis (lines).
4.2. The Interference Action of Viscoelastic Boundary on Near-Wall Turbulence in Flow with Polymer Additives
Here, the other factor of interaction between two methods of drag reduction (the influence of polymer additives in a flow on the interference action of viscoelastic boundary on near-wall turbulence)
is considered.
Earlier, the interference form of compliant boundary action was analysed by Semenov [15, 46] for a turbulent near-wall flow of Newtonian fluids, using the near-wall turbulence model of Sternberg [47]. The main modeling parameter (introduced by Semenov for the solution of the problem [46]) is the complex dimensionless compliance of the boundary. He determined the region of values of this parameter for drag reduction [48–50]. This theoretical model was used for the modeling and selection of one-layer compliant drag-reducing coatings, which provided up to 20% drag reduction in experiments [16, 36]. These coatings were also used in the combined experimental investigations of different turbulence-management methods described above.
Here, the interference approach is applied to a compliant boundary of a water flow with PEO additives. In this case, the earlier solution [46] of the problem of interaction between a viscoelastic boundary and the viscous sublayer of a turbulent boundary layer is applicable. Here, we take into account that PEO additives in a flow do not change the long-wave structures or the ratio of the wave numbers for the transverse () and main () directions.
The drag reduction by polymer additives and the changes of the velocity profile , viscosity , and wave velocity are taken into account in the calculations. It is important to note that the increase of the viscous sublayer thickness by polymer additives enlarges the region of permissible use of the linear theory near a wall.
The complex compliance of the boundary (the modeling parameter) is characterised by the amplitude and phase of the boundary displacement relative to the turbulent pressure fluctuation. This parameter must be determined for the frequency band of the main production of turbulence. The increase of the permissible amplitudes of viscoelastic boundary oscillations follows the increase of the viscous sublayer thickness.
The obtained solution [46] shows the restriction of the phase region for positive action of a viscoelastic boundary (i.e., for drag reduction). This positive action is connected with the decrease of near-wall turbulence production. For a fixed frequency (, where is the cyclic frequency), the change of the turbulence energy production should be . The index “” corresponds to a compliant boundary. The interference action of a compliant boundary at a fixed frequency is neutral if this integral equals zero. According to the near-wall turbulence model of Sternberg [47], the calculated viscous sublayer thickness is connected with the fluctuation frequency as .
For the neutral action variant, the mean velocity profile is written according to the experimental data for a hard wall.
The improved interference theory (presented by Semenov and Semenova [29]) was used for the first calculations of the joint action of a compliant boundary and polymer additives.
Neutral phase-frequency lines (calculated according to condition (12)) bound from below the region of for positive action of the compliant boundary (). One example for is shown in Figure 9 (for two variants of the abscissa). The phase shift of the compliant boundary displacement relative to the acting fluctuating pressure is on the ordinate; the dimensionless frequency is on the abscissa. In Figure 9(a), it is made dimensionless using the real flow viscosity near the wall and the real friction velocity . In Figure 9(b), it is made dimensionless using the kinematic viscosity of water and the friction velocity without drag reduction, in order to compare the different influences of drag-reducing polymer additives under identical water-flow conditions.
We see that injected polymer additives extend the phase-frequency region of positive action (PFRPA) of the compliant boundary. This extension of the PFRPA is maximal at .
The injection of drag-reducing polymeric additives into a flow displaces the PFRPA to the left, which can even change the sign of the compliant boundary's action (from “+” to “−” and vice versa).
We see that from the right branch of the neutral line is displaced distinctly to the left. So, the minimum velocity at which drag reduction by a compliant coating is possible must increase with increasing individual efficiency of the drag-reducing polymeric additives; for example, it must increase by a factor of two at .
This explains the drag-reduction peculiarities discovered in the experiments [32–35, 37] on the joint use of compliant coatings and polymer additives.
The theoretical approach used does not yet permit a quantitative comparison; this is a problem for future investigations.
5. Conclusion
In summary, this exposition shows promising prospects for the further study of turbulence management by the joint use of compliant coatings, riblets, polymer additives, and microbubbles.
References
1. Y. G. Aleyev, Nekton, The Hague, 1977.
2. D. M. Bushnell and K. J. Moore, “Drag reduction in nature,” Annual Review of Fluid Mechanics, vol. 23, no. 1, pp. 65–79, 1991.
3. J. Gray, “Studies in animal locomotion. The propulsive powers of the Dolphin,” The Journal of Experimental Biology, vol. 13, pp. 192–199, 1936.
4. A. H. Woodcock, “The swimming of dolphins,” Nature, vol. 161, no. 4094, p. 602, 1948.
5. H. Focke, “Ueber die ursachen der hohen schwimmgeschwin-digkeiten der delphine,” Zeitschrift für Flugwissenschaften und Weltraumforschung, vol. 13, no. 2, pp. 54–61, 1965.
6. B. N. Semenov, “On the existence of the hydrodynamic phenomenon of Dolphins (Tursiops Tursio Ponticus),” Bionika, no. 3, pp. 54–61, 1969.
7. T. E. Alekseeva and B. N. Semenov, “On the determination of the hydrodynamic drag of Dolphins,” Journal of Applied Mechanics and Technical Physics, no. 2, pp. 160–164, 1971.
8. T. Y. Wu and A. T. Chwang, “Extraction of flow energy by fish and birds in a wavy stream,” in Swimming and Flying in Nature, pp. 687–702, Plenum Press, New York, NY, USA, 1975.
9. A. D. Young, “The calculation of the total and skin friction drags of bodies of revolution at 0° incidence,” Tech. Rep. RM 1947, ARC, 1939.
10. H. Hertel, Structur—Form—Bewegung, Krauskopf, Mainz, Germany, 1963.
11. M. O. Kramer, “The Dolphin’s secret,” New Scientist, vol. 7, pp. 1118–1120, 1960.
12. M. O. Kramer, “Boundary layer stabilization by distributed damping,” Journal of the American Society of Naval Engineers, vol. 72, no. 1, pp. 25–33, 1960.
13. M. O. Kramer, “Boundary layer stabilization by distributed damping,” Naval Engineers Journal, vol. 74, no. 2, pp. 341–348, 1962.
14. B. N. Semenov, “The study of Dolphins as low-drag bodies (e.g., Tursiops Tursio Ponticus),” in Proceedings of the 4th International Congress of the Society for Technical Biology and Bionics,
Munich, Germany, 1998.
15. B. N. Semenov, “On conditions of modelling and choice of viscoelastic coatings for drag reduction,” in Recent Developments in Turbulence Management, pp. 241–262, Kluwer Academic Publishers, 1991.
16. V. M. Kulik, I. S. Poguda, and B. N. Semenov, “Experimental investigation of one-layer viscoelastic coating action on turbulent friction and wall pressure pulsations,” in Recent Developments in
Turbulence Management, pp. 236–289, Kluwer Academic Publishers, 1991.
17. J. O. Hinze, Turbulence, McGraw-Hill, 1959.
18. E. M. Greshilov, A. M. Evtushenko, L. M. Lyamshev, and N. L. Shirokova, “Some peculiarities of an action of polymeric on near-wall turbulence,” Journal of Engineering Physics, vol. 25, pp.
999–1004, 1973.
19. K.-S. Choi, “Turbulent drag reduction strategies,” in Emerging Techniques in Drag Reduction, pp. 77–98, MEP, London, UK, 1996.
20. V. G. Bogdevich, N. V. Malykh, A. G. Malyuga, and I. A. Ogorodnikov, “Acoustic properties of wall bubble layer in water of great void fraction,” in Hydrodynamics and Acoustics of Near-Wall and
Free Flows, pp. 77–107, Institute of Thermophysics, Novosibirsk, Russia, 1981.
21. B. J. Cantwell, “Organized motion in turbulent flow,” Annual Review of Fluid Mechanics, pp. 457–515, 1981.
22. V. M. Kulik, I. S. Poguda, and B. N. Semenov, “The action of viscoelastic coatings on the friction reduction for flows of water and polymeric solutions,” in Proceedings of the 12th Short Course
for Pipe-Line Problems, pp. 42–43, Upha, 1989.
23. A. G. Malyuga, V. I. Mikuta, and G. Gerchev, “The influence of near-wall bubble layer on screw propeller-induced effects on the wall,” in Proceedings of the 17th Session of BSHC, vol. 2, pp. 42/
1–42/12, Varna, Bulgaria, 1988.
24. N. S. Berman, “Drag reduction by polymers,” Annual Review of Fluid Mechanics, vol. 10, pp. 47–64, 1978.
25. B. N. Semenov, “The polymeric solution injection into flow for drag reduction,” Siberian Physical-Technical Journal, no. 4, pp. 99–108, 1991.
26. B. N. Semenov, “The pulseless injection of polymeric additives into near-wall flow and perspectives of drag deduction,” in Recent Developments in Turbulence Management, pp. 293–308, Kluwer
Academic Publisher, 1991.
27. A. G. Malyuga, V. I. Mikuta, and O. I. Stoyanovsky, “Turbulent drag reduction at flow of polymer solutions aerated by air bubbles,” in Near-Wall and Free Turbulent Flows, pp. 121–130, Institute
of Thermophysics, Novosibirsk, Russia, 1988.
28. A. Malyuga, V. Mikuta, A. Nenashev, S. Kravchenko, and O. Stoyanovsky, “Local drag reduction at flow of polymer solutions aerated by air bubbles,” in Proceedings of the 6th National Congress, pp.
74/1–74/6, Varna, Bulgaria, 1989.
29. B. N. Semenov and A. V. Semenova, “Recent developments in interference analysis of compliant boundary action on near-wall turbulence,” in Proceedings of the International Symposium on Sea Water
Drag Reduction, pp. 189–195, Newport, UK, 1998.
30. B. N. Semenov and A. V. Semenova, “On interference action of a compliant boundary on near-wall turbulence,” Thermophysics and Aeromechanics, vol. 9, no. 3, pp. 393–403, 2002.
31. B. N. Semenov and A. V. Semenova, “Joint effect of a compliant boundary and polymer additives on the near-wall turbulent flow,” Thermophysics and Aeromechanics, vol. 7, no. 7, pp. 187–195, 2000.
32. B. N. Semenov, V. M. Kulik, V. A. Lopyrev, B. P. Mironov, I. S. Poguda, and T. I. Yushmanova, “The combined effect of small quantities of polymeric additives and pliability of the wall on
friction in turbulent flow,” Fluid Mechanics. Soviet Research, vol. 14, no. 1, pp. 143–149, 1985.
33. B. N. Semenov, V. M. Kulik, V. A. Lopyrev, B. P. Mironov, I. S. Poguda, and T. I. Yushmanova, “Towards the influence of flow polymer additives and surface compliance on wall-turbulence,” in
Proceedings of the 5th International Congress on Theoretical and Applied Mechanics, vol. 2, pp. 371–376, Varna, Bulgaria, 1985.
34. V. M. Kulik, I. S. Poguda, B. N. Semenov, and T. I. Yushmanova, “Influence of flow velocity in combined effect of a compliant surface and polymer additives on turbulent friction,” Izvestia
Sibirskogo Otdelenia Akademii nauk SSSR, no. 15, pp. 42–46, 1987.
35. V. M. Kulik, I. S. Poguda, B. N. Semenov, and T. I. Yushmanova, “Effect of flow velocity on the synergistic decrease of turbulent friction by a compliant wall and a polymeric additive,” Soviet
Journal of Applied Physics, no. 1, pp. 49–54, 1988.
36. V. M. Kulik, I. S. Poguda, and B. N. Semenov, “Experimental study of the effect of single-layer viscoelastic coatings on turbulent friction and pressure pulsation on a wall,” Journal of
Engineering Physics, vol. 47, no. 2, pp. 878–883, 1984.
37. B. N. Semenov, A. I. Amirov, V. M. Kulik, A. G. Malyuga, and I. S. Poguda, “Turbulent drag reduction by a combined use of compliant coatings, gas microbubbles and polymer additives,”
Thermophysics and Aeromechanics, vol. 6, no. 2, pp. 211–219, 1999.
38. S. S. Kutateladze and A. I. Leontyev, Heat and Mass Transfer and Friction in Turbulent Boundary Layers, Energiya, Moscow, Russia, 1972.
39. L. W. Reidy and G. W. Anderson, “Drag reduction for external and internal boundary layer using riblets and polymers,” AIAA Paper, 1988, N138.
40. K. S. Choi, G. E. Gadd, H. H. Pearcey, A. M. Savill, and S. Svensson, “Tests of drag-reducing polymer coated on a riblet surface,” Applied Scientific Research, vol. 46, no. 3, pp. 209–216, 1989.
41. E. Koury and P. S. Virk, “Drag reduction by polymer solutions in riblet-lined pipes,” in Proceedings of the 8th European Drag Reduction Working Meeting, Lausanne, Switzerland, 1993.
42. E. Koury and P. S. Virk, “Maximum drag reduction by polymer solutions in riblet-lined pipes,” in Proceedings of the 9th European Drag Reduction Meeting, Naples, Italy, 1995.
43. K. S. Choi, X. Yang, B. R. Clayton et al., “Turbulent drag reduction using compliant surfaces,” Proceedings of the Royal Society A, vol. 453, no. 1965, pp. 2229–2240, 1997.
44. A. G. Fabula and T. G. Burns, Dilution in a Turbulent Boundary Layer with Polymeric Friction Reduction, Naval Undersea Research and Development Center, Pasadena, Calif, USA, 1970.
45. Y. I. Voitkunsky, R. Y. Pershitz, and I. A. Titov, Handbook on Theory of a Ship, Sudpromgiz, Leningrad, 1960.
46. B. N. Semenov, “Interaction of an elastic boundary with a viscous sublayer of a turbulent boundary layer,” Journal of Applied Mechanics and Technical Physics, no. 3, pp. 58–62, 1971.
47. J. Sternberg, “A theory for viscous sublayer of a turbulent flow,” The Journal of Fluid Mechanics, vol. 13, no. 2, pp. 241–271, 1962.
48. B. N. Semenov, “Analysis of deformation characteristics of viscoelastic coatings,” in Hydrodynamics and Acoustics of Near-Wall and Free Flows, pp. 57–76, Nauka, Novosibirsk, Russia, 1981.
49. B. N. Semenov, “On the properties of viscoelastic boundary for turbulent friction reduction,” Siberian Physics-Technical Journal, no. 1, pp. 63–73, 1993.
50. B. N. Semenov, “Analysis of four types of viscoelastic coating for turbulent drag reduction,” in Emerging Techniques in Drag Reduction, pp. 187–206, MEP, London, UK, 1996. | {"url":"http://www.hindawi.com/journals/ame/2011/743975/","timestamp":"2014-04-16T05:50:26Z","content_type":null,"content_length":"272264","record_id":"<urn:uuid:6588e755-e4f1-47b1-9b3c-6e6811a92a3f>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00190-ip-10-147-4-33.ec2.internal.warc.gz"} |
Homework Help
Posted by roba on Saturday, November 14, 2009 at 4:58pm.
The recursive algorithm of the 4th section of the chapter "Computational Geometry" employs a trick of presorting, in which we maintain two arrays X and Y of the input points P sorted on the x- and y-coordinate, respectively. The algorithm starts with sorting all the input points in Q in time O(n log n). Assuming that a subset P ⊆ Q of input points together with arrays X and Y is given, the set P is partitioned into PL and PR, and the corresponding arrays XL, XR, YL, and YR are all obtained in time O(|P|). To see this, observe that the median xm of the x-coordinates of the points in P is the x-coordinate of the point in the middle of X. To obtain YL and YR, scan array Y and move a point (x, y) with x < xm to YL and a point (x, y) with x ≥ xm to YR.
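The O(|P|) split described above can be sketched as follows (a Python illustration; using set membership instead of the x < xm comparison is my implementation choice, so that points tied on the median x go to the same side as in the index split):

```python
def split_presorted(X, Y):
    """Split a point set around its median x-coordinate in O(|P|).

    X: the points sorted by x-coordinate.
    Y: the same points sorted by y-coordinate.
    Returns (XL, XR, YL, YR), each still sorted.
    """
    mid = len(X) // 2
    XL, XR = X[:mid], X[mid:]
    left = set(XL)                       # assumes distinct, hashable points
    YL = [p for p in Y if p in left]     # a stable scan preserves y-order
    YR = [p for p in Y if p not in left]
    return XL, XR, YL, YR
```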
Consider a modification of the recursive algorithm in which presorting is not applied. Instead, we sort in each recursive call, applying an algorithm that sorts by comparisons. Each time a given subset P needs to be partitioned into PL and PR, the points in P are sorted on the x-coordinate. In the "combine" part, the set of points in the vertical strip of width 2δ is sorted on the y-coordinates.
Find a tight asymptotic estimate on the running time of this algorithm as a function of the size n of the input set Q.
Hints: Find a recurrence for the running time. It is different from the recurrence T(n) = 2T(n/2) + O(n) describing the version with presorting. Solve the recurrence. To this end, you might apply the approach used to prove the "master theorem" of the chapter "Divide-and-Conquer."
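For orientation only, here is a sketch of how a recurrence with an O(n log n) per-call sorting cost unrolls level by level (illustrative, not the worked homework answer):

```latex
T(n) = 2\,T(n/2) + O(n \log n)
\;\Longrightarrow\;
T(n) \le \sum_{i=0}^{\log_2 n - 1} 2^i \cdot c\,\frac{n}{2^i}\log_2\!\frac{n}{2^i}
       = c\,n \sum_{i=0}^{\log_2 n - 1} (\log_2 n - i)
       = \Theta(n \log^2 n).
```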
Diameter of a convex polygon.
There is given a convex polygon P, represented as a sequence of consecutive points
(p0, p1, ..., pn-1)
in the sense that the polygon P consists of segments pi, pi+1, where the addition of
subscripts is modulo n.
1) Give an efficient algorithm to find a pair of points in P at the maximum distance from each other.
A readable description of the underlying idea of the algorithm in words, possibly illus-
trated with simple drawings, will be better than a tight pseudocode.
2) Argue why the algorithm is correct.
The correctness of the algorithm is to rely on the convexity of the polygon. Point out in your correctness argument where you resort to convexity.
3) Estimate the running time of the algorithm.
The goal is to design an algorithm of the asymptotically optimal running time.
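One standard idea for problem 1) is the "rotating calipers" technique: walk an edge pointer and a farthest-vertex pointer around the hull together, so each advances at most n times. A minimal sketch (assuming the vertices are given in counterclockwise order and are distinct; this is an illustration, not the required written answer):

```python
def hull_diameter(poly):
    """Diameter of a convex polygon given as CCW vertices, in O(n)."""
    n = len(poly)

    def cross(o, a, b):  # twice the signed area of triangle o-a-b
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    def dist2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

    if n < 2:
        return 0.0
    if n == 2:
        return dist2(poly[0], poly[1]) ** 0.5
    best, j = 0, 1
    for i in range(n):
        ni = (i + 1) % n
        # advance j while the next vertex is farther from edge (i, i+1);
        # convexity makes this distance unimodal, so j never backtracks
        while (cross(poly[i], poly[ni], poly[(j + 1) % n]) >
               cross(poly[i], poly[ni], poly[j])):
            j = (j + 1) % n
        best = max(best, dist2(poly[i], poly[j]), dist2(poly[ni], poly[j]))
    return best ** 0.5
```

The O(n²) brute force over all pairs is a useful correctness check for small inputs.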
A percentile is the value of a variable below which a certain percent of observations fall. So the 20th percentile is the value (or score) below which 20 percent of the observations may be found. The
term percentile and the related term percentile rank are often used in descriptive statistics as well as in the reporting of scores from norm-referenced tests.
The 25th percentile is also known as the first quartile; the 50th percentile as the median.
The 95th percentile
The 95th percentile is a mathematical calculation widely used to evaluate the regular, sustained utilization of your Internet connection. The reason this statistic is so useful in measuring data
throughput is that it gives a very accurate picture of the cost of the bandwidth. The 95th percentile says that 95% of the time, your usage is below this amount. Just the same, the remaining 5% of
the time, your usage is above that amount. The 95th percentile is a good number to judge how much bandwidth you are actually utilizing and helps filter out usage spikes.
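As an illustration of how a carrier might apply this (a sketch; real billing systems differ in sample interval and rounding conventions):

```python
import math

def burstable_rate(samples_mbps):
    """95th-percentile billing: sort the usage samples, discard the
    top 5% of them, and bill at the highest remaining sample."""
    s = sorted(samples_mbps)
    idx = math.ceil(0.95 * len(s)) - 1   # last sample that is kept
    return s[idx]
```

With 100 five-minute samples, the five highest spikes are ignored and the sixth-highest value sets the billed rate.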
There is no standard definition of percentile;^[1]^[2] however, all definitions yield similar results when the number of observations is large. One definition, usually given in unsophisticated
texts, is that the $p$-th percentile of $N$ ordered values is obtained by first calculating the rank $n = \frac{N}{100}\,p+\frac{1}{2}$, rounding to the nearest integer, and taking the value that
corresponds to that rank.
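That nearest-rank definition can be written directly (a sketch; ties at exactly .5 are rounded up here, a choice the definition leaves open):

```python
import math

def percentile_nearest_rank(values, p):
    """Percentile via the rank n = (N/100)*p + 1/2, rounded to nearest."""
    v = sorted(values)
    N = len(v)
    n = math.floor(N * p / 100 + 0.5 + 0.5)  # round (N/100)p + 1/2 to nearest
    n = min(max(n, 1), N)                    # clamp to a valid rank
    return v[n - 1]
```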
An alternative method, used in many applications, is to use linear interpolation between the two nearest ranks instead of rounding. Specifically, if we have $N$ values $v_1$, $v_2$, $v_3$,...,$v_N$ ,
ranked from least to greatest, define the percentile corresponding to the $n$-th value as $p_n=\frac{100}{N}(n-\frac{1}{2}).$ In this way, for example, if $N=5$ the percentile corresponding to the
third value is $p_3=\frac{100}{5}(3-\frac{1}{2})=50.$ Suppose we now want to calculate the value $v$ corresponding to a percentile $p$. If $p<p_1$ or $p>p_N$, we take $v=v_1$ or $v=v_N$ respectively.
Otherwise, we find an integer $k$ such that $p_k\le p \le p_{k+1}$, and take $v=v_k+\frac{N}{100}(p-p_k)(v_{k+1}-v_k).$[3] When $p=50$, the formula gives the median. When $N$ is even and $p=25$,
the formula gives the median of the first $\frac{N}{2}$ values.
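A minimal Python sketch of this interpolation scheme, implementing exactly the formulas above (not an optimized library routine):

```python
def percentile(values, p):
    """Linear interpolation between closest ranks: the n-th of N sorted
    values is assigned percentile p_n = (100/N) * (n - 1/2)."""
    v = sorted(values)
    n = len(v)
    ranks = [(100.0 / n) * (k - 0.5) for k in range(1, n + 1)]
    if p <= ranks[0]:
        return v[0]
    if p >= ranks[-1]:
        return v[-1]
    # Find k with p_k <= p <= p_{k+1}, then
    # v = v_k + (N/100) * (p - p_k) * (v_{k+1} - v_k).
    for k in range(n - 1):
        if ranks[k] <= p <= ranks[k + 1]:
            return v[k] + (n / 100.0) * (p - ranks[k]) * (v[k + 1] - v[k])

print(percentile([15, 20, 35, 40, 50], 50))  # 35.0 (the median)
print(percentile([15, 20, 35, 40, 50], 40))  # 27.5
```

For $N=5$ the ranks carry percentiles 10, 30, 50, 70, 90, so the third value is the 50th percentile, as in the worked example above.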
Related to the percentile function is the weighted percentile, in which percentages of the total weight are counted rather than of the total number of observations. Most spreadsheet applications have no
standard function for a weighted percentile. One method for weighted percentiles extends the method described above. Suppose we have positive weights $w_1$, $w_2$, $w_3$,...,$w_N$, associated
respectively with our $N$ sample values. Let $S_n=\sum_{k=1}^{n}w_k$ be the $n$-th partial sum of these weights. Then the formulas above are generalized by taking $p_n=\frac{100}{S_N}\left(S_n-\frac{w_n}{2}\right)$ and $v=v_k+\frac{p-p_k}{p_{k+1}-p_k}(v_{k+1}-v_k).$
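A sketch of the weighted version, following the generalized formulas above; with all weights equal it reduces to the unweighted method:

```python
def weighted_percentile(values, weights, p):
    """Weighted percentile: the n-th sorted value gets
    p_n = (100 / S_N) * (S_n - w_n / 2), with S_n the partial weight sums."""
    pairs = sorted(zip(values, weights))
    v = [val for val, _ in pairs]
    w = [wt for _, wt in pairs]
    total = float(sum(w))
    ranks, running = [], 0.0
    for wn in w:
        running += wn
        ranks.append(100.0 * (running - wn / 2.0) / total)
    if p <= ranks[0]:
        return v[0]
    if p >= ranks[-1]:
        return v[-1]
    # Interpolate between the bracketing weighted ranks.
    for k in range(len(v) - 1):
        if ranks[k] <= p <= ranks[k + 1]:
            return v[k] + (p - ranks[k]) / (ranks[k + 1] - ranks[k]) * (v[k + 1] - v[k])

print(weighted_percentile([15, 20, 35, 40, 50], [1, 1, 1, 1, 1], 50))  # 35.0
print(weighted_percentile([1, 2], [1, 3], 50))                         # 1.75
```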
Alternative methods
Many software packages, such as Excel, use the following method to estimate the value, $v_p$, of the $p$-th percentile of an ascending ordered dataset containing $N$ elements with values $v_1, v_2, \ldots, v_N$:
$n = \frac{p}{100}\,({N}-1)+1$
$n$ is then split into its integer component, $k$ and decimal component, $d$, such that $n = k + d$
If $k = 1$, then the value for that percentile, $v_p$, is the first member of the ordered dataset, $v_1$.
If $k = N$, then the value for that percentile, $v_p$, is the $N^{th}$ member of the ordered dataset $v_N$.
Else $(1< k < N)$ then $v_p=v_k+d(v_{k+1}-v_k).$
An alternative method is as above, but with $n$ calculated as $n = \frac{p}{100}\,({N}+1)$
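The first rule above can be sketched as follows (it matches Excel's PERCENTILE function and, to my knowledge, numpy.percentile's default linear interpolation):

```python
def excel_percentile(values, p):
    """Estimate the p-th percentile with the Excel-style rule:
    fractional 1-based rank n = (p/100) * (N - 1) + 1, then interpolate."""
    v = sorted(values)
    N = len(v)
    n = (p / 100.0) * (N - 1) + 1
    k, d = int(n), n - int(n)       # integer and decimal parts of the rank
    if k == N:                      # p = 100: the largest value
        return v[-1]
    return v[k - 1] + d * (v[k] - v[k - 1])

print(excel_percentile([1, 2, 3, 4], 50))  # 2.5
print(excel_percentile([1, 2, 3, 4], 25))  # 1.75
```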
Relation between percentile, decile and quartile
• P[25] = Q[1]
• P[50] = D[5] = Q[2] = median value
• P[75] = Q[3]
• P[100] = D[10] = Q[4]
• P[10] = D[1]
• P[20] = D[2]
• P[30] = D[3]
• P[40] = D[4]
• P[60] = D[6]
• P[70] = D[7]
• P[80] = D[8]
• P[90] = D[9]
Note: one quartile spans 25 percentiles, while one decile spans 10 percentiles.
When ISPs bill "Burstable" Internet bandwidth, the 95th or 98th percentile usually cuts off the top 5% or 2% of bandwidth peaks in each month, and then bills at the nearest rate. In this way
infrequent peaks are ignored, and the customer is charged in a fairer way.
Physicians will often use infant and children's weight and height percentile as a gauge of relative health.
Percentiles are often represented graphically, using a "normal curve". A normal curve is always divided in the same respective manner. At the peak, in the center, stands the mean of the
distribution being graphed. On each of the right and left sides, the graph is divided into 3 equal parts, labelled 1, 2, and 3 to the right and -1, -2, -3 to the left. The important thing to
remember is that each of these standard deviation units represents a fixed percentile. In other words, every standard deviation unit on the axis, from -3 to +3, has a
specific percentile that is always paired with it, regardless of the data or values in the distribution. So, what are the pairs of percentiles and standard deviation units? -3 = 0.2nd percentile;
-2 = 2.5th percentile; -1 = 16th percentile; 0 = 50th percentile (also the mean of the distribution, as previously stated); +1 = 84th percentile; +2 = 97.5th percentile; +3 = 99.8th percentile.
Percentage also becomes a factor in measuring a distribution graphically. On any normal curve, 99.7% of the data lies between -3 and +3, 95% between -2 and +2, 68% between -1 and +1, 34%
between 0 and -1 or 0 and +1, 13.5% between -1 and -2 or +1 and +2, and 2.35% between -2 and -3 or +2 and +3. The remaining 0.3% of the data lies between -3 and negative infinity or +3 and positive
infinity.
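These pairings can be checked against the standard normal cumulative distribution function; a sketch using only the standard library (note the figures quoted above are rounded: ±2σ is exactly the 2.28th/97.72nd percentile, and ±3σ the 0.13th/99.87th):

```python
import math

def std_normal_percentile(z):
    """Percentile paired with z standard deviations on a normal curve,
    using the standard normal CDF: Phi(z) = (1 + erf(z / sqrt(2))) / 2."""
    return 100.0 * 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

for z in range(-3, 4):
    print("%+d sigma -> %.2fth percentile" % (z, std_normal_percentile(z)))
```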
1. Lane, David. Percentiles. URL accessed on 2007-09-15.
2. Pottel, Hans. Statistical flaws in Excel. URL accessed on 2006-03-22.
3. Matlab Statistics Toolbox - Percentiles. URL accessed on 2006-09-15.
Normal Probability Distribution [Archive] - Free Math Help Forum
04-06-2011, 04:12 PM
I have a problem that I have been working on for two days and I am completely confused! Hopefully someone will be able to help me!
Members of a management team suggested order quantities of 15,000, 18,000, 24,000, or 28,000 units. The wide range of order quantities suggested indicates considerable disagreement concerning the
market potential. The product management team asks you for an analysis of the stock-out probabilities for various order quantities, an estimate of the profit potential, and help making an order
quantity recommendation. Specialty (the company name) expects to sell Weather Teddy (the product) for $24 based on a cost of $16 per unit. If inventory remains after the holiday season, Specialty
will sell all surplus inventory for $5 per unit. After reviewing the sales history of similar products, Specialty's senior sales forecaster predicted an expected demand of 20,000 units with a 0.90
probability that demand would be between 10,000 units and 30,000 units.
1. Use the sales forecaster's prediction to describe a normal probability distribution that can be used to approximate the demand distribution. Sketch the distribution and show its mean and standard deviation.
2. Compute the probability of a stock-out for the order quantities suggested by members of the management team.
3. Compute the projected profit for the order quantities suggested by the management team under three scenarios: worst case in which sales = 10,000 units, most likely case in which sales = 20,000
units, and best case in which sales = 30,000 units.
4. One of Specialty's managers felt that the profit potential was so great that the order quantity should have a 70% chance of meeting demand and only a 30% chance of any stock-outs. What quantity
would be ordered under this policy, and what is the projected profit under the three sales scenarios?
5. Provide your own recommendation for an order quantity and note the associated profit projections. Provide a rationale for your recommendation.
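A sketch of how parts 1 and 2 can be set up, assuming the 0.90 probability is symmetric about the 20,000-unit mean (so the 10,000-unit half-width corresponds to z = 1.645):

```python
import math

def normal_cdf(x, mu, sigma):
    """Cumulative distribution of a normal(mu, sigma) variable."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

mu = 20000.0
# P(10000 <= demand <= 30000) = 0.90  =>  10000 = 1.645 * sigma
sigma = 10000.0 / 1.645
print("sigma is about %.0f units" % sigma)

# Part 2: stock-out probability P(demand > Q) for each suggested order size.
for q in (15000, 18000, 24000, 28000):
    print("order %d: P(stock-out) = %.3f" % (q, 1.0 - normal_cdf(q, mu, sigma)))
```

This gives sigma near 6,080 units; the smallest order (15,000) runs out of stock roughly 79% of the time, while the largest (28,000) does so only about 9% of the time.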
Thank you so much to whoever could offer me some help with this nightmare of a problem!
This version differs from the simple one in providing an extra argument to the sampling action that will be globally distributed to every node and can be used to update the state. For instance, it
can hold the time step between the two samplings, but it could also encode all the external input to the system.
The interface of this module differs from the old Elerea in the following ways:
• the delta time argument is generalised to an arbitrary type, so it is possible to do without external altogether in case someone wants to do so;
• there is no sampler any more, it is substituted by join, as signals are monads;
• generator has been conceptually simplified, so it's a more basic primitive now;
• there is no automatic delay in order to preserve semantic soundness (e.g. the monad laws for signals);
• all signals are aged regardless of whether they are sampled (i.e. their behaviour doesn't depend on the context any more);
• the user needs to cache the results of applicative operations to be reused in multiple places explicitly using the memo combinator;
• the input can be retrieved as an explicit signal within the SignalGen monad, and also overridden for parts of the network.
data Signal a Source
A signal can be thought of as a function of type Nat -> a, and its Monad instance agrees with that intuition. Internally, it is represented by a sampling computation.
Monad Signal
Functor Signal
Applicative Signal
Bounded t => Bounded (Signal t)
Enum t => Enum (Signal t)
Eq (Signal a) Equality test is impossible.
Floating t => Floating (Signal t)
Fractional t => Fractional (Signal t)
Integral t => Integral (Signal t)
Num t => Num (Signal t)
Ord t => Ord (Signal t)
Real t => Real (Signal t)
Show (Signal a) The Show instance is only defined for the sake of Num...
data SignalGen p a Source
A signal generator is the only source of stateful signals. Internally, it computes a signal structure and adds the new variables to an existing update pool.
Monad (SignalGen p)
Functor (SignalGen p)
MonadFix (SignalGen p)
Applicative (SignalGen p)
:: SignalGen p (Signal a) the generator of the top-level signal
-> IO (p -> IO a) the computation to sample the signal
Embedding a signal into an IO environment. Repeated calls to the computation returned cause the whole network to be updated, and the current sample of the top-level signal is produced as a result.
The computation accepts a global parameter that will be distributed to all signals. For instance, this can be the time step, if we want to model continuous-time signals.
:: a initial value
-> IO (Signal a, a -> IO ()) the signal and an IO function to feed it
A signal that can be directly fed through the sink function returned. This can be used to attach the network to the outer world. Note that this is optional, as all the input of the network can be fed
in through the global parameter, although that is not really convenient for many signals.
:: IO (SignalGen p (Signal [a]), a -> IO ()) a generator for the event signal and the associated sink
An event-like signal that can be fed through the sink function returned. The signal carries a list of values fed in since the last sampling, i.e. it is constantly [] if the sink is never invoked. The
order of elements is reversed, so the last value passed to the sink is the head of the list. Note that unlike external this function only returns a generator to be used within the expression
constructing the top-level stream, and this generator can only be used once.
:: a initial output
-> Signal a the signal to delay
-> SignalGen p (Signal a)
The delay transfer function emits the value of a signal from the previous superstep, starting with the filler value given in the first argument.
:: Signal (SignalGen p a) a stream of generators to potentially run
-> SignalGen p (Signal a)
A reactive signal that takes the value to output from a monad carried by its input. It is possible to create new signals in the monad.
:: Signal a signal to memoise
-> SignalGen p (Signal a)
Memoising combinator. It can be used to cache results of applicative combinators in case they are used in several places. Other than that, it is equivalent to return.
:: Signal Bool the boolean input signal
-> SignalGen p (Signal Bool) a one-shot signal true only the first time the input is true
A signal that is true exactly once: the first time the input signal is true. Afterwards, it is constantly false, and it holds no reference to the input signal.
:: a initial state
-> (p -> a -> a) state transformation
-> SignalGen p (Signal a)
A pure stateful signal. The initial state is the first output, and every following output is calculated from the previous one and the value of the global parameter (which might have been overridden
by embed). It is equivalent to the following expression:
stateful x0 f = mfix $ \sig -> input >>= \i -> delay x0 (f <$> i <*> sig)
:: a initial internal state
-> (p -> t -> a -> a) state updater function
-> Signal t input signal
-> SignalGen p (Signal a)
A stateful transfer function. The current input affects the current output, i.e. the initial state given in the first argument is considered to appear before the first output, and can never be
observed. Every output is derived from the current value of the input signal, the global parameter (which might have been overridden by embed) and the previous output. It is equivalent to the
following expression:
transfer x0 f s = mfix $ \sig -> input >>= \i -> liftA3 f i s <$> delay x0 sig
Linear Transformation to Block-wise Stack Matrix
As I wrote in my original post, for a (1 x 2) input, the elegant solution is to simply use the matrix transpose. However, it is not clear how to generalise this for larger block structures (e.g. a
solution that perhaps involves both matrix multiplications and transposes).
Okay; same argument as above, but now [itex]x, y, a, b, c, d[/itex] are block matrices with the required dimensions. It's still just as impossible (except that in my previous post, I clumsily
transposed the order of the products [itex]xc[/itex] and [itex]yd[/itex]).
Doing it right, you see that we need to simultaneously satisfy two equations:
• [itex] x = a(xc + yd)[/itex]
• [itex] y = b(xc + yd)[/itex]
We're only assuming that multiplication is associative here (not necessarily commutative), which is true for matrix multiplication, so it doesn't matter if these guys are scalars or block matrices.
The first requires [itex]c \ne 0[/itex] and [itex]d=0[/itex], while the second requires [itex]d \ne 0[/itex] and [itex]c=0[/itex]. Can't be done in this case; therefore, can't be done in general.
Stylish's solution works, of course, but I'm not sure if it's quite like what you had in mind. Actually, his solution reminds me of a similar thread we had recently, about
how to generate matrices that "pick out" a given
component of a matrix (along with a boneheaded mistake by yours truly).
PS On rereading your original post, I noticed you basically gave exactly the argument which I gave later (except where I pointed out it would apply to block matrices as well). Somehow I missed that
on the first go'round -- sorry! :)
Propagator
In quantum mechanics and quantum field theory, the propagator gives the probability amplitude for a particle to travel from one place to another in a given time, or to travel with a certain energy and momentum. Propagators are used to represent the contribution of virtual particles on the internal lines of Feynman diagrams. They can also be viewed as the inverse of the wave operator appropriate to the particle, and are therefore often called Green's functions.
Non-relativistic propagators
In non-relativistic quantum mechanics the propagator gives the amplitude for a particle to travel from one spatial point at one time to another spatial point at a later time. It is a Green's function
for the Schrödinger equation. This means that if a system has Hamiltonian $H$, then the appropriate propagator is a function $K(x,t;x',t')$ satisfying
$\left(H_x - i\hbar \frac{\partial}{\partial t}\right) K(x,t;x',t') = -i\hbar\,\delta(x-x')\,\delta(t-t')$
where $H_x$ denotes the Hamiltonian written in terms of the $x$ coordinates and $\delta(x-x')$ denotes the Dirac delta function.
This can also be written as
$K(x,t;x',t') = \langle x' | \hat{U}(t,t') | x \rangle$
where $\hat{U}(t,t')$ is the unitary time-evolution operator for the system, taking states at time $t'$ to states at time $t$.
Path integral in quantum mechanics
The quantum mechanical propagator may also be found by using a path integral:
$K(x,t;x',t') = \int \exp\left[\frac{i}{\hbar} \int_t^{t'} L(\dot{q},q,t)\, dt\right] D[q(t)]$
where the boundary conditions of the path integral include $q(t)=x$, $q(t')=x'$. Here $L$ denotes the Lagrangian of the system. The paths that are summed over move only forwards in time.
Using the quantum mechanical propagator
In non-relativistic quantum mechanics, the propagator lets you find the state of a system given an initial state and a time interval. The new state is given by the equation
$\psi(x,t) = \int_{-\infty}^\infty \psi(x',t')\, K(x,t;x',t')\, dx'$
If $K(x,t;x',t')$ depends only on the difference $x-x'$, this is a convolution of the initial state and the propagator.
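As a concrete non-relativistic check of this convolution formula, the sketch below evolves a Gaussian wave packet using the well-known free-particle propagator K(x,t;x',0) = (2*pi*i*t)^(-1/2) * exp(i*(x-x')^2/(2t)), in assumed units hbar = m = 1 (the free-particle propagator itself is not derived in this entry):

```python
import numpy as np

sigma, t = 1.0, 1.0                       # initial packet width; evolution time
xp = np.linspace(-10.0, 10.0, 4001)       # integration grid for x'
dx = xp[1] - xp[0]

# Initial state: normalized Gaussian psi(x', 0).
psi0 = (np.pi * sigma**2) ** -0.25 * np.exp(-xp**2 / (2.0 * sigma**2))

def K(x, t, xp):
    """Free-particle propagator K(x, t; x', 0) with hbar = m = 1."""
    return np.sqrt(1.0 / (2j * np.pi * t)) * np.exp(1j * (x - xp)**2 / (2.0 * t))

# psi(x, t) = integral dx' psi(x', 0) K(x, t; x', 0), evaluated here at x = 0.
psi_at_0 = np.sum(psi0 * K(0.0, t, xp)) * dx

# The packet spreads: analytically |psi(0,t)|^2 = 1 / sqrt(pi * (sigma^2 + t^2/sigma^2)),
# which is 1/sqrt(2*pi) ~ 0.3989 for sigma = t = 1.
print(abs(psi_at_0) ** 2)
```

The numerical integral reproduces the analytic spreading of the packet to a few decimal places, illustrating that the propagator really does carry the state forward in time.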
Relativistic propagators
In relativistic quantum mechanics and quantum field theory the propagators are Lorentz invariant. They give the amplitude for a particle to travel between two spacetime points.
Scalar propagator
In quantum field theory the theory of a free (non-interacting) scalar field is a useful and simple example which serves to illustrate the concepts needed for more complicated theories. It describes
spin zero particles. There are a number of possible propagators for free scalar field theory. We now describe the most common ones.
Position space
The position-space propagators are Green's functions for the Klein-Gordon equation. This means they are functions $G(x,y)$ which satisfy
$(\square_x^2 + m^2)\, G(x,y) = -\delta(x-y)$
(As is typical in relativistic quantum field theory calculations, we use units where the speed of light, $c$, is 1.)
We shall restrict attention to 4-dimensional Minkowski spacetime. By performing a Fourier transform we obtain
$G(x,y) = \frac{1}{(2\pi)^4} \int d^4p\, \frac{e^{-ip(x-y)}}{p^2 - m^2}$
where $p(x-y) := p_0(x^0-y^0) - \vec{p} \cdot (\vec{x}-\vec{y})$ is the 4-vector inner product. In Minkowski spacetime this expression is not uniquely defined, since there are poles in the integrand.
The different choices for how to deform the integration contour in the above expression lead to different forms for the propagator. The choice of contour is usually phrased in terms of the $p_0$ integral. The integrand has two poles at $p_0 = \pm\sqrt{\vec{p}^2 + m^2}$, so different choices of how to avoid these lead to different propagators.
Causal propagator
Retarded propagator:
A contour going clockwise over both poles gives the causal retarded propagator. This is zero if $x$ and $y$ are spacelike separated, or if $x^0 < y^0$ (i.e. if $x$ is to the past of $y$).
This choice of contour is equivalent to calculating the limit
$G_{ret}(x,y) = \lim_{\epsilon \to 0} \frac{1}{(2\pi)^4} \int d^4p\, \frac{e^{-ip(x-y)}}{(p_0+i\epsilon)^2 - \vec{p}^2 - m^2} = \begin{cases} \frac{1}{2\pi}\,\delta(\tau_{xy}^2) - \frac{m\, J_1(m\tau_{xy})}{4\pi\tau_{xy}} & \text{if } y \prec x \\ 0 & \text{otherwise} \end{cases}$
Here
$\tau_{xy} := \sqrt{(x^0-y^0)^2 - (\vec{x}-\vec{y})^2}$
is the proper time from $y$ to $x$, and $J_1$ is a Bessel function of the first kind. The expression $y \prec x$ means $y$ causally precedes $x$, which for Minkowski spacetime means
$y^0 < x^0$ and $\tau_{xy}^2 \geq 0$.
This expression can also be written in terms of the vacuum expectation value of the commutator of the free scalar field:
$G_{ret}(x,y) = i\, \langle 0 | \left[\Phi(x), \Phi(y)\right] | 0 \rangle\, \Theta(x^0-y^0)$
where
$\Theta(x) := \begin{cases} 1 & x \geq 0 \\ 0 & x < 0 \end{cases}$
is the Heaviside step function and
$[\Phi(x),\Phi(y)] := \Phi(x)\Phi(y) - \Phi(y)\Phi(x)$
is the commutator.
Advanced propagator:
A contour going anti-clockwise under both poles gives the causal advanced propagator. This is zero if $x$ and $y$ are spacelike separated, or if $x^0 > y^0$ (i.e. if $x$ is to the future of $y$).
This choice of contour is equivalent to calculating the limit
$G_{adv}(x,y) = \lim_{\epsilon \to 0} \frac{1}{(2\pi)^4} \int d^4p\, \frac{e^{-ip(x-y)}}{(p_0-i\epsilon)^2 - \vec{p}^2 - m^2} = \begin{cases} -\frac{1}{2\pi}\,\delta(\tau_{xy}^2) + \frac{m\, J_1(m\tau_{xy})}{4\pi\tau_{xy}} & \text{if } x \prec y \\ 0 & \text{otherwise.} \end{cases}$
This expression can also be written in terms of the vacuum expectation value of the commutator of the free scalar field:
$G_{adv}(x,y) = -i\, \langle 0 | [\Phi(x), \Phi(y)] | 0 \rangle\, \Theta(y^0-x^0).$
Feynman propagator
A contour going under the left pole and over the right pole gives the Feynman propagator.
This choice of contour is equivalent to calculating the limit (see Huang p. 30):
$G_F(x,y) = \lim_{\epsilon \to 0} \frac{1}{(2\pi)^4} \int d^4p\, \frac{e^{-ip(x-y)}}{p^2 - m^2 + i\epsilon} = \begin{cases} -\frac{1}{4\pi}\,\delta(s) + \frac{m}{8\pi\sqrt{s}}\, H_1^{(1)}(m\sqrt{s}) & \text{if } s \geq 0 \\ -\frac{i m}{4\pi^2 \sqrt{-s}}\, K_1(m\sqrt{-s}) & \text{if } s < 0 \end{cases}$
where
$s := (x^0-y^0)^2 - (\vec{x}-\vec{y})^2.$
Here $x$ and $y$ are two points in Minkowski spacetime, and the dot in the exponent is a four-vector inner product. $H_1^{(1)}$ is a Hankel function and $K_1$ is a modified Bessel function.
This expression can be derived directly from the field theory as the vacuum expectation value of the time-ordered product of the free scalar field, that is, the product always taken such that the time ordering of the spacetime points is the same:
$G_F(x-y) = i\, \langle 0 | T(\Phi(x)\Phi(y)) | 0 \rangle = i\, \langle 0 |\, [\Theta(x^0-y^0)\,\Phi(x)\Phi(y) + \Theta(y^0-x^0)\,\Phi(y)\Phi(x)]\, | 0 \rangle.$
This expression is Lorentz invariant as long as the field operators commute with one another when the points $x$ and $y$ are separated by a spacelike interval.
The usual derivation is to insert a complete set of single-particle momentum states between the fields with Lorentz-covariant normalization, then show that the $\Theta$ functions providing the
causal time ordering may be obtained by a contour integral along the energy axis if the integrand is as above (hence the infinitesimal imaginary part, to move the pole off the real line).
The propagator may also be derived using the path integral formulation of quantum theory.
Momentum space propagator
The Fourier transform of the position space propagators can be thought of as propagators in momentum space. These take a much simpler form than the position space propagators.
They are often written with an explicit $\epsilon$ term, although this is understood to be a reminder about which integration contour is appropriate (see above). This $\epsilon$ term is included to
incorporate boundary conditions and causality (see below).
For a 4-momentum $p$ the causal and Feynman propagators in momentum space are:
$\tilde{G}_{ret}(p) = \frac{1}{(p_0+i\epsilon)^2 - \vec{p}^2 - m^2}$
$\tilde{G}_{adv}(p) = \frac{1}{(p_0-i\epsilon)^2 - \vec{p}^2 - m^2}$
$\tilde{G}_F(p) = \frac{1}{p^2 - m^2 + i\epsilon}.$
For purposes of Feynman diagram calculations it is usually convenient to write these with an additional overall factor of $-i$ (conventions vary).
Faster than light?
The Feynman propagator has some properties that seem baffling at first. In particular, unlike the commutator, the propagator is nonzero outside of the light cone, though it falls off rapidly for
spacelike intervals. Interpreted as an amplitude for particle motion, this translates to the virtual particle traveling faster than light. It is not immediately obvious how this can be reconciled
with causality: can we use faster-than-light virtual particles to send faster-than-light messages?
The answer is no: while in classical mechanics the intervals along which particles and causal effects can travel are the same, this is no longer true in quantum field theory, where it is
commutators that determine which operators can affect one another.
So what does the spacelike part of the propagator represent? In QFT the vacuum is an active participant, and particle numbers and field values are related by an uncertainty principle; field
values are uncertain even for particle number zero. There is a nonzero probability amplitude to find a significant fluctuation in the vacuum value of the field $\Phi(x)$ if one measures
it locally (or, to be more precise, if one measures an operator obtained by averaging the field over a small region). Furthermore, the dynamics of the fields tend to favor spatially correlated
fluctuations to some extent. The nonzero time-ordered product for spacelike-separated fields then just measures the amplitude for a nonlocal correlation in these vacuum fluctuations, analogous to
an EPR correlation. Indeed, the propagator is often called a two-point correlation function for the free field.
Since, by the postulates of quantum field theory, all observable operators commute with each other at spacelike separation, messages can no more be sent through these correlations than they can
through any other EPR correlations; the correlations are in random variables.
In terms of virtual particles, the propagator at spacelike separation can be thought of as a means of calculating the amplitude for creating a virtual particle-antiparticle pair that eventually
disappear into the vacuum, or for detecting a virtual pair emerging from the vacuum. In Feynman's language, such creation and annihilation processes are equivalent to a virtual particle wandering
backward and forward through time, which can take it outside of the light cone. However, no causality violation is involved.
Propagators in Feynman diagrams
The most common use of the propagator is in calculating probability amplitudes for particle interactions using Feynman diagrams. These calculations are usually carried out in momentum space. In
general, the amplitude gets a factor of the propagator for every internal line, that is, every line that does not represent an incoming or outgoing particle in the initial or final state. It will
also get a factor proportional to, and similar in form to, an interaction term in the theory's Lagrangian for every internal vertex where lines meet. These prescriptions are known as Feynman rules.
Internal lines correspond to virtual particles. Since the propagator does not vanish for combinations of energy and momentum disallowed by the classical equations of motion, we say that the
virtual particles are allowed to be off shell. In fact, since the propagator is obtained by inverting the wave equation, in general it will have singularities on shell.
The energy carried by the particle in the propagator can even be negative. This can be interpreted simply as the case in which, instead of a particle going one way, its antiparticle is going the
other way, and therefore carrying an opposing flow of positive energy. The propagator encompasses both possibilities. It does mean that one has to be careful about minus signs for the case of
fermions, whose propagators are not even functions in the energy and momentum (see below).
Virtual particles conserve energy and momentum. However, since they can be off shell, wherever the diagram contains a closed loop, the energies and momenta of the virtual particles participating
in the loop will be partly unconstrained, since a change in a quantity for one particle in the loop can be balanced by an equal and opposite change in another. Therefore, every loop in a Feynman
diagram requires an integral over a continuum of possible energies and momenta. In general, these integrals of products of propagators can diverge, a situation that must be handled by the process
of renormalization.
Other theories
If the particle possesses spin then its propagator is in general somewhat more complicated, as it will involve the particle's spin or polarization indices. The momentum-space propagator used in
Feynman diagrams for a Dirac field representing the electron in quantum electrodynamics has the form
$\tilde{S}_F(p) = \frac{\gamma^\mu p_\mu + m}{p^2 - m^2 + i\epsilon}$
where the $\gamma^\mu$ are the gamma matrices appearing in the covariant formulation of the Dirac equation. It is sometimes written, using Feynman slash notation,
$\tilde{S}_F(p) = \frac{1}{\gamma^\mu p_\mu - m + i\epsilon} = \frac{1}{\not{p} - m + i\epsilon}$
for short. In position space we have:
$S_F(x-y) = \int \frac{d^4p}{(2\pi)^4}\, e^{-ip\cdot(x-y)}\, \frac{\gamma^\mu p_\mu + m}{p^2 - m^2 + i\epsilon} = \left(\frac{\gamma^\mu (x-y)_\mu}{|x-y|^5} + \frac{m}{|x-y|^3}\right) J_1(m|x-y|).$
This is related to the Feynman propagator by
$S_F(x-y) = (i\not{\partial} + m)\, G_F(x-y)$
where $\not{\partial} := \gamma^\mu \partial_\mu$.
The propagator for a gauge boson in a gauge theory depends on the choice of convention to fix the gauge. For the gauge used by Feynman and Stueckelberg, the propagator for a photon is
$\frac{-i g^{\mu\nu}}{p^2 + i\epsilon}$
where $g^{\mu\nu}$ is the metric tensor. The Fourier transform of this in the Feynman gauge is:
$G_A(x-y) = -\int \frac{d^4p}{(2\pi)^4}\, e^{-ip\cdot(x-y)}\, \frac{-i g^{\mu\nu}}{p^2 + i\epsilon} = \frac{g^{\mu\nu}}{|x-y|^2} - 2\, \frac{x^\mu x^\nu}{|x-y|^4}.$
In general
More generally, a propagator is defined as the two-point correlation function $\langle \phi(x)\, \phi(y) \rangle$. This propagator is sometimes called the dressed propagator, as opposed to the free-field or bare propagator defined previously.
□ Bjorken, J.D., Drell, S.D., Relativistic Quantum Fields (Appendix C.), New York: McGraw-Hill 1965, ISBN 0-07-005494-0.
□ DeWitt, Cécile and DeWitt, Bryce (eds.), Relativity, Groups and Topology (Blackie and Son Ltd, Glasgow), especially pp. 615-624, ISBN 0444868585
□ Griffiths, David J., Introduction to Elementary Particles, New York: John Wiley & Sons, 1987. ISBN 0-471-60386-4
□ Halliwell, J.J., Ortiz, M. Sum-over-histories origin of the composition laws of relativistic quantum mechanics and quantum cosmology, arXiv:gr-qc/9211004v2
□ Kerson Huang, Quantum Field Theory: From Operators to Path Integrals (New York: J. Wiley & Sons, 1998), ISBN 0-471-14120-8
□ Itzykson, Claude, Zuber, Jean-Bernard Quantum Field Theory, New York: McGraw-Hill, 1980. ISBN 0-07-032071-3
□ Pokorski, Stefan, Gauge Field Theories, Cambridge: Cambridge University Press, 1987. ISBN 0-521-36846-4 (Has useful appendices of Feynman diagram rules, including propagators, in the back.)
□ Schulman, Larry S., Techniques & Applications of Path Integration, John Wiley & Sons (New York, 1981) ISBN 0471764507
Parameter A and the Egyptian Decans
by Andrew Bourmistroff
Editorial correction by Gilles & Margaret Nullens ( UK )
• Why did the ancient Egyptians build the Pyramids?
• Was the Pyramid a symbol of the Sun?
• Why did the ancient Egyptians divide the ecliptic into only 36 Decans?
1. Parameter A
In Part I, "Hermetic geometry", of my work "Numbers of Thoth", I described how some of my results reveal new properties of the Pyramid considered as a geometrical figure.
The most important new property of any Pyramid is the maximum difference (β - α), where:
α is the angle between a lateral edge of the pyramid and its base;
β is the angle between a lateral face of the pyramid and its base.
The maximum difference (β - α) is equal to 9.879 degrees in any Pyramid.
Later I named this maximum difference (β - α) the universal Parameter A.
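This maximum can be checked numerically for a pyramid on a square base, where the face angle β and the edge angle α are related by tan α = tan β / √2 (the square base is my assumption here, since the relation differs for other bases):

```python
import math

def beta_minus_alpha(beta_deg):
    """Square-based pyramid: the edge angle alpha satisfies
    tan(alpha) = tan(beta) / sqrt(2); return (beta - alpha) in degrees."""
    beta = math.radians(beta_deg)
    alpha = math.atan(math.tan(beta) / math.sqrt(2))
    return beta_deg - math.degrees(alpha)

# Scan all slopes from 0 to 90 degrees in steps of 0.001 degrees
best = max(beta_minus_alpha(b / 1000.0) for b in range(1, 90000))
print(round(best, 3))  # 9.879
```

The peak occurs near β ≈ 49.94°, α ≈ 40.06°, which is where the difference reaches the quoted 9.879 degrees.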
2. The Egyptian Year and the Decans
It is known that the ancient Egyptians used their own calendar.
"The ancient Egyptians originally employed a calendar based upon the Moon, and, like many peoples throughout the world, they regulated their lunar calendar by means of the guidance of a sidereal calendar. They used the seasonal appearance of the star Sirius (Sothis); this corresponded closely to the true solar year, being only 12 minutes shorter. Certain difficulties arose, however, because of the inherent incompatibility of lunar and solar years. To solve this problem the Egyptians invented a schematized civil year of 365 days divided into three seasons, each of which consisted of four months of 30 days each. To complete the year, five intercalary days were added at its end, so that the 12 months were equal to 360 days plus five extra days. This civil calendar was derived from the lunar calendar (using months) and the agricultural, or Nile, fluctuations (using seasons); it was, however, no longer directly connected to either and thus was not controlled by them. The civil calendar served government and administration, while the lunar calendar continued to regulate religious affairs and everyday life.
In time, the discrepancy between the civil calendar and the older lunar structure became obvious. Because the lunar calendar was controlled by the rising of Sirius, its months would correspond to the same season each year, while the civil calendar would move through the seasons because the civil year was about one-fourth day shorter than the solar year. Hence, every four years it would fall behind the solar year by one day, and after 1,460 years it would again agree with the lunisolar calendar. Such a period of time is called a Sothic cycle." [1]
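The cycle length quoted above is just the drift arithmetic: one day is lost every four years against a 365-day civil calendar, so cycling through all civil dates takes:

```python
# The civil year falls one day behind the solar year every 4 years,
# so drifting through all 365 civil dates takes:
sothic_cycle = 365 * 4
print(sothic_cycle)  # 1460
```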
The 3 seasons are:
• Aakhet, which corresponds to the Nile overflow
• Pert, which corresponds to seed-time
• Shemu, which corresponds to harvest
The season of Aakhet consists of the following 4 months:
• Thoth
• Paophi
• Hat-hor
• Khoiak
The season of Pert consists of the following 4 months:
• Tybi
• Mekhir
• Phamenoth
• Pharmuthi
The season of Shemu consists of the following 4 months:
• Pakhons
• Pauni
• Epiphi
• Mesore
The Epagomenes consist of the following 5 special days:
• the first day – Osiris’ day
• the second day – Horus’ day
• the third day – Seth’s day
• the fourth day – Isis’ day
• the fifth day – Nephthys' day
The ancient Egyptians also used an unusual system of time intervals known as the Decans.
The Decans were 36 special bright groups of stars in the sky along the ecliptic. These Decans were used as a special calendar --- each decan would rise above the dawn horizon for ten days every year.
" HERMETICA: EXCERPT VI: From the Discourses of Hermes to Tat.
Tat. In your former General Discourses you promised to explain about the thirty-six Decans; I therefore ask you to tell me about them now, and to explain their working.
Hermes. I am quite willing, Tat; and of all my teachings, this will be of supreme importance, and will stand highest among them. I bid you mark it well.
I have told you before about the zodiacal circle, which is also called the animal-bearing circle, and about the five planet-stars and the sun and the moon, and the several circles of these seven
Tat. You have, thrice-greatest one.
Hermes. I desire you then, in your thoughts about the thirty-six Decans also, to bear in mind what I have told you, that so my teaching about the Decans also maybe intelligible to you.
Tat. I bear in mind what you have told me, father.
Hermes. I told you, my son, that there is a body which encloses all things. You must conceive the shape of that body as circular; for such is the shape of the universe.
Tat. I conceive its shape as circular, even as you bid me, father.
Hermes. And you must understand that below the circle of this body are placed the thirty-six Decans, between the circle of the universe and that of the zodiac, separating the one circle from the
other; they bear up, as it were, the circle of the universe, and look down on the circle of the zodiac.
They retard the all-enclosing body,—for that body would move with extreme velocity if it were left to itself,—but they urge on the seven other circles, because these circles move with a slower
movement than the circle of the universe.
And subject to the Decans is the constellation called the Bear, which is centrally situated with regard to the zodiac. The Bear is composed of seven stars, and has overhead another Bear to match it.
The function of the Bear resembles that of the axle of a wheel; it never sets nor rises, but abides in one place, revolving about a fixed point, and making the zodiacal circle revolve, transmitting
the world from night to day, and from day to night.
Let us understand then that both the ... of the seven planets and all... ;or rather, that the Decans stand round about all things in the Cosmos as guardians, holding all things together, and watching
over the good order of all things.
Tat. Even so I conceive them, father, according to your words.
Hermes. And further, my son, you must understand that the Decans are exempt from the things that befall the other stars. They are not checked in their course and brought to a standstill, nor hindered
and made to move backwards, as the planets are; nor yet are they as are the other stars. They are free, and exalted above all things; and as careful guardians and overseers of the universe, they go
round it in the space of a night and a day.
Tat. Tell me then, father, do the Decans act on us men also?
Hermes. Yes, my son, they act on us most potently. If they act on the heavenly bodies, how could it be that they should not act on us also, both on individual men and on communities? The force which works in all events that befall men collectively comes from the Decans; for instance, overthrows of kingdoms, revolts of cities, famines, pestilences, overflowings of the sea, earthquakes,— none of these things, my son, take place without the working of the Decans. For if the Decans rule over the seven planets, and we are subject to the planets, do you not see that the force set in action by the Decans reaches us also, whether it is worked by the Decans themselves or by means of the planets?
And besides this, my son, you must know that there is yet another sort of work which the Decans do; they sow upon the earth the seed of certain forces, some salutary and others most pernicious, which
the many call daemons.
Tat. And what is the bodily form of these beings, father?
Hermes. They do not possess bodies made of some special kind of matter, nor are they moved by soul, as we are; for there is no such thing as a race of daemons distinct from other beings; but they are
forces put in action by these six and thirty gods. " [ 2 ]
3. 36 Pyramids in the Sky
Taking into consideration that a circle is divided into 360 degrees and that the Egyptian year has 365 days, we can say that the Earth moves along its orbit, on average:
360 / 365 = 0.9863 degrees each day.
Or, in other words, the Sun has an average shift in the sky along the ecliptic of 0.9863 degrees per day.
By the way, the movement of the Earth in its orbit around the Sun does not proceed at a constant rate, because of the eccentricity of the Earth's orbit.
The maximum rate is about 1.02 degrees per day (around 4 January, at perihelion) while the minimum rate is about 0.95 degrees per day (around 4 July, at aphelion), with the modern value of the eccentricity (0.0167).
From this data we see that the Sun has an average shift in the sky of 9.863 degrees per ten days, i.e. per one Egyptian Decan.
Look again at these figures:
Parameter A of any Pyramid = 9.879 degrees
One Egyptian Decan (as an angle) = 9.863 degrees
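The two figures being compared can be reproduced in a few lines; the decan value is simply ten times the mean daily motion implied by a 360-degree circle and a 365-day year:

```python
mean_daily_motion = 360 / 365               # degrees per day
decan_angle = 10 * mean_daily_motion        # degrees per 10-day decan
parameter_a = 9.879                         # maximum (beta - alpha) of a pyramid
print(round(mean_daily_motion, 4))          # 0.9863
print(round(decan_angle, 3))                # 9.863
print(round(parameter_a - decan_angle, 3))  # 0.016
```

The gap between the two values is about 0.016 degrees, roughly one part in six hundred.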
On this basis I conclude that:
• Pyramids were sacred symbols of the Sun (Re) for the ancient Egyptians
• Pyramids were sacred symbols of the Egyptian Decan as a general time interval in the Egyptian year
• Ancient Egyptians believed that the ecliptic (the Path of the Sun) consisted of 36 Decans (Pyramids) plus 5 additional days
• Ancient Egyptians knew the meaning of the Parameter A of Pyramids and that a circle was divided into 360 degrees
Enclosure #1: Definitions of the Year [3]
• A Year is the time required for the Earth to travel once around the Sun, about 365 1/4 days. This fractional number makes necessary the periodic intercalation of days in any calendar that is to be kept in step with the seasons. In the Gregorian calendar a common year contains 365 days, and every fourth year (with a few exceptions) is a leap year of 366 days.
• The Solar Year (365 days 5 hours 48 minutes 46 seconds), also called the Tropical Year, or year of the seasons, is the time between two successive occurrences of the vernal equinox (the moment when the Sun apparently crosses the celestial equator moving north). Because of the precession of the equinoxes (an effect of a slow wobble in the Earth's rotation), the solar year is shorter than the Sidereal Year (365 days 6 hours 9 minutes 10 seconds), which is the time taken by the Sun to return to the same place in its annual apparent journey against the background of the stars.
• The Anomalistic Year (365 days 6 hours 13 minutes 53 seconds) is the time between two passages of the Earth through perihelion, the point in its orbit nearest the Sun.
• A Lunar Year (used in some calendars) of 12 synodic months (12 cycles of lunar phases) is about 354 days long.
• A Cosmic Year is the time (about 225 million years) needed for the solar system to revolve once around the centre of the Milky Way Galaxy.
Enclosure #2: Definition of the Decans [4]
• The Decans are 36 star configurations circling the sky somewhat to the south of the ecliptic. They make their appearance in drawings and texts inside coffin lids of the 10th dynasty (around 2100
BC) and are shown on the tomb ceilings of Seti I (1318-04 BC) and of some of the Rameses in Thebes. The decans appear to have provided the basis for the division of the day into 24 hours.
Besides representing star configurations as decans, the Egyptians marked out about 25 constellations, such as crocodile, hippopotamus, lion, and a falcon-headed god. Their constellations can be
divided into northern and southern groups, but the various representations are so discordant that only three constellations have been identified with certainty: Orion (depicted as Osiris), Sirius
(a recumbent cow), and Ursa Major (foreleg or front part of a bull). The most famous Egyptian star map is a 1st-century-BC stone chart found in the temple at Dandarah and now in the Louvre.
[1] Encyclopaedia Britannica, 1996.
[2] Hermetica, pp. 158-160, Solos Press, 1997.
[3] Encyclopaedia Britannica, 1996.
[4] Ibid.
Copyright 2003 by Andrew Bourmistroff
[email protected]
All rights reserved. Reprinted with Permission.
Has the Great Pyramid shifted by 9.85 degrees?
By Andrew Bourmistroff
[email protected]
Editorial correction by Graham Russell (UK)
Precise and reliable geodesic information about the object of interest is important to any researcher.
Such information is undoubtedly contained in the works of W.M.F. Petrie [1] and J.H. Cole [2], which refer to the Giza complex and the Great Pyramid.
For instance, we have the following data from Cole:
Table I The Great Pyramid : The length of the sides
│ Side │ North │ East │ South │ West │Average│
Table II The Great Pyramid : The errors of the 90 ° corners
│Corner│North - West│South - West│South - East│North - East│
│error │ - 0 ' 2 " │ + 0 ' 33 " │ - 3 ' 33 " │ + 3 ' 2 " │
Table III The Great Pyramid : The direction of the sides
│ Side │ North │ East │ South │ West │
│direction│0°2'30" W. of true N.│0°1'57" N. of true E.│0°5'30" W. of true N.│0°2'28" N. of true E.│
Concerning the geometry of the Great Pyramid, my attention was attracted by the recent work "The Pyramid Paper" by Terrans Nevin (USA).
He used his own mathematical software for the analysis of the geometry of the Great Pyramid on the basis of Cole's data.
In May 1995 he found the so-called special "Four Circles".
Then, in March 1997, a mysterious line was found by Dave Seymour and Terrans Nevin. No satisfactory explanation has yet been found for this line.
The four corners of the base and the apex of the Great Pyramid are geometrically and constructively shifted relative to each other.
The Great Pyramid has a hidden azimuth of 350.15 degrees (or 9.85 degrees West of true North).
At once I saw that the value 9.85 degrees is very close to my Parameter A of 9.879 degrees.
As you remember, the maximum difference (β - α), or Parameter A, is equal to 9.879 degrees for any Pyramid, where:
α is the angle of inclination between a lateral edge of the pyramid and its base;
β is the angle of inclination between a lateral face of the pyramid and its base.
But is it possible to bring these two values together?
After all, one is the idealized value of 9.879 degrees, while the other, 9.85 degrees, was obtained from the practical geometry of Cole's measurements.
It is known from plane analytic geometry that the equation of a straight line is a linear equation in the variables (x) and (y), which the coordinates of any point of that straight line satisfy.
The general equation of this type, ax + by + c = 0 (1), is called the general equation of a straight line.
The equation of a straight line solved for the variable (y), i.e. an equation of the type y = kx + b (2), is called the equation with an angular coefficient. The parameter (k) is called the angular coefficient, and it equals the tangent of the angle of inclination of the straight line to the (ox) axis:
k = tan φ (3)
The parameter (b) is the length of the segment cut off by the straight line (2) on the (oy) axis, measured from the origin of the coordinates.
The equation of the type x / a + y / b = 1 (4), where (a) and (b) are the segments cut off by the straight line on the coordinate axes (Figure 1), is known as the equation of a straight line in intercepts.
Figure 1
The angle (Φ) between two straight lines y = kx + b (5) and y1 = k1x + b1 (6) is the angle through which the first straight line (with angular coefficient (k)) must be turned counterclockwise to coincide with the second straight line (with angular coefficient (k1)) (Figure 2).
Figure 2
This angle is calculated using the following formula:
tan Φ = (k1 - k) / (1 + k1 k ) ( 7 )
Now we shall consider the formula for the tangent of an angle difference in trigonometry:
tan (β - α) = (tan β - tan α) / (1 + tan β tan α) ( 8 )
The correspondence between the two formulas is evident:
Φ = (β - α) = Parameter A
k1 = tan β
k = tan α
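The correspondence between formula (7) and the trigonometric identity (8) is easy to verify numerically for any pair of angles, for example the pyramid angles used above:

```python
import math

# Two lines with slopes k1 = tan(beta) and k = tan(alpha); formula (7) should
# give the tangent of the angle between them, i.e. tan(beta - alpha)
alpha, beta = math.radians(40.06), math.radians(49.94)
k, k1 = math.tan(alpha), math.tan(beta)
phi_formula = (k1 - k) / (1 + k1 * k)  # formula (7)
phi_direct = math.tan(beta - alpha)    # identity (8)
print(abs(phi_formula - phi_direct) < 1e-12)  # True
```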
Hence we can confirm the connection between the hidden azimuth of 9.85 degrees (West of true North) and the universal Parameter A of the Pyramid, equal to 9.879 degrees (Figure 3).
Figure 3
1. W.M.F. Petrie, The Pyramids and Temples of Gizeh, London: Field & Tuer, 1883.
2. J.H. Cole, Determination of the Exact Size and Orientation of the Great Pyramid of Giza, Cairo: Government Press, 1925.
Copyright 2003 by Andrew Bourmistroff
[email protected]
All rights reserved. Reprinted with Permission.
E-Coli, A Game Of Life?
The problem of finding an optimal survival behavior for E-coli bacteria.
The E-coli cell is put in a container full of water, and somewhere in the container there is a piece of sugar. The dissolving sugar creates a potential "field" of sugar concentration around it. The goal of the E-coli cell is, by "orienting" itself by the field and using its limited propulsion abilities, to reach the sugar as fast as possible.
The cell can be viewed as an autonomous robot. The robot and its environment can be represented by means of an Actuators-World-Sensors-Sensory Processing-World Model-Behavior Generation model.
A description of the different parts of this model for the E-coli case follows.
The e-coli cell has a rather rudimentary and stochastic propulsion system. It can execute only two actions: a jump (a translation in the direction of its head, with a random length of at most Jmax) and a rotation (a turn in place to a random new orientation).
1. World
The world is defined by the potential field created by the dissolving sugar. The sugar is presumed to be a point source. For simplicity I'll also accept that the intensity of the field decreases linearly with distance, i.e.
R - the current distance from the sugar (the origin)
I[0] - the intensity at the origin (the sugar); I presume that it is constant and known (by the cell)
I[R] - the intensity at the current distance. The intensity is in effect the concentration of the sugar solution.
Figure 1.2.1 - A plot of the field around the sugar (I[0] = 1)
The e-coli cell is defined by its length H; one end is denoted the Head and the other the Tail. When the cell "jumps", it jumps in the direction of its head.
The position of the cell in the world is specified in polar coordinates:
R - the distance of the center of the cell from the origin (the length of the cell's position vector)
γ - the positive angle between the position vector of the cell and the positive direction of the X axis
α - the orientation of the cell: the angle between the vector defining the cell's position (R, γ) and the heading of the cell (i.e. the direction tail->head) (α ∈ [-180, 180]; α = 0 ⇔ facing the sugar)
Figure 1.2.2 shows the e-coli cell, the sugar, and all the parameters mentioned above.
Figure 1.2.2 - The "world"
The e-coli cell has only one very simple sensor: it can measure the difference ΔI between the sugar concentration at its head and at its tail. From Figure 1.2.2 it is a matter of simple trigonometry to find an expression for ΔI.
Equation 1.3.1
Equation 1.3.2
Equation 1.3.3
Equation 1.3.4
Equation 1.3.4 expresses the "reading" of the difference sensor as a function of the current distance to the goal R and the orientation of the cell α. The next figure is a 3D plot of ΔI against the distance R and the orientation α. The length of the cell H and I[0] are assigned the value 1.
Figure 1.3.1 - ΔI plotted against the distance R and the angle α
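The bodies of Equations 1.3.1-1.3.4 are not reproduced here, but the sensor reading can be sketched from the stated geometry. The following is a reconstruction, not the author's lost formula: it assumes the field falls off linearly with unit slope, I(r) = I[0] - r, and places the head and tail at distance H/2 from the cell's centre:

```python
import math

def delta_I(R, alpha, H=1.0):
    """Concentration difference between head and tail (a reconstruction of
    Equation 1.3.4 under a linear unit-slope field): Delta I is simply the
    difference of the head and tail distances from the sugar."""
    r_head = math.sqrt(R**2 + (H / 2)**2 - R * H * math.cos(alpha))
    r_tail = math.sqrt(R**2 + (H / 2)**2 + R * H * math.cos(alpha))
    return r_tail - r_head  # positive when the head is closer to the sugar

print(round(delta_I(10.0, 0.0), 3))          # 1.0 (facing the sugar: full H)
print(round(delta_I(10.0, math.pi / 2), 3))  # 0.0 (sideways: no difference)
```

Note how the reading depends on both R and α at once, which is exactly what makes the sensory-processing problem below hard.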
2. Sensory processing
Since the ultimate goal of the robot/cell is reaching the sugar, it would be good if the sensory processing system could calculate the distance to the sugar, R. But if we look at Equation 1.3.4 we see that there are two unknown variables in it: the distance R and the heading α. If we could perform a jump with a known length and measure ΔI before and after the jump, then it would be possible to calculate R and α. But the cell can only make jumps with a random length!
Calculating α is also extremely important, because it is much easier to discuss the problem in terms of α than in terms of ΔI. Yet, for the same reasons as with R, we cannot calculate α.
Further analysis of this problem is directly connected with the world model and behavior generation systems and will be discussed in the next parts. The important thing to point out is that our sensory system is extremely "crippled" and complicates the problem much more than the "stochastic" propulsion system does.
3. The world model
An ideal world model would consist of the current coordinates of the cell (γ and R) and its heading α. Yet γ is not relevant to the problem, because it does not matter at all from which direction we approach the sugar. On the other hand, the other two parameters, α and R, are critical for the system to work. But unfortunately they cannot be easily calculated from the sensory input we get. For this reason I will later discuss the problem with two situations in mind: one in which we only have the sensor described in 1.3, giving us ΔI, and one in which we can measure R and α. I consider the latter to be interesting because it is easier to analyze analytically and also permits the development of some nice learning algorithms. It also provides some clues as to how the original problem should be addressed.
A world model that relies only on ΔI can be an array of a certain number of previous states (i.e. values of ΔI) and the actions that were taken to go from each state to the next (i.e. a rotation or a translation at each step).
4. Behavior generation (BG)
The function of the BG subsystem is to decide at each step whether the cell should rotate or jump, and to feed the appropriate command to the actuators. It can be said that at each step the BG module should generate one bit of information: jump or rotate.
This is the most interesting part of the system, and it is the one that can be implemented in different ways, giving the e-coli different levels of intelligence and, respectively, a different ability (probability) to reach the sugar first (in minimum time).
It is interesting that there is some probability that even a cell with totally stochastic behavior (i.e. the jump/rotate decision is random) can outperform a very "shrewd" cell using a highly sophisticated decision-making algorithm. Of course, such a probability is very low.
The consideration of the BG algorithm will heavily depend on whether we use the "ideal" world model containing α and R at each step, or the realistic one. In the next parts a number of different algorithms are discussed, with different "intelligence" and learning abilities, built over the two different world models.
2. Behavior generation using the {α, R} world model
First I want to discuss the mathematical side of the problem. As we can see from Equation 1.3.4, in order to calculate α we need R, and vice versa. R can be found if we had one additional sensor, located in the geometrical center of the body of the cell, that measures the absolute value of the sugar concentration I[R]. In this case, given that I[0] is known, R follows from Equation 1.2.1:
Now we can substitute the expression for R into Equation 1.3.4 and, after some (quite tedious) transformations, it is possible to find an expression for α. I tried to use Maple to do this, but the resulting equation is so clumsy that I will not even include it here. So from this point on I will accept that there is some "sensible" method of measuring α.
The BG function should be a threshold function that compares α with a particular threshold value T and generates a jump if |α| < T and a rotation otherwise (α ∈ [-180, 180]; α = 0 ⇔ facing the sugar).
This BG algorithm will be used in all cases except for the "dumb" ones (such as 2.1). It looks like this:
void BG(float T, float Alpha) {
  if (abs(Alpha) < T) Jump(); else Rotate();
  <calculate new Alpha and R>;
}
It is the responsibility of the world model to provide a threshold value to the BG module at each step. The choice of this value is the only factor (apart from chance) that determines the "survivability" of the cell. Some definitions:
Intelligence - I will use it as a synonym for the e-coli's knowledge of how to dynamically change the threshold within one "race" to the sugar.
Learning - in this case, the "evolution" of that knowledge based on experience gathered through previous races.
Memory - I do not intend to give a definition of memory. The important thing is that memory is absolutely necessary in order to have intelligence and/or learning, or even a fixed behavior that provides any chance of survival.
In this case we do not even need the world model, sensors, or sensory processing, because we cannot use them. We just have a BG module that generates a random sequence of jumps/turns. Such a "brainless" creature would not survive in any environment other than a saturated sugar solution anyway :-)
In the rest of the cases we presume that there is memory available... as much as we need...
This is the case of the fixed behavior that I mentioned earlier. The idea is that the world model contains a (genetically) fixed threshold which is used by the BG module. The particular value of the threshold will determine how successfully the cell performs. I think (though I have not proven it) that there is an optimal threshold value for this case. A cell with a (good) learning algorithm should eventually end up with the optimal threshold value (see 2.3).
Next I will give some considerations about what the optimal threshold may be. First, a threshold of 90° (which is ΔI > 0) might not even give a solution, because using it the cell may actually move away from the sugar (see Figure 2.2.1). I will not try to prove this.
If, on the other hand, we want the cell to approach the sugar every time it jumps, we should have R[2] ≤ R[1]. If the length of the jump J is the maximum possible, J[max], and R[2] = R[1], then from the law of cosines we have
cos(α) = J[max] / (2 R[1])
If the threshold is 60°, the cell will constantly approach the sugar until it gets within a distance J[max] of it (for R[1] = J[max] the formula gives exactly 60°), at which point it will start "wandering" around it in a circle of radius J[max]. I think this is the minimal "good survival behavior".
If we make a simplifying assumption (not a very good one, I admit), a better value of the threshold can be calculated. The assumption is that the cell is far from the sugar, i.e. R >> J[max]. In that case, with each jump the cell approaches the sugar by approximately cos(α) times the jump length (see the picture).
Figure 2.2.2
If the threshold is T and the initial distance to the sugar is R[0], then in the worst case we will reach the sugar in N[j] = R[0]/cos(T) jumps (per unit jump length). The total number of moves is N = N[j] + N[r]. On the other hand, the probability of a jump is P[j] = 2T/360, therefore N[j] = 2NT/360. So we get:
N = 180 R[0] / (T cos(T))     (Equation 2.2.3)
We want to minimize N, which is to maximize the denominator of Equation 2.2.3. So we take its first derivative and set it equal to zero: cos(T) - T sin(T) = 0 ⇔ cotan(T) = T (with T in radians), and from here
T ≈ 49.29348360°
The problem is that the assumption we made is not very good when the e-coli starts approaching the sugar. I suspect that the exact solution is 45°, which is pure abduction, since I cannot prove it. A working simulation for the next case should show whether I am right or not.
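The equation cotan(T) = T has no closed-form solution, but a few lines of bisection reproduce the quoted value:

```python
import math

# Solve cos(T) - T*sin(T) = 0 (i.e. cotan(T) = T) on (0, pi/2) by bisection
def f(t):
    return math.cos(t) - t * math.sin(t)

lo, hi = 1e-9, math.pi / 2
for _ in range(80):
    mid = (lo + hi) / 2
    if f(lo) * f(mid) <= 0:
        hi = mid
    else:
        lo = mid

T_opt = math.degrees((lo + hi) / 2)
print(round(T_opt, 2))  # 49.29
```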
The algorithm for this case looks like this:
float T = <CHOSEN CONSTANT>, R, Alpha;
<initialize R, Alpha>;
do {
  BG(T, Alpha);
} while (R > Jmax); //until sugar reached
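As a sanity check of the fixed-threshold strategy, here is a minimal self-contained simulation (my sketch, not the author's code; it assumes jump lengths uniform in (0, J_max], tumbles to a uniformly random new heading, and derives the new α after a jump from the law of cosines):

```python
import math, random

def race(T_deg, R0=50.0, Jmax=1.0, seed=1, max_moves=100000):
    """One race to the sugar with a fixed threshold T_deg (degrees);
    returns the total number of moves (jumps plus rotations)."""
    rng = random.Random(seed)
    T = math.radians(T_deg)
    R, alpha = R0, rng.uniform(-math.pi, math.pi)
    moves = 0
    while R > Jmax and moves < max_moves:
        moves += 1
        if abs(alpha) < T:                        # jump toward the heading
            J = rng.uniform(0.0, Jmax)
            R_new = math.sqrt(R * R + J * J - 2 * R * J * math.cos(alpha))
            # new angle between the heading and the direction to the sugar
            c = (R * math.cos(alpha) - J) / R_new
            alpha = math.copysign(math.acos(max(-1.0, min(1.0, c))), alpha)
            R = R_new
        else:                                     # tumble: random new heading
            alpha = rng.uniform(-math.pi, math.pi)
    return moves

print(race(49.29), race(5.0))  # compare move counts for a good vs. a poor T
```

In such runs a threshold near 49° typically needs a few hundred moves, while a very narrow threshold wastes most of its moves on rotations.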
2.3 E-coli with learning, but without "intelligence"
The idea is to make many trials (races to the sugar) with different thresholds and choose the best one by comparing the memorized costs. In order to minimize the error that may be introduced by the random propulsion system, we should make many tests with each threshold and use the mean cost.
A practical implementation of this idea would be to start with some threshold T[0] - sixty degrees (see 2.2) may be a good starting point - and perform a kind of binary search, memorizing only the best threshold we have found so far and the corresponding cost. The search need not end at a certain point; it may go on for the cell's whole life.
Here is the algorithm (some pseudo C code):
float Topt = 60;                 //current best threshold
float Step = Topt/2;             //searching step, 30 degrees initially
int OptCost = INT_MAX;           //best (smallest) mean cost so far
int CurrCost, CurrMeanCost;
float T, R, Alpha;
int I;
do {
  T = Topt - Step;               //try decreasing the threshold
  <for I trials: initialize R, Alpha and the cost;
   do { BG(T, Alpha); } while (R > Jmax);  //go to the sugar
   average the trial costs into CurrMeanCost>;
  if (CurrMeanCost < OptCost) {
    OptCost = CurrMeanCost;
    Topt = T;
  } else {
    T = Topt + Step;             //try increasing the threshold
    <repeat the same trials and compute CurrMeanCost>;
    if (CurrMeanCost < OptCost) {
      OptCost = CurrMeanCost;
      Topt = T;
    }
  }
  Step = Step/2;                 //decrease the step
} while (alive);
After the algorithm runs for some time, Step will become very small and Topt should converge to its optimal value. This algorithm can be viewed as a numerical solution of the problem that was solved with some simplifying assumptions in point 2.2.
It is important to note that there is a hidden assumption that the turn and the jump have the same cost. We may of course say that the cost of the jump is, for example, K times the cost of the turn. The value of K will influence the optimal threshold. We may still accept that all the turns have the same cost and all the jumps have the same cost (i.e. the cost does not depend on the random factor), for a reason similar to the one in the footnote from page *. In the extreme case when K is infinite (or the cost of the turn is zero), the optimal threshold approaches 0.
Choosing a single threshold angle may give good results, but I think that changing the threshold dynamically, depending on the distance to the sugar R, can give even better results. The problem is finding the appropriate relationship between R and T.
An obvious solution is, for each R, to choose the maximum value of the threshold T[max] that will guarantee that after a jump we will be closer to the sugar. This is equivalent to Equation 2.4.1:
T[max] = arccos(J[max] / (2R))     (Equation 2.4.1)
But earlier, in point 2.2, I showed that for R >> J[max] the optimal angle is 49.29 degrees, while for this case T[max] is 90 degrees. Since I have taken the worst case, I think that probably the optimal angle may be T[opt] = T[max]/2. For R → ∞, T[max] = 90° and T[opt] = 45°. The graph of Equation 2.4.1 is shown next.
Figure 2.4.1 - Determining the threshold value as a function of the distance R
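With T[max](R) = arccos(J[max]/(2R)), which is my reconstruction of Equation 2.4.1 from the jump geometry, the two limiting cases described in the text come out directly: 60° at R = J[max] and 90° as R grows without bound:

```python
import math

def t_max(R, Jmax=1.0):
    """Largest heading error (degrees) for which a jump of length Jmax
    still does not increase the distance to the sugar (reconstructed)."""
    return math.degrees(math.acos(Jmax / (2 * R)))

print(round(t_max(1.0), 1))     # 60.0 (R = Jmax)
print(round(t_max(1000.0), 1))  # 90.0 (R >> Jmax: approaches 90 degrees)
# the heuristic threshold used by the algorithm would then be t_max(R) / 2
```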
The algorithm for this case looks like this:
float T, R, Alpha;
<initialize R, Alpha>;
do {
  T = <T[max](R) / 2, with T[max] from Equation 2.4.1>;
  BG(T, Alpha);
} while (R > Jmax); //until sugar reached
Of course, I may be wrong about T[opt] = T[max]/2. But I think that there should be some constant P such that T[opt] = T[max]/P is the optimal solution. A learning algorithm, as described in the next part, can determine this constant.
The learning algorithm is in fact the same as in part 2.3, but this time, instead of the threshold value, we are searching for an optimal value of P. The only difference would be in the two lines calculating T. A good initial value for P would be 2.
float Popt = 2;                  //current best value of P
float Step = Popt/2;             //searching step, 1 initially
int OptCost = INT_MAX;           //best (smallest) mean cost so far
int CurrCost, CurrMeanCost;
float P, T, R, Alpha;
int I;
do {
  P = Popt - Step;               //try decreasing P
  <for I trials: initialize R, Alpha and the cost;
   do { T = Tmax(R)/P; BG(T, Alpha); } while (R > Jmax);  //go to the sugar
   average the trial costs into CurrMeanCost>;
  if (CurrMeanCost < OptCost) {
    OptCost = CurrMeanCost;
    Popt = P;
  } else {
    P = Popt + Step;             //try increasing P
    <repeat the same trials and compute CurrMeanCost>;
    if (CurrMeanCost < OptCost) {
      OptCost = CurrMeanCost;
      Popt = P;
    }
  }
  Step = Step/2;                 //decrease the step
} while (alive);
Again, this algorithm is supposed to converge to an optimal value of P.
I think that this world model is the best I was able to think of for the case when R and α are measurable.
In the next section I will try to explore the much more complicated case when we can only measure ΔI. I am not sure whether an algorithm exists for this case that will perform as well as the one just described.
3. Behavior generation using the "realistic" world model
In this section I will not pay attention to the simple cases without memory and without learning. They are trivial and do not differ from what was described in sections 2.1 and 2.2.
3.1 E-coli with memory and learning, but without "intelligence"
This case is quite similar to the case described in part 2.3; the only difference is that we are searching for an optimal threshold value of ΔI instead of α. This does not change the algorithm at all, so I will not repeat it here. It is important to note that, since ΔI depends on the distance, there may be different optimal values for reaching the goal starting from different initial distances.
3.2 E-coli with memory and "intelligence", but without learning
The problem of how to change the threshold difference while moving towards the sugar is complicated by the fact that ΔI depends both on the distance and the angle. In this case a threshold that gets the cell closer to the sugar after a jump at a longer distance may in fact lead the cell away if used at a closer distance. Even though I cannot say what the algorithm for changing the threshold should be, there are several important properties that it should have. First, it should keep a history of the biggest positive value of ΔI, which will increase as the sugar gets closer. The threshold value should be chosen depending on this maximum value. In the beginning of the algorithm the cell should execute a number of turns in order to "orient" itself in the world (i.e., to find some initial maximum value from which to calculate the threshold value). Keeping the maximum value close to reality (i.e., to the real maximum value for the particular distance R when α = 0) is important, so it may be good to introduce some overhead of turning (Z times, for example) after each X jumps, which will help keep the maximum precise. The threshold can be calculated as ΔI[max]/Y. Probably a good value of Y is 2 (just like in section 2.4). I have no idea whatsoever about the optimal values of X and Y, but they can be found by…
3. E-coli with memory, "intelligence" and learning
A learning algorithm for the real-life situation will have to find (using multiple trials) the optimal values of X, Y and Z. While the idea of such an algorithm is not different from the one described in section 2.5, the task is much harder. Unlike the one from section 2.5, this one has to do a search in a space with three degrees of freedom, which greatly increases the set of possible solutions and therefore the complexity.
Apart from the algorithms described in sections 2.3, 2.5, 3.1 and 3.3, a totally different approach can be used to achieve learning: an evolutionary process can be simulated.
We "put" into the sugar container a big number of cells with different initial values of the parameter we want to optimize. All cells are given an equal initial amount of energy reserve, which they can use to move. Each move consumes a unit of energy. All cells start at the same initial distance from the sugar. If a cell consumes its energy before reaching the sugar, it dies. If however a cell reaches the sugar, it splits into two new cells that are given a full load of energy and are put at the initial distance to start over their race to the sugar. If this is run for a while and the conditions (initial energy, initial distance) are chosen correctly (experimentally), only the best cells will survive the natural selection. These cells should have the optimal value of the parameter.
A slightly different (and better) approach can also be used. Instead of generating a large number of initial species (which would be needed in order to have a value close to the optimal in the initial generation), diversity can be introduced by mutation. Each time a cell splits, its "genes" (holding the values being optimized) can be changed by a small random value. This way, starting with a comparatively small initial variety and population size, a large variety of species can emerge. In such a "game of life", well-adapted organisms will eventually evolve and prevail.
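The mutation-based variant can be sketched as follows. This is a heavily simplified toy model, not a faithful simulation of the cells: the fitness function, population size, and deterministic "fitter half survives and splits" selection are all assumptions made for illustration.

```python
import random

def evolve(energy_cost, pop_size=50, generations=100, sigma=0.1, seed=0):
    """Toy mutation-selection loop: each generation the cheaper half of
    the population "reaches the sugar" and splits in two, each child's
    gene perturbed by a small random mutation; the rest die."""
    rng = random.Random(seed)
    pop = [rng.uniform(0.0, 10.0) for _ in range(pop_size)]
    for _ in range(generations):
        survivors = sorted(pop, key=energy_cost)[: pop_size // 2]
        pop = [g + rng.gauss(0.0, sigma) for g in survivors for _ in range(2)]
    return pop

# Assumed stand-in for "energy spent reaching the sugar": minimal at gene = 2.
cost = lambda g: (g - 2.0) ** 2
final_population = evolve(cost)
mean_gene = sum(final_population) / len(final_population)
```

After a few generations the population concentrates around the gene value with the lowest cost, with a residual spread set by the mutation size sigma.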
There are numerous works available on genetic/evolutionary algorithms. An overwhelming amount of information on the subject can be easily found on the WEB.
No. 1472: Big Numbers
Today, big numbers are an old source of fascination. The University of Houston's College of Engineering presents this series about the machines that make our civilization run, and the people
whose ingenuity created them.
Large numbers are an old source of fascination. However, they're raising interesting new issues these days. Try this question: How many numbers can your 32-bit PC deal with? It's a very powerful machine. Yet it can directly represent only two raised to the power of 32 distinct values. That gives us only the numbers one to about four billion - a four followed by nine zeros.
But we don't have to go to the cosmos to find big numbers. The number of air molecules in your office is like a one followed by thirty zeros. Now think about this: when physicists set out to predict the behavior of those molecules, the first step is to count the number of ways their speeds and locations can be rearranged. And here we reach one of the 20th-century frontiers of big numbers.
Think about arranging books on a shelf. If we have only one book, of course there's only one way to arrange it. If we have four books, there are 24 ways. Increase the number of books to only ten,
and we can find almost four million ways to distribute those books on the shelf. When we come to arrangements of air molecules around us, that number would be so long we couldn't begin to write
it down if we filled every piece of paper in the world.
So we find ways of approximating, or of focusing on, parts of big collections of things or of long calculations. Mathematicians have created all kinds of means for going far beyond our ability to
count, and they constantly look for more.
Now the 21st-century frontier for huge numbers will lie in questions about connectivity. Think how words like web and network infuse our lives these days. We can find a trillion ways to connect
two million telephones with one another. And if each phone can teleconference with four others, that number goes through the roof.
And so we reach the most important network of all. A trillion axons within the human brain offer vastly more than a trillion ways to look at things. For all practical purposes, that number is
uncountable. The saving irony in all this is that our capacity for finding new ways to deal with inconceivably large numbers is itself inconceivably large.
So our capacities are much greater than we realize. As the meaning of large-number complexity becomes clearer, I'm betting that we'll ultimately find ways to harness our own brains in ways we
haven't yet imagined were there to harness.
I'm John Lienhard, at the University of Houston, where we're interested in the way inventive minds work.
(Theme music)
I am grateful to N. Shamsundar, Mechanical Engineering Department, University of Houston, for steering me toward the topic and for his counsel.
The statistical description of the air in your office is developed in any text on statistical thermodynamics. See, e.g., Tien, C-l. and Lienhard, J.H., Statistical Thermodynamics. New York:
Hemisphere Pub. Corp., 1979 (See especially Chapters 2 through 5.)
About putting a number of books equal to N on a shelf: There are N ways to place the first book in the row. Then there are only N-1 ways to place the second book, N-2 ways to place the third, and so forth. Thus we can place the first two books in N(N-1) ways, the first three books in N(N-1)(N-2) ways, and all N books in N! = N(N-1)(N-2)...(2)(1) ways. N! is the symbol for this product; it's called "N factorial." If we evaluate N!, we get:
1! = 1
2! = 2
3! = 6
4! = 24
10! = 3,628,800
20! = 2,432,902,008,176,640,000, etc.
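The values in the table, and the 32-bit limit mentioned earlier, are easy to verify with arbitrary-precision integer arithmetic:

```python
import math

# The shelf-arrangement counts quoted in the episode:
assert math.factorial(1) == 1
assert math.factorial(4) == 24
assert math.factorial(10) == 3_628_800
assert math.factorial(20) == 2_432_902_008_176_640_000

# A 32-bit machine word covers 2**32 values: "about four billion".
print(2 ** 32)  # 4294967296
```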
The Engines of Our Ingenuity is Copyright © 1988-1999 by John H. Lienhard.
Inequality and Investment Bubbles
A clearer link is established
"Money, it’s a gas," says the sixties rock group Pink Floyd in their song “Money.” Indeed, physics professor Victor Yakovenko is an expert in statistical physics and studies how the flow of money and
the distribution of incomes in American society resemble the flow of energy between molecules in a gas. In his lectures to be delivered on April 19 at New York University and April 20 at the New
School for Social Research, Yakovenko will bring his physics-of-incomes study up to date, including a report on the correlation between levels of income inequality and the appearance of financial
downturns, such as the dot-com bubble of 2000 and the more recent housing bubble of 2008.
That the rich really are different is a common opinion. It turns out that the rich even have their own physics. Yakovenko, who is a professor at the University of Maryland and also a fellow of the
Joint Quantum Institute*, produces a plot of the cumulative percentage of the population versus income. The graph shows that the actual income distribution (the data coming from the IRS) for the
poorer 97% of reported returns follows a type of curve---the Boltzmann-Gibbs curve---that applies to the energy distribution of molecules in a gas. The curve is named for 19th century physicists
Ludwig Boltzmann and J. Willard Gibbs, pioneers in statistical physics.
By contrast, the upper 3 percent or so of incomes, starting at a tax-return level of about $140,000, lie along a different curve, one named for Vilfredo Pareto, an economist who studied income
distributions in the 19th century. This distinction in income curves is generally attributed to the fact that the most affluent segment of society makes more of its income from investments, which are
taxed at a lower rate, rather than income from labor.
“A mathematical analysis of the empirical data clearly demonstrates the two-class structure of a society,” Yakovenko says. The lower-97% curve is an example of exponential behavior, while the upper-3% curve is an example of power-law behavior. The power-law curve is conspicuously different from the exponential curve in having a long tail, as shown in Figure 1.
Then, Yakovenko plots the percentage of total income lying in that tail on through the years. He finds that the periods of greatest inequality are also periods of bursting investment bubbles. Most
recently the inequality peaks lined up very closely with the housing bubble of 2008, the dot-com bubble of 2000, and the savings-and-loan crisis of the late 1980s, as shown in Figure 2.
Yakovenko successfully models income distribution pretty well using basic statistical physics. In the case of a gas, molecules come to have a great inequality in energies, all through their random
collisions with each other. People are not inanimate molecules and yet through their economic and social “collisions” they too come to have a very similar, and unequal, distribution of incomes.
Previously the upper income bracket (the upper 3%) curve had been pretty well studied, but Yakovenko was one of the first, perhaps the first, to demonstrate that the lower bracket (the lower 97%) was
described by the venerable Boltzmann-Gibbs curve developed to represent the spread of energies of molecules in a gas.
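The gas analogy can be made concrete with a toy simulation. The sketch below is my own minimal version of a kinetic money-exchange model, not necessarily the model Yakovenko uses, and its parameters are arbitrary: pairs of agents repeatedly pool and randomly re-split their money, conserving the total, and the distribution relaxes toward the exponential Boltzmann-Gibbs form.

```python
import random

def simulate_exchange(n_agents=1000, n_steps=200_000, m0=100.0, seed=0):
    """Kinetic money-exchange model: pick two agents, pool their money,
    and split the pool at a uniformly random fraction. Total money is
    conserved, like total energy in colliding gas molecules."""
    rng = random.Random(seed)
    money = [m0] * n_agents
    for _ in range(n_steps):
        i, j = rng.randrange(n_agents), rng.randrange(n_agents)
        if i == j:
            continue
        total = money[i] + money[j]
        share = rng.random()
        money[i], money[j] = share * total, (1 - share) * total
    return money

money = simulate_exchange()
```

For an exponential distribution with mean m0, about 63% of agents end up below the mean, which is a quick sanity check on the relaxed distribution.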
Yakovenko’s pioneering study of the 97% was summarized in a review paper in the journal Review of Modern Physics in 2009 (**) written in collaboration with the distinguished economist J. Barkley
Rosser, Jr. Yakovenko got started in econophysics in the year 2000, at a time when statistical mechanics wasn’t used much to study economics. He has prepared an updated study of income distributions,
for his participation in a celebration (April 20-21) of the career of economist Duncan Foley at the New School for Social Research in New York. Foley was a pioneer in marrying economics and
statistical mechanics.
(*) The Joint Quantum Institute is operated by the University of Maryland in College Park and the National Institute of Standards and Technology in Gaithersburg, MD.
(**) See reference publication.
Copies of Yakovenko’s 2012 report are available from the Joint Quantum Institute.
Dynamically variable machine readable binary code and method for reading and producing thereof
A machine readable binary code which is dynamically variable in size, format and density of information is provided. The binary code is formed as a matrix having a perimeter and data contained
therein. The perimeter is provided with density indicia for indicating the density of data contained within the matrix. The perimeter is also provided with size indicia for indicating the size of the
matrix. By utilizing the density indicia and size indicia, a scanning device is able to calculate the size and information density of the binary code.
Inventors: Priddy; Dennis G. (Safety Harbor, FL), Cymbalski; Robert S. (Clearwater, FL)
Assignee: International Data Matrix, Inc. (Clearwater, FL)
[*] Notice: The portion of the term of this patent subsequent to July 30, 1990 has been disclaimed.
Appl. No.: 07/907,769
Filed: June 30, 1992
Ordinary Kriging
The first step in ordinary kriging is to construct a variogram from the scatter point set to be interpolated. A variogram consists of two parts: an experimental variogram and a model variogram.
Suppose that the value to be interpolated is referred to as f. The experimental variogram is found by calculating the variance (g) of each point in the set with respect to each of the other points
and plotting the variances versus distance (h) between the points. Several formulas can be used to compute the variance, but it is typically computed as one half the difference in f squared.
Experimental and Model Variogram Used in Kriging
Once the experimental variogram is computed, the next step is to define a model variogram. A model variogram is a simple mathematical function that models the trend in the experimental variogram.
As can be seen in the above figure, the shape of the variogram indicates that at small separation distances, the variance in f is small. In other words, points that are close together have similar f
values. After a certain level of separation, the variance in the f values becomes somewhat random and the model variogram flattens out to a value corresponding to the average variance.
Once the model variogram is constructed, it is used to compute the weights used in kriging. The basic equation used in ordinary kriging is as follows:

fP = w1*f1 + w2*f2 + ... + wn*fn
where n is the number of scatter points in the set, fi are the values of the scatter points, and wi are weights assigned to each scatter point. This equation is essentially the same as the equation
used for inverse distance weighted interpolation (equation 9.8) except that rather than using weights based on an arbitrary function of distance, the weights used in kriging are based on the model
variogram. For example, to interpolate at a point P based on the surrounding points P1, P2, and P3, the weights w1, w2, and w3 must be found. The weights are found through the solution of the
simultaneous equations:

w1*S(d11) + w2*S(d12) + w3*S(d13) = S(d1P)
w1*S(d12) + w2*S(d22) + w3*S(d23) = S(d2P)
w1*S(d13) + w2*S(d23) + w3*S(d33) = S(d3P)

where S(dij) is the model variogram evaluated at a distance equal to the distance between points i and j. For example, S(d1P) is the model variogram evaluated at a distance equal to the separation of points P1 and P. Since it is necessary that the weights sum to unity, a fourth equation:

w1 + w2 + w3 = 1

is added. Since there are now four equations and three unknowns, a slack variable, λ, is added to the equation set. The final set of equations is as follows:

w1*S(d11) + w2*S(d12) + w3*S(d13) + λ = S(d1P)
w1*S(d12) + w2*S(d22) + w3*S(d23) + λ = S(d2P)
w1*S(d13) + w2*S(d23) + w3*S(d33) + λ = S(d3P)
w1 + w2 + w3 = 1
The equations are then solved for the weights w1, w2, and w3. The f value of the interpolation point is then calculated as:

fP = w1*f1 + w2*f2 + w3*f3
By using the variogram in this fashion to compute the weights, the expected estimation error is minimized in a least squares sense. For this reason, kriging is sometimes said to produce the best
linear unbiased estimate. However, minimizing the expected error in a least squared sense is not always the most important criteria and in some cases, other interpolation schemes give more
appropriate results (Philip & Watson, 1986).
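The system is small enough to solve directly. Here is a minimal sketch that builds and solves the ordinary-kriging system (weights constrained to sum to one via a Lagrange multiplier) for a single interpolation point; the linear model variogram and the three-point configuration are assumptions chosen only for illustration.

```python
import numpy as np

def ordinary_kriging(points, values, target, variogram):
    """Solve the ordinary-kriging system for one interpolation point.
    points: list of (x, y); values: data values; variogram: callable on h."""
    n = len(points)
    d = lambda a, b: float(np.linalg.norm(np.asarray(a) - np.asarray(b)))
    # Left-hand side: variogram between data points, bordered by the
    # unbiasedness row/column that enforces sum of weights = 1.
    A = np.ones((n + 1, n + 1))
    A[n, n] = 0.0
    for i in range(n):
        for j in range(n):
            A[i, j] = variogram(d(points[i], points[j]))
    # Right-hand side: variogram between each data point and the target.
    b = np.ones(n + 1)
    for i in range(n):
        b[i] = variogram(d(points[i], target))
    sol = np.linalg.solve(A, b)
    w, lam = sol[:n], sol[n]
    estimate = float(np.dot(w, values))
    variance = float(np.dot(w, b[:n]) + lam)  # estimation variance
    return estimate, w, variance

# Assumed toy setup: linear model variogram, three scatter points.
gamma = lambda h: h
pts = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
est, w, var = ordinary_kriging(pts, [1.0, 2.0, 3.0], (0.5, 0.5), gamma)
```

Two properties worth checking: the weights sum to one, and kriging is an exact interpolator, so evaluating at a scatter point returns that point's value.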
An important feature of kriging is that the variogram can be used to calculate the expected error of estimation at each interpolation point since the estimation error is a function of the distance to
surrounding scatter points. The estimation variance can be calculated as:

var(P) = w1*S(d1P) + w2*S(d2P) + w3*S(d3P) + λ, where λ is the slack variable.
When interpolating to an object using the kriging method, an estimation variance data set is always produced along with the interpolated data set. As a result, a contour or iso-surface plot of
estimation variance can be generated on the target mesh or grid.
Posts about polyhedral combinatorics on My Brain is Open
Wish you all a Very Happy New Year. Here is a list of my 10 favorite open problems for 2014. They belong to several research areas inside discrete mathematics and theoretical computer science. Some
of them are baby steps towards resolving much bigger open problems. May this new year shed new light on these open problems.
• 2. Optimization : Improve the approximation factor for the undirected graphic TSP. The best known bound is 7/5 by Sebo and Vygen.
• 3. Algorithms : Prove that the tree-width of a planar graph can be computed in polynomial time (or) is NP-complete.
• 4. Fixed-parameter tractability : Treewidth and Pathwidth are known to be fixed-parameter tractable. Are directed treewidth/DAG-width/Kelly-width (generalizations of treewidth) and directed
pathwidth (a generalization of pathwidth) fixed-parameter tractable ? This is a very important problem to understand the algorithmic and structural differences between undirected and directed
width parameters.
• 5. Space complexity : Is Planar ST-connectivity in logspace ? This is perhaps the most natural special case of the NL vs L problem. Planar ST-connectivity is known to be in $UL \cap coUL$. Recently, Imai, Nakagawa, Pavan, Vinodchandran and Watanabe proved that it can be solved simultaneously in polynomial time and approximately O(√n) space.
• 6. Metric embedding : Is the minor-free embedding conjecture true for partial 3-trees (graphs of treewidth 3) ? The minor-free conjecture states that "every minor-free graph can be embedded in $l_1$ with constant distortion". The special case of planar graphs also seems very difficult. I think the special case of partial 3-trees is a very interesting baby step.
• 7. Structural graph theory : Characterize pfaffians of tree-width at most 3 (i.e., partial 3-trees). It is a long-standing open problem to give a nice characterization of pfaffians and design a
polynomial time algorithm to decide if an input graph is a pfaffian. The special of partial 3-trees is an interesting baby step.
• 8. Structural graph theory : Prove that every minimal brick has at least four vertices of degree three. Bricks and braces are defined to better understand pfaffians. The characterization of
pfaffian braces is known (more generally characterization of bipartite pfaffians is known). To understand pfaffians, it is important to understand the structure of bricks. Norine,Thomas proved
that every minimal brick has at least three vertices of degree three and conjectured that every minimal brick has at least cn vertices of degree three.
• 9. Communication Complexity : Improve bounds for the log-rank conjecture. The best known bound is $O(\sqrt{rank})$
• 10. Approximation algorithms : Improve the approximation factor for the uniform sparsest cut problem. The best known factor is $O(\sqrt{logn})$.
Here are my conjectures for 2014 :)
• Weak Conjecture : at least one of the above 10 problems will be resolved in 2014.
• Conjecture : at least five of the above 10 problems will be resolved in 2014.
• Strong Conjecture : All of the above 10 problems will be resolved in 2014.
Have fun !!
TrueShelf 1.0
One year back (on 6/6/12) I announced a beta version of TrueShelf, a social network for sharing exercises and puzzles, especially in mathematics and computer science. After a year of testing and adding new features, now I can say that TrueShelf is out of beta.
TrueShelf turned out to be a very useful website. When students ask me for practice problems (or books) on a particular topic, I simply point them to trueshelf and tell them the tags related to that
topic. When I am advising students on research projects, I first tell them to solve all related problems (in the first couple of weeks) to prepare them to read research papers.
Here are the features in TrueShelf 1.0.
• Post an exercise (or) multiple-choice question (or) video (or) notes.
• Solve any multiple-choice question directly on the website.
• Add topic and tags to any post
• Add source or level (high-school/undergraduate/graduate/research).
• Show text-books related to a post
• Show related posts for every post.
• View printable version (or) LaTex version of any post.
• Email / Tweet / share on facebook (or) Google+ any post directly from the post.
• Add any post to your Favorites
• Like (a.k.a upvote) any post.
Feel free to explore TrueShelf, contribute new exercises and let me know if you have any feedback (or) new features you want to see. You can also follow TrueShelf on facebook, twitter and google+
. Here is a screenshot highlighting the important features.
Open Problems from Lovasz and Plummer’s Matching Theory Book
I always have exactly one bed-time mathematical book to read (for an hour) before going to sleep. It helps me learn new concepts and hopefully stumble upon interesting open problems. My current bed-time book is Matching Theory by Lovász and Plummer.
If you are interested in learning the algorithmic and combinatorial foundations of Matching Theory (with a historic perspective), then this book is a must read. Today's post is about the open problems mentioned in the Matching Theory book. If you know the status (or progress) of these problems, please leave a comment.
1 . Consistent Labeling and Maximum Flow
Conjecture (Fulkerson) : Any consistent labelling procedure results in a maximum flow in polynomial number of steps.
2. Toughness and Hamiltonicity
The toughness of a graph $G$, $t(G)$, is defined to be $+\infty$ if $G = K_n$, and to be $\min_S(|S|/c(G-S))$ if $G \neq K_n$, where the minimum is over all vertex sets $S$ whose removal disconnects $G$. Here $c(G-S)$ is the number of components of $G-S$.
Conjecture (Chvatal 1973) : There exists a positive real number $t_0$ such that for every graph $G$, $t(G) \geq t_0$ implies $G$ is Hamiltonian.
3. Perfect Matchings and Bipartite Graphs
Theorem : Let $X$ be a set, $X_1, \dots, X_t \subseteq X$ and suppose that $|X_i| \leq r$ for $i = 1, \dots, t$. Let $G$ be a bipartite graph such that
a) $X \subseteq V(G)$,
b) $G - X_i$ has a perfect matching , and
c) if any edge of $G$ is deleted, property (b) fails to hold in the resulting graph.
Then, the number of vertices in $G$ with degree $\geq 3$ is at most $r^3 {t \choose 3}$.
Conjecture : The conclusion of the above theorem holds for non-bipartite graphs as well.
4. Number of Perfect Matchings
Conjecture (Schrijver and W.G.Valiant 1980) : Let $\Phi(n,k)$ denote the minimum number of perfect matchings a k-regular bipartite graph on 2n points can have. Then, $\lim_{n \to \infty} (\Phi(n,k))^
{\frac{1}{n}} = \frac{(k-1)^{k-1}}{k^{k-2}}$.
5. Elementary Graphs
Conjecture : For $k \geq 3$ there exist constants $c_1(k) > 1$ and $c_2(k) > 0$ such that every k-regular elementary graph on 2n vertices, without forbidden edges , contains at least $c_2(k){\cdot}
c_1(k)^n$ perfect matchings. Furthermore $c_1(k) \to \infty$ as $k \to \infty$.
6. Number of colorations
Conjecture (Schrijver’83) : Let G be a k-regular bipartite graph on 2n vertices. Then the number of colorings of the edges of G with k given colors is at least $(\frac{(k!)^2}{k^k})^n$.
7. The Strong Perfect Graph Conjecture (resolved)
Theorem : A graph is perfect if and only if it does not contain, as an induced subgraph, an odd hole or an odd antihole.
I have been teaching (courses related to algorithms and complexity) for the past six years (five years as a PhD student at GeorgiaTech, and the past one year at Princeton). One of the most challenging and interesting parts of teaching is creating new exercises to help teach the important concepts in an efficient way. We often need lots of problems to include in homeworks, midterms, final exams and also to create practice problem sets.
We do not get enough time to teach all the concepts in class because the number of hours/week is bounded. I personally like to teach only the main concepts in class and design good problem sets so
that students can learn the generalizations or extensions of the concepts by solving problems hands-on. This helps them develop their own intuitions about the concepts.
Whenever I need a new exercise I hardly ever open a physical textbook. I usually search on the internet and find exercises from a course website (or) “extract” an exercise from a research paper. There are
hundreds of exercises “hidden” in pdf files across several course homepages. Instructors often spend lots of time designing them. If these exercises can reach all the instructors and students across
the world in an efficiently-indexed form, that will help everybody. Instructors will be happy that the exercises they designed are not confined to just one course. Students will have an excellent
supply of exercises to hone their problem-solving skills.
During 2008, half-way through my PhD, I started collecting the exercises I like in a private blog. At the same time I registered the domain trueshelf.com to make these exercises public. In 2011, towards the end of my PhD, I started using the trueshelf.com domain and made a public blog so that anybody can post an exercise. [ Notice that I did not use the trueshelf.com domain for three years. During these three years I got several offers ranging up to $5000 to sell the domain. So I knew I got the right name :) ] Soon, I realized that wordpress is somewhat “static” in nature and does not have enough “social” features I wanted. A screenshot of the old website is shown below.
The new version of TrueShelf is a social website enabling “crowd-sourcing” of exercises in any area. Here is the new logo, which I am excited about :)
The goal of TrueShelf is to aid both the instructors and students by presenting quality exercises with tag-based indexing. Read the TrueShelf FAQ for more details. Note that we DO NOT allow users to
post solutions. Each user may add his own “private” solution and notes to any exercise. I am planning to add more features soon.
In the long-run, I see TrueShelf becoming a “Youtube for exercises”. Users will be able to create their own playlists of exercises (a.k.a problem sets) and will be recommended relevant exercises.
Test-preparation agencies will be able to create their own channels to create sample tests.
Feel free to explore TrueShelf, contribute new exercises and let me know if you have any feedback (or) new features you want to see. You can also follow TrueShelf on facebook, twitter and google+.
Let’s see how TrueShelf evolves.
Linear Complementarity Problem
Linear Complementarity Problem (LCP) is a generalization of Linear Programming and a special case of quadratic programming. I stumbled upon LCP theory due to my interest in complexity problems in
game theory and PPAD-completeness. As we will see these concepts are very closely related.
Let M be an $n \times n$ square matrix and q an n-dimensional vector. Let LCP(q,M) be the following problem : Find two vectors w and z satisfying

$w - Mz = q$, $w \geq 0$, $z \geq 0$, ${w_i}{z_i} = 0$ for all $i$.

LCP(q,M) consists of linear constraints and complementarity conditions. Since $w{\geq}0, z{\geq}0$, the complementarity conditions ${w_i}{z_i}=0$ are equivalent to ${w^T}{z}=0$. There is an obvious
exponential time algorithm to solve LCP. For every i, set either $w_i=0$ or $z_i=0$ and solve the resulting system of linear equations. If one of these systems has a nonnegative solution, then the corresponding LCP is solvable. Deciding if a given LCP has a solution is NP-complete. The following exercise shows that LCP is a generalization of LP.
Exercise : Every LP can solved by solving a corresponding LCP, representing the complementary slackness of the LP.
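The exponential-time enumeration described above can be sketched as follows. This is a toy illustration: the 2×2 instance is an assumed example, and its positive definite M guarantees a solution exists.

```python
import itertools
import numpy as np

def lcp_bruteforce(M, q):
    """Try every complementary choice: for each index set, force w_i = 0
    or z_i = 0 and solve the linear system w - M z = q, accepting the
    first nonnegative solution found."""
    n = len(q)
    I = np.eye(n)
    for choice in itertools.product([False, True], repeat=n):
        # Unknown x_i is z_i where choice[i] is True (so w_i = 0),
        # and is w_i otherwise (so z_i = 0); then A x = w - M z.
        A = np.column_stack([-M[:, i] if c else I[:, i]
                             for i, c in enumerate(choice)])
        try:
            x = np.linalg.solve(A, q)
        except np.linalg.LinAlgError:
            continue
        if np.all(x >= -1e-9):
            z = np.where(choice, x, 0.0)
            w = np.where(choice, 0.0, x)
            return w, z
    return None  # no solution among the 2^n complementary systems

# Assumed toy instance with a positive definite M:
M = np.array([[2.0, 1.0], [1.0, 2.0]])
q = np.array([-1.0, -1.0])
w, z = lcp_bruteforce(M, q)
```

The returned pair satisfies $w = Mz + q$, nonnegativity, and complementarity; for this instance the solution is $w = 0$, $z = (1/3, 1/3)$.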
LCP can also be expressed in the following equivalent form : find $z \geq 0$ such that $Mz + q \geq 0$ and $z^T(Mz+q) = 0$.

Lemke's algorithm is a "path-following" algorithm (similar to the simplex algorithm) to solve LCP. Unfortunately, Lemke's algorithm can sometimes fail to produce a solution even if one exists !! There are many special instances of LCP on which Lemke's algorithm always produces a solution or a certificate that no solution exists.
As mentioned earlier, solving an LCP is NP-complete. What about special cases ? i.e., when the input matrix M is special.
• If M is a Positive Semi-Definite matrix, then LCP(q,M) can be solved in polynomial time. In fact, every LCP with a PSD matrix is a convex quadratic program and every convex quadratic program can
be expressed as an LCP with a PSD matrix.
• If M is a Z-matrix, Chandrasekaran’s algorithm solves LCP(q,M) in polynomial time [Chandrasekaran'70].
• If M is a triangular P-matrix, LCP(q,M) can be solved in polynomial time by using a back substitution method.
• If M is a P-matrix, LCP(q,M) has a unique solution for every q.
Following is one of the coolest applications of LCP.
Exercise : Finding a Nash Equilibrium in a bimatrix game can be expressed as an LCP.
Lemke-Howson’s algorithm [Lemke,Howson'64] to solve a bimatrix game is known to take an exponential number of steps in the worst case [Savani, vonStengel'04]. It is also known that finding a Nash equilibrium in a bimatrix game is PPAD-complete [Chen,Deng'09].
Open Problems :
□ The complexity of solving LCP with a P-matrix (P-LCP) has been open for more than two decades !! P-LCP is known to be in PPAD [Papadimitriou'94]. Note that recognizing Z-matrices and PSD-matrices can be done in polynomial time, but recognizing P-matrices is coNP-complete [Coxson'94].
□ Are there other interesting classes of matrices M for which LCP(q,M) is solvable in polynomial time ?
□ Savani and von Stengel’s instance of bimatrix game has a “full support” mixed equilibrium, which can easily be solved using linear programming techniques. It is an open problem to construct an instance of a bimatrix game that does not have a full-support mixed equilibrium and on which the Lemke-Howson algorithm takes an exponential number of steps.
References :
• [Chandrasekaran'70] R. Chandrasekaran. “A Special Case of the Complementary Pivot Problem”, Opsearch, 7 (1970), 263–268.
• [Coxson'94] G. E. Coxson. The P-matrix problem is co-NP-complete. Math. Programming, 64(2):173–178, 1994.
• [Chen,Deng'09] Xi Chen, Xiaotie Deng, Shang-Hua Teng: Settling the complexity of computing two-player Nash equilibria. J. ACM 56(3): (2009)
• [Savani, vonStengel'04] Rahul Savani, Bernhard von Stengel: Exponentially Many Steps for Finding a Nash Equilibrium in a Bimatrix Game. FOCS 2004: 258-267
• [Lemke,Howson'64] Lemke, C. E. and J. T. Howson, Jr. (1964), Equilibrium points of bimatrix games. Journal of the Society for Industrial and Applied Mathematics 12, 413–423.
• [Papadimitriou'94] Christos H. Papadimitriou: On the Complexity of the Parity Argument and Other Inefficient Proofs of Existence. J. Comput. Syst. Sci. 48(3): 498-532 (1994)
Held-Karp Relaxation
The Traveling Salesman Problem (TSP) is undoubtedly the most important and well-studied problem in Combinatorial Optimization. Today’s post is a quick overview of the Held-Karp Relaxation of TSP.
TSP : Given a complete undirected graph $G(V,E)$ with non-negative costs $c_e$ for each edge $e \in E$, find a Hamiltonian cycle of G with minimum cost. It is well-known that this problem is NP-hard.
Exercise : There is no $\alpha$-approximation algorithm for TSP (for any $\alpha \geq 1$) unless P=NP.
Metric TSP : In Metric-TSP, the edge costs satisfy the triangle inequality i.e., for all $u,v,w \in V$, $c(u,w) \leq c(u,v) + c(v,w)$. Metric-TSP is also NP-complete. Henceforth, we shall focus on metric TSP.
Symmetric TSP (STSP) : In STSP, the edge costs are symmetric i.e., $c(u,v) = c(v,u)$. Approximation algorithms with factor 2 (find a minimum spanning tree (MST) of $G$ and use shortcuts to obtain a
tour) and factor 3/2 (find an MST, find a perfect matching on the odd-degree nodes of the MST to get an Eulerian graph and obtain a tour) are well-known. The factor 3/2 algorithm, known as
Christofides Algorithm [Christofides'76], is the best known approximation factor for STSP. No improvement in the last three decades !!
Following is the Held-Karp Relaxation for STSP with the cut constraints and the degree constraints. The variables are $x_e$, one for each edge $e \in E$. For a subset $S \subset V$, $\delta(S)$
denotes the edges incident to $S$. Let $x(\delta(S))$ denote the sum of values of $x_e$ of the edges with exactly one endpoint in $S$. For more details of Held-Karp relaxation see [HK'70, HK'71]
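The LP itself appeared as an image in the original post and is not reproduced here; the standard Held-Karp formulation for STSP (an assumed reconstruction, matching the degree and cut constraints just described) is:

```latex
\begin{aligned}
\min\ & \sum_{e \in E} c_e x_e \\
\text{s.t.}\ & x(\delta(v)) = 2 \qquad \text{for all } v \in V \quad \text{(degree constraints)} \\
& x(\delta(S)) \geq 2 \qquad \text{for all } \emptyset \neq S \subsetneq V \quad \text{(cut constraints)} \\
& 0 \leq x_e \leq 1 \qquad \text{for all } e \in E.
\end{aligned}
```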
Exercise : In the following instance of STSP the cost between vertices u and v is the length of the shortest path between u and v. The three long paths are of length k. Prove that this instance
achieves an integrality ratio arbitrarily close to 4/3 (as k is increased).
Asymmetric TSP (ATSP) : In ATSP, the edge costs are not necessarily symmetric i.e., the underlying graph is directed. The Held-Karp relaxation for ATSP is as follows :
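The LP referred to here was an image in the original post; a standard reconstruction (assumed, with $A$ the arc set and $\delta^{+}(S)$, $\delta^{-}(S)$ the arcs leaving and entering $S$) is:

```latex
\begin{aligned}
\min\ & \sum_{a \in A} c_a x_a \\
\text{s.t.}\ & x(\delta^{+}(v)) = x(\delta^{-}(v)) = 1 \qquad \text{for all } v \in V, \\
& x(\delta^{+}(S)) \geq 1 \qquad \text{for all } \emptyset \neq S \subsetneq V, \\
& 0 \leq x_a \leq 1 \qquad \text{for all } a \in A.
\end{aligned}
```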
Charikar, Goemans and Karloff [CGK'04] showed that the integrality ratio of the Held-Karp relaxation for ATSP is at least $2-\epsilon$. Frieze, Galbiati and Maffioli [FGM'82] gave a simple $O({\log}_2{n})$-approximation algorithm for ATSP in 1982, where n is the number of vertices. In the last eight years, this was improved to a guarantee of 0.999 ${\log}_2{n}$ by Blaser [Blaser'02], to $\frac{4}{3}{\log}_3{n}$ by Kaplan et al [KLSS'03] and to $\frac{2}{3}{\log}_2{n}$ by Feige and Singh [FS'07]. So we have an approximation factor better than ${\ln}n$ !!
Open Problems :
□ The long-standing open problem is to determine the exact integrality gap of the Held-Karp relaxation. Many researchers conjecture that the integrality gap of the Held-Karp relaxation for STSP is 4/3
and for ATSP it is bounded by a constant. The best known upper bounds are 3/2 and $O(\log n)$ respectively.
□ The size of the integrality gap instance of ATSP (constructed by [CGK'04]) is exponential in $1/\epsilon$ to achieve an integrality gap of $2-\epsilon$. Is there a polynomial-sized (in $1/\epsilon$) instance achieving an integrality gap of $2-\epsilon$ ?
References :
• [HK'70] Michael Held and Richard M. Karp, The Traveling Salesman Problem and Minimum Spanning Trees, Operations Research 18, 1970, 1138–1162.
• [HK'71] Michael Held and Richard Karp, The Traveling-Salesman Problem and Minimum Spanning Trees: Part II, Mathematical Programming 1, 1971, 6–25.
• [Christofides'76] Nicos Christofides, Worst-case analysis of a new heuristic for the travelling salesman problem, Report 388, Graduate School of Industrial Administration, CMU, 1976.
• [FGM'82] A. M. Frieze, G. Galbiati and F. Maffioli, On the Worst-Case Performance of Some Algorithms for the Asymmetric Traveling Salesman Problem, Networks 12, 1982, 23–39.
• [Blaser'02] M. Blaser, A New Approximation Algorithm for the Asymmetric TSP with Triangle Inequality, Proceedings of the 14th Annual ACM-SIAM Symposium on Discrete Algorithms, 2002, 638–645.
• [KLSS'03] H. Kaplan, M. Lewenstein, N. Shafrir and M. Sviridenko, Approximation Algorithms for Asymmetric TSP by Decomposing Directed Regular Multigraphs, Proceedings of the 44th Annual IEEE
Symposium on Foundations of Computer Science, 2003, 56–67.
• [CGK'04] Moses Charikar, Michel X. Goemans, Howard J. Karloff: On the Integrality Ratio for Asymmetric TSP. FOCS 2004: 101-107
• [FS'07] Uriel Feige, Mohit Singh: Improved Approximation Ratios for Traveling Salesperson Tours and Paths in Directed Graphs. APPROX-RANDOM 2007: 104-118
Scarf’s Lemma
Proof and Applications of Scarf’s Lemma…….
Today’s post is about Scarf’s Lemma, my recent obsession. I learnt about Scarf’s Lemma while reading Haxell & Wilfong’s paper [HW'08] on Fractional Stable Paths Problem (FSPP). I will write about
FSPP in a future post. Today’s post is about the elegant proof of Scarf’s Lemma and its wonderful applications.
Scarf’s Lemma : Let $m < n$ and let $B$ be an $m \times n$ real matrix such that $b_{ij} = \delta_{ij}$ for $1 \leqslant i, j \leqslant m$. Let $b$ be a non-negative vector in ${\mathbb{R}}^m$,
such that the set $\{\alpha\in{\mathbb{R}}_{+}^n : B{\alpha}=b\}$ is bounded. Let $C$ be an $m \times n$ matrix such that $c_{ii} \leqslant c_{ik} \leqslant c_{ij}$ whenever $i,j \leqslant m$, $i \neq j$ and $k > m$. Then there exists a subset $J$ of size $m$ of $[n]$ such that
□ (feasible) : $B{\alpha}=b$ for some $\alpha\in{\mathbb{R}}_{+}^n$ such that $\alpha_j=0$ whenever $j \notin J$.
□ (subordinating) : For every $k \in [n]$ there exists $i \in [m]$ such that $c_{ik} \leqslant c_{ij}$ for all $j \in J$.
Proof of Scarf’s Lemma :
We want an $\alpha\in{\mathbb{R}}_{+}^n$ which is simultaneously feasible for $B$ and subordinating for $C$. Note that it is easy to find a feasible $x$ and a subordinating $y$ that are “almost the same”. Choose $x = [m]$. Choose $y = (2,3,....,m,j)$ where j is selected from all of the columns $k > m$ so as to maximize $c_{1k}$. Now $x$ and $y$ have $m-1$ columns in common. To find the required $\alpha$ we shall apply the following (feasible and ordinal) pivot steps. Throughout we shall maintain this relationship of having at least $m-1$ common columns. This elegant and powerful idea was first introduced in the Lemke-Howson Algorithm. I will talk about the Lemke-Howson algorithm in a future post.
Assuming non-degeneracy for matrices $B$ and $C$ the following lemmas hold. ($C$ is said to be non-degenerate if no two elements in the same row are equal).
(i) Feasible Pivot Step : Let $J$ be a feasible basis for $(B, b)$, and $k\in[n]\setminus{J}$. Then there exists a unique $j \in J$ such that $J+k-j$ (i.e., $J\cup\{k\}{\setminus}\{j\}$) is a
feasible basis.
(ii) Ordinal Pivot Step : Let $K$ be a subordinating set for $C$ of size $m-1$. Then there are precisely two elements $j \in [n]{\setminus}K$ such that $K+j$ is subordinating for $C$, unless $K\subseteq[m]$, in which case there exists precisely one such j.
Proof of (i) is well-known. For the proof of (ii), refer to [Schrijver's Book]. To prove Scarf’s lemma we construct a bipartite graph $\mathcal{G}$ with partitions $A$ (the set of all feasible bases
containing $1$) and $B$ (the set of all subordinating sets of size $m$ not containing $1$). A vertex $a \in A$ is joined to a vertex $b \in B$ if $a\setminus b = \{1\}$.
Exercise : Prove that every node (except $[m]$ and the required solution $\alpha$) in $\mathcal{G}$ has degree two.
Since $[m]$ is not subordinating, the required solution $\alpha$ which is both feasible and subordinating must exist. Note that this proof gives a $PPA$ membership of the computational version of
Scarf’s lemma. For $PPAD$-membership and $PPAD$-completeness of Scarf’s lemma and its applications (mentioned below) see [KPRST'09].
Applications of Scarf’s Lemma :
Scarf’s lemma provides an elegant proof for a number of “fractional stability type” problems. Here is the list, starting with Scarf’s original paper that introduced the Scarf’s lemma.
Theorem (Scarf’67) : Every balanced game with non-transferable utilities has a non-empty core.
Theorem (AH’98) : Every clique-acyclic digraph has a strong fractional kernel.
Theorem (AF’03) : Every hypergraphic preference system has a fractional stable solution.
Theorem (HW’08) : Every instance of Stable Paths Problem has a fractional stable solution.
Open Problems : Are there other unexplored applications of Scarf’s lemma ? It is known [Scarf'67] that Scarf’s lemma provides a combinatorial proof of Brouwer’s fixed point theorem. Can we use
Scarf’s lemma to prove other fixed-point theorems, for example, geometric stability theorems from topology ? The above mentioned applications are all $PPAD$-complete [KPRST'09]. Are there
interesting applications of Scarf’s lemma that are polynomial time solvable ?
References :
• [Scarf'67] Herbert E. Scarf : The Core of an N Person Game. Econometrica, Vol 35, 50-69 (1967)
• [AH'98] Ron Aharoni, Ron Holzman : Fractional Kernels in Digraphs. J. Comb. Theory, Ser. B 73(1): 1-6 (1998)
• [AF'03] Ron Aharoni, Tamás Fleiner : On a lemma of Scarf. J. Comb. Theory, Ser. B 87(1): 72-80 (2003)
• [HW'08] Penny E. Haxell, Gordon T. Wilfong : A fractional model of the border gateway protocol (BGP). SODA 2008: 193-199
• [KPRST'09] Shiva Kintali, Laura J. Poplawski, Rajmohan Rajaraman, Ravi Sundaram, Shang-Hua Teng : Reducibility Among Fractional Stability Problems. Electronic Colloquium on Computational Complexity (ECCC) TR09-041 (2009) [pdf]
• [Schrijver's Book] Alexander Schrijver : Combinatorial Optimization, Polyhedra and Efficiency, Volume B, Springer-Verlag Berlin Heidelberg, (2003)
Belvedere, CA Algebra 2 Tutor
Find a Belvedere, CA Algebra 2 Tutor
...I teach my students both the mathematical concepts of statistics/probability and how to deal with word problems: recognize the appropriate statistical setting described in the scenario and
apply the correct method to solve the problem. The positive reviews left by many of my statistics students ...
14 Subjects: including algebra 2, calculus, geometry, statistics
...Civil Engineering, Carnegie-Mellon University M.S., Ph.D. Environmental Engineering Science, California Institute of Technology (Caltech) Dr. G.'s qualifications include a Ph.D. in engineering
from CalTech (including a minor in numerical methods/applied math) and over 25 years experience as a practicing environmental engineer/scientist.
13 Subjects: including algebra 2, calculus, statistics, physics
...I am a certified EMT via the San Francisco Paramedics Association; thus, I am CPR, First Aid, and AED certified. This course taught me how to remain calm in any emergency situation as well as
provide proper care to an injured person. In addition, this includes proper administration of common prescription medications such as inhalers, vasodilators, vasoconstrictors, etc.
30 Subjects: including algebra 2, English, Spanish, reading
...Also, as an undergrad, I was a Teaching Assistant for the intro to probability and statistics course at Caltech. I have been a Teaching Assistant (TA) for a number of probability courses, both
at Caltech and at Cal. As an undergrad at Caltech, I was a TA for the intro to probability and statistics course required for all undergrad students.
27 Subjects: including algebra 2, chemistry, physics, geometry
...I have received my doctoral degree in clinical psychology. I have provided treatment and assessment for children presenting with ADD/ADHD. I completed behavioral plans with families.
30 Subjects: including algebra 2, English, reading, grammar
Computational and Theoretical
• About
Newly emerging "electron correlation" devices made out of transition metal oxide heterostructures (Sr(Zr)TiO3), battery materials (LiMPO4, with M = Mn, Fe, Co, and Ni), and new molecular magnets
used in quantum computing are at the heart of new experimental developments in materials and chemical sciences. Such experimental progress poses many questions to our theoretical understanding.
The answers can be found using a combination of modeling and theory to support the experiment. In our group, to tackle these important questions we are developing controlled, reliable, and
systematically improvable theoretical methods that describe correlation effects and are able to treat solids and large molecules realistically.
Our work is interdisciplinary in nature and connects three fields: chemistry, physics and materials science. Our goal is to develop theoretical tools that give access to directly experimentally
relevant quantities. We develop and apply codes that describe two types of electronic motion: (i) weakly correlated electrons originating from the delocalized "wave-like" s- and p-orbitals,
responsible for many electron correlation effects in molecules and solids that do not contain transition metal atoms; and (ii) strongly correlated electrons residing in the d- and f-orbitals that
remain localized and behave "particle-like", responsible for many very interesting effects in molecules containing d- and f-electrons (transition metal nano-particles used in catalysis,
nano-devices with Kondo resonances, and molecules of biological significance such as the active centers of metalloproteins). The mutual coupling of these two types of electronic motion is challenging to
describe, and currently only a few theories can properly account for both types of electronic correlation effects simultaneously.
Available research projects in the group involve (1) working on a new theory that is able to treat weakly and strongly correlated electrons in molecules with multiple transition metal centers
with applications to molecular magnets and active centers of enzymes (2) developing a theory for weakly correlated electrons that is able to produce reliable values of band gaps in semiconductors
and heterostructures used in solar cells industry (3) applying the QM/QM embedding theories developed in our group to catalysis on transition metal-oxide surfaces and (4) applying the embedding
formalism to molecular conductance problems in order to include correlation effects.
Representative Publications
D. Zgid, E. Gull and G. K-. L. Chan, "Dynamical mean-field theory from a quantum chemical perspective", J. Chem. Phys., 134, 094115 (2011) (JCP Editors' Choice for 2011)
D. Zgid, D. Ghosh, E. Neuscamman, and G. K-. L. Chan, "A study of cumulant approximations to n-electron valence multireference perturbation theory", J. Chem. Phys. 130, 194107 (2009)
D. Zgid and M.Nooijen, "The density matrix renormalization group self-consistent field method: Orbital optimization with the density matrix renormalization group method in the active space", J.
Chem. Phys. 128, 144116 (2008)
D. Zgid and M. Nooijen, "Obtaining the two-body density matrix in the density matrix renormalization group method", J. Chem. Phys. 128, 144115 (2008)
D. Zgid and M. Nooijen, "On the spin and symmetry adaptation of the density matrix renormalization group method", J. Chem. Phys. 128, 014107 (2008)
• Education
□ Ph.D., University of Waterloo, Canada
PostDoc, Cornell University, Columbia University
• Research Areas of Interest
□ Computational/Theoretical Chemistry
Energy Science
Inorganic Chemistry
Materials Chemistry
Physical Chemistry
Surface Chemistry
Electronic structure of molecules and crystalline systems | {"url":"http://www.lsa.umich.edu/vgn-ext-templating/v/index.jsp?vgnextoid=79a2958b47380410VgnVCM100000c2b1d38dRCRD&vgnextchannel=74502d58a73df310VgnVCM100000c2b1d38dRCRD&vgnextfmt=detail","timestamp":"2014-04-16T10:34:47Z","content_type":null,"content_length":"19322","record_id":"<urn:uuid:b0699b66-7f80-48bb-87c6-e3a04e2e6dd4>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00062-ip-10-147-4-33.ec2.internal.warc.gz"} |
Torsion of the curve
October 26th 2009, 04:09 AM #1
Apr 2009
Torsion of the curve
a) Prove for each t for which
(b) Show that
(c) Verify the validity of the following Frenet-Serret formulas:
(d) Show that if the curvature is identically zero, then the curve is a straight line
(e) Show that if the torsion is identically zero, then the curve lies in a plane
(f) If the torsion is identically zero and the curvature is a nonzero constant, then show that the curve is a circle.
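The formulas for parts (a)-(c) appeared as images in the original post and are missing here. For part (c), the standard Frenet-Serret formulas (presumably the ones intended) for a unit-speed curve with unit tangent $T$, principal normal $N$, binormal $B$, curvature $\kappa$ and torsion $\tau$ are:

```latex
\frac{dT}{ds} = \kappa N, \qquad \frac{dN}{ds} = -\kappa T + \tau B, \qquad \frac{dB}{ds} = -\tau N.
```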
I already have part (a) answered need help with the others though
Ferrocement tank book Index
purchase tank book
Website General Construction Guide
Continue to Chapter Two
Chapter One: Site Preparation and Calculations
60 cubic meters
Water is heavy. Be sure to locate the tank on solid ground. Cut enough room for the entire tank to sit on solid ground if the tank is going to be on a hillside. Excavated soil is not good for a tank
site because it will settle over time. Ferrocement water tanks last for decades and stable ground is important.
Enough area for working is also important, especially on the uphill side. Make the site large enough so dirt and rocks don't fall into the steel armature. Contamination entangled in the structure is
a problem to avoid during construction. The area made up of excavated fill is a good place for the access road to terminate and to store materials. If this is a large tank and the excavated material
is a mountain of dirt poised to cause damage below during a flood year, then it should be placed on a bench cut of its own and be compacted for stability and safety.
Volume Calculation:
πr^2h = volume (where π = 3.14, r = radius, and h = height)
The following example is for a tank of sixty cubic meters; height is 2.13 meters.
πr^2(2.13) = 60 cubic meters (sixty thousand liters)
r^2 = 60 cubic meters ÷ (2.13 x 3.14) = 8.971 m^2
r = radius = 3 meters
2r = diameter = d = 6 meters
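The volume calculation above can be checked with a short script (a sketch, not part of the original guide; it keeps the guide's rounded value of pi, and the variable names are mine):

```python
import math

PI = 3.14        # the guide rounds pi to 3.14 throughout
volume = 60.0    # target capacity, cubic meters
height = 2.13    # tank height, meters

# Invert volume = pi * r^2 * h to solve for the radius.
r_squared = volume / (PI * height)
radius = math.sqrt(r_squared)
diameter = 2 * radius

print(f"r^2 ≈ {r_squared:.3f} m^2")                # 8.971, as in the text
print(f"radius ≈ {radius:.1f} m, diameter ≈ {diameter:.1f} m")
```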
Strength Calculations:
Sixty cubic meters is used in this example because many ferrocement tanks have been built of this size and there have been no problems, even after twenty-five to thirty years. Tanks of this age in
the 200 to 400 cubic meter class have likewise shown no problems. Two hundred cubic meters is somewhat more difficult to build and 400 cubic meters is the beginning of a heavier construction project
Convert the depth into pressure, measured in grams per square centimeter and calculate the circumference in centimeters.
πd = 3.14 x 6 meters = circumference = 1884 centimeters.
The pressure on a square centimeter (kg/cm^2) = the depth of 2.13 meters = 0.213 kilograms per square centimeter.
This means that there is 0.213 kilograms of outward pressure on a one centimeter square at the bottom of the tank wall. Since the wall is 1884 centimeters around, the total outward force on the
bottom centimeter of wall is 0.213 x 1884 = 401 kilograms.
The next step is to determine the strength of the wall as it resists this outward pressure. The concrete plaster is only considered as waterproofing for the steel in this calculation. All the
strength is assumed to be in the steel. Add up the horizontal strands of welded wire and the bars which encircle the tank. Count the welded wire and the reinforcing bars separately since they are
different strengths of steel. Reinforcing steel is 3515 kilograms of tensile strength per square centimeter and the welded wire is 6328 kg/cm^2.
There are five horizontal wires and two reinforcing bars in the bottom thirty centimeters of this sixty cubic meter tank. Ignore the welded wire bent to come up and out of the floor until further
along the discussion. Standard welded wire is ten gauge wire on 7.5 centimeter squares. Ten gauge wire is 0.356 cm diameter.
πr^2 = 0.1 square centimeters of steel times five wires = 0.5 square centimeters. Multiply this by 6328 kilograms per square centimeter = 3164 kilograms of tensile strength in the bottom 30
centimeters of wall. Divide by 30 to compute the welded wire strength in an average centimeter of wall. 3164 ÷ 30 = 105 kilograms of horizontal welded wire tensile strength per average vertical
centimeter of wall. The same calculation is done for two horizontal wraps of #4 bar (1.27 centimeters).
πr^2 multiplied by 2 multiplied by 3515 kilograms of tensile strength per square centimeter = 7030 kilograms of tensile strength in the reinforcing bar, in the bottom 30 cm of wall. Divide by 30 to
find the average strength in a centimeter of wall. 7030 ÷ 30 = 234.
The total wall steel strength is 234 + 105 kilograms = 339 kilograms of tensile strength in the steel. There is an additional #4 bar in the floor-to-wall key which brings the steel strength figure to
456 kilograms.
The final step in comparing steel tensile strength to water force is to draw a circle and quarter it as pictured below.
Imagine all the water force as concentrated in one direction along arrow B. The small circle at A is an anchor. Arrow B pulls with a force of 401 kilograms, which is the total outward water force on
the bottom centimeter of wall (calculated above).
Imagine next that the tank wall is infinitely strong except where the line CD cuts the tank in half. At points C and D the wall is the tensile strength of the steel calculations; 456 kilograms at C
and 456 kilograms at D. Total wall steel strength the water must break is thus 912 kilograms. Steel tensile strength divided by water force is 912 ÷ 401 = 2.3; the wall steel is 2.3 times stronger
than the water force.
Note 1: The welded wire coming out of the floor adds enough to bring the steel strength figure to almost 2.5 times stronger than water force, assuming that all the wires are at 45 degrees.
Note 2: An impression of just how strong ferrocement is for structures other than tanks is gained by reversing arrow B; push instead of pull. Well cured ferrocement easily has 550 kilograms of
compression strength per square centimeter. If a structural wall is eight centimeters thick, points C and D would add 8800 kilograms to the 912 kilograms of steel strength. Arrow B must push with a
force greater than 9700 kilograms to crush a one centimeter wide arc of ferrocement, at points C and D.
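The hoop-strength comparison can be reproduced numerically; this sketch uses the chapter's own rounded per-centimeter steel strengths (105 kg welded wire, 234 kg for the two wall bars, 117 kg for the floor-key bar), and the variable names are mine:

```python
PI = 3.14

# Outward water force on the bottom 1 cm of wall (60 m^3 tank).
depth_m = 2.13
pressure_kg_cm2 = depth_m * 0.1          # 1 m of water ≈ 0.1 kg/cm^2
circumference_cm = PI * 600              # 6 m diameter = 600 cm
water_force = pressure_kg_cm2 * circumference_cm   # ≈ 401 kg

# Steel tensile strength in one vertical cm of wall (chapter's figures).
steel_per_cm = 105 + 234 + 117           # = 456 kg

# The water must tear the wall at two points (C and D in the diagram).
safety_factor = 2 * steel_per_cm / water_force

print(f"water force ≈ {water_force:.0f} kg")       # 401
print(f"safety factor ≈ {safety_factor:.1f}")      # 2.3
```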
Economics (cost analysis):
Area calculations for 60m^3 tank: Floor or roof area = πr^2 = π3^2 = 28.26 m^2
Wall area = 2πr(height) = 2π(3)(2) = 37.5 m^2
Roof: The roof steel extends down the wall and the roof is also an arc.
Floor: To estimate floor steel add ten percent for waste and ten percent for the steel which extends beyond the circumference line before bending it to vertical position.
The result is (1.2)πr^2 = floor area calculation for steel. Add a little more for roof arc and use (1.25)πr^2 = roof area calculation for roof steel.
Floor or roof area multiplied by 2 (two layers of welded wire) = 56.5 m^2. Multiply this figure by the factors discussed previously. 56.5(1.2)(floor) + 56.5(1.25)(roof) = 138.4 ≈ 138m^2 of welded
wire in the roof and the floor.
Conclude the welded wire computation by adding the wall.
There are two layers of welded wire in the wall. 37.5m^2 multiplied by two = 75m^2; add 10 m^2 for wire overlaps and waste = 85 m^2.
The total for welded wire is 138m^2 for roof and floor plus 85m^2 for the wall = 223 m^2 of welded wire. The price of welded wire per m^2 multiplied by 223 m^2 = total cost of welded wire.
Calculation of reinforcing bars depends upon the spacing chosen between the bars and the length of a standard bar. Chapter two uses the grid space of 30 to 45 centimeters. Six meters is used further
on in this book as a standard length. The method used to calculate reinforcing steel is to visualize a square with side equal to the standard length of reinforcing steel. In this example it is a six meter
square with an area of 36m^2.
Nineteen bars create a spacing of 33.33 centimeters across six meters; with nineteen bars in each direction, this equals thirty-eight bars total. Divide 38 bars by 36 m^2 = 1.05 reinforcing steel bars per m^2. Add ten percent for waste
and overlaps and there are 1.15 bars per m^2.
28.26m^2 (roof) + 28.26m^2 (floor) + 37.5m^2 (wall) = 94m^2 (total).
1.15 bars/m^2 multiplied by 94m^2 = 108 bars of reinforcing steel at a 33.33 centimeter spacing.
This calculation at a 45 centimeter space between bars is 6 m divided by 45 cm, plus one bar = 14.33 bars. Multiply this by two for the total bars = 28.66. Divide by 36m^2 = .79 bars/m^2. Add ten
percent = .9 bars/m^2. Multiply by the total area (94m^2) and the reinforcing bars required equals 85.
Multiply the price of one reinforcing steel bar by the number of bars to compute the total cost of reinforcing steel bars.
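The two bar-spacing options can be compared with exact arithmetic (a sketch; the function and names are mine, and because the guide rounds intermediate values its totals of 108 and 85 differ by a few bars from the exact results):

```python
# Bars per square meter using the guide's 6 m "standard square" method:
# bars in one direction across the square, doubled for both directions,
# divided by the square's area, plus 10% for waste and overlaps.
def bars_per_m2(spacing_cm, standard_m=6.0):
    bars_one_way = standard_m * 100 / spacing_cm + 1
    return 2 * bars_one_way / standard_m**2 * 1.1

total_area = 28.26 + 28.26 + 37.5        # floor + roof + wall = 94 m^2

bars_33 = bars_per_m2(100 / 3) * total_area   # 33.33 cm spacing
bars_45 = bars_per_m2(45) * total_area        # 45 cm spacing

print(f"33.33 cm spacing: ≈ {bars_33:.0f} bars (guide: 108)")
print(f"45 cm spacing: ≈ {bars_45:.0f} bars (guide: 85)")
```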
Expanded metal for the inside of the roof and wall is wall plus roof areas multiplied by their use factors. 28.26(1.25) (roof) + 37.5(1.1) (wall) = 76.5 m^2.
Concrete is best estimated at 7.75 centimeter thickness multiplied by the total area plus approximately five percent for waste. The floor is estimated separately and done first.
A small volume factor (0.2) for the joint between wall and floor is added to the floor estimate. 28.26 m^2 (floor area) multiplied by 0.0775 m (thickness) multiplied by 1.2 = 2.6 m^3.
Roof and wall is (28.26 m^2 + 37.5 m^2)(0.0775)(1.05) = 5.35 m^3.
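The full take-off can be bundled into one script (a sketch following the guide's own rounded figures, including its 37.5 m^2 wall area; names are mine):

```python
PI = 3.14
radius = 3.0

floor_area = PI * radius**2    # 28.26 m^2 (roof area is the same)
wall_area = 37.5               # m^2, the guide's value for 2*pi*r*(height)

# Welded wire: two layers everywhere, plus the waste/extension factors.
ww_floor_roof = 2 * floor_area * 1.2 + 2 * floor_area * 1.25   # ≈ 138 m^2
ww_wall = 2 * wall_area + 10                                   # = 85 m^2
welded_wire = ww_floor_roof + ww_wall                          # ≈ 223 m^2

# Expanded metal lines the inside of the roof and wall.
expanded_metal = floor_area * 1.25 + wall_area * 1.1           # ≈ 76.5 m^2

# Concrete at 7.75 cm thickness plus joint/waste factors.
concrete_floor = floor_area * 0.0775 * 1.2                     # ≈ 2.6 m^3
concrete_roof_wall = (floor_area + wall_area) * 0.0775 * 1.05  # ≈ 5.35 m^3

print(f"welded wire ≈ {welded_wire:.0f} m^2")
print(f"expanded metal ≈ {expanded_metal:.1f} m^2")
print(f"concrete: floor ≈ {concrete_floor:.1f} m^3, roof+wall ≈ {concrete_roof_wall:.2f} m^3")
```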
Summary (60 m^3 tank):
Welded Wire ................... 223 m^2
Expanded metal ............. 76.5 m^2
Thin welded wire ............ 40 m^2
Chicken wire (roof) ......... 30 m^2
Reinforcing steel bars .... 85 to 108
Concrete:
floor ............................... 2.6 m^3
roof and wall ................. 5.35 m^3
Other materials:
Tie wire ......................... 2 - 3 rolls
Water seal (inside):
Cement product ........... 70 - 100 kg
glue .............................. 12 - 16 l
Hog rings ...................... 3 - 5 kg
Hinge and Latch
Color pigments, extra cement water seal product, and glue (if the outside is to be colored).