content stringlengths 86 994k | meta stringlengths 288 619 |
|---|---|
Kids Math Books
Double Division with 3 Digit Divisors 612416/983
Method of Vietnamese Diagram: Double the Divisor 3 Times
Method of International Diagram: Double the Divisor 3 Times Method of Vietnamese Diagram: Double the Divisor 5 Times
It's Easy!
Step 1 - Double, double, double.
Step 2 - Subtract off multiples.
Step 3 - Add up your answer."
More books about long division for kids
Mathimagination Series: Book A, beginning multiplication and division; Book B, operations with whole numbers; Book C, number Decimals and Percentages With Pre- And Post-Tests: Place Value,
theory, sets and number bases; Book D, fractions; Book E, decimals and percent Addition, Subtraction, Multiplication, Division
Read more
9th Vietnam Mathematical Olympiad 1970 Problems
A1. ABC is a triangle. Show that sin A/2 sin B/2 sin C/2 < 1/4.
A2. Find all positive integers which divide 1890·1930·1970 and are not divisible by 45.
A3. The function f(x, y) is defined for all real numbers x, y. It satisfies f(x,0) = ax (where a is a non-zero constant) and if (c, d) and (h, k) are distinct points such that f(c, d) = f(h, k), then
f(x, y) is constant on the line through (c, d) and (h, k). Show that for any real b, the set of points such that f(x, y) = b is a straight line and that all such lines are parallel. Show that f(x, y)
= ax + by, for some constant b.
B1. AB and CD are perpendicular diameters of a circle. L is the tangent to the circle at A. M is a variable point on the minor arc AC. The ray BM, DM meet the line L at P and Q respectively. Show
that AP·AQ = AB·PQ. Show how to construct the point M which gives BQ parallel to DP. If the lines OP and BQ meet at N find the locus of N. The lines BP and BQ meet the tangent at D at P' and Q'
respectively. Find the relation between P' and Q'. The lines DP and DQ meet the line BC at P" and Q" respectively. Find the relation between P" and Q".
B2. A plane p passes through a vertex of a cube so that the three edges at the vertex make equal angles with p. Find the cosine of this angle. Find the positions of the feet of the perpendiculars
from the vertices of the cube onto p. There are 28 lines through two vertices of the cube and 20 planes through three vertices of the cube. Find some relationship between these lines and planes and
the plane p.
Put x = A/2, y = B/2. We have sin C/2 = sin(90^o-x-y) = cos(x+y). So we need to show that sin x sin y cos(x+y) < 1/4, or (cos(x-y) - cos(x+y) )cos(x+y) < 1/2, or 2 cos(x-y) cos(x+y) < 1 + 2 cos^2
(x+y). But 2 cos(x-y) cos(x+y) ≤ cos^2(x+y) + cos^2(x-y) ≤ 1 + cos^2(x+y) < 1 + 2 cos^2(x+y) (since 0 < x,y < 90^o.
A2. Answer: k·2^a7^b193^c197^d, where k = 1, 3, 3^2, 3^3, 5, 3·5, a = 0, 1, 2, or 3, b = 0 or 1, c = 0 or 1, d = 0 or 1 (192 solutions in all)
1890 = 2·3^35·7, 1930 = 2·5·193, 1970 = 2·5·197 (and 193 and 197 are prime). So 1890·1930·1970 = 2^33^35^37·193·197.
Source: http://www.kidsmathbooks.com
8th Vietnam Mathematical Olympiad 1969 Problems
1. A graph G has n + k points. A is a subset of n points and B is the subset of the other k points. Each point of A is joined to at least k - m points of B where nm < k. Show that there is a point in
B which is joined every point in A.
2. Find all real x such that 0 < x < π and 8/(3 sin x - sin 3x) + 3 sin^2x ≤ 5.
If a = xy / (x + y) and b = yz / (y + z) and c = zx / (z + x).
Know that a, b and c are not equal to zero, find the value of x in terms of a, b and c.
Fun math books for kids (read aloud to ages 9 and up)
Adding Fractions with Different Denominators
How to Add Fractions with different denominators:
• Find the Least Common Denominator (LCD) of the fractions
• Rename the fractions to have the LCD
• Add the numerators of the fractions
• Simplify the Fraction
Adding Fractions with Unlike Denominators
Adding Fractions with Different Denominators (No LCD)
4th Vietnam Mathematical Olympiad 1965 Problems
1. At time t = 0, a lion L is standing at point O and a horse H is at point A running with speed v perpendicular to OA. The speed and direction of the horse does not change. The lion's strategy is to
run with constant speed u at an angle 0 < φ < π/2 to the line LH. What is the condition on u and v for this strategy to result in the lion catching the horse? If the lion does not catch the horse,
how close does he get? What is the choice of φ required to minimise this distance?
2. AB and CD are two fixed parallel chords of the circle S. M is a variable point on the circle. Q is the intersection of the lines MD and AB. X is the circumcenter of the triangle MCQ. Find the
locus of X. What happens to X as M tends to (1) D, (2) C? Find a point E outside the plane of S such that the circumcenter of the tetrahedron MCQE has the same locus as X.
3. m an n are fixed positive integers and k is a fixed positive real. Show that the minimum value of x[1]^m + x[2]^m + x[3]^m + ... + x[n]^m for real x[i] satisfying x[1] + x[2] + ... + x[n] = k
occurs at x[1] = x[2] = ... = x[n].
Source: Nguyễn Thị Lan Phương, http://www.kidsmathbooks.com
3rd Vietnam Mathematical Olympiad 1964 Problems
1. Find cos x + cos(x + 2π/3) + cos(x + 4π/3) and sin x + sin(x + 2π/3) + sin(x + 4π/3).
2. Draw the graph of the functions y = | x^2 - 1 | and y = x + | x^2 - 1 |. Find the number of roots of the equation x + | x^2 - 1 | = k, where k is a real constant.
3. Let O be a point not in the plane p and A a point in p. For each line in p through A, let H be the foot of the perpendicular from O to the line. Find the locus of H.
4. Define the sequence of positive integers f[n] by f[0] = 1, f[1] = 1, f[n+2] = f[n+1] + f[n]. Show that f[n] = (a^n+1 - b^n+1)/√5, where a, b are real numbers such that a + b = 1, ab = -1 and a >
1. Using cos(A+B) = cos A cos B - sin A sin B, we have cos(x + 2π/3) = -(1/2) cos x + (√3)/2 sin x, cos(x + 4π/3) = -(1/2) cos x - (√3)/2 sin x. Hence cos x + cos(x + 2π/3) + cos(x + 4π/3) = 0.
Similarly, sin(x + 2π/3) = -1/2 sin x + (√3)/2 cos x, sin(x + 4π/3) = -1/2 sin x - (√3)/2 cos x, so sin x + sin(x + 2π/3) + sin(x + 4π/3) = 0.
2. Answer
0 for k < -1
1 for k = -1
2 for -1 < k < 1
3 for k = 1
4 for 1 < k < 5/4
3 for k = 5/4
2 for k > 5/4
It is clear from the graph that there are no roots for k < -1, and one root for k = -1 (namely x = -1). Then for k > -1 there are two roots except for a small interval [1, 1+h]. At k = 1, there are 3
roots (x = -2, 0, 1). The upper bound is at the local maximum between 0 and 1. For such x, y = x + 1 - x2 = 5/4 - (x - 1/2)2, so the local maximum is at 5/4. Thus there are 3 roots at k = 5/4 and 4
roots for k ∈ (1, 5/4).
3. Answer: circle diameter AB, where OB is the normal to p
Let B be the foot of the perpendicular from O to p. We claim that the locus is the circle diameter AB. Any line in p through A meets this circle at one other point K (except for the tangent to the
circle at A, but in that case A is obviously the foot of the perpendicular from O to the line). Now BK is perpendicular to AK, so OK is also perpendicular to AK, and hence K must be the foot of the
perpendicular from O to the line.
4. Put a = (1+√5)/2, b = (1-√5)/2. Then a, b are the roots of x^2 - x - 1 = 0 and satisfy a + b = 1, ab = -1. We show by induction that f[n] = (a^n+1 - b^n+1)/√5. We have f[0] = (a-b)/√5 = 1, f[1] =
(a^2-b^2)/√5 = (a+1 - b-1)/√5 = 1, so the result is true for n = 0, 1. Finally, suppose f[n] = (a^n+1 - b^n+1)/√5 and f[n+1] = (a^n+2 - b^n+2)/√5. Then f[n+2] = f[n+1] + f[n] = (1/√5)(a^n+1(a+1) - b^
n+1(b+1) ) = (a^n+1a^2 - b^n+1b^2)/√5, so the result is true for n+1.
Source: http://www.kidsmathbooks.com
2nd Vietnam Mathematical Olympiad 1963 Problems
1. A conference has 47 people attending. One woman knows 16 of the men who are attending, another knows 17, and so on up to the last woman who knows all the men who are attending. Find the number of
men and women attending the conference.
2. For what values of m does the equation x^2 + (2m + 6)x + 4m + 12 = 0 has two real roots, both of them greater than -2.
3. Solve the equation sin^3x cos 3x + cos^3x sin 3x = 3/8.
4. The tetrahedron SABC has the faces SBC and ABC perpendicular. The three angles at S are all 60^o and SB = SC = 1. Find its volume.
5. The triangle ABC has perimeter p. Find the side length AB and the area S in terms of ∠A, ∠B and p. In particular, find S if p = 23.6, A = 52.7 deg, B = 46 4/15 deg.
1. Suppose there are m women. Then the last woman knows 15+m men, so 15+2m = 47, so m = 16. Hence there are 31 men and 16 women.
2. Answer: m ≤ -3
For real roots we must have (m+3)^2 ≥ 4m+12 or (m-1)(m+3) ≥ 0, so m ≥ 1 or m ≤ -3. If m ≥ 1, then -(2m+6) ≤ -8, so at least one of the roots is < -2. So we must have m ≤ -3.
The roots are -(m+3) ±√(m^2+2m-3). Now -(m+3) ≥ 0, so -(m+3) + √(m^2+2m-3) ≥ 0 > -2. So we need -(m+3) - √(m^2+2m-3) > -2, or √(m^2+2m-3) < -m-1 = √(m^2+2m+1), which is always true.
3. Answer: 7½^o + k90^o or 37½^o + k90^o
We have sin 3x = 3 sin x - 4 sin^3x, cos 3x = 4 cos^3x - 3 cos x. So we need 4 sin^3x cos^3x - 3 sin^3x cos x + 3 sin x cos^3x - 4 sin^3x cos^3x = 3/8 or 8 sin x cos x(cos^2x - sin^2x) = 1, or 4 sin
2x cos 2x = 1 or sin 4x = 1/2. Hence 4x = 30^o + k360^o or 150^o + k360^o. So x = 7½^o + k90^o or 37½^o + k90^o.
Source: Nguyễn Thị Lan Phương, http://www.kidsmathbooks.com
1st Vietnam Mathematical Olympiad 1962 Problems
1. Prove that 1/(1/a + 1/b) + 1/(1/c + 1/d) ≤ 1/(1/(a+c) + 1/(b+d) ) for positive reals a, b, c, d.
2. f(x) = (1 + x)(2 + x^2)^1/2(3 + x^3)^1/3. Find f '(-1).
3. ABCD is a tetrahedron. A' is the foot of the perpendicular from A to the opposite face, and B' is the foot of the perpendicular from B to the opposite face. Show that AA' and BB' intersect iff AB
is perpendicular to CD. Do they intersect if AC = AD = BC = BD?
4. The tetrahedron ABCD has BCD equilateral and AB = AC = AD. The height is h and the angle between ABC and BCD is α. The point X is taken on AB such that the plane XCD is perpendicular to AB. Find
the volume of the tetrahedron XBCD.
5. Solve the equation sin^6x + cos^6x = 1/4.
1. A straightforward, if inelegant, approach is to multiply out and expand everything. All terms cancel except four and we are left with 2abcd ≤ a^2d^2 + b^2c^2, which is obviously true since (ad -
bc)^2 ≥ 0.
2. Differentiating gives f '(x) = (2 + x^2)^1/2(3 + x^3)^1/3 + terms with factor (1 + x). Hence f '(-1) =3^1/22^1/3.
3. Let the ray AB' meet CD at X and the ray BA' meet CD at Y. If AB' and A'B intersect, then X = Y. Let L be the line through A' parallel to CD. Then L is perpendicular to AA'. Hence CD is
perpendicular to AA'. Similarly, let L' be the line through B' parallel to CD. Then L' is perpendicular to BB', and hence CD is perpendicular to BB'. So CD is perpendicular to two non-parallel lines
in the plane ABX. Hence it is perpendicular to all lines in the plane ABX and, in particular, to AB.
Suppose conversely that AB is perpendicular to CD. Consider the plane ABY. CD is perpendicular to AB and to AA', so CD is perpendicular to the plane. Similarly CD is perpendicular to the plane ABX.
But it can only be perpendicular to a single plane through AB. Hence X = Y and so AA' and BB' belong to the same plane and therefore meet.
4. Put a = sin^2x, b = cos^2x. Then a and b are non-negative with sum 1, so we may put a = 1/2 + h, b = 1/2 - h. Then a^3 + b^3 = 1/4 + 3h^2 ≥ 1/4 with equality iff h = 0. Hence x is a solution of
the equation given iff sin^2x = cos^2x = 1/2 or x is an odd multiple of π/4.
Source: Nguyễn Thị Lan Phương, http://321math.blogspot.com
What's correlation between the structure of the mathematical theory and object structure?
My ideas about mathematics:
• Sign language of mathematically is a system non-contradiction.
• Mathematical language is a language system form of symbolic.
• Mathematics is a scientific inference, the type of theoretical knowledge.
• System of mathematical objects are determined a priori object class but applied mathematics achievement test pilot first.
Dictionary of Classical and Theoretical Mathematics
Title: Dictionary of Classical and Theoretical Mathematics
Author: Catherine Cavagnaro, William T. Haight, II
Publisher: © 2001 by CRC Press LLC
Price: $58.95
Product Description: http://www.amazon.com/Dictionary-Classical-Theoretical-Mathematics-Comprehensive/dp/1584880503
Title: English Vietnamese Mathematics Dictionary
Authors: Chính Đức Phan, Khanh Minh Lê, Lập Tấn Nguyễn, Thịnh Đình Lê, Thúy Công Nguyễn, Văn Bác Nguyễn.
Url: http://www.violympic.org/english-vietnamese-mathematics-dictionary.pdf
Title: Report on the Fundamental Lemma
Author: Châu Bảo Ngô, School of mathematics, Institute for Advanced Study, Princeton, NJ 08540 USA.
Download: http://www.kidsmathlearning.com/Report-on-the-Fundamental-Lemma-by-Ngo-Bao-Chau.pdf
• A Statement of the Fundamental Lemma, Thomas C. Hales
Amazon Books
On Central Critical Values of the Degree Four L-Functions for GSp(4): The Fundamental Lemma (Memoirs of the American Representations of Fundamental Groups of Algebraic Varieties (Lecture
Mathematical Society) - Mass Market Paperback (July 1, 2003) by Masaaki Furusawa and Joseph A. Shalika. Notes in Mathematics) - Paperback (Feb. 3, 2000) by Kang Zuo.
In what ways do mathematics affect our lives? (Consider things like human-based computing, statistics and probability, applied mathematics). When can information be considered property that can be
protected by formal laws?
What do visual elements (pattern, shape, size, proportion, texture, color, and proximity of buildings) contribute to the identity of a city?
• http://en.wikipedia.org/wiki/Applied_mathematics
• http://www.ams.org/notices/200111/rev-blank.pdf
Counting to ten with numbers:
1, 2, 3, 4, 5, 6, 7, 8, 9, 10
Counting to ten with words:
one, two, three, four, five, six, seven, eight, nine, ten
Counting to ten with objects:
first ten (10) numbers and the quantities they represent
Counting allows us to know the total number of things or objedts in a group. In order to do so, we separate the items from the group one by one, and we the next larger number to each item removed,
till none is left and the total number is discovered.
In other words, counting is answering: how many? Beforehand, in order to count, we must know the unique numbers which identify each quantity of things.
Source: http://321math.blogspot.com
English - Vietnamese Math Glossary: A
│about │khoảng chừng │
│above │ở trên │
│asolute value │giá trị tuyệt đối, trị số tuyệt đối │
│accurate │chính xác │
│accurately label work│công việc có nhãn hiệu chính xác │
│act it out │làm │
│acute angle │góc nhọn │
│acute triangle │tam giác nhọn │
│add │cộng, tính cộng │
│addend │phần hay số được cộng thêm vào │
│addition │phép cộng, tính cộng, cộng, toán cộng │
│addition fact │cơ sở lập luận của phép cộng │
│addition sentence │mệnh đề phép cộng │
│addition sign │dấu cộng │
│additive inverses │phần nghịch đảo tính cộng │
│after │sau, sau khi │
│afternoon │buổi trưa │
│algebra │đại số │
│algebraic expression │biểu thức đại số │
│algebraic patterns │khuôn thức đại số │
│algebraic relationship │mối liên hệ đại số │
│algebraic relationships │các sự liên hệ đại số │
│algebraically │có tính chất đại số │
│alike │giống nhau │
│all │tất cả │
│all together │chung tất cả │
│almost │gần, hầu như │
│amount │số lượng │
│analog clock │đồng hồ có kim chỉ giờ và phút │
│analyze │phân tích │
│angle (∠) │góc │
│angle adjacent │góc kề │
│answer │đáp số, kết quả, trả lời │
│ante meridian (a.m.) │trước giờ ngọ (trước 12 giờ trưa) │
│application │sự áp dụng │
│apply │áp dụng │
│approach │giải (bài toán), đạt tới (kết quả) │
│appropriate mathematical language │từ toán học thích hợp │
│organize work │xếp đặt bài toán, công việc │
│arc │cung │
│area │diện tích │
│argument │lập luận, bàn luận │
│argument conjecture counterexample│dẫn chứng dựa trên lập luận phỏng đoán │
│arithmetic (numeric) expression │biểu thức số học │
│arithmetic expression │biểu thức toán học │
│arrange │xếp đặt, sắp xếp │
│array │mảng, chuỗi số sắp theo thứ tự │
│as long as │miễn là, với điều kiện là │
│associative property │đặc tính liên kết │
│attribute │liên hệ, trực thuộc │
│autumn (fall) │mùa thu │
│average │trung bình │
│axis (axes) │trục │
Game: Addition War
Goal: 1 – The learner will read write, model, and compute with rational numbers
Materials: directions; a deck of cards with 4 each of the numbers 1 through 10 (available on-site); paper (preferably graph paper); pencil
Procedure: In problem solving, it might be helpful to think about solutions and answers as two different things. An answer is the final result to a problem, while a solution presents both the answer
and the strategy by which it was found.
- Follow the directions on the worksheet, playing several rounds with the student.
- This activity is excellent for basic practice until student commits the basic facts to memory.
Materials 1 deck of cards with 4 each of the numbers l through 10
Number of players 2-4
Object of the game To collect the most cards.
Directions Shuffle the cards and place the deck number-side down on the
playing surface.
Each player turns over 2 cards and calls Out the sum of the 2 numbers. The player with the largest sum wins the round and takes all the cards. In case of a tie for the largest sum. each tied player
turns over 2 more cards and calls ouT the sum. The player with the highest sum wins the round and takes all the cards from both plays.
Answers can be checked with an Addition Table or with a calculator.
Play continues until there are too few cards left for each player to have another turn. The player who took the most cards wins. Or, players may toss a penny to determine whether the player with the
most or the fewest cards wins.
Variation Each player turns over 3 cards and finds the sum.
Advanced version Players turn over 4 cards, form two 2-digit numbers, and find the sum. Players should consider how they form their numbers since different arrangements have different sums. For
example, a player turns over 2, 5, 7, and 4. 74 + 52 has a greater sum than 25 + 47.
Game: Subtraction War
Goal: 1 – The learner will read write, model, and compute with rational numbers
Materials: directions; a deck of cards with 4 each of the numbers 1 through 10 (available on-site); paper (preferably graph paper); pencil
Objective(s): The learner will compute with rational numbers (Goal 7).
Materials: directions; a deck of cards with 4 each of the numbers 1 through 10 (available on-site); paper (preferably graph paper); pencil
Procedure: In problem solving, it might be helpful to think about solutions and answers as two different things. An answer is the final result to a problem, while a solution presents both the answer
and the strategy by which it was found.
- Follow the directions on the worksheet, playing several rounds with the student.
- This activity is excellent for basic practice until student commits the basic facts to memory.
Materials 1 deck of cards with 4 each of the numbers l through 10
Number of players 2-4
Object of the game To collect the most cards.
Directions Shuffle the cards and place the deck number-side down on the playing surface.
Each player turns over 3 cards, finds the sum of any two of the numbers, then finds the difference between the sum and the third number. The player with the largest difference takes the cards.
A 4, 8, and 3 are turned over. There are 3 combinations that will result in a positive number.
4 + 8 = 12: 12 - 3 = 9
3 + 8 = 11: 11 - 4 = 7
3 + 4 = 7 : 8 - 7 = 1
Advanced version Players turn over cards, form two 2-digit numbers, and find the difference. Players should consider now they form their numbers. 75 - 24 has a greater difference than 57 - 42.
Game: Multiplication War
Goal: 1 – The learner will read write, model, and compute with rational numbers
Materials: directions; a deck of cards with 4 each of the numbers 1 through 10 (available on-site); paper (preferably graph paper); pencil
Procedure: In problem solving, it might be helpful to think about solutions and answers as two different things. An answer is the final result to a problem, while a solution presents both the answer
and the strategy by which it was found.
- Follow the directions on the worksheet, playing several rounds with the student.
- This activity is excellent for basic practice until student commits the basic facts to memory.
Materials 1 deck of cards with 4 each of the numbers l through 10
Number of players 2-4
Object of the game To collect the most cards.
Directions Shuffle the cards and place the deck number-side down on the playing surface.
The game is played the same way as Addition Top-it, except that players find the product of the numbers instead of the sum. The player with the largest product wins the round and takes all the cards.
Answers can be checked with a Multiplication Table or with a calculator.
Variation: Players turn over 3 cards. form a 2-digit number and multiply by the remaining number.
Game: Division War
Goal: 1 – The learner will read write, model, and compute with rational numbers
Materials: directions; a deck of cards with 4 each of the numbers 1 through 10 (available on-site); paper (preferably graph paper); pencil
Procedure: In problem solving, it might be helpful to think about solutions and answers as two different things. An answer is the final result to a problem, while a solution presents both the answer
and the strategy by which it was found.
- Follow the directions on the worksheet, playing several rounds with the student.
- This activity is excellent for basic practice until student commits the basic facts to memory.
Materials 1 deck of cards with 4 each of the numbers l through 10
Number of players 2-4
Object of the game To collect the most cards.
Directions Shuffle the cards and place the deck number-side down on the playing surface.
Each player turns over 3 cards and uses them to generate division problems as follows.
Choose 2 cards to form The dividend. Use the remaining card as the divisor.
Divide and drop the remainder. The player with the largest quotient wins the round and takes all the cards.
Advanced version Turn over 4 cards and choose three of them to form a 3-digit number. Divide it by the remaining number. The arrangement of the numbers may result in a greater quotient. For example:
462/5 is greater than 256/4, but 654/2 is even greater.
Skills: Time, Patterns, Multiplication (basic facts)
1. David James and Pitter Pan began watching a movie at 3:30. The movie ended at 4:45. How long was the movie? Look at a clock with hands if you need help figuring this out.
______ hours and ______ minutes long
2. Leah’s dog, Belle, buried 5 bones in the backyard on Monday. On Tuesday, she buried 7 bones in the backyard. On Wednesday, she buried 9 bones. If the pattern continues, how many bones will Belle
bury on Saturday? Make a table to help you find your answer.
Look at a clock with hands if you need help figuring this out.
3. John had two dozen fishing lures. His father had four dozen lures. How many lures did they have in all? Show your work and label your answer.
Make a table to help you find your answer.
4. There were 8 cars in the parking lot. Each car had 4 tires. How many tires were in the parking lot? Show your work and label your answer.
Show your work and label your answer.
5. There were six ice skaters on the ice rink. Each skater had two skates on. How many skates were there in all?
Show your work and label your answer.
Sourse: http://www.kidsmathbooks.com
When we are given a long division to do it will not always work out to a whole number. Sometimes there will be numbers left over. These are known as remainders.
e.g: 435 ÷ 25
When we are given a long division to do it will not always work out to a whole number.
Sometimes there will be numbers left over. These are known as remainders.
How to do Long Division with Remainders?
When we are given a long division to do it will not always work out to a whole number. Sometimes there will be numbers left over. These are known as remainders. Taking an example similar to that on
the Long Division page it becomes more clear: 435 ÷ 25. If you feel happy with the process on the Long Division
page you can skip the first bit.
│ │ 4 ÷ 25 = 0 remainder 4 │ The first number of the dividend is divided by the divisor. │
│ │ │ The whole number result is placed at the top. Any remainders are ignored at this point. │
│ │ 25 × 0 = 0 │ The answer from the first operation is multiplied by the divisor. The result is placed under the number divided into. │
│ │ 4 – 0 = 4 │ Now we take away the bottom number from the top number. │
│ │ │ Bring down the next number of the dividend. │
│ │ 43 ÷ 25 = 1 remainder 18 │ Divide this number by the divisor. │
│ │ │ The whole number result is placed at the top. Any remainders are ignored at this point. │
│ │ 25 × 1 = 25 │ The answer from the above operation is multiplied by the divisor. The result is placed under the last number divided into. │
│ │ 43 – 25 = 18 │ Now we take away the bottom number from the top number. │
│ │ │ Bring down the next number of the dividend. │
│ │ 185 ÷ 25 = 7 remainder 10 │ Divide this number by the divisor. │
│ │ │ The whole number result is placed at the top. Any remainders are ignored at this point. │
│ │ 25 × 7 = 175 │ The answer from the above operation is multiplied by the divisor. The result is placed under the number divided into. │
│ │ 185 – 175 = 10 │ Now we take away the bottom number from the top number. │
│ │ │ There is still 10 left over but no more numbers to bring down. │
│ │ │ With a long division with remainders the answer is expressed as 17 remainder 10 as shown in the diagram │
"Teaching Double Division can help in teaching long division by reinforcing the principles of division and giving students success with a less frustrating alternative.
Double Division does not depend on memorizing the multiplication facts or estimating how many times one number goes into another. It may take 50% longer, but it is far less frustrating and probably
easier to understand than Long Division.
It's Easy!
Step 1 - Double, double, double.
Step 2 - Subtract off multiples.
Step 3 - Add up your answer."
Adapted from http://www.doubledivision.org
• http://www.doubledivision.org
• http://en.wikipedia.org/wiki/Long_division
More books about long division for kids
Mathimagination Series: Book A, beginning multiplication and division; Book B, operations with whole numbers; Book C, number Decimals and Percentages With Pre- And Post-Tests: Place Value,
theory, sets and number bases; Book D, fractions; Book E, decimals and percent Addition, Subtraction, Multiplication, Division
1. K-6 Math Practice - IXL: www.ixl.com/math
2. Cool Math: www.coolmath4kids.com
3. Education resource for K-8 Kids and Teachers: www.funbrain.com
4. Dr Mike's Math Games for Kids: www.dr-mikes-math-games-for-kids.com
Beginning Math Activities
FunBrain Activities
1. Number Recognition/Sequencing
1. Bunny Count ( count and match numbers and characters)
2. One False Move (sequence numbers from lowest to highest)
3. Guess the Number (guess number with high low clues)
2. Math Brain Activities (25 Board Games to teach skills)
Funschool: Preschool
1. Fishin' Mission (count the number and fish and put them in the net)
2. Stacker (stack the blocks in sequence order)
3. Connect the Numbers (from smallest to largest order)
Learning Planet Activities
1. Number Recognition/Counting/Sequencing
1. Count Your Chickens ( counting activity)
2. 1 2 3 Order (what comes next... up to 10)
3. Number Train ( make a train and count the cars)
IXL Math Practice Activities (click on grade school level)
Base 10 Count (group ones into ten... see what number is made)
KidPort Math Activities by Grade:
1. Addition and Subtraction Activities
2. Addition or Subtraction or Multiplication or Division
3. Funbrain Activities with + - * / and whole numbers
4. Watch Addem (Number Movies)
3. Factors/ Multiples/ Primes/ Powers/ Triangular Numbers
4. Fractions
1. What is a Fraction?
2. Equivalent Fractions
3. Fraction/Decimal Relationships
4. All About Fractions by AAA Math (Explanation, interactive practice and challenge games about fractions)
5. I Know That: Fishy Fractions
5. Geometry/ Area / Perimeter
6. Graphing/ Data Collections/ Coordinate Points
1. Line Jumper Activity by Funbrain (solve + and - INTEGER problems using the NUMBER LINE)
2. Builder Ted Activity by BBC Maths File (activity that paces integers in numerical order)
3. Mystery Picture with Integers (by Dositey)
8. Online Logic Puzzles/Games
9. Mean, Mode, Median
10. Measurement/ Weight
1. Measure It Activity by Funbrain ( read a ruler in centimeters or inches)
2. Animal Weigh In Activity by BBC Maths File ( activity that balances weights on a scale)
11. Missing number in the series / Patterns
12. PreAlgebra/ Algebra Activities
13. Place Value
14. Probability
1. Fish Tank Activity by BBC Maths File ( guess probability of catching fish in the tank)
2. Mrs Glosser's lesson on Probability (online lessons and activity)
3. What Are Your Chances? (National Center for Education Statistics)
15. Rounding
17. Metric
19. Math in Daily Life
20. Math Problems of the Week/ Word Problems
21. Math questions and answers
22. Famous Mathematicians and History of Mathematics ( over 1000 biographies)
23. Vocabulary
24. Everyday Math Resources | {"url":"http://www.kidsmathbooks.com/2010_05_01_archive.html","timestamp":"2014-04-18T13:06:37Z","content_type":null,"content_length":"386046","record_id":"<urn:uuid:2b0fa1f3-d815-42b1-9d0f-2cad7ef906e5>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00078-ip-10-147-4-33.ec2.internal.warc.gz"} |
Linear independence and span
March 30th 2013, 06:09 AM #1
Junior Member
Sep 2012
Q: If S is a linearly independent subset of a vector space V, then is it true in general that for no proper subset T of S, span(T) = span(S) ?
The result should be true if S is a basis. But in the above question one can't make out whether or not S is a basis, right ? So what should be the answer ?
Please help
Re: Linear independence and span
Q: If S is a linearly independent subset of a vector space V, then is it true in general that for no proper subset T of S, span(T) = span(S) ?
The result should be true if S is a basis. But in the above question one can't make out whether or not S is a basis, right ? So what should be the answer ?
Please help
Well, you can think of $S$ as a basis for the subspace of $V$ generated by $S$ itself, so...
Re: Linear independence and span
Let S be the set of vectors $\{v_1, v_2, ..., v_n\}$ and T the subset $\{v_1, v_2, ..., v_m\}$ with m< n, of course. Obviously, $v_n$ is in Span(S) so if Span(S)= Span(T) then we must have $v_n=
a_1v_1+ a_2v_2+ \cdot\cdot\cdot+ a_mv_m$. From that it follows that $a_1v_1+ a_2v_2+ \cdot\cdot\cdot+ a_mv_m- v_m= 0$ contradicting the fact that the vectors in S were independent.
March 30th 2013, 06:47 AM #2
Super Member
Jan 2008
March 30th 2013, 07:02 AM #3
MHF Contributor
Apr 2005 | {"url":"http://mathhelpforum.com/advanced-algebra/216014-linear-independence-span.html","timestamp":"2014-04-20T02:22:41Z","content_type":null,"content_length":"38073","record_id":"<urn:uuid:06110946-2a9e-4d63-9564-2f5956b879b2>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00570-ip-10-147-4-33.ec2.internal.warc.gz"} |
FOM: set/cat "foundations"
Vaughan Pratt pratt at cs.Stanford.EDU
Fri Feb 27 01:53:00 EST 1998
From: Till Mossakowski 1:53PM 2/26/98.
>Perhaps a problem of categorical foundations is that the meta-theory
>of first-order logic is naive set theory. For categorical foundations,
>one might need a more category-based formalism, like sketches
>(but sketches themselves are definitely too weak). Hopefully, such a
>formalism would make Lawvere's axioms more concise.
From: Harvey Friedman Thu, 26 Feb 1998 16:09:44 +0100
>You may well be putting your finger on an important aspect of what is
>missing, and what needs to be done to make categorical foundations a
>reality. I should add that I don't fully understand exactly what you are
>saying here. What is a sketch?
A sketch is the categorical counterpart of a first-order theory.
It specifies the language of the theory in terms of limits and colimits
of diagrams. The language of (finitary) quantifier-free logic is
representable entirely with finite product (FP) sketches, i.e. no colimits
and only discrete limits. FL sketches allow all limits, e.g. pullbacks
which come in handy if you want to axiomatize composition of morphisms
as a total operation (not possible with ordinary first order logic or
FP sketches).
Colimits extend the expressive power of sketches in much the same way
that least-fixpoint operators extend the expressive power of first order
logic (made precise by a very nice theorem of Adamek and Rosicky), but
completely dually to limits. (Fixpoint operators are not obviously dual
to anything in first order logic.)
The machinery of sketches is either appealingly economical and elegant
or repulsively complex and daunting depending on whether you look at it
from the perspective of category theory or set theory.
As a formalism for categorical foundations sketches have the same
weakness as Colin's axiomatization of categories: they are based
on ordinary categories, with no 2-cells. (Again let me stress the
importance of 2-categories, i.e. not just line segments but surface
patches, for foundations.) On the one hand I'm sure this is not an
intrinsic limitation of sketches, on the other I don't know what's been
done along those lines to date. Higher-dimensional sketches are surely
well worth pursuing.
Vaughan Pratt
More information about the FOM mailing list | {"url":"http://www.cs.nyu.edu/pipermail/fom/1998-February/001249.html","timestamp":"2014-04-20T01:14:14Z","content_type":null,"content_length":"4731","record_id":"<urn:uuid:638edec0-550f-48ef-a15f-4515d71a7324>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00450-ip-10-147-4-33.ec2.internal.warc.gz"} |
[Numpy-discussion] Re: sqrt and divide
Robert Kern robert.kern at gmail.com
Tue Apr 11 23:16:03 CDT 2006
Stefan van der Walt wrote:
> Hi all
> Two quick questions regarding unintuitive numpy behaviour:
> Why is the square root of -1 not equal to the square root of -1+0j?
> In [5]: N.sqrt(-1.)
> Out[5]: nan
> In [6]: N.sqrt(-1.+0j)
> Out[6]: 1j
It is frequently the case that the argument being passed to sqrt() is expected
to be non-negative and all of their code strictly deals with numbers in the real
domain. If the argument happens to be negative, then it is a sign of a bug
earlier in the code or a floating point instability. Returning nan gives the
programmer the opportunity for sqrt() to complain loudly and expose bugs instead
of silently upcasting to a complex type. Programmers who *do* want to work in
the complex domain can easily perform the cast explicitly.
> Is there an easier way of dividing two scalars than using divide?
> In [9]: N.divide(1.,0)
> Out[9]: inf
x/y ?
> (also
> In [8]: N.divide(1,0)
> Out[8]: 0
> should probably ruturn inf / nan?)
inf and nan are floating point values. The definition of int division used when
both arguments to divide() are ints also yields ints, not floats.
Robert Kern
robert.kern at gmail.com
"I have come to believe that the whole world is an enigma, a harmless enigma
that is made terrible by our own mad attempt to interpret it as though it had
an underlying truth."
-- Umberto Eco
More information about the Numpy-discussion mailing list | {"url":"http://mail.scipy.org/pipermail/numpy-discussion/2006-April/007508.html","timestamp":"2014-04-17T06:52:27Z","content_type":null,"content_length":"4124","record_id":"<urn:uuid:bcddb5bf-970f-4d40-8a4b-38516624b064>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00633-ip-10-147-4-33.ec2.internal.warc.gz"} |
Impact location, using four sound sensors. - Arduino Forum
Another line of thought.
When the soundwaves arrive the soundwave has a definite form which will be slightly different at the three points ABC. The essence is that the peakvolume of the soundwave is proportional (linear,
quadratic or otherwise) with the distance. The farther away the weaker the sound. As one could calibrate the micro's with a signalstrength/distance table the math would become simpler again.
Don't know if the differences are within the noise-level of the signals/ micro's/arduino ADC but a simple test could reveal this. Furthermore soundwaves can be deformed by obstacles in the open field
etc. Still it has some potential worth investigating.
Question: How big is ABCD in square meters? smaller/bigger?
Some additional hyperbola math: - | {"url":"http://forum.arduino.cc/index.php/topic,52583.msg379029.html","timestamp":"2014-04-19T01:57:22Z","content_type":null,"content_length":"117721","record_id":"<urn:uuid:081d4aa8-07ad-4cbd-9b1e-3e74312cfcac>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00252-ip-10-147-4-33.ec2.internal.warc.gz"} |
Gérard Ben Arous
Publications (57)70.35 Total impact
[show abstract] [hide abstract]
ABSTRACT: The speed v(β) of a β-biased random walk on a Galton-Watson tree without leaves is increasing for β ≥ 1160. © 2013 Wiley Periodicals, Inc.
Communications on Pure and Applied Mathematics 01/2014; · 3.34 Impact Factor
[show abstract] [hide abstract]
ABSTRACT: We introduce a general model of trapping for random walks on graphs. We give the possible scaling limits of these "Randomly Trapped Random Walks" on Z. These scaling limits include the
well known Fractional Kinetics process, the Fontes-Isopi-Newman singular diffusion as well as a new broad class we call Spatially Subordinated Brownian Motions. We give sufficient conditions for
convergence and illustrate these on two important examples.
[show abstract] [hide abstract]
ABSTRACT: We consider a biased random walk Xn on a Galton–Watson tree with leaves in the sub-ballistic regime. We prove that there exists an explicit constant γ = γ(β) ∈ (0, 1), depending on the
bias β, such that |Xn| is of order nγ. Denoting Δn the hitting time of level n, we prove that Δn/n1/γ is tight. Moreover, we show that Δn/n1/γ does not converge in law (at least for large values
of β). We prove that along the sequences nλ(k) = ⌊λβγk⌋, Δn/n1/γ converges to certain infinitely divisible laws. Key tools for the proof are the classical Harris decomposition for Galton–Watson
trees, a new variant of regeneration times and the careful analysis of triangular arrays of i.i.d. heavy-tailed random variables.
The Annals of Probability 01/2012; 40(1). · 1.38 Impact Factor
[show abstract] [hide abstract]
ABSTRACT: The speed $v(\beta)$ of a $\beta$-biased random walk on a Galton-Watson tree without leaves is increasing for $\beta \geq 717$.
[show abstract] [hide abstract]
ABSTRACT: We analyze the landscape of general smooth Gaussian functions on the sphere in dimension N, when N is large. We give an explicit formula for the asymptotic complexity of the mean number
of critical points of finite and diverging index at any level of energy and for the mean Euler characteristic of level sets. We then find two possible scenarios for the bottom energy landscape,
one that has a layered structure of critical values and a strong correlation between indexes and critical values and another where even at energy levels below the limiting ground state energy the
mean number of local minima is exponentially large. These two scenarios should correspond to the distinction between one-step replica symmetry breaking and full replica-symmetric breaking of the
physics literature on spin glasses. In the former, we find a new way to derive the asymptotic complexity function as a function of the 1RSB Parisi functional.
[show abstract] [hide abstract]
ABSTRACT: We prove the Einstein relation, relating the velocity under a small perturbation to the diffusivity in equilibrium, for certain biased random walks on Galton--Watson trees. This
provides the first example where the Einstein relation is proved for motion in random media with arbitrary deep traps.
Annales de l Institut Henri Poincaré Probabilités et Statistiques 06/2011; · 0.93 Impact Factor
[show abstract] [hide abstract]
ABSTRACT: As a model of trapping by biased motion in random structure, we study the time taken for a biased random walk to return to the root of a subcritical Galton-Watson tree. We do so for
trees in which these biases are randomly chosen, independently for distinct edges, according to a law that satisfies a logarithmic non-lattice condition. The mean return time of the walk is in
essence given by the total conductance of the tree. We determine the asymptotic decay of this total conductance, finding it to have a pure power-law decay. In the case of the conductance
associated to a single vertex at maximal depth in the tree, this asymptotic decay may be analysed by the classical defective renewal theorem, due to the non-lattice edge-bias assumption. However,
the derivation of the decay for total conductance requires computing an additional constant multiple outside the power-law that allows for the contribution of all vertices close to the base of
the tree. This computation entails a detailed study of a convenient decomposition of the tree, under conditioning on the tree having high total conductance. As such, our principal conclusion may
be viewed as a development of renewal theory in the context of random environments. For randomly biased random walk on a supercritical Galton-Watson tree with positive extinction probability, our
main results may be regarded as a description of the slowdown mechanism caused by the presence of subcritical trees adjacent to the backbone that may act as traps that detain the walker. Indeed,
this conclusion is exploited in \cite{GerardAlan} to obtain a stable limiting law for walker displacement in such a tree.
[show abstract] [hide abstract]
ABSTRACT: We consider Random Hopping Time (RHT) dynamics of the Sherrington - Kirkpatrick (SK) model and p-spin models of spin glasses. For any of these models and for any inverse temperature we
prove that, on time scales that are sub-exponential in the dimension, the properly scaled clock process (time-change process) of the dynamics converges to an extremal process. Moreover, on these
time scales, the system exhibits aging like behavior which we called extremal aging. In other words, the dynamics of these models ages as the random energy model (REM) does. Hence, by extension,
this confirms Bouchaud's REM-like trap model as a universal aging mechanism for a wide range of systems which, for the first time, includes the SK model.
[show abstract] [hide abstract]
ABSTRACT: We give an asymptotic evaluation of the complexity of spherical p-spin spin-glass models via random matrix theory. This study enables us to obtain detailed information about the bottom
of the energy landscape, including the absolute minimum (the ground state), the other local minima, and describe an interesting layered structure of the low critical values for the Hamiltonians
of these models. We also show that our approach allows us to compute the related TAP-complexity and extend the results known in the physics literature. As an independent tool, we prove a LDP for
the k-th largest eigenvalue of the GOE, extending the results of Ben Arous, Dembo and Guionnett (2001).
[show abstract] [hide abstract]
ABSTRACT: We consider the family of two-sided Bernoulli initial conditions for TASEP which, as the left and right densities ($\rho_-,\rho_+$) are varied, give rise to shock waves and rarefaction
fans---the two phenomena which are typical to TASEP. We provide a proof of Conjecture 7.1 of [Progr. Probab. 51 (2002) 185--204] which characterizes the order of and scaling functions for the
fluctuations of the height function of two-sided TASEP in terms of the two densities $\rho_-,\rho_+$ and the speed $y$ around which the height is observed. In proving this theorem for TASEP, we
also prove a fluctuation theorem for a class of corner growth processes with external sources, or equivalently for the last passage time in a directed last passage percolation model with
two-sided boundary conditions: $\rho_-$ and $1-\rho_+$. We provide a complete characterization of the order of and the scaling functions for the fluctuations of this model's last passage time $L
(N,M)$ as a function of three parameters: the two boundary/source rates $\rho_-$ and $1-\rho_+$, and the scaling ratio $\gamma^2=M/N$. The proof of this theorem draws on the results of [Comm.
Math. Phys. 265 (2006) 1--44] and extensively on the work of [Ann. Probab. 33 (2005) 1643--1697] on finite rank perturbations of Wishart ensembles in random matrix theory.
The Annals of Probability 05/2009; 39(2011). · 1.38 Impact Factor
[show abstract] [hide abstract]
ABSTRACT: We continue here the study of free extreme values begun in Ben Arous and Voiculescu (2006). We study the convergence of the free point processes associated with free extreme values to a
free Poisson random measure (Voiculescu (1998), Barndorff-Nielsen and Thorbjornsen (2005)). We relate this convergence to the free extremal laws introduced in Ben Arous and Voiculescu (2006) and
give the limit laws for free order statistics.
Probability Theory and Related Fields 04/2009; · 1.39 Impact Factor
[show abstract] [hide abstract]
ABSTRACT: We survey in this paper a universality phenomenon which shows that some characteristics of complex random energy landscapes are model-independent, or universal. This universality,
called REM-universality, was discovered by S. Mertens and H. Bauke in the context of combinatorial optimization. We survey recent advances on the extent of this REM-universality for equilibrium
as well as dynamical properties of spin glasses. We also focus on the limits of REM-universality, i.e., when it ceases to be valid. Mathematics Subject Classification (2000)
82B44-82D30-82C44-60G15-60G55 KeywordsSpin glasses-random energy model-extreme values-Gaussian processes-statistical mechanics-disordered media
12/2008: pages 45-84;
[show abstract] [hide abstract]
ABSTRACT: We consider a version of Glauber dynamics for a p-spin Sherrington– Kirkpatrick model of a spin glass that can be seen as a time change of simple random walk on the N-dimensional
hypercube. We show that, for all p ≥ 3 and all inverse temperatures β>0, there exists a constant γ β ,p >0, such that for all exponential time scales, exp(γ N), with γ < γ β ,p , the properly
rescaled clock process (time-change process) converges to an α-stable subordinator where α = γ/β 2<1. Moreover, the dynamics exhibits aging at these time scales with a time-time correlation
function converging to the arcsine law of this α-stable subordinator. In other words, up to rescaling, on these time scales (that are shorter than the equilibration time of the system) the
dynamics of p-spin models ages in the same way as the REM, and by extension Bouchaud’s REM-like trap model, confirming the latter as a universal aging mechanism for a wide range of systems. The
SK model (the case p = 2) seems to belong to a different universality class.
Communications in Mathematical Physics 08/2008; 282(3):663-695. · 1.97 Impact Factor
[show abstract] [hide abstract]
ABSTRACT: Ageing has become the paradigm for describing dynamical behavior of glassy systems, and in particular spin glasses. Trap models have been introduced as simple caricatures of the
effective dynamics of such systems. In this Letter we show that in a wide class of mean field models and on a wide range of time scales, ageing occurs precisely as predicted by the random energy
model-like trap model of Bouchaud and Dean. This is the first rigorous result concerning ageing in mean field models other than the random energy model and the spherical model.
Journal of Statistical Mechanics Theory and Experiment 04/2008; 2008(04):L04003. · 1.87 Impact Factor
[show abstract] [hide abstract]
ABSTRACT: We give a general proof of aging for trap models using the arcsine law for stable subordinators. This proof is based on abstract conditions on the potential theory of the underlying
graph and on the randomness of the trapping landscape. We apply this proof to aging for trap models on large, two-dimensional tori and for trap dynamics of the random energy model on a broad
range of time scales. © 2006 Wiley Periodicals, Inc.
Communications on Pure and Applied Mathematics 02/2008; 61(3):289 - 329. · 3.34 Impact Factor
[show abstract] [hide abstract]
ABSTRACT: We study the statistics of the largest eigenvalues of real symmetric and sample covariance matrices when the entries are heavy tailed. Extending the result obtained by Soshnikov in \
cite{Sos1}, we prove that, in the absence of the fourth moment, the top eigenvalues behave, in the limit, as the largest entries of the matrix.
Annales de l Institut Henri Poincaré Probabilités et Statistiques 11/2007; · 0.93 Impact Factor
[show abstract] [hide abstract]
ABSTRACT: Let $X_N$ be an $N\ts N$ random symmetric matrix with independent equidistributed entries. If the law $P$ of the entries has a finite second moment, it was shown by Wigner \cite{wigner}
that the empirical distribution of the eigenvalues of $X_N$, once renormalized by $\sqrt{N}$, converges almost surely and in expectation to the so-called semicircular distribution as $N$ goes to
infinity. In this paper we study the same question when $P$ is in the domain of attraction of an $\alpha$-stable law. We prove that if we renormalize the eigenvalues by a constant $a_N$ of order
$N^{\frac{1}{\alpha}}$, the corresponding spectral distribution converges in expectation towards a law $\mu_\alpha$ which only depends on $\alpha$. We characterize $\mu_\alpha$ and study some of
its properties; it is a heavy-tailed probability measure which is absolutely continuous with respect to Lebesgue measure except possibly on a compact set of capacity zero.
Communications in Mathematical Physics 08/2007; · 1.97 Impact Factor
[show abstract] [hide abstract]
ABSTRACT: We introduce here a new universality conjecture for levels of random Hamiltonians, in the same spirit as the local REM conjecture made by S. Mertens and H. Bauke. We establish our
conjecture for a wide class of Gaussian and non-Gaussian Hamiltonians, which include the $p$-spin models, the Sherrington-Kirkpatrick model and the number partitioning problem. We prove that our
universality result is optimal for the last two models by showing when this universality breaks down.
[show abstract] [hide abstract]
ABSTRACT: We give the “quenched” scaling limit of Bouchaud’s trap model in d≥2. This scaling limit is the fractional-kinetics process, that is the time change of a d-dimensional Brownian motion
by the inverse of an independent α-stable subordinator.
The Annals of Probability 01/2007; 35(2007). · 1.38 Impact Factor
[show abstract] [hide abstract]
ABSTRACT: This work addresses potential theoretic questions for the standard nearest neighbor random walk on the hypercube $\{-1,+1\}^N$. For a large class of subsets $A\subset\{-1,+1\}^N$ we
give precise estimates for the harmonic measure of $A$, the mean hitting time of $A$, and the Laplace transform of this hitting time. In particular, we give precise sufficient conditions for the
harmonic measure to be asymptotically uniform, and for the hitting time to be asymptotically exponentially distributed, as $N\to\infty$. Our approach relies on a $d$-dimensional extension of the
Ehrenfest urn scheme called lumping and covers the case where $d$ is allowed to diverge with $N$ as long as $d\leq\alpha_0\frac{N}{\log N}$ for some constant $0<\alpha_0<1$.
Top Journals
• 2008
□ Weierstrass Institute for Applied Analysis and Stochastics
Berlín, Berlin, Germany
□ CUNY Graduate Center
New York City, New York, United States
• 2006
□ Université Paris-Sud 11
Orsay, Île-de-France, France
• 1998–2005
□ École Polytechnique Fédérale de Lausanne
Lausanne, Vaud, Switzerland
• 1995–1997
□ French National Centre for Scientific Research
Lutetia Parisorum, Île-de-France, France | {"url":"http://www.researchgate.net/researcher/9305154_Gerard_Ben_Arous","timestamp":"2014-04-19T21:20:12Z","content_type":null,"content_length":"342093","record_id":"<urn:uuid:7bfd3b4c-95a8-4b33-a6e5-ad4998ef5d23>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00419-ip-10-147-4-33.ec2.internal.warc.gz"} |
Give Counterexample: b_(n+1)-b_n --> 0, then b_n --> L
September 21st 2009, 05:42 PM #1
Junior Member
May 2007
Give Counterexample: b_(n+1)-b_n --> 0, then b_n --> L
given: a_n = b_(n+1) - b_n
[if the limit of a_n = 0, then (b_n) has a limit] = FALSE.
please provide a counterexample. with all my heart i think the converse is true. if have already proven the converse of the converse.
i have no idea where to even begin, since it seems obvious that ... should the difference between consecutive terms approach zero, then the terms approach an agreeable limit...
i.e. i keep coming up with sequences that actually DO converge (e.g. (-1)^n/n ...)
given: a_n = b_(n+1) - b_n
[if the limit of a_n = 0, then (b_n) has a limit] = FALSE.
please provide a counterexample. with all my heart i think the converse is true. if have already proven the converse of the converse.
i have no idea where to even begin, since it seems obvious that ... should the difference between consecutive terms approach zero, then the terms approach an agreeable limit...
i.e. i keep coming up with sequences that actually DO converge (e.g. (-1)^n/n ...)
if you're assuming that L is "finite", then a counter-example would be $b_n = \ln n.$ the converse is obviously true.
Another is $b_n=H_n$, where $H_n=\sum_{k=1}^n\frac{1}{k}$.
$b(n):=\sqrt{n}$ for $n\in\mathbb{N}_{0}$.
By the way, any function satisfying $f,f^{\prime}>0>f^{\prime\prime}$ (these imply that $a>0$ tends to $0$ asymptotically) with $f(\infty)=\infty$ (this implies $b$ diverges to $\infty$) can be
your sequence $b$.
Draw such a function on a paper and try to see what I mean...
Yeah, as strange as it seems, I get it. f(x) -> Inf while f'(x) -> 0. Thanks for that generality.
@redsoxfan325: I also thought of that odd series.
September 21st 2009, 06:53 PM #2
MHF Contributor
May 2008
September 21st 2009, 08:25 PM #3
September 22nd 2009, 01:25 AM #4
September 22nd 2009, 04:33 AM #5
Junior Member
May 2007 | {"url":"http://mathhelpforum.com/calculus/103618-give-counterexample-b_-n-1-b_n-0-then-b_n-l.html","timestamp":"2014-04-16T05:51:44Z","content_type":null,"content_length":"45615","record_id":"<urn:uuid:2ac4c15d-8a4f-4730-9e70-240a107d75de>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00068-ip-10-147-4-33.ec2.internal.warc.gz"} |
Estimating Gear Fatigue Life
Fatigue failure of gears can lead to the catastrophic failure of equipment, taking into account that gears are important elements in the power transmission systems of many modern machines. Because of
this, effective procedures and information to evaluate the load capacity and useful life of gears are needed by specialists in several fields of engineering application, including those involved with
disaster preparedness and management in the fields of transportation, power generation, and the mechanical industry. The actual practice of engineering and increase of the work speeds in current
applications of gears has required better specification of steel fatigue behavior for numbers of cycles greater than 106 or 107. In this sense, AGMA Standard 2105-D04 has introduced useful
information to consider the fatigue load capacity of steel gears in the case of a high number of cycles. In this presentation, the procedure and formulas to estimate a value of gear life expectancy
for a high number of cycles is given. The procedure takes into account the pitting resistance (surface fatigue failure) and bending strength capacity (volumetric fatigue failure) of spur and helical
gears. Formulas are based on the AGMA Standard 2105-D04 for calculation of the load capacity of cylindrical gears.
The Stress-life Method
The distinguishing characteristic of materials associate with the lost of resistance under the action of repeated or fluctuating stresses is called fatigue failure. The study of fatigue failure is
not an exact and absolute science, of which precise results can be obtained. The prediction of fatigue fracture is very often approximate and relative, with many components of the statistical
calculation, and there are a great many factors to be considered, even for very simple load cases. In this sense the determination of the fatigue limit for materials with industrial purposes—in
particular the steel—demands a great variety of test to define the magnitude of fatigue limit reported at a specific number of cycles.
In practice, gears are mostly operated under variable loads. Even in a continuous process the load acting on gear teeth is fluctuating due to the tooth contact process and operational conditions
under which the gears shall perform. Under these variable loads a tooth breakage, which most often results in a total gear failure, must be take into account during the stages of gear design or load
capacity calculation. This fact has demanded that new fatigue tests for gear materials be carried out and the fatigue resistance behavior with a high number of load cycles be analyzed.
As it is known, there are a great many factors to be considered during the study of fatigue phenomena. The methods of fatigue failure analysis are inexact and only approximate results can be
obtained. Thus, more-exact methods require that more data be derived from practical testing and statistical calculation. A Whöler’s, or strength-life (σFAT - N) diagram is the most widely used graph
to provide the corresponding fatigue strength of a material reported at a specific number of stress cycles (see
Figure 1
). In the Whöler diagram it is usual to represent the logarithm of the fatigue strength (Log σFAT) in the function of the logarithm of the number of cycles (Log N).
The fatigue failure analysis based on stress-life method is especially useful for a wide range of gear design applications and represents high-cycle applications adequately. In particular, the steel
for gears requires a great variety of tests to define the fatigue strength versus the number of load cycles. In theory, it is often accepted that the line in the case of stress cycles greater than
106 or 107 cycles behaves with slope zero and failure will not occur, no matter how great the number of cycles. The stress value corresponding with the point of inflection in the graph is declared
fatigue limit or endurance limit.
Figure 1
shows the actual appearance of gear steel behavior with a small and very significant modification: the graph becomes not totally horizontal after the steel has been stressed for a number of cycles
greater than the basic number of cycles for established typical fatigue strength (N = 106 …107). Moreover, it is possible to distinguish a significant change in the slope of the line near to 106
cycles. It is different than the classical infinite life appearance of steel behavior.
Gear performance demands load capacity for a number of stress cycles greater than the basic number of cycles for fatigue strength. In these situations it is useful to consider the fatigue resistance
level in case of a high number of stress cycles.
The necessity for greater accuracy in the determination of fatigue limit for steel with applications in high speed gears has led to tests and new studies in the zone of a high stress cycle. AGMA
Standard 2105-D04 is a good example of improvements and precision of the steel gear behavior. Formulas to evaluate the permissible strength for the volumetric and superficial fatigue of steel with
application on cylindrical involute gears with external teeth gears are given on AGMA Standard 2105-D04 as follows.
[σF]: Permissible bending stress taking into account fatigue strength, [MPa].
[σH]: Permissible contact stress taking into account fatigue strength, [MPa].
σFlim: Fatigue limit for bending stress and unidirectional loading, [MPa].
σHlim: Fatigue limit taking into account contact stress, [MPa].
SF: Safety factor for bending strength.
SH: Safety factor for pitting.
YN: Stress cycle factor for bending strength.
ZN: stress cycle factor for pitting resistance.
Yθ: Temperature factor.
YZ: Reliability factor.
ZW: Hardness ratio factor for pitting resistance.
Particularly, the stress cycle factors take into account the strength-life characteristics of the gear material. Factors ZN and YN, adjust the fatigue limit stress for the required number of cycles
of operation as compared with fatigue limit stress established by testing at the basic number of cycles (N = 106 …107 cycles). In the case of gears, the number of stress cycles is defined as the
number of mesh contacts, under load, of the gear tooth being analyzed.
At the present time there is insufficient data to provide accurate stress cycle curves for all types of gears and gear applications. Experience, however, suggests that new stress cycle curves for
pitting resistance and bending strength of steel gears as shown in AGMA Standard 2105-D04. Taking into account the current information about the behavior of the fatigue load capacity of steel for
gears, it becomes clear how important it is to formulate a new direction and a method for estimating expected life in the case of a high number of cycles.
The purpose of this paper is to establish a procedure and formulas to estimate a value of gear expected life for a high number of cycles. The procedure takes into account the pitting resistance
(surface fatigue failure) and bending strength capacity (volumetric fatigue failure) of spur and helical gears. The equations presented have been redefined according to the formulas for load capacity
in AGMA Standard 2105-D04.
Determination of Stress Cycle Factors
Rating methods accepted by standards to evaluate the load capacity of external spur and helical involute gear teeth operating on parallel axes are based on the contact stress resistance and bending
strength [1, 2, 3, 4]. The formulas evaluate gear tooth capacity as influenced by the major factors which affect progressive pitting of the teeth and gear tooth fracture at the fillet radius. The
pitting and fracture of gear teeth are considered to be a fatigue phenomenon depending on stress cycles. Certification of gear load capacity is based on the confrontations of stress calculated by
gear-tooth rating formulae with the bending and contact permissible stresses for gear materials.
The actual cylindrical gear-tooth rating formulae for pitting resistance are based on Hertz’s results for the calculation of contact pressure between two curved surfaces. They have also been improved
with modifications in the new standards to consider load sharing between adjacent teeth, the load increment due to external and internal dynamic loads, uneven distribution of load over the facewidth
due to mesh misalignment caused by inaccuracies in manufacture, and elastic deformations, etc. The formulae for bending-strength rating are based on cantilever-projection theory. The maximum tensile
stress at the tooth-root (in the direction of the tooth height) which may not exceed the permissible bending stress for the material is the basis for rating the bending strength of gear teeth. Just
the same as in the calculation of tooth contact stress for pitting resistance, the calculating of tooth root strength takes into account load sharing between adjacent teeth, an increment of nominal
load due to non-uniform distribution of load on the tooth face, and some external and internal dynamic load.
AGMA Standard 2105-D04 provides the following rating formulas and permissible stresses applicable for calculating the pitting resistance and bending strength of external cylindrical involute gear
teeth operating on parallel axes.
σF: Bending tooth-root stress, [MPa].
σH: Contact tooth-flank stress, [MPa].
ZE: Elastic coefficient, [Mpa1/2].
FT: Transmitted tangential load, [N].
KO: Overload factor.
KV: Dynamic factor.
KH: Load distribution factor.
KS: Size factor.
ZR: Surface condition factor for pitting resistance
KB: Rim thickness factor.
b: Facewidth, [mm]
dw1: Operating pitch diameter of pinion, [mm].
mt: Transverse module, [mm]
ZI: Geometry factor for pitting resistance.
YJ: Geometry factor for bending strength.
By means of mathematical processing of formulas (3) and (4) it is possible to determine the stress cycle factors for pitting resistance and bending strength according to equations (5) and (6).
Determination of the Expected Fatigue Lifetime
Knowing the interrelation of factors ZN and YN with the fatigue limit stress equivalent to a certain number of load cycles, it is possible to determine the useful expected fatigue lifetime in the
condition of same bending and contact stresses in the teeth with corresponding permissible stresses for failure. Under these conditions, the number of load cycles expected by pitting (nLh) or fatigue
fracture (nLf) can be evaluated with the stress cycle factors ZN and YN determined by the formulas (5)-(6) and graphical information presented on AGMA 2105-D04 (see
Figure 2
Figure 3
). Once certain that the numbers of load cycles corresponding to calculated values of factors ZN and YN , the hours of expected fatigue lifetime (HσF and HσH) can be known by means of equations (7)
and (8).
nLh : ?Number of load cycles expected by pitting in corresponding with stress cycle factors ZN in
Figure 3
nLf : ?Number of load cycles expected by fatigue fracture in corresponding with stress cycle factors YN in
Figure 3
n : ?Rotational speed,(min-1)
q : ?Number of load application by 1 turn of gear. It can be different for bending stress or contact stress.
Sample Case
With the intention of demonstrating the procedure to estimate the useful expected fatigue lifetime of cylindrical gears, the calculation of the expected useful life of the pinion in a helical gear is
presented (see
Table 1
Table 2
). In particular, the gear transmission analyzed corresponds to the first stage of speed reducer applied in the gear transmission of a sugar cane mill. Field studies show gear failure by pitting
after 10 years of sugar cane harvesting. It should be noted that the calculation of gear load capacity by pitting resistance was sufficient in case of classical theory of fatigue-life. The results
that take into account the new fatigue resistance level with precision of stress cycle factors are more real (see
Table 2
In general, safety factors must be established from a thorough analysis of the service experience with a particular application. A minimum safety factor is normally established for the designer by
specific agreement between the manufacturer and purchaser. When specific service experience is not available, a thorough analytical investigation should be made. It is certain that the magnitude of
safety and reliability factors can condition the value of estimating life, for good designs with proven values of safety and reliability are important (see
Table 3
Table 4
An effective procedure, formulas, and information to estimate a value of expected fatigue life in the case of a steel cylindrical gear with a high number of cycles has been given. Formulas are based
in the AGMA Standard 2105-D04 for calculation of the load capacity of cylindrical gears.
In this paper the stress cycle factors take into account the strength-life characteristics of the gear material, and it used the factors ZN and YN to adjust the fatigue limit stress for the required
number of cycles of operation. The procedure is fixed taking into account the pitting resistance and bending strength capacity of spur and helical gears.
Knowing the interrelation of factors ZN and YN with the fatigue limit stress equivalent to a certain number of load cycles, it is possible to determine the useful expected fatigue lifetime in the
condition of the same bending and contact stresses in the teeth with corresponding permissible stresses for failure. Under these conditions the number of load cycles expected by pitting (nLh) or
fatigue fracture (nLf) can be evaluated with the stress cycle factors ZN and YN determined by the formulas (
Formula 5
Formula 6
) and the graphical information presented on AGMA 2101-D04 (see
Figure 2
Figure 3
). Once certain that the numbers of load cycles correspond to calculated values of factors ZN and YN , the hours of expected fatigue lifetime (HσF and HσH) can be determined by means of equations (
Formula 7
) and (
Formula 8
Some results of field studies show a good approximation between data from the field and the values obtained by means of the procedure described in this paper, but it is necessary to conduct more
testing and data application to improve the results due to the great many factors to be considered in fatigue failure.
You must Log In to comment.
Be the first to comment! | {"url":"http://www.gearsolutions.com/article/detail/5356/estimating-gear-fatigue-life","timestamp":"2014-04-18T13:07:54Z","content_type":null,"content_length":"33499","record_id":"<urn:uuid:c0f29974-d17a-4f99-8646-92586e74e34e>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00428-ip-10-147-4-33.ec2.internal.warc.gz"} |
Algebra Archive | March 08, 2008 | Chegg.com
1) write an equation for the perimeter of the track, using theinformation in the diagram..
P=2*2W+[2π(W/2)]/2*2= 4W+πW
2) Find the value of w which ensure that the inside lane of thetrack will produce a lap length of no less than 400 meters..
W(4+π)=400>>W= 56.022m
but i don't know how to solve the rest.. help me
3) IAAF rules states that each lane should measure about 1.25meters in width. Find the perimeter of the outer edge of the insidelane..
4) Suppose that the track is to have 6 lanes. Find the perimeter ofthe outer edge of each lane..
please help me..
• Show less | {"url":"http://www.chegg.com/homework-help/questions-and-answers/algebra-archive-2008-march-08","timestamp":"2014-04-20T06:09:18Z","content_type":null,"content_length":"30577","record_id":"<urn:uuid:005c5b10-193d-4542-a364-fc361b21d4f5>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00300-ip-10-147-4-33.ec2.internal.warc.gz"} |
Suitable change of variables for this triple integral?
May 18th 2011, 03:47 AM #1
Feb 2009
Suitable change of variables for this triple integral?
Can anyone help me with coming up with a suitable change of variables for this triple integral? (last part of question)
having trouble getting spherical polars to work, if anyone has any other suggestions it would be great!
this is from here, not a take home test
Course Material for Calculus in Three Dimensions and Applications | Mathematical Institute - University of Oxford
sheet 3
Hint Use the Spectral Theorem for the corresponding quadratic form and apply the substitution $(x,y,z)^t=P(u,v,w)^t$ with $P$ orthogonal such that $P^{-1}AP=D$ (D diagonal) .
Yeah.. thanks, but as that's not in my first year course the questions i'm facing aren't really geared towards that approach, it would probably be better for me to understand how they want me to
do it
Thanks you though
May 18th 2011, 04:37 AM #2
May 18th 2011, 04:43 AM #3
Feb 2009
May 18th 2011, 07:12 AM #4 | {"url":"http://mathhelpforum.com/calculus/180936-suitable-change-variables-triple-integral.html","timestamp":"2014-04-16T11:24:46Z","content_type":null,"content_length":"40545","record_id":"<urn:uuid:3b36975a-8c4e-4fda-a883-6225045d2ba6>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00541-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Forum Discussions
Math Forum
Ask Dr. Math
Internet Newsletter
Teacher Exchange
Search All of the Math Forum:
Views expressed in these public forums are not endorsed by Drexel University or The Math Forum.
Topic: The relation between circumreference and radius of circle
Replies: 4 Last Post: May 6, 2013 2:58 PM
Messages: [ Previous | Next ]
KBH Re: The relation between circumreference and radius of circle
Posted: May 5, 2013 12:58 PM
Posts: 105
Registered: 3/14/ Look at
C = 2 * Pi * R
and Pi is just the central angle of a half-circle in radians.
Next, with a viewpoint that the purpose of Pi is to compute arc lengths then go looking for arc-lengths-by-integration. That's not rounded numbers producing an exact result but just
a different method
Date Subject Author
5/5/13 The relation between circumreference and radius of circle JT
5/5/13 Re: The relation between circumreference and radius of circle KBH
5/5/13 Re: The relation between circumreference and radius of circle ross.finlayson@gmail.com
5/5/13 Re: The relation between circumreference and radius of circle Brian Q. Hutchings
5/6/13 Re: The relation between circumreference and radius of circle Brian Q. Hutchings | {"url":"http://mathforum.org/kb/thread.jspa?threadID=2455269&messageID=8912208","timestamp":"2014-04-21T07:43:29Z","content_type":null,"content_length":"21028","record_id":"<urn:uuid:9a6fee39-32d2-4339-8416-d130fe8f13ba>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00578-ip-10-147-4-33.ec2.internal.warc.gz"} |
s common for Beer Mug and Power Factor?
Understanding power factor is not that hard. We have some very common example from the real life you will understand for sure, but first let’s start with some introduction of power factor.
To understand power factor, we’ll first start with the definition of some basic terms:
kW is Working Power (also called Actual Power or Active Power or Real Power). It is the power that actually powers the equipment and performs useful work.
kVAR is Reactive Power. It is the power that magnetic equipment (transformer, motor, relay etc.) needs to produce the magnetizing flux.
kVA is Apparent Power. It is the “vectorial summation” of KVAR and KW.
Example From the Real Life
Let’s look at a simple analogy in order to better understand these terms….
Let’s say it’s friday evening, and you are with your friends at your favorite pub after really hot day. You order up a big mug of your favorite beer for you and for your friends. The thirst-quenching
portion of your beer is represented by KW (the big pic on top).
Unfortunately, life isn’t perfect. Along with your ale comes a little bit of foam. (And let’s face it…that foam just doesn’t quench your thirst.) This foam is represented by KVAR.
The total contents of your mug, KVA, is this summation of KW (the beer) and KVAR (the foam).
So, now that we understand some basic terms, we are ready to learn about power factor:
Power Factor (P.F.) is the ratio of Working Power to Apparent Power.
Looking at our beer mug analogy above, power factor would be the ratio of beer (KW) to beer plus foam (KVA).
Thus, for a given KVA:
1. The more foam you have (the higher the percentage of KVAR), the lower your ratio of KW (beer) to KVA (beer plus foam). Thus, the lower your power factor.
2. The less foam you have (the lower the percentage of KVAR), the higher your ratio of KW (beer) to KVA (beer plus foam). In fact, as your foam (or KVAR) approaches zero, your power factor
approaches 1.0.
Our beer mug analogy is a bit simplistic. In reality, when we calculate KVA, we must determine the “vectorial summation” of KVAR and KW. Therefore, we must go one step further and look at the angle
between these vectors.
Power Triangle
The “Power Triangle” illustrates this relationship between KW, KVA, KVAR, and Power Factor:
Note that in an ideal world looking at the beer mug analogy:
1. KVAR would be very small (foam would be approaching zero)
2. KW and KVA would be almost equal (more beer; less foam)
There are dosen of tools and technical articles/guides published at EEP that can help you to understand power factor and its controlling. Hope these can help:
Resource: powerstudies.com
Recommended EE articles
15 Comments
Electromagnetic Stresses On Busbar System | EEP
Apr 02, 2014
[…] peak or fully asymmetrical short circuit current is dependent on the power factor (cos φ) of the busbar system and its associated connected electrical plant. The value is obtained by […]
Gaudencio Toto
Mar 10, 2014
Excellent example, served me to explain the power factor to my clients.
We need more examples applicable to real life. Thanks
Barbara Kerr
Feb 12, 2014
Is it correct to say that the equipment itself ‘eats’ a bit of the electricity that can be generated? So there is always a bit of loss in the generation process, lost to the equipment itself?
Thanks much for this helpful article!
Loh Kon Min
Feb 03, 2014
Great job….I love your power factor analogy using beer as an example!!!
lenin pugal
Feb 02, 2014
i need more very common example from the real life like this.
Leave a Comment | {"url":"http://electrical-engineering-portal.com/beer-mug-and-power-factor","timestamp":"2014-04-17T18:27:23Z","content_type":null,"content_length":"54845","record_id":"<urn:uuid:e98bfa71-5d3b-4ef9-89ef-bfe98c1dce15>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00166-ip-10-147-4-33.ec2.internal.warc.gz"} |
A B C D E F G H I J K L M N O P Q R S T U V W X Y Z
Pramod Achar
Associate Professor, Ph.D. Massachusetts Institute of Technology
Research interest: Representation theory of algebraic groups
Office: 266 Lockett Hall
Office hours: Tu 2:00pm–3:00pm; Th 10:00am–11:30am
Telephone: +1 225 578 7990
Email: pramod@math.lsu.edu
William Adkins
Director of Graduate Studies
Professor, Ph.D. University of Oregon
Research interest: Algebraic geometry, linear algebra over commutative rings
Office: 350 Lockett Hall
Office hours: MW 9:00am–10:00am; TuTh 11:30am–12:30pm
Telephone: +1 225 578 1601
Email: adkins@math.lsu.edu
Burak Aksoylu
Adjunct Assistant Professor, Ph.D. University of California, San Diego
Research interest: Numerical analysis, numerical solutions to PDEs, scientific computing, multilevel iterative methods.
Email: burak@math.lsu.edu
Yaniv Almog
Associate Professor, Ph.D. Technion - Israel Institute of Technology
Research interest: Ginzburg-Landau theory of superconductivity, fluid Mechanics
Office: 304 Lockett Hall
Office hours: Tu 2:30pm–3:30pm; Th 2:30pm–2:30pm
Telephone: +1 225 578 5329
Email: almog@math.lsu.edu
Yuri Antipov
Professor, Ph.D. Moscow State University (Russia)
Research interest: Integral and functional equations of continuum mechanics
Office: 388 Lockett Hall
Telephone: +1 225 578 1567
Email: antipov@math.lsu.edu
A B C D E F G H I J K L M N O P Q R S T U V W X Y Z
Scott Baldridge
Associate Professor, Ph.D. Michigan State University
Research interest: Geometric topology, differential geometry, gauge theory.
Office: 380 Lockett Hall
Office hours: TuTh 1:30pm–3:00pm
Telephone: +1 225 578 1670
Email: sbaldrid@math.lsu.edu
Blaise Bourdin
Associate Professor, Ph.D. University of Paris XIII (France)
Research interest: Mathematics of materials science, computational mechanics, optimal design, scientific computing
Office: 344 Lockett Hall
Office hours: MW 1:30pm–2:30pm
Telephone: +1 225 578 1612
Email: bourdin@math.lsu.edu
Susanne C. Brenner
SIAM Fellow, AMS Fellow, AAAS Fellow
Michael F. and Roberta Nesbit McDonald Professor, Ph.D. The University of Michigan
Research interest: Numerical analysis, finite element methods, multigrid methods, domain decompostion methods
Office: 268 Lockett Hall
Telephone: +1 225 578 1678
Email: brenner@math.lsu.edu
A B C D E F G H I J K L M N O P Q R S T U V W X Y Z
W. George Cochran
Associate Professor, Ph.D. University of Michigan
Research interest: Probability
Office: 308 Lockett Hall
Office hours: M 9:30am–11:00am; Th 12:00pm–1:15pm
Telephone: +1 225 578 1614
Email: cochran@math.lsu.edu
Daniel C. Cohen
Professor, Ph.D. Northeastern University
Research interest: Algebraic topology, arrangements of hyperplanes
Office: 372 Lockett Hall
Office hours: MW 10:30am–12:00pm
Telephone: +1 225 578 1576
Email: cohen@math.lsu.edu
A B C D E F G H I J K L M N O P Q R S T U V W X Y Z
Pallavi Dani
Assistant Professor, Ph.D. University of Chicago
Research interest: Geometric group theory
Office: 224 Lockett Hall
Telephone: +1 225 578 1588
Email: pdani@math.lsu.edu
Oliver Dasbach
Associate Professor, Ph.D. University of Düsseldorf (Germany)
Research interest: Low dimensional topology
Office: 306 Lockett Hall
Office hours: MW 10:30am–11:30am
Telephone: +1 225 578 1784
Email: kasten@math.lsu.edu
Mark Davidson
Professor, Ph.D. University of California, Irvine
Research interest: Representations of Lie groups
Office: 312 Lockett Hall
Office hours: TuTh 10:30am–12:00pm
Telephone: +1 225 578 1581
Email: davidson@math.lsu.edu
Charles Neal Delzell
Associate Chair for Instruction
Professor, Ph.D. Stanford University
Research interest: Real algebraic geometry
Office: 301A Lockett Hall
Telephone: +1 225 578 1619
Email: delzell@math.lsu.edu
Guoli Ding
Professor, Ph.D. Rutgers University
Research interest: Graph theory, combinatorics
Office: 396 Lockett Hall
Office hours: TuTh 9:30am–10:30am
Telephone: +1 225 578 1671
Email: ding@math.lsu.edu
A B C D E F G H I J K L M N O P Q R S T U V W X Y Z
Ricardo Estrada
Professor, Ph.D. Pennsylvania State University
Research interest: Asymptotic analysis, generalized functions, integral equations
Office: 342 Lockett Hall
Office hours: MW 1:00pm–2:00pm
Telephone: +1 225 578 1677
Email: restrada@math.lsu.edu
A B C D E F G H I J K L M N O P Q R S T U V W X Y Z
A B C D E F G H I J K L M N O P Q R S T U V W X Y Z
Patrick Gilmer
A.K. and Shirley Barton Professor, Ph.D. University of California, Berkeley
Research interest: Knot theory, low dimensional manifolds
Office: 376 Lockett Hall
Office hours: TuTh 1:30pm–2:30pm
Telephone: +1 225 578 1574
Email: gilmer@math.lsu.edu
A B C D E F G H I J K L M N O P Q R S T U V W X Y Z
Hongyu He
Associate Professor, Ph.D. Massachusetts Institute of Technology
Research interest: Representation Theory and Harmonic Analysis
Office: 254 Lockett Hall
Office hours: TuTh 11:00am–12:00pm
Telephone: +1 225 578 1657
Email: hongyu@math.lsu.edu
A B C D E F G H I J K L M N O P Q R S T U V W X Y Z
Hui-Hsiung Kuo
Nicholson Professor, Ph.D. Cornell University
Research interest: Probability, stochastic analysis
Office: 318 Lockett Hall
Office hours: MTuWThF 1:30pm–2:30pm
Telephone: +1 225 578 1610
Email: kuo@math.lsu.edu
A B C D E F G H I J K L M N O P Q R S T U V W X Y Z
Jimmie Lawson
Boyd Professor Emeritus, Ph.D. University of Tennessee
Research interest: Operator and matrix means, topological algebra and domain theory, control theory
Office: 216 Lockett Hall
Telephone: +1 225 578 1672
Email: lawson@math.lsu.edu
Robert Lipton
S.B. Barton Professor, Ph.D. Courant Institute of Mathematical Sciences
Research interest: Mathematical materials science, partial differential equations
Office: 384 Lockett Hall
Telephone: +1 225 578 1569
Email: lipton@math.lsu.edu
Amha Lisan
Associate Professor, Ph.D. Howard University
Research interest: Topological algebra
Office: 354 Lockett Hall
Office hours: MTuWTh 1:30pm–2:30pm
Telephone: +1 225 578 1578
Email: lisan@math.lsu.edu
Richard A. Litherland
Professor, Ph.D. Cambridge University (England)
Research interest: Algebraic topology, knot theory
Office: 378 Lockett Hall
Office hours: MTuWThF 1:30pm–2:30pm
Telephone: +1 225 578 1573
Email: lither@math.lsu.edu
Ling Long
Associate Professor, Ph.D. Pennsylvania State University
Research interest: Number Theory
Office: 256 Lockett Hall
Telephone: (225)-578-1677
Email: llong@math.lsu.edu
A B C D E F G H I J K L M N O P Q R S T U V W X Y Z
James Madden
The Patricia Hewlett Bodin Distinguished Professor, Ph.D. Wesleyan University
Research interest: Real algebraic geometry, mathematics education
Office: 213 Prescott Hall
Telephone: +1 225 578 7988
Email: madden@math.lsu.edu
Karl Mahlburg
Assistant Professor, Ph.D. University of Wisconsin
Research interest: Number theory, combinatorics
Office: 228 Lockett Hall
Telephone: +1 225 578 2658
Email: mahlburg@math.lsu.edu
Michael Malisoff
Roy Paul Daniels Professor, Ph.D. Rutgers University
Research interest: Mathematical systems and control theory
Office: 392 Lockett Hall
Telephone: +1 225 578 6714
Email: malisoff@math.lsu.edu
Jorge Morales
Professor, Ph.D. University of Geneva (Switzerland)
Research interest: Algebraic number theory , algebraic groups, galois cohomology
Office: 262 Lockett Hall
Office hours: MWF 9:30am–10:30am
Telephone: +1 225 578 1655
Email: morales@math.lsu.edu
A B C D E F G H I J K L M N O P Q R S T U V W X Y Z
Frank Neubrander
The Demarcus D. Smith Alumni Professor, Ph.D. University of Tübingen (Germany)
Research interest: Laplace and convolution transforms, generalized functions, evolution equations, mathematics education
Office: 209 Prescott Hall
Office hours: MW 5:30pm–7:00pm; Tu 11:30am–12:30pm
Telephone: +1 225 578 7677
Email: neubrand@math.lsu.edu
Siu-Hung “Richard” Ng
Associate Professor, Ph.D. Rutgers University-New Brunswick
Research interest: Hopf algebras and tensor categories, quantum groups
Office: 248 Lockett Hall
Telephone: +1 225 578 1659
Email: sng@lsu.edu
Phuc Cong Nguyen
Assistant Professor, Ph.D. University of Missouri
Research interest: Partial differential equations, harmonic analysis, nonlinear potential theory
Office: 204 Lockett Hall
Telephone: +1 225 578 2657
Email: pcnguyen@math.lsu.edu
A B C D E F G H I J K L M N O P Q R S T U V W X Y Z
Gestur Olafsson
Alumni Professor, Ph.D. University of Goettingen (Germany)
Research interest: Harmonic analysis, representation theory, geometry
Office: 322 Lockett Hall
Office hours: Tu 1:30pm–2:30pm; Th 1:30pm–2:30pm
Telephone: +1 225 578 1608
Email: olafsson@math.lsu.edu
Bogdan Oporowski
Professor, Ph.D. The Ohio State University
Research interest: Graph theory, matroid theory
Office: 352 Lockett Hall
Office hours: Tu 1:30pm–2:30pm
Telephone: +1 225 578 1579
Email: bogdan@math.lsu.edu
James Oxley
Boyd Professor, Ph.D. University of Oxford (England)
Research interest: Matroid theory, graph theory
Office: 370 Lockett Hall
Office hours: TuTh 11:00am–12:00pm
Telephone: +1 225 578 1577
Email: oxley@math.lsu.edu
A B C D E F G H I J K L M N O P Q R S T U V W X Y Z
Robert Perlis
Department Chair
Cecil Taylor Alumni Professor, Ph.D. Massachusetts Institute of Technology
Research interest: Algebraic number theory
Office: 301C Lockett Hall
Office hours: M 10:00am–11:00pm
Telephone: +1 225 578 1618
Email: perlis@math.lsu.edu
A B C D E F G H I J K L M N O P Q R S T U V W X Y Z
Leonard Richardson
Herbert Huey McElveen Professor, Ph.D. Yale University
Research interest: Harmonic analysis on homogeneous spaces
Office: 386 Lockett Hall
Office hours: MWF 11:30am–12:30pm; TuTh 12:00pm–1:00pm
Telephone: 225 578 1568
Email: rich@math.lsu.edu
Boris Rubin
Professor, Ph.D. Rostov State University (Russia)
Research interest: Integral geometry, harmonic analysis, convex geometry
Office: 348 Lockett Hall
Office hours: M 11:00am–12:00pm
Telephone: +1 225 578 1580
Email: borisr@math.lsu.edu
A B C D E F G H I J K L M N O P Q R S T U V W X Y Z
Daniel Sage
Professor, Ph.D. University of Chicago
Research interest: Representation theory, algebraic geometry, hopf algebras and quantum groups, materials science
Office: 394 Lockett Hall
Telephone: +1 225 578 1564
Email: sage@math.lsu.edu
Ambar Sengupta
Hubert Butts Professor, Ph.D. Cornell University
Research interest: Probability, mathematical physics
Office: 324 Lockett Hall
Telephone: +1 225 578 1607
Email: sengupta@math.lsu.edu
Stephen Shipman
Associate Professor, Ph.D. University of Arizona
Research interest: Applied analysis: electrodynamics, scattering theory, metamaterials
Office: 314 Lockett Hall
Office hours: M 9:30am–11:30am; W 1:30pm–3:30pm
Telephone: +1 225 578 1674
Email: shipman@math.lsu.edu
Lawrence Smolinsky
Director of Actuarial Science
Roy Paul Daniels Professor, Ph.D. Brandeis University
Research interest: Geometry & topology
Office: 382 Lockett Hall
Office hours: Th 9:00am–10:00am
Telephone: +1 225 578 1570
Email: smolinsk@math.lsu.edu
Neal W. Stoltzfus
Professor, Ph.D. Princeton University
Research interest: Knots, Links & Algebraic Invariants, low Dimensional Topology, braids & Mapping Class Group
Office: 258 Lockett Hall
Telephone: +1 225 578 1656
Email: stoltz@math.lsu.edu
Padmanabhan Sundar
Professor, Ph.D. Purdue University
Research interest: Stochastic Analysis, stochastic Partial Differential Equations
Office: 316 Lockett Hall
Office hours: MW 2:00pm–3:30pm
Telephone: +1 225 578 1611
Email: sundar@math.lsu.edu
Li-yeng Sung
Professor, Ph.D. State University of New York at Stony Brook
Research interest: Partial differential equations, inverse scattering, numerical analysis
Office: 208 Lockett Hall
Telephone: +1 225 578 1598
Email: sung@math.lsu.edu
A B C D E F G H I J K L M N O P Q R S T U V W X Y Z
Michael M. Tom
Professor, Ph.D. Pennsylvania State University
Research interest: Partial differential equations
Office: 310 Lockett Hall
Office hours: MWF 10:30am–11:30am
Telephone: +1 225 578 1613
Email: tom@math.lsu.edu
A B C D E F G H I J K L M N O P Q R S T U V W X Y Z
Shea Vela-Vick
Assistant Professor, Ph.D. University of Pennsylvania
Research interest: Contact and symplectic geometry, low-dimensional topology, Riemannian geometry
Office: 252 Lockett Hall
Telephone: +1 225 578 1565
Email: shea@math.lsu.edu
Dirk Vertigan
Professor, Ph.D. University of Oxford (England)
Research interest: Combinatorial Algebra, Algebraic Combinatorics
Office: 328 Lockett Hall
Telephone: +1 225 578 1605
Email: vertigan@math.lsu.edu
A B C D E F G H I J K L M N O P Q R S T U V W X Y Z
Shawn Walker
Assistant Professor, Ph.D. University of Maryland
Research interest: Finite element methods, free boundary problems, PDE-constrained (shape) optimization
Office: 210 Lockett Hall
Office hours: TuTh 2:00pm–3:00pm
Telephone: +1 225 578 1603
Email: walker@math.lsu.edu
Xiaoliang Wan
Assistant Professor, Ph.D. Brown University
Research interest: Stochastic modeling, numerical methods for stochastic PDEs, minimum action method.
Office: 226 Lockett Hall
Office hours: MW 3:00pm–4:30pm
Telephone: +1 225 578 6367
Email: xlwan@math.lsu.edu
Peter Wolenski
Russell B. Long Professor, Ph.D. University of Washington
Research interest: Control theory, nonsmooth and variational analysis
Office: 326 Lockett Hall
Office hours: MF 12:30pm–1:30pm
Telephone: +1 225 578 1606
Email: wolenski@math.lsu.edu
A B C D E F G H I J K L M N O P Q R S T U V W X Y Z
Milen Yakimov
Sloan Research Fellow
Professor, Ph.D. University of California, Berkeley
Research interest: Noncommutative algebra, relations to geometry and combinatorics
Office: 390 Lockett Hall
Office hours: M 9:30am–10:30am; W 11:30am–12:30pm; Th 9:00am–10:00am
Telephone: +1 225 578 1566
Email: yakimov@math.lsu.edu
A B C D E F G H I J K L M N O P Q R S T U V W X Y Z
Hongchao Zhang
Assistant Professor, Ph.D. University of Florida
Research interest: Nonlinear optimization and its applications, numerical analysis, numerical linear algebra
Office: 220 Lockett Hall
Telephone: +1 225 578 1982
Email: hozhang@math.lsu.edu | {"url":"https://www.math.lsu.edu/dept/people/professors","timestamp":"2014-04-20T13:27:50Z","content_type":null,"content_length":"84629","record_id":"<urn:uuid:5766204e-8b23-4a40-875b-76727ea6bbe1>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00650-ip-10-147-4-33.ec2.internal.warc.gz"} |
the first resource for mathematics
Parabolic evolution equations in which the coefficients are the generators of infinitely differentiable semigroups. II.
(English) Zbl 0706.35060
[For part I, see ibid. 32, No.1, 107-124 (1989; Zbl 0693.35074).]
This paper continues the study of a linear evolution equation of parabolic type
$du/dt+A\left(t\right)u=f\left(t\right),\phantom{\rule{1.em}{0ex}}0<t\le T,\phantom{\rule{1.em}{0ex}}u\left(0\right)={u}_{0}\phantom{\rule{2.em}{0ex}}\left(\mathrm{E}\right)$
in a Banach space $X$ in which $A\left(t\right)$, $0\le t\le T$, are the generators of infinitely differentiable semigroups on $X$. We interpolate two results presented in part I, in which the two
extreme cases that the domains $𝒟\left(A\left(t\right)\right)$ of $A\left(t\right)$ are independent of $t$ and that $𝒟\left(A\left(t\right)\right)$ are completely variable with $t$ were discussed.
Now $𝒟\left(A\left(t\right)\right)$ are assumed to vary with $t$ temperately in the sense that
${\parallel A\left(t\right)\left(\lambda -A\left(t\right)\right)}^{-1}\left(A{\left(t\right)}^{-1}-A{\left(s\right)}^{-1}\right){\parallel }_{ℒ\left(X\right)}\le {N|t-s|}^{\mu }{\left(|\lambda |+1\
right)}^{-u }$
with some suitable exponents $0<\mu$, $u \le 1$. Under this condition, a fundamental solution (evolution operator) $U\left(t,s\right)$, $0\le t,s\le T$, on $X$ for (E) is constructed. The strict
solution $u$ to (E) is given in the form
$u\left(t\right)=U\left(t,0\right){u}_{0}+{\int }_{0}^{t}U\left(t,\tau \right)f\left(\tau \right)d\tau ,\phantom{\rule{1.em}{0ex}}0\le t\le T·$
35G10 Initial value problems for linear higher-order PDE
35K25 Higher order parabolic equations, general
47D06 One-parameter semigroups and linear evolution equations
34G10 Linear ODE in abstract spaces | {"url":"http://zbmath.org/?q=an:0706.35060","timestamp":"2014-04-16T22:41:51Z","content_type":null,"content_length":"25222","record_id":"<urn:uuid:8b5e2e1b-96af-453b-b540-51b649d97480>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00041-ip-10-147-4-33.ec2.internal.warc.gz"} |
We cannot even approximate real motion by intervals of constant velocity. No real object can instantaneously change from one velocity to another. The change in velocity takes place during an
interval of time. For example in the interval of time from t[1]to t[2], Δt, the velocity may change from v[1] to v[2], Δv. The ratio Δv/Δt is called acceleration. If the acceleration is not
constant and the time interval is finite then this ratio is the average acceleration. Acceleration is the slope of the velocity-time graph. Recall that velocity was the slope of the position-time
graph. If acceleration is not constant then the instantaneous acceleration at any time is the slope of the tangent line on the velocity-time graph at that time.
For example, the Power Wheels started from a dead stop and reached a velocity of 1 m/s in 2 s. The average acceleration is (1 m/s)/(2 s) = 0.5 m/s/s. The car reached a higher value of
instantaneous acceleration during the 2 s interval which you might try to estimate by drawing a tangent line. We usually write 0.5 m/s^2. Writing the units of acceleration as m/s/s is acceptable
and is not ambiguous if one applies the usual rule of evaluating operators from left to right.
So far the concept of acceleration is probably pretty familiar to you because of its common usage to describe speeding up. The term is used in physics in a much broader sense. For example if v[1]
is larger than v[2] then we would have a negative acceleration. This happens, for example, when one is travelling in the positive direction and is slowing down. Colloquially, we describe this
with the term deceleration, but technically, the term acceleration applies to this case as well. Furthermore, travelling in the negative direction and slowing down gives a positive acceleration
and travelling in the negative direction and speeding up is negative acceleration. The sign of the acceleration is not necessarily the same as the velocity's. This means that the acceleration's
direction is not always the same as the direction of the veolcity. When we start talking about two and three dimensional motion this fact becomes even more important. To repeat, in common usage,
acceleration means going faster in the forward direction. In physics it means the ratio of velocity change and time interval during which that velocity change occurred. That's not always the same
Playing with a fan cart
In order to illustrate what happens when a constant force is applied I have brought a fan cart. This is a little cart on wheels with a fan that can blow and propel it forward or backward
depending on which way the fan is turned. When I turn on the fan, I believe that everyone would agree that the fan produces a force on the cart. When I hold the cart the force is balanced by my
arm, the net force is zero and it stays put. When I put the cart on a table and let go, the fan's force is no longer balanced and that unbalanced force accelerates the cart forward. I wish to let
the cart go on a track in various situations and show you what the graphs of velocity vs. time look like in those situations. You will also see how the force shows up in each case.
First I hold the cart on the track with the fan on. The cart is pointing towards the positive side of our number line and the fan is pointing backward so the force pushes in the positive
direction. I'll start the time clock, then let go of the cart. The cart starts from zero velocity a little after the t=0 point (because I waited a bit before I let it go). Then the velocity
increases with a constant positive slope. The slope of this graph is the acceleration and it is positive. The net force is proportional to the acceleration, so this graph implies a positive net
force on the cart.
Next I turn the cart around and give it a big shove towards the positive end of the track. After I let go the fan's force pushes in the negative direction, and the graph shows negative
acceleration. When the cart comes to a stop and I catch it, the velocity graph stops at point B. But if the cart is allowed to keep going, it will turn around and keep accelerating in the
negative direction.
Notice that even though the velocity is instantaneously zero at point B on the graph, the slope never changes. Thus the acceleration remains constant throughout the run, even when the velocity
passes though zero. The idea that there can be a nonzero acceleration when the velocity is zero seems odd, but it follows from the way we define velocity and acceleration.
Thus negative acceleration can occur both
□ when the cart is going in the positive direction and is slowing down,
□ when it is going in the negative direction and is speeding up and
□ when the velocity is momentarily zero while the cart is changing from moving in the positive direction motion to the negative direction.
Finally think about the graph when I go to the other end of the track, and shove the cart towards the negative direction. When the fan is pointed in the direction of the shove, the force is
positive and the cart slows down. The acceleration is positive because the negative velocity decreases. As the cart passes through point B, even though the velocity is momentarily zero, the
positive acceleration never ceases as it turns around and accelerates towards the positive end of the track.
Thus positive acceleration can occur
□ when the cart is going in the positive direction and is speeding up,
□ when it is going in the negative direction and is slowing down and
□ when the velocity is momentarily zero while the cart is changing from moving in the negative direction to the positive direction.
Relations for constant acceleration
The graphical way of deriving displacements from a velocity-time graph is completely general. No matter what the motion is, as long as it can be represented graphically, we can estimate the
displacement during any time interval by estimating the area under the curve.
Formulas for constant acceleration can be derived and are useful even though constant acceleration rarely occurs exactly. Falling objects, if they are heavy and dense enough, may approach
constant acceleration fairly closely. In other cases, such as the braking of a train, assumming constant acceleration may be a useful first approximation to the actual braking motion. Thus the
formulae for constant acceleration are usually a staple of first year physics courses. (They also form a convenient topic for problems early on in the course.)
First sketch the velocity-time graph for constant positive acceleration and positive velocity. The displacement between any two times is got by figuring out the area under the curve. This area is
a quadrilateral which is not rectangular unless the acceleration is zero. We can figure out the area by first getting the area of the rectangle and adding the area of the triangle. (Remember that
the area of the triangle is 1/2 base × height.)
Area of quadrilateral = Area of rectangle + Area of triangle
$$ \Delta x = (\Delta t) (v_1) + \frac{1}{2}\Delta t \Delta v$$
recall that $\Delta v = a \Delta t$ which comes directly from the definition of acceleration.
$$\Delta x = v_1 \Delta t + \frac{1}{2}a (\Delta t)^2$$
This is the equation of a parabola, and if you look at the graph of position vs time for constant acceleration you will see that a parabola is a reasonable curve to fit the graph.
Even though we used a picure of positive acceleration and positive velocity to derive this equation, you should verify that it is general and applies in cases of negative acceleration or negative
velocity or both.
Problem: Draw velocity-time graph for negative acceleration, positive velocity and verify that the equation applies. Repeat for the other two combinations of signs of acceleration and
This equation is so important and popular among physics teachers that it is usally called one of the "Kinematic equations". Sometimes slightly different notation may be used. For example, the one
I remember goes like this:
$$x= x_0 + v_0t + \frac{1}{2}at^2$$
In this case $t$ is used for $\Delta t$ and $x_0 = x - \Delta x$.
Remember that this equation applies only for constant acceleration.
Another kinetmatic equation is useful when you know inital and final velocities, but not the acceleration.
Going back to the original analysis of constant acceleration $$ \Delta x = (\Delta t) (v_1) + \frac{1}{2}\Delta t \Delta v$$ we can substitute $$\Delta v = v_2 - v_1$$ $$ \Delta x = (\Delta t)
(v_1) + \frac{1}{2}\Delta t\left(v_2 - v_1 \right)$$ Simplifying $$ \Delta x = \Delta t\frac{v_1 + v_2}{2} $$ This is saying that during a period of constant acceleration, the distance travelled
by an object is the same as if it travelled at a velocity exactly half-way between the initial and final velocities. In other words, the average velocity is the average of the initial and final
velocities. This is only true if the acceleration is constant. It's very useful for many kinematics problems.
Example: A car accelerates constantly from 30 km/h to 70 km/h during 10 s. How far does the car go during this time?
Solution: Draw a graph:
We want the shaded area, which represents displacement. Use $$\Delta x = \Delta t\frac{v_1 + v_2}{2} $$ $$\Delta x = 10 \rm s\frac{30 {\rm km/h} + 70 {\rm km/h}}{2}$$ $$\Delta x =( 10 \rm s)
( 50 {\rm km/h) (1h/3600s)} = 0.138 \rm km$$
Example: A car is backing to a stop at a constant rate of slowing. It goes 50 m and stops in 7 s. How fast was it going when it started slowing down?
Solution: Draw a graph:
We know the shaded area is −50 m. The stopping time is 7s and forms the base of the triangle. We also know that the final speed, $v_2$ is 0.
Solve: $$\Delta x = \Delta t\frac{v_1 + 0}{2} $$ for $v_1$. $$v_1 = 2\frac{\Delta x}{\Delta t} = 2\frac{-50 \rm m}{7 \rm s} = -14.3 {\rm m/s}$$
As I said, many traditional early problems in your physics courses involve manipulating these "kinematic equations" and you may wish to memorize them. If you forget the equation relating
displacement to constant acceleration, just redraw the little graph and figure out the areas as we did above. | {"url":"http://www.sfu.ca/phys/100/lectures/lecture7/lecture7a.html","timestamp":"2014-04-18T08:47:16Z","content_type":null,"content_length":"13403","record_id":"<urn:uuid:356e1ed2-8059-435f-b2d6-11d5532959a9>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00451-ip-10-147-4-33.ec2.internal.warc.gz"} |
[Numpy-discussion] untenable matrix behavior in SVN
Anne Archibald peridot.faceted@gmail....
Wed Apr 30 22:16:22 CDT 2008
2008/4/30 Charles R Harris <charlesr.harris@gmail.com>:
> Some operations on stacks of small matrices are easy to get, for instance,
> +,-,*,/, and matrix multiply. The last is the interesting one. If A and B
> are stacks of matrices with the same number of dimensions with the matrices
> stored in the last two indices, then
> sum(A[...,:,:,newaxis]*B[...,newaxis,:,:], axis=-2)
> is the matrix-wise multiplication of the two stacks. If B is replaced by a
> stack of 1D vectors, x, it is even simpler:
> sum(A[...,:,:]*x[...,newaxis,:], axis=-1)
> This doesn't go through BLAS, but for large stacks of small matrices it
> might be even faster than BLAS because BLAS is kinda slow for small
> matrices.
Yes and no. For the first operation, you have to create a temporary
that is larger than either of the two input arrays. These invisible
(potentially) gigantic temporaries are the sort of thing that puzzle
users when as their problem size grows they suddenly find they hit a
massive slowdown because it starts swapping to disk, and then a
failure because the temporary can't be allocated. This is one reason
we have dot() and tensordot() even though they can be expressed like
this. (The other is of course that it lets us use optimized BLAS.)
This rather misses the point of Timothy Hochberg's suggestion (as I
understood it), though: yes, you can write the basic operations in
numpy, in a more or less efficient fashion. But it would be very
valuable for arrays to have some kind of metadata that let them keep
track of which dimensions represented simple array storage and which
represented components of a linear algebra object. Such metadata could
make it possible to use, say, dot() as if it were a binary ufunc
taking two matrices. That is, you could feed it two arrays of
matrices, which it would broadcast to the same shape if necessary, and
then it would compute the elementwise matrix product.
The question I have is, what is the right mathematical model for
describing these
One idea is for each dimension to be flagged as one of "replication",
"vector", or "covector". A column vector might then be a rank-1 vector
array, a row vector might be a rank-1 covector array, a linear
operator might be a rank-2 object with one covector and one vector
dimension, a bilinear form might be a rank-2 object with two covector
dimensions. Dimensions designed for holding repetitions would be
flagged as such, so that (for example) an image might be an array of
shape (N,M,3) of types ("replication","replication","vector"); then to
apply a color-conversion matrix one would simply use dot() (or "*" I
suppose). without too much concern for which index was which. The
problem is, while this formalism sounds good to me, with a background
in differential geometry, if you only ever work in spaces with a
canonical metric, the distinction between vector and covector may seem
peculiar and be unhelpful.
Implementing such a thing need not be too difficult: start with a new
subclass of ndarray which keeps a tuple of dimension types. Come up
with an adequate set of operations on them, and implement them in
terms of numpy's functions, taking advantage of the extra information
about each axis. A few operations that spring to mind:
* Addition: it doesn't make sense to add vectors and covectors; raise
an exception. Otherwise addition is always elementwise anyway. (How
hard should addition work to match up corresponding dimensions?)
* Multiplication: elementwise across "repetition" axes, it combines
vector axes with corresponding covector axes to get some kind of
generalized matrix product. (How is "corresponding" defined?)
* Division: mostly doesn't make sense unless you have an array of
scalars (I suppose it could compute matrix inverses?)
* Exponentiation: very limited (though I suppose matrix powers could
be implemented if the shapes are right)
* Change of basis: this one is tricky because not all dimensions need
come from the same vector space
* Broadcasting: the rules may become a bit byzantine...
* Dimension metadata fiddling
Is this a useful abstraction? It seems like one might run into trouble
when dealing with arrays whose dimensions represent vectors from
unrelated spaces.
More information about the Numpy-discussion mailing list | {"url":"http://mail.scipy.org/pipermail/numpy-discussion/2008-April/033415.html","timestamp":"2014-04-16T07:29:42Z","content_type":null,"content_length":"7254","record_id":"<urn:uuid:c69fd823-9ded-44b6-98a1-db18f5534a7a>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00273-ip-10-147-4-33.ec2.internal.warc.gz"} |
SQL Server: Part 2 : All About SQL Server Statistics :Histogram
In the Part 1 about SQL server Statistics, we have discussed about density vector information stored in the the statistics. In this post, let us discuss about the histogram. Let us create a copy of
SalesOrderDetail table and two indexes on top of that as we did in our first part.
USE mydb
SELECT * INTO SalesOrderDetail FROM AdventureWorks2008.Sales.SalesOrderDetail
CREATE UNIQUE CLUSTERED INDEX ix_SalesOrderDetailID ON SalesOrderDetail(SalesOrderDetailID)
CREATE NONCLUSTERED INDEX ix_productid ON SalesOrderDetail(productid)
Let us see the histogram information of the non clustered index.
DBCC SHOW_STATISTICS('dbo.SalesOrderDetail', 'ix_productid') WITH HISTOGRAM
You can see 200 records in the output. I have shown only the first 18 records. To create the histogram, SQL server split the data into different buckets (called steps) based on the value of first
column of the index.Each record in the output is called as bucket or step.The maximum number of bucket is 200 based on the data distribution.Histogram is a statistical representation of your data.In
other words it is the distribution of records based on the value of first column of the index. Histogram is always based only on the first column of the index even if the index is composite one.This
is one of the reason why it is always suggested to have most selective column as the first column of the index, but there are exceptions.
Let us look at the output of the histogram. It tried to put the 121317 records in the table into 200 buckets (steps) based on the value of productid.
The RANGE_HI_KEY column represent the upper boundary of each bucket.The lower boundary of each bucket is the RANGE_HI_KEY+1 of the previous bucket. For the first bucket, the lower boundary is the
smallest value of the column on which the histogram is generated.
The RANGE_ROWS column represent the number records in that bucket range but not equal to the value of RANGE_HI_KEY. The value 0 on the first record says, there is no record in the table whose
productid value is less than 707. If you look into the 11th record with RANGE_HI_KEY value 718, we have 218 in the RANGE_ROWS column .This says ,there are 218 records with productid value is greater
than 716 (previous RANGE_HI_KEY) and productid value is less than 718. The output of below query proves that:
SELECT COUNT(*) FROM SalesOrderDetail WHERE productid>716 AND productid<718
The EQ_ROWS is the number of records in the table matching with RANGE_HI_KEY. For the first records, 3083 in the EQ_ROWS says that there are 3083 records in the table with productid 707.The output of
below query proves that:
SELECT COUNT(*) FROM SalesOrderDetail WHERE productid=707
The DISTINCT_RANGE_ROWS represent the number of distinct records (distinct productid ) between two RANGE_HI_KEY values. If you look into the 11th record with RANGE_HI_KEY value 718, we have value 1
in the DISTINCT_RANGE_ROWS column .This says ,there is only 1 distinct records withe productid value is greater than 716 (previous RANGE_HI_KEY) and productid value is less than 718. The output of
below query proves that:
SELECT COUNT(distinct productid) FROM SalesOrderDetail WHERE productid>716 AND productid<718
The AVG_RANGE_ROWS column represent the average number of rows per distinct values.This is equivalent to RANGE_ROWS / DISTINCT_RANGE_ROWS when RANGE_ROWS value is greater than 0. Otherwise
AVG_RANGE_ROWS is considered as 1.
How SQL server optimizer use the histogram for cardinality estimation ? Let us consider the execution plan of the below query.
SELECT productid FROM SalesOrderDetail WHERE productid>=716 AND productid<=718
From where the Estimated Number of rows (1513) is calculated ? Let us go to the histogram
Add the highlighted values which will match to 1513 which is the estimated number of rows in the execution plan . 1076 is the number of records with productid value 716 218 is the number of records
with productid value greater than 716 and productid less than 718 219 is the number of records with productid value 718
Sum of these three values is the estimated number of rows in the above execution plan.
When there is complex where condition ,optimizer create required statistics called column statistics and use complex algorithm on top of the histogram data for cardinality estimation. We will discuss
about that in the next post.
If you liked this post, do like my page on FaceBook
5 comments:
1. Thanks for the great article. There is one syntax change for others that read this article: change the word DENSITY_VECTOR to HISTOGRAM. You could use both and separate with a comma but the
density vector is not what is pictured. Otherwise its a good refresher article for a lot of us and a new command for others.
DBCC SHOW_STATISTICS('dbo.SalesOrderDetail', 'ix_productid') WITH DENSITY_VECTOR, HISTOGRAM
1. Thanks for pointing out. I will make the change.
2. Superb article!
3. This will really come in handy especially if you'd want to review some stuff. Thank you very much for sharing this SQL server solution.
4. Thanks for posting this article. This will come handy in my field of work. Thanks again, I cant wait to tell my friends about this blog. | {"url":"http://www.practicalsqldba.com/2013/06/sql-server-part-2-all-about-sql-server.html","timestamp":"2014-04-20T06:21:45Z","content_type":null,"content_length":"117652","record_id":"<urn:uuid:80b3f514-bd73-4d08-9678-b756406def02>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00461-ip-10-147-4-33.ec2.internal.warc.gz"} |
Correspondence between operads and monads requires tensor distribute over coproduct?
up vote 7 down vote favorite
In checking the details of the correspondence between operads over a symmetric monoidal category and monads on some associated endofunctor of the category, I cannot make the obvious proof work
without assuming that the monoidal product distributes over coproducts. But no such assumption is mentioned in my sources (for example Operads, Algebras, Modules by May (PDF).) Am I missing something
in my argument?
Let $\mathcal{C}$ (mathcal C) be an operad (for simplicity take it non-symmetric) over a symmetric monoidal category $\mathcal{V},$ with composition $\gamma\colon \mathcal{C}(n)\otimes \mathcal{C}
(m_1)\otimes\dotsb\otimes\mathcal{C}(m_n)\to\mathcal{C}(m_1+\dotsb+m_n).$ We define a functor $C\colon \mathcal{V}\to\mathcal{V}$ (Roman C) by $CX = \coprod_i \mathcal{C}(i)\otimes X^{\otimes i}$.
Then one wants to verify that the operad structure $\gamma$ gives a monad on $C$. That is, we need a natural morphism $C^2X\to CX,$ or $\coprod_i \mathcal{C}(i)\otimes \left(\coprod_j \mathcal{C}(j)\
otimes X^{\otimes j}\right)^{\otimes i}\to \coprod_k \mathcal{C}(k)\otimes X^{\otimes k}$. By universal prop of coproducts, it will suffice to exhibit an arrow $\mathcal{C}(n)\otimes \left(\coprod_j
\mathcal{C}(j)\otimes X^{\otimes j}\right)^{\otimes n}\to \coprod_k \mathcal{C}(k)\otimes X^{\otimes k}$ for all $n$.
Clearly we have $\mathcal{C}(n)\otimes \mathcal{C}(m_1)\otimes X^{\otimes m_1}\otimes \dotsb \otimes \mathcal{C}(m_n)\otimes X^{\otimes m_n}\to \mathcal{C}(n)\otimes \mathcal{C}(m_1)\otimes\dotsb\
otimes \mathcal{C}(m_n) X^{\otimes m_1+\dotsb+m_n}\to \\ \mathcal{C}(m_1+\dotsb+m_n)\otimes X^{\otimes m_1+\dotsb+m_n}\to\coprod_k\mathcal{C}(k)\otimes X^{\otimes k},$ where the first arrow is by
symmetry of the monoidal structure, the second arrow is the operad composition $\gamma,$ and the third arrow is the canonical inclusion into the coproduct.
Therefore by universal prop of coproducts, we have $\coprod_{m_1,\dotsc,m_n}\mathcal{C}(n)\otimes \mathcal{C}(m_1)\otimes X^{\otimes m_1}\otimes \dotsb \otimes \mathcal{C}(m_n)\otimes X^{\otimes m_n}
\to\coprod_k\mathcal{C}(k)\otimes X^{\otimes k}.$
In general, again using inclusion morphisms of coproducts, we have arrows $\mathcal{C}(m_\ell)\otimes X^{\otimes m_\ell}\to \coprod_j \mathcal{C}(j)\otimes X^{\otimes j}.$ Then by functorality of the
monoidal product, we have $\mathcal{C}(n)\otimes \mathcal{C}(m_1)\otimes X^{\otimes m_1}\otimes \dotsb \otimes \mathcal{C}(m_n)\otimes X^{\otimes m_n}\to \mathcal{C}(n)\otimes \left(\coprod_j \
mathcal{C}(j)\otimes X^{\otimes j}\right)^{\otimes n}$. By universal property of coproducts, we therefore have an arrow from the coproduct $\coprod_{m_1,\dotsc,m_n}\mathcal{C}(n)\otimes \mathcal{C}
(m_1)\otimes X^{\otimes m_1}\otimes \dotsb \otimes \mathcal{C}(m_n)\otimes X^{\otimes m_n}\to \mathcal{C}(n)\otimes \left(\coprod_j \mathcal{C}(j)\otimes X^{\otimes j}\right)^{\otimes n}$.
To summarize, we have the obvious maps $\mathcal{C}(n)\otimes \left(\coprod_j \mathcal{C}(j)\otimes X^{\otimes j}\right)^{\otimes n}\leftarrow \coprod_{m_1,\dotsc,m_n}\mathcal{C}(n)\otimes \mathcal
{C}(m_1)\otimes X^{\otimes m_1}\otimes \dotsb \otimes \mathcal{C}(m_n)\otimes X^{\otimes m_n} \to \\ \coprod_k \mathcal{C}(k)\otimes X^{\otimes k}.$ Unless we know that the arrow on the left is an
isomorphism, we do not get the structure map for a monad on $C$. And that arrow on the left will generally not be an isomorphism if the monoidal product in $\mathcal{V}$ does not distribute over the
coproduct. For example, if the monoidal product is the coproduct itself.
at.algebraic-topology ct.category-theory operads monads
4 You're completely correct. I've also found such things. The point is that people often consider operads in closed symmetric monoidal categories, hence $\otimes$ preserves $\coprod$. I had wondered
about the insistence on closedness. First I thought it was only to define endomorphism operads, but it also solves more subtle problems like the one you point out. – Fernando Muro Dec 25 '13 at
1 I've certainly always assumed this distributivity, and I'm pretty sure it (or something close to it) is needed. If it's not mentioned in that paper of Peter May's, it must just have been a slip on
his part. – Tom Leinster Dec 25 '13 at 17:09
3 Quite right, Tom. I did mention taking V to be closed on page 3 of the paper Joe cites and Fernando rightly points out that that fixes everything. I should have assumed that or the distributivity
explicitly (as I'm sure I did elsewhere). In all of my applications, V has been closed. – Peter May Dec 25 '13 at 20:59
Thank you for clarifying, gentlemen. I had thought requiring monoidal product to preserve coproducts would be an unusual requirement, which is why I had to ask. I somehow forgot that this was
implied by closedness of the monoidal product, which is a natural for enrichment categories. So obvious in hindsight... thanks again! – Joe Hannon Dec 26 '13 at 2:13
add comment
1 Answer
active oldest votes
I also noticed this at some point. I think you are right. One reference where this assumption is explicitly spelled out is the paper of Getzler and Jones.
up vote 4 down vote accepted
add comment
Not the answer you're looking for? Browse other questions tagged at.algebraic-topology ct.category-theory operads monads or ask your own question. | {"url":"http://mathoverflow.net/questions/152785/correspondence-between-operads-and-monads-requires-tensor-distribute-over-coprod","timestamp":"2014-04-18T23:59:39Z","content_type":null,"content_length":"59587","record_id":"<urn:uuid:2c388721-426b-4a42-9a33-045097aa8102>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00127-ip-10-147-4-33.ec2.internal.warc.gz"} |
Large Scale - Large Numbers - Large Efforts: Historical
5.2 Redshifts and distances
Light and particles (galaxies) move along (different) geodesics. When, in an expanding universe, light travels along a smooth geodesic from the source towards the observer, its frequency changes with
the changing scale factor according to z[cosm] = R / R (Sect. 3.2.1). This is the global effect.
Redshift z[cosm] is related to the distance r of the object, given by the invariable fraction of the scale length, and the present scale factor R[0]. When distance is measured by a distance-dependent
object property, such as apparent brightness or angular extent, the deduced distance value depends not only on the cosmological parameters: Hubble constant H[0], deceleration parameter q[0] and
curvature constant k, but also on the measuring process. If the process is brightness measurement the resulting distance is luminosity distance r[M]; angular diameters give angular distances r[],
parallax measurements parallax distances r[p], etc. The differences occur because the scale factor R[0] enters differently into the measured quantities.
McCrea (1935, following Tolman, 1930, Walker, 1933, and others) gives an extensive discussion of distance determinations which he introduces:
``. . . . any specific astronomical measurement of `distance' . . . . carried out in any relativity model of space-time must lead to a result which depends on the particular operations of
For small distances from the observer, z can be approximated by the relativistic or the classical Doppler formula, and the distance r is determined from v / H[0].
Contributions to the observed redshift z[total] result from the warping of the smooth geodesic due to local mass concentrations:
Local effects on the light path contribute z[l]. Local effects, which can be described as peculiar motions of the emitting particle (galaxy) and the observer along particle geodesics, contribute z[m]
. When they are sufficiently small, z and r can again be approximated by the classical or special relativistic Doppler formula. The superposition z[total] = z[cosm] + z[l] + z[m] makes it
observationally difficult to separate the cosmological and the local contributions. | {"url":"http://ned.ipac.caltech.edu/level5/Seitter/Seitter5_2.html","timestamp":"2014-04-17T21:58:45Z","content_type":null,"content_length":"4290","record_id":"<urn:uuid:4a38de72-52fd-4a93-bf59-c566dc66a973>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00586-ip-10-147-4-33.ec2.internal.warc.gz"} |
Washington Science Tutor
Find a Washington Science Tutor
...In principle, Algebra 1 and some modest extensions are all the math background that is needed for this part of physics. I find however, that some students - even some with good grades in math -
just get lost in a "forest" of algebra and cannot see the physics ideas that are the essence of the ma...
13 Subjects: including physics, ACT Science, chemistry, calculus
...I bring a lot more to the table than what you see "on paper" and am constantly told by my students how well I explain things. Though other tutors may be cheaper, I am more efficient due to my
experience and explanations. After 5 lessons, one of my SAT Math students improved her Math test score by 140 points last year!
28 Subjects: including physical science, logic, algebra 1, algebra 2
I recently graduated from UMD with a Master's in Electrical Engineering. I scored a 790/740 Math/Verbal on my SAT's and went through my entire high-school and college schooling without getting a
single B, regardless of the subject. I did this through perfecting a system of self-learning and studyi...
15 Subjects: including electrical engineering, physics, geometry, calculus
...My areas of expertise are Biology, Physics, Chemistry, Algebra, and Geometry. As a tutor, my goal is to help students gain confidence in their intellectual abilities and to carry that
confidence into other areas of life. My educational background and volunteer experiences qualify me to tutor middle and high school subject material in multiple ways.
10 Subjects: including biology, physiology, physics, chemistry
...I graduated with a Bachelor of Science in Computer Science from the George Washington University in May 2012. I had more than 3 years' intense training in programming, especially in C and Java,
both of which have been widely used in my daily job. I also have the tutored C and Java courses when I'm an undergraduate.
27 Subjects: including chemistry, Chinese, Java, accounting | {"url":"http://www.purplemath.com/washington_navy_yard_dc_science_tutors.php","timestamp":"2014-04-18T23:48:53Z","content_type":null,"content_length":"24053","record_id":"<urn:uuid:0ce6e2b2-5c2e-4f0e-9a73-f7bd16448d7e>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00623-ip-10-147-4-33.ec2.internal.warc.gz"} |
[Haskell-cafe] Re: Num instances for 2-dimensional types
Ben Franksen ben.franksen at online.de
Fri Oct 9 15:57:56 EDT 2009
Joe Fredette wrote:
> A ring is an abelian group in addition, with the added operation (*)
> being distributive over addition, and 0 annihilating under
> multiplication. (*) is also associative. Rings don't necessarily need
> _multiplicative_ id, only _additive_ id.
Yes, this is how I learned it in my Algebra course(*). Though I can imagine
that this is not universally agreed upon; indeed most of the more
interesting results need a multiplicative unit, IIRC, so there's a
justification for authors to include it in the basic definition (so they
don't have to say let-R-be-a-ring-with-multiplicative-unit all the time ;-)
(*) As an aside, this was given by one Gernot Stroth, back then still at the
FU Berlin, of whom I later learned that he took part in the grand effort to
classify the simple finite groups. The course was extremely tight but it
was fun and I never again learned so much in one semester.
More information about the Haskell-Cafe mailing list | {"url":"http://www.haskell.org/pipermail/haskell-cafe/2009-October/067559.html","timestamp":"2014-04-17T10:30:33Z","content_type":null,"content_length":"3581","record_id":"<urn:uuid:2a307e63-f533-4b87-82a2-6ca207927da3>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00644-ip-10-147-4-33.ec2.internal.warc.gz"} |
Buy Conditionally
The 151 quizzes are good. I caught almost everyone with the trusty old “differentiate a constant” wheeze… $y = 11e^\pi \Rightarrow {{dy}\over{dx}}= 0$ of course, but you wouldn’t’ve known it by these
guys on the day. In fact—math teacher at play— $\{11, \pi 11e^{\pi-1}, 11e^\pi, 11\root7\of{e^{22}}, e^\pi, 0\}$ is the “answer set” (sure as heck not the solution set!)—with $11e^\pi$ appearing
several times (once as ${{11}\over{e^{-\pi}}}$) and a variant ($11\pi e^{(\pi - 1)}$… slightly better) on one other value.
Now. If I’m going to go in claiming that anybody “should have known” something… then how come they should’ve? Let’s see. 11e^x, 13e^\pi, cos(x), 1/x^2, polynomial, radical. How come two of ‘em are
alike? What’s different about ‘em? This is pre-math stuff here: mere “test-taking skills”. Okay. What else?
Am I gonna say everybody oughta be trying to imagine a graph? Like, every single time? Well, maybe it’d be a good idea… at least right in here at first… to at least consider it. The trig function
everybody darn well oughta know is some wavy thing; the polynomial’s a cubic so again one has a pretty good idea. The radical and the rational function are a little more esoteric and I wouldn’t blame
a Calc I student a bit for not thinking too hard about it if nothing pops up for free in instant recall; if it comes to thinking about it, maybe it’d be a good time to break out the grapher. As for
the problems with e‘s in ‘em, I do feel that everybody oughta have a mental image of “exponential growth” ($y = A_0 e^{Kx}$, say [with A_0 and K positive constants]): it sure is a simple doggone
thing to sketch…
So then. With any luck, one then asks oneself, “what’s the graph of e-to-the-pi then, eh?” and arrives somehow at the necessary insight. In fact, if one realizes only that one does not yet know the
answer, one might, again, think of using the graphing feature of the calculator (or, of course, think of estimating e^pi directly on the command line ["homescreen"... whatever]). Once you see that
flat line… on screen or paper or in imagination… you’ve got it.
OK, true enough. Is that all you got? Well, I suppose a drill-instructor style teacher might favor repeating over and over until blue in the face “pi and e are constants“. Might be worth a try…
Am I supposed to be, like, hanging on every bloody word then? Well, yes, ideally, but hanging on every bloody symbol would be a darn good place to start. One is always led to ask, “what does this
symbol mean in this context?”—indeed, this sounds suspiciously like a description of reading itself (and not just of reading “math”—calculations and such).
One is curious at this point as to whether they’d’ve done any better with y= e^3… but enough.
Because the good news is that there were five otherwise-perfect papers and several other solid high scores; in particular, the class redeemed itself very nicely with the “epsilon-delta” proof.
Nonetheless, here are some trouble spots.
$\bullet$No “Let $\epsilon \rangle 0$“. For this section, this line is mandatory. (One may of course “fix” epsilon or some other very slight variant.) I’m declaring by fiat that “all epsilon-delta
proofs begin by making epsilon the name of a constant“.
$\bullet$Equal sign for implication.
$\bullet$No mention of $\delta$.
$\bullet$ “$\delta = {\epsilon\over{|slope|}}$“—right there in the calculation as if english words were appropriate in the middle of algebraic expressions. In the actual exercise, the grader is
hoping to see $\epsilon\over5$. But hold on. I put the “mixed media” (algebra-and-plain-english) on the very blackboard myself! Ah, but it was in a marginal note about problems of this type
generically… in particular problems (of this type—limits of linear functions) one will work with honest algebraic code. Should students be expected to just recognize slangy shorthand as such in its
context while we’re preaching meticulous attention to detail in some other part of the work? Maybe. We can’t be explicit about everything.
$\bullet$The very inequalities I most want to see… in reverse order. Or in variant orders harder to classify… “no apparent order at all” having made at least one appearance.
$\bullet$Inequalities replaced by equations; reversed inequalities.
The thing is… at least some of the lack of clarity here ought to be considered my fault. In particular, of the calculations after fixing epsilon and writing delta as a function of epsilon… the
calculations that we should be careful to present in the reverse order to the one we actually discover it in… I now feel one should explicitly say “(Then) The Following Are Equivalent” (and I’ll
introduce the abbreviation TFAE for this situation in the next class). If we’re going to obsess over this definition… and we should… then it’s the logical structure that seems to present the biggest
challenge; the equivalence of a certain set of inequalities is pretty close to the heart of the matter; we should be trying to spell this out as clearly as we know how.
I’ll probably have more to say about this…
May 9, 2009 at 4:57 pm
I was part of as good a high school class as you could imagine teaching, and we got nailed (all but one of us) by e^271 or something like that. Can’t fix that without deducting points (if it happens
again, that’s another story)
And on the details, I’m not so sure what’s bad about the word in the middle of the algebra, except that we don’t like it. Do we have a good reason not to like it?
May 9, 2009 at 6:00 pm
i think we want to imagine algebraic “code”
as something that in principle can be “understood”
even by a machine… so everything is *imagined*
(the reality is far different) as having a definition
lying around nearby that can be looked up when
some part of the code isn’t understood.
“slope” is of course a perfectly good variable name,
better than “M” or “f’(x)” in some contexts…
so we sure as heck don’t have a reason
to *abhor* it… but a variable name in algebraic code
with lots of x’s and y’s and whatnot oughtn’t to be
five letters long *without a darn good reason*…
and anyway the *symbol* “slope” (pronounced,
let’s say, “s,l,o,p,e”) *hasn’t* been defined by us
(as code) in *this* context.
i guess i’m saying in part that the distinction
between the *meta*-language and the “real code”
*is* important… and that it’s *convenient*
to use “plain english words” as sort of a *marker*
for “not actual code”. obviously, in computer code
(for example) one sees words-as-variables
quite commonly (also in other fields even further
from maths). in maths… calculations are hard.
if we’re actually going to *use* the variables
in by-hand calculations, we’ll want ‘em short.
i’ll admit to a sort of bigotry *against*
mixing in words with algebraic symbols:
even sin(x) and cos(x)… but particularly
stuff like Aut(G) (for the set of “automorphisms”
of a group)… used to sort of annoy me.
somewhere along the line i realized that,
lacking any better idea, maybe i’d learn
to love it the way it is. and maybe i have.
but yeah. learning to read code is hard.
“keep it simple”. i can’t teach ‘em to write
good english… nobody can. by narrowing
our focus to almost nothing… a few lousy
little algebraic symbols and some logic…
we can be so precise about *one little thing*
as to know for sure when something’s
exactly right. it’s just bound to be a good idea,
in this famously make-it-or-break-it moment,
to lead ‘em in the direction of the actual practices
of existing texts.
if $f(x) = Mx + B$ is a linear function,
we can compute the limit of f at a
by letting $\epsilon > 0$ and putting $\delta = {\epsilon\over{|M|}}$.
now, *that*’s how you mix english with your code.
and then, *out loud* you say
“divide epsilon by the absolute value of the slope”.
i make no effort to write at the blackboard in this style
usually. but i darn well have it in mind that ideally
a student will learn from me how to *approach* writing
in this style (if they’re that one-in-a-thousand
that actually *wants* to).
May 9, 2009 at 6:12 pm
oh, p.s.
like i said in the post…
it was “5″ i wanted
in the actual event
(not “M” or “slope”…).
May 10, 2009 at 2:22 am
I’ll take it as a transition, a good place to move further from…
But then again, I spend time teaching kids, explicitly, to translate from phrases into algebraic expressions. I expect them occasionally to leave phrases in their work.
May 11, 2009 at 3:38 pm
It’s probably because I mostly teach middle school kids, but I like the word-as-a-variable approach. It helps my students to think about “what does this symbol [or this equation] mean in this
context?” Too many letters and symbols crowded together tends to produce that deer-in-the-headlights reaction. But then, your students are not supposed to be beginners like mine.
I like your “answer set.” I probably did some of those in my college days, too—although it’s been so long, I don’t remember any specifics.
May 11, 2009 at 4:02 pm
thanks for the feedback.
choosing an appropriate level of formality
for a given context is looking to me like
one of the most important things for instructors
to *do*. ideally this would be done
at higher levels (textbook committees
and suchlike course-designing entities)
but in my experience these can’t be trusted.
if students *only* realize that *whatever* we do,
it’s never the final, one-best-way, for-all-time
ONLY way to do it… we’ll have done at least
*some* service… if they realize that careful choices
of notations can yield spectactular payoffs
when calculations become necessary,
we’ve got a big win.
May 15, 2009 at 10:02 am
[...] on Math Ed presents Buy Conditionally posted at Community College [...] | {"url":"http://calciii.wordpress.com/2009/05/07/buy-conditionally/","timestamp":"2014-04-18T02:59:00Z","content_type":null,"content_length":"65867","record_id":"<urn:uuid:3479b758-9f16-42ab-9a74-63505767747b>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00563-ip-10-147-4-33.ec2.internal.warc.gz"} |
48 search hits
Habilitationsordnung der Mathematisch–Naturwissenschaftlichen Fachbereiche der Johann Wolfgang Goethe-Universität Frankfurt am Main vom 04.02.1992 (ABl. 1992, S.816 ff.), zuletzt geändert am 28.
April 2002 (StAnz. 41/2003, S. 4024 – 4025) : genehmigt durch Beschluss des Präsidiums der Johann Wolfgang Goethe-Universität Frankfurt am Main am 27. Januar 2009 ; hier: Änderung bzw. Ergänzung
Promotionsordnung der Mathematisch-Naturwissenschaftlichen Fachbereiche der Johann Wolfgang Goethe-Universität in Frankfurt am Main vom 26. Mai 1993 (ABL.1/94, S. 21) zuletzt geändert am 05.09.2007
(Uni-Report 13.11.2008) : genehmigt durch Beschluss des Präsidiums der Johann Wolfgang Goethe-Universität Frankfurt am Main am 27. Januar 2009 ; hier: Änderung (2009)
Screened perturbation theory for 3d Yang-Mills theory and the magnetic modes of hot QCD : International Workshop on QCD Green’s Functions, Confinement, and Phenomenology - QCD-TNT09, September 07 -
11 2009, ECT Trento, Italy (2009)
Owe Philipsen Daniel Bieletzki York Schröder
Perturbation theory for non-abelian gauge theories at finite temperature is plagued by infrared divergences which are caused by magnetic soft modes ~ g2T, corresponding to gluon fields of a 3d
Yang-Mills theory. While the divergences can be regulated by a dynamically generated magnetic mass on that scale, the gauge coupling drops out of the effective expansion parameter requiring
summation of all loop orders for the calculation of observables. Some gauge invariant possibilities to implement such infrared-safe resummations are reviewed. We use a scheme based on the
non-linear sigma model to estimate some of the contributions ~ g6 of the soft magnetic modes to the QCD pressure through two loops. The NLO contribution amounts to ~ 10% of the LO, suggestive of
a reasonable convergence of the series.
Lattice calculations at non-zero chemical potential: the QCD phase diagram (2009)
Owe Philipsen
The so-called sign problem of lattice QCD prohibits Monte Carlo simulations at finite baryon density by means of importance sampling. Over the last few years, methods have been developed which
are able to circumvent this problem as long as the quark chemical potential is m=T <~1. After a brief review of these methods, their application to a first principles determination of the QCD
phase diagram for small baryon densities is summarised. The location and curvature of the pseudo-critical line of the quark hardon transition is under control and extrapolations to physical quark
masses and the continuum are feasible in the near future. No definite conclusions can as yet be drawn regarding the existence of a critical end point, which turns out to be extremely quark mass
and cut-off sensitive. Investigations with different methods on coarse lattices show the lightmass chiral phase transition to weaken when a chemical potential is switched on. If persisting on
finer lattices, this would imply that there is no chiral critical point or phase transition for physical QCD. Any critical structure would then be related to physics other than chiral symmetry
Towards a determination of the chiral critical surface of QCD (2009)
Owe Philipsen
The chiral critical surface is a surface of second order phase transitions bounding the region of first order chiral phase transitions for small quark masses in the fmu;d;ms;mg parameter space.
The potential critical endpoint of the QCD (T;m)-phase diagram is widely expected to be part of this surface. Since for m = 0 with physical quark masses QCD is known to exhibit an analytic
crossover, this expectation requires the region of chiral transitions to expand with m for a chiral critical endpoint to exist. Instead, on coarse Nt = 4 lattices, we find the area of chiral
transitions to shrink with m, which excludes a chiral critical point for QCD at moderate chemical potentials mB < 500 MeV. First results on finer Nt = 6 lattices indicate a curvature of the
critical surface consistent with zero and unchanged conclusions. We also comment on the interplay of phase diagrams between the Nf = 2 and Nf = 2+1 theories and its consequences for physical QCD.
Dynamical lattice computation of the Isgur-Wise functions τ1/2 and τ3/2 (2009)
Benoit Blossier Marc Wagner Olivier Pène
We perform a two-flavor dynamical lattice computation of the Isgur-Wise functions t1/2 and t3/2 at zero recoil in the static limit. We find t1/2(1) = 0.297(26) and t3/2(1) = 0.528(23) fulfilling
Uraltsev’s sum rule by around 80%. We also comment on a persistent conflict between theory and experiment regarding semileptonic decays of B mesons into orbitally excited P wave D mesons, the
so-called “1/2 versus 3/2 puzzle”, and we discuss the relevance of lattice results in this context.
fB and fBs with maximally twisted Wilson fermions (2009)
Gregorio Herdoiza Karl Jansen Vittorio Lubicz Cecilia Tarantino Francesco Sanfilippo Chris Michael Andrea Shindler Silvano Simula Carsten Urbach
We present a lattice QCD calculation of the heavy-light decay constants fB and fBs performed with Nf = 2 maximally twisted Wilson fermions, at four values of the lattice spacing. The decay
constants have been also computed in the static limit and the results are used to interpolate the observables between the charmand the infinite-mass sectors, thus obtaining the value of the decay
constants at the physical b quark mass. Our preliminary results are fB = 191(14)MeV, fBs = 243(14)MeV, fBs/ fB = 1.27(5). They are in good agreement with those obtained with a novel approach,
recently proposed by our Collaboration (ETMC), based on the use of suitable ratios having an exactly known static limit.
First results of ETMC simulations with Nf = 2+1+1 maximally twisted mass fermions (2009)
Rémi Baron Benoit Blossier Philippe Boucaud Albert Deuzeman Vincent Drach Federico Farchioni Vicent Gimenez Gregorio Herdoiza Karl Jansen Chris Michael István Montvay David Palao Elisabetta Pallante
Olivier Pène Siebren Reker Carsten Urbach Marc Wagner Urs Wenger
We present first results from runs performed with Nf = 2+1+1 flavours of dynamical twisted mass fermions at maximal twist: a degenerate light doublet and a mass split heavy doublet. An overview
of the input parameters and tuning status of our ensembles is given, together with a comparison with results obtained with Nf = 2 flavours. The problem of extracting the mass of the K- and
D-mesons is discussed, and the tuning of the strange and charm quark masses examined. Finally we compare two methods of extracting the lattice spacings to check the consistency of our data and we
present some first results of cPT fits in the light meson sector.
Jahresbericht 2008/2009 / Institut für Kernphysik am Fachbereich Physik der Goethe-Universität Frankfurt am Main (2009)
The O(N=2) model in polar coordinates at nonzero temperature (2009)
Martin Grahl
Chapter 1 contains the general background of our work. We briefly discuss important aspects of quantum chromodynamics (QCD) and introduce the concept of the chiral condensate as an order
parameter for the chiral phase transition. Our focus is on the concept of universality and the arguments why the O(4) model should fall into the same universality class as the effective
Lagrangian for the order parameter of (massless) two-flavor QCD. Chapter 2 pedagogically explains the CJT formalism and is concerned with the WKB method. In chapter 3 the CJT formalism is then
applied to a simple Z(2) symmetric toy model featuring a one-minimum classical potential. As for all other models we are concerned with in this thesis, we study the behavior at nonzero
temperature. This is done in 1+3 dimensions as well as in 1+0 dimensions. In the latter case we are able to compare the effective potential at its global minimum (which is minus the pressure)
with our result from the WKB approximation. In chapter 4 this program is also carried out for the toy model with a double-well classical potential, which allows for spontaneous symmetry breaking
and tunneling. Our major interest however is in the O(2) model with the fields treated as polar coordinates. This model can be regarded as the first step towards the O(4) model in
four-dimensional polar coordinates. Although in principle independent, all subjects discussed in this thesis are directly related to questions arising from the investigation of this particular
model. In chapter 5 we start from the generating functional in cartesian coordinates and carry out the transition to polar coordinates. Then we are concerned with the question under which
circumstances it is allowed to use the same Feynman rules in polar coordinates as in cartesian coordinates. This question turns out to be non-trivial. On the basis of the common Feynman rules we
apply the CJT formalism in chapter 6 to the polar O(2) model. The case of 1+0 dimensions was intended to be a toy model on the basis of which one could more easily explore the transition to polar
coordinates. However, it turns out that we are faced with an additional complication in this case, the infrared divergence of thermal integrals. This problem requires special attention and
motivates the explicit study of a massless field under topological constraints in chapter 8. In chapter 7 we investigate the cartesian O(2) model in 1+0 dimensions. We compare the effective
potential at its global minimum calculated in the CJT formalism and via the WKB approximation. Appendix B reviews the derivation of standard thermal integrals in 1+0 and 1+3 dimensions and
constitutes the basis for our CJT calculations and the discussion of infrared divergences. In chapter 9 we discuss the so-called path integral collapse and propose a solution of this problem. In
chapter 10 we present our conclusions and an outlook. Since we were interested in organizing our work as pedagogical as possible within the narrow scope of a diploma thesis, we decided to make
extensive use of appendices. Appendices A-H are intended for students who are not familiar with several important concepts we are concerned with. We will refer to them explicitly to establish the
connection between our work and the general context in which it is settled. | {"url":"http://publikationen.ub.uni-frankfurt.de/solrsearch/index/search/searchtype/all/start/0/rows/10/yearfq/2009/institutefq/Physik","timestamp":"2014-04-19T14:47:32Z","content_type":null,"content_length":"42535","record_id":"<urn:uuid:8b060014-02ed-4b07-8c79-0a9ad05c6aef>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00353-ip-10-147-4-33.ec2.internal.warc.gz"} |
geometric meaning of Ricci-flatness
up vote 3 down vote favorite
What is the geometric meaning of Ricci-flatness? We know that if the Riemann tensor at a point vanished, manifold is flat at this point. but I don't know When the Ricci tensor vanished at a point,
what is shape of manifold at this point? and a same question about scalar curvature.
riemannian-geometry dg.differential-geometry curvature ricci-flow
3 math.stackexchange.com/questions/339057/… ... I suggest making an edit to your post on the other site, so that you can receive better help. – Chris Gerig Apr 4 '13 at 18:56
add comment
1 Answer
active oldest votes
You find in Wikipedia:
• "Indeed, if $\xi$ is a vector of unit length on a Riemannian n-manifold, then $Ric(\xi,\xi)$ is precisely (n−1) times the average value of the sectional curvature, taken over all
up vote 1 down the 2-planes containing $\xi$."
• In Riemann normal coordinates, the Taylor expansion of the Riemannian volume has vanishing first order term, and the second order term is $1/6$ times the Ricci curvature.
add comment
Not the answer you're looking for? Browse other questions tagged riemannian-geometry dg.differential-geometry curvature ricci-flow or ask your own question. | {"url":"http://mathoverflow.net/questions/126508/geometric-meaning-of-ricci-flatness","timestamp":"2014-04-18T16:07:21Z","content_type":null,"content_length":"51694","record_id":"<urn:uuid:08bc5067-6506-4772-91d9-711003eab283>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00573-ip-10-147-4-33.ec2.internal.warc.gz"} |
Help required in a formula
December 20th 2007, 01:41 AM #1
Dec 2007
Help required in a formula
Hello all,
x = [(a-b(1+i)^-n)*y] / [(1-(1+i)^(n-y))/i)+y].
i need to get the reverse formulae for a,b,i,n and y. i need this information urgently. please help.
Thanks in Advance
What, exactly, are you asking for? You need to solve the above equation for, say, y in terms of the remaining variables? (Then a, then b, then i, then n?)
a and b are straightforward. $<br /> <br /> a = x(\frac {1-(1+i)^{n-y}}{i}+y)+b(1+i)^{-n}$
$<br /> b = (a - x(\frac{1-(1+i)^{n-y}}{i}+y))(1+i)^n$
It does not appear to be possible to solve explicitly for i and y. I have not proved this, but I cannot see any way to do it.
n can be solved for by first forming a quadratic in $(1+i)^n$:
$<br /> x(\frac {1-(1+i)^{-y}(1+i)^n}{i}+y)-a+b(1+i)^{-n} = 0$
$<br /> \frac {x}{i(1+i)^y}(1+i)^n+(\frac {x}{i}+xy-a)+b(1+i)^{-n} = 0$
$<br /> (1+i)^n = \frac{a-xy-\frac{x}{i}\pm \sqrt{(\frac {x}{i} +xy-a)^2+4\frac {x}{i(1+i)^y}b}}{2\frac{x}{i(1+i)^y}}$
$<br /> n = \frac {\log (\frac{a-xy-\frac{x}{i}\pm \sqrt{(\frac {x}{i} +xy-a)^2+4\frac {x}{i(1+i)^y}b}}{2\frac{x}{i(1+i)^y}})}{\log (1+i)}$
December 20th 2007, 12:25 PM #2
December 20th 2007, 04:05 PM #3
Senior Member
Dec 2007
December 20th 2007, 11:38 PM #4
Dec 2007 | {"url":"http://mathhelpforum.com/algebra/25132-help-required-formula.html","timestamp":"2014-04-21T04:41:48Z","content_type":null,"content_length":"40970","record_id":"<urn:uuid:0f303c9a-4818-4e46-a999-e6faf73163d7>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00649-ip-10-147-4-33.ec2.internal.warc.gz"} |
On Maximal and Minimal Solutions for Set-Valued Differential Equations with Feedback Control
Abstract and Applied Analysis
Volume 2012 (2012), Article ID 816218, 11 pages
Research Article
On Maximal and Minimal Solutions for Set-Valued Differential Equations with Feedback Control
Faculty of Mathematics and Computer Science, University of Science, Ho Chi Minh City, Vietnam
Received 11 September 2011; Accepted 8 November 2011
Academic Editor: Ibrahim Sadek
Copyright © 2012 Ngo Van Hoa and Nguyen Dinh Phu. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and
reproduction in any medium, provided the original work is properly cited.
In this paper, we present the existence of extremal solutions of set-valued differential equations with feedback control on semilinear Hausdorff space under Hukuhara derivative which is developed
under the form , ,for all with the monotone iterative technique and we will verify that monotone sequence of approximate solutions converging uniformly to the solution of the problem, that is
useful for optimization problems.
1. Introduction
Recently, the study of set differential equations was initiated in a metric space and some basic results of interest were obtained. Some interesting results in this direction have been obtained in a
series of works of Professor V. Lakshmikantham and other authors (see [1–5]). Professor V. Lakshmikantham and the other authors considered set differential equations (SDEs) and had some important
results on existence, comparison, and stability criteria for SDEs: where , , and.
Based on these results, the authors gave the concept of set-valued control differential equation and studied existence and comparison of its solutions (see [6]). In this paper, we investigate an
existence result of Peano’s type and then consider the existence of extremal solution of set-valued control differential equations. For this purpose, one needs to introduce a partial order in , prove
the required comparison result for strict inequalities, and then, utilizing it, discuss the existence of extremal solutions.
This paper is organized as follows: in Section 2, we recall some basic concepts and notations which are useful in next sections. In Section 3, we present on the existence of extremal solutions for
SSDEs on semilinear Hausdorff space with the monotone iterative technique and we will verify that monotone sequence of approximate solutions converging uniformly to the solution of the problem.
2. Preliminaries
We recall some notations and concepts presented in detail in recent series works of Professor V. Lakshmikantham et al. (see [1]). Let denote the collection of all nonempty compact convex subsets of .
Given , the Hausdorff distance between and is defined by where denotes the Euclidean norm in and —the zero points set in . It is known that is a complete metric space and is a complete and separable
with respect to metric .
We define the magnitude of a nonempty subset of : where is the zero element of which is regarded as a one point set. -norm in is finite when the supremum in (2.2) is attained with .
The Hausdorff metric (1.1) satisfies the properties below:
for all and . If , and , then It is known that is a complete metric space and if the space is equipped with the natural algebraic operations of addition and nonnegative scalar multiplication, then
becomes a semilinear metric space which can be embedded as a complete cone into a corresponding Banach space.
Let . The set satisfying is called the Hausdorff difference (the geometric difference) of the sets and and is denoted by the symbol . Given an interval in . We say that the set mapping has a Hukuhara
derivative at a point , if exist in the topology of and are equal to .
By embedding as a complete cone in a corresponding Banach space and taking into account result on the differentiation of Bochner integral, we find that ifwhere is integrable in the sense of Bochner,
then exists and the equality a.e on holds.
The Hukuhara integral of is given by for any compact set .
Some properties of the Hukuhara integral are in [1]. If is integrable, one has If are integrable, then is integrable and
3. Main Results
We consider the set-valued differential equations (SSDEs) with feedback control under the form where and is a feedback control, state set .
Definition 3.1. The mapping set is called to be a solution of (3.1) on if and only if the following conditions are satisfied:
(i) with Hukuhara derivative by ;(ii);(iii) is integrable on ;(iv)for all , the integral in (3.2) is Hukuhara integral.
In this section, we will use the monotone iterative technique to solve the minimal and maximal solutions of (3.1). To construct the set monotone sequence, we first introduce the following definition.
Definition 3.2. We denote(i)by the subfamily of consisting of sets such that any is a nonnegative (positive) vector of components satisfying for ,(ii)by the subfamily of consisting of sets such that
any is a nonpostive (negative) vector of components satisfying for .By Definition 3.2, we notice that is a positive cone in and is the nonempty interior of . is a negative cone in and is the nonempty
interior of . We can therefore induce a partial odering in . Thus, if is , that is, with any is satisfying for and is , that is, with any is satisfying for . Now we define the ordering in .
Definition 3.3. For any , if there exists a such that and , then we write . Similarly, if there exists a such that and , then we write .
Theorem 3.4. Assume the following:(H1) is monotone nondecreasing in for every , that is, for fixed , wherever , and is monotone nondecreasing in for every , that is, for fixed , wherever ; (H2) there
exist such that and(H3) for any with and some positive number real such that then for provided .
Proof. For any , we define and we note that . By using (2.5), we infer . Let be the supremum of all positive number such that implies on . Thus and . Using (H1)–(H3), we get Equation (2.5), together
with (3.5), implies that there exists an such that This contradicts that is the supremum in view of the continuity of function involved and consequently that the inequality holds for . Taking the
limit yields the desired result. This proof is complete.
Corollary 3.5. Let such that for all , then implies that for all .
Proof. It is clear from the proof of Theorem 3.4.
Definition 3.6. are said to be the lower solution and upper solution of (3.1) respectively if
Theorem 3.7 (existence of solution). Assume are lower solution and upper solution of (3.1), respectively, and assumptions (H1), (H3) are satisfied, then there exists solution of (3.1).
Proof. For any , we define and . Let be the supremum of all positive number such that implies on . Thus and , by putting the above we infer that Similarly, and . By using Theorem 3.4, we have . Since
are lower and upper solutions of (3.1), we have that where is solution of (3.1). Now, we wish to show that on . If it is not true, then there exists a such that and on . This implies that and .
Equation (2.5), together with , implies that there exists an such that This contradicts that , hence we have that . Similarly, we can show that and hence relation holds for all . Now as , we conclude
that . The proof is complete.
Definition 3.8. Let , are said to be minimal and maximal solutions of (3.1), respectively, if they both are solution of (3.1) and satisfy for every solution of (3.1) with for all , where are the
lower and upper solutions of (3.1) respectively with for all .
Theorem 3.9. Assume that(M1) equation (3.1) has the lower solution and upper solution with and for all ;(M2) hypotheses (H1), (H3) satisfy;(M3) is map bounded sets into bounded sets in .Then there
exists monotone sequence and in such that , as in , where , are the minimal and maximal solutions of (3.1), respectively.
Proof. Let us construct the set of integrodifferential sequences by for , we prescribe and , for all . From (2.5), (3.1) and using Definition 3.1 we get First, we claim that the iterations are such
that Now we show that . Consequently, we have to show that (i) , (ii) and (iii) . By using Definition 3.6 and (3.11), (3.12), then (i) is proved. Indeed, by is a lower solution of (3.1) and following
Definition 3.6 we get , addition Hence and using Corollary 3.5 we infer for all . Similarly, we use Definition 3.6 and (3.11), (3.12), then (ii) is proved. Using (M1), we get addition and Corollary
3.5, then (iii) is proved.
By using inductive method, we assume on , then we have to claim that , by means of the monotone property of we obtain From , and by virtue of Corollary 3.5 we get and for all . Again, by means of the
monotone property of and our assumption, we have for all . Using again Corollary 3.5, we get . Consequently, Combining (3.11) and is continuous multiplication, it follows that , are continuous for .
Now using the corresponding of (3.11) and the properties of the Hausdorff metric and the Hukuhara integral, together with the assumption (M3), we prove the equicontinuity of the sequences and below.
Consider for any , we have Hence and are uniformly bounded and equicontinuity on . On using Ascoli-Arzela theorem (see [1]) in this setup, we obtain a subsequence which converges uniformly to on .
Arguing in a similarly to the , we conclude that converges uniformly to on . Next, we again consider (3.12), (3.18), respectively, and by using the convergence properties we infer that Moreover, by
means of (3.18) we easily get on .
Finally, we show that and are the minimal and maximal solutions of (3.1), respectively. Let be any solution of (3.1) such that for all and and we need to prove that on . Suppose that for some , on .
By using monotone nondecreasing of , , we get where . Applying Corollary 3.5, then we get on for all . Similarly, we get for all . By using assumption from the principle of mathematical induction, we
infer that for all . Taking limit as , then we obtain . The proof is complete.
Corollary 3.10. If addition to the assumptions of Theorem 3.7 assume that satisfies for and , then is the unique solution of (3.1).
Example 3.11. We consider set-valued differential equation with feedback control in : where with is a contraction feedback.
We see that satisfies (M1)–(M3). Now, we show that (3.23) exits as extremal solutions on . We prescribe , are lower and upper solutions of (3.23) for all . We note that and . Next, let us construct
the set sequences by for all we verify that monotone sequences of constructions above such that (a)and is a minimal of (3.23);(b)and is a maximal of (3.23).First, we prove (a). Indeed, let , then for
each positive integer , we consider Because and , otherwise , hence . By using Corollary 3.5, to get for all and . On the other hand with fixed.
Since the family of functions is equicontinuous and uniformly bounded on , it follows Ascoli-Arzela theorem (see [4]) that there exists a decreasing sequence and uniform limit exits on . Obviously ,
the uniform continuity of implies that tends uniformly to as , and thus Which in turn yields that the limit is a solution of (3.23) on .
Next we will show that is a required maximal solution of (3.23) on . For this purpose, we observe that and is nondecreasing, hence we get
By using Corollary 3.5, then we get on . The uniqueness of maximal solution show that tends uniformly to is the maximal solution of (3.23) with Finally, we will prove (b). Similarly, let , then for
each positive integer , we consider Because and , otherwise , hence . By using Corollary 3.5, to get for all and . On the other with fixed.
Since the family of functions is equicontinuous and uniformly bounded on , it follow by Ascoli-Arzela theorem (see [4]) that there exists a decreasing sequence and uniform limit exits on . Obviously
, the uniform continuity of implies that tends uniformly to as , and thus which in turn yields that the limit is a solution of (1) on .
Next we will show that is a required minimal solution of (3.23) on . For this purpose, we observe that and is nondecreasing, hence we get
By using Corollary 3.5, then we get on . The uniqueness of minimal solution show that tends uniformly to is the minimal solution of (3.23) with
Based on (3.25) combining (3.28), (3.29) and (3.32), we will solve the minimal and maximal solutions of (3.23). Its graphical representation can be seen in Figure 1.
The authors gratefully acknowledge the referees for their careful reading and many valuable remarks which improved the presentation of the paper.
1. V. Lakshmikantham, T. G. Bhaskar, and J. V. Devi, Theory of Set Differential Equations in Metric Spaces, Cambridge Scientific Publishers, Cambridge, UK, 2006.
2. B. G. Pachpatte, Integral and Finite Difference Inequalities and Applications, vol. 205 of North-Holland Mathematics Studies, Elsevier, Oxford, UK, 2006.
3. S. Hong, “Differentiability of multivalued functions on time scales and applications to multivalued dynamic equations,” Nonlinear Analysis, Theory, Methods and Applications, vol. 71, no. 9, pp.
3622–3637, 2009. View at Publisher · View at Google Scholar · View at Zentralblatt MATH
4. S. Hong and J. Liu, “Phase spaces and periodic solutions of set functional dynamic equations with infinite delay,” Nonlinear Analysis, Theory, Methods and Applications, vol. 74, no. 9, pp.
2966–2984, 2011. View at Publisher · View at Google Scholar · View at Zentralblatt MATH
5. J. V. Devi, “Generalized monotone iterative technique for set differential equations involving causal operators with memory,” International Journal of Advances in Engineering Sciences and Applied
Mathematics, vol. 3, no. 1–4, pp. 74–83, 2011. View at Publisher · View at Google Scholar
6. N. D. Phu and T. T. Tung, “Some results on sheaf-solutions of sheaf set control problems,” Nonlinear Analysis, Theory, Methods and Applications, vol. 67, no. 5, pp. 1309–1315, 2007. View at
Publisher · View at Google Scholar · View at Zentralblatt MATH · View at MathSciNet | {"url":"http://www.hindawi.com/journals/aaa/2012/816218/","timestamp":"2014-04-20T06:08:43Z","content_type":null,"content_length":"681323","record_id":"<urn:uuid:cb7413ad-7dae-4360-9940-5aec84325f23>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00156-ip-10-147-4-33.ec2.internal.warc.gz"} |
Galois extension
March 15th 2009, 07:48 PM #1
Mar 2009
Galois extension
Explain why $\mathbb{Q} \subset \mathbb{Q}(^3\sqrt2)$ is not a Galois extension. Find the smallest extension field $E/\mathbb{Q}(^3\sqrt2)$ so that $E/\mathbb{Q}$ is Galois. Determine the
isomorphism class of $\text{Gal}(E/\mathbb{Q})$.
I know how to prove that $\mathbb{Q} \subset \mathbb{Q}(^3\sqrt2)$ is not a Galois extension. But I don't know how to do the next two parts of this question. Thanks in advance.
Explain why $\mathbb{Q} \subset \mathbb{Q}(^3\sqrt2)$ is not a Galois extension. Find the smallest extension field $E/\mathbb{Q}(^3\sqrt2)$ so that $E/\mathbb{Q}$ is Galois. Determine the
isomorphism class of $\text{Gal}(E/\mathbb{Q})$.
I know how to prove that $\mathbb{Q} \subset \mathbb{Q}(^3\sqrt2)$ is not a Galois extension. But I don't know how to do the next two parts of this question. Thanks in advance.
Notice that the minimal polynomial of $\sqrt[3]{2}$ over $\mathbb{Q}$ is $x^3 - 2$. The roots of this polynomial is $\sqrt[3]{2},\zeta\sqrt[3]{2},\zeta^2\sqrt[3]{2}$. Thus, the splitting field of
this polynomial is $E=\mathbb{Q}(\zeta,\sqrt[3]{2})$. Notice that $E/\mathbb{Q}$ is Galois with $\mathbb{Q}(\sqrt[3]{2})\subset E$ and it is the smallest extension because it is splitting field
of the minimal polynomial. Now we see that $\text{Gal}(E/\mathbb{Q}) = S_3$.
Notice that the minimal polynomial of $\sqrt[3]{2}$ over $\mathbb{Q}$ is $x^3 - 2$. The roots of this polynomial is $\sqrt[3]{2},\zeta\sqrt[3]{2},\zeta^2\sqrt[3]{2}$. Thus, the splitting field of
this polynomial is $E=\mathbb{Q}(\zeta,\sqrt[3]{2})$. Notice that $E/\mathbb{Q}$ is Galois with $\mathbb{Q}(\sqrt[3]{2})\subset E$ and it is the smallest extension because it is splitting field
of the minimal polynomial. Now we see that $\text{Gal}(E/\mathbb{Q}) = S_3$.
How do we know this is isomorphic to $S_3$ and not $\mathbb{Z}_6$? I don't see how we know which group of order 6 it is isomorphic to.
There are only two groups of order 6.
A cyclic (abelian) one.
And $D_6 \cong S_3$ which is non abelian.
Write out the automorphisms it is pretty clear it is not cyclic nor abelian.
March 15th 2009, 07:54 PM #2
Global Moderator
Nov 2005
New York City
May 11th 2009, 01:59 PM #3
Mar 2009
May 11th 2009, 08:47 PM #4 | {"url":"http://mathhelpforum.com/advanced-algebra/78919-galois-extension.html","timestamp":"2014-04-18T00:58:55Z","content_type":null,"content_length":"44778","record_id":"<urn:uuid:47b246ac-6945-45fc-bf20-28273aebd8a6>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00294-ip-10-147-4-33.ec2.internal.warc.gz"} |
p-adic Analysis Compared with Real
The subject matter of this book is wonderful, and so is the book itself. Indeed, the author wisely plugs bigtime — both in her title and throughout her book — into the most wonderful thing about the
subject — that it’s an “alternative” to real analysis. Thus all the formulas and theorems in real analysis have analogous formulas and theorems in p-adic analysis. Briefly: in real analysis we
complete the normed space of rationals, taking the norm to be the usual absolute value. Why not do the same “completion thing” using some other norm or norms? There are as many of these “new” normed
spaces as there are primes, and for each prime we denote our completion by Q[p].
It doesn’t take long to define what these “new” p-adic norms are, and Katok does this on page 20 (after wisely including the basics of metric spaces and their completions). It then becomes more and
more apparent that there are two salient facts that often make p-adic analysis “weird”: (1) The norms have discrete values and (2) they’re non-Archimedean (and therefore satisfy the Strong Triangle
There are many places where the author does the kind of thing I like — namely, use language, sometimes colloquial, to illuminate her points. For example, she often, throughout the book (pp. 76, 77,
and 118, for example) dubs various consequences of the strong triangle inequality “the strongest wins” (although she doesn’t do this on p. 11, where she first introduces the strong triangle
inequality; I think it would be great if she did).
Among my favorite parts of the book are p. 43 and p. 47, where she answers two questions that had immediately occurred to me when I first read her definition of p-adic norms: (1) Are there any other
norms on the rationals that we can play with (and compare to real and p-adic)? And (2) What if p isn’t prime? (However, I’m having trouble seeing why the usual absolute value norm is supposed to
correspond to p = ∞.)
It’s clear, and commendable, that the author does intend to give the “gyst” of most of her proofs (though not all) and she also gives appropriate and motivating (and curiosity-satisfying)
counterexamples — for instance, on p. 118 — A differentiable function with never-vanishing derivative but not “injective at 0” — meaning, as she puts it, not injective in any neighborhood of 0 — so f
has no inverse. This, she points out, is different from real analysis — and again commendably, she almost always takes care to let us know the differences between the results of p-adic vs. the real
analysis counterparts.
However, I felt that her very-first mention, and definition, of the p-adic norm (p. 20), was unsatisfying. She gives the technical definition, then immediately embarks on remarks, propositions, etc.,
without first giving the reader a sense of what the initial definition means in plain English. This to me feels like a missed opportunity. If it were my book, I would add, immediately, to that
technical definition: Every rational number x is the quotient of two integers, each of which has a prime decomposition. If we write x ‘in lowest terms’, we of course can have a power of p in either
only the numerator or only the denominator, not both. Well, ord(x) tells us “how much p is in the numerator‘ — which includes the idea that if p turns out to appear in the denominator, then we ‘make
ord(x) negative’. And then the “actual” p-adic norm of x (as opposed to the just-defined ord(x)) would be p to negative that power — unless, of course, x = 0, in which case our norm has to be 0,
since that’s what happens with all norms. Thus we can say, rather intuitively, that the p-adic norm of x gives us the absolute value of the reciprocal of “the p-part’ of x.”
< p > Perhaps that’s cumbersome but it’s a good “optional” description. It isn’t technically
to do that, and students can figure it out for themselves, but they probably wouldn’t while listening to a lecture. but to me it adds to the beauty (and I wonder whether, in her classes, the author
does indeed do it that way, or some other intuitive way.) Similarly, I would give at least a few quick examples of sequences that converge to 0 in the p-adic norm.
But it’s only on pp. 20-21 that I notice this phenomenon. In general, the author is — happily — not afraid to state the obvious when that’s indicated. She also often does a beautiful job of stating
and clarifying the non -obvious..
I’ll end with some further wonderful differences between p-adic and real. I counted fifteen “basic” ones; I’ll present my favorite five:
● All open balls are also closed.
● There are only a countable number of open balls in Q[p].
● A series converges if and only if the sequence of its terms converges to 0. (Ah, how simple real analysis would be if it could boast such a theorem! On the other hand, the theory of p-adic series
is not trivial!)
● A convergent series always converges unconditionally.
However, lest we conclude that life in p-adic lanes is easy street: The power series corresponding to exp(x) does not converge everywhere. In fact, it doesn’t even converge for x = 1, so there is no
p-adic analogue of e. Perhaps we should count our blessings that the “usual” norm is not p-adic, that p-adic is only optional, for those mathematicians like the author, and like me, who love to delve
into the unusual.
Marion Cohen has a new book of poetry about the experience of math, Crossing the Equal Sign. She teaches part time at Arcadia University. Check out her other writings and math limericks on her site
marioncohen.com, and email her at: mathwoman199436@aol.com | {"url":"http://www.maa.org/publications/maa-reviews/p-adic-analysis-compared-with-real","timestamp":"2014-04-19T17:58:32Z","content_type":null,"content_length":"100596","record_id":"<urn:uuid:193e6982-eda3-4c30-b183-30dce2a1eb39>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00173-ip-10-147-4-33.ec2.internal.warc.gz"} |
Family of five
October 22nd 2007, 07:30 PM #1
Oct 2007
Family of five
A family of five people randomly picks a name of a family member from a hat to decide for whom to buy a present. What is the probability that nobody picks their own name? If someone picks their
own name, all the names are returned and everyone picks again. What is the probability that it took exactly two tries for everyone to pick a name of another family member"
Any help is appreciated
A family of five people randomly picks a name of a family member from a hat to decide for whom to buy a present. What is the probability that nobody picks their own name? If someone picks their
own name, all the names are returned and everyone picks again. What is the probability that it took exactly two tries for everyone to pick a name of another family member"
Any help is appreciated
These are called derangements.
October 23rd 2007, 05:50 AM #2
Global Moderator
Nov 2005
New York City | {"url":"http://mathhelpforum.com/statistics/21104-family-five.html","timestamp":"2014-04-17T10:47:40Z","content_type":null,"content_length":"32334","record_id":"<urn:uuid:0d691493-2ab1-4a36-8b84-fb07455a4b64>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00460-ip-10-147-4-33.ec2.internal.warc.gz"} |
Re: st: RE: RE: A math question
Notice: On March 31, it was announced that Statalist is moving from an email list to a forum. The old list will shut down at the end of May, and its replacement, statalist.org is already up and
[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]
Re: st: RE: RE: A math question
From David Hoaglin <dchoaglin@gmail.com>
To statalist@hsphsun2.harvard.edu
Subject Re: st: RE: RE: A math question
Date Fri, 21 Dec 2012 10:52:36 -0500
The median is a function of a distribution. You did not specify the
distribution of x. If you explained the context for your question, we
might be able to shed better light.
David Hoaglin
On Fri, Dec 21, 2012 at 10:42 AM, CJ Lan <CJ@jupiter.fl.us> wrote:
> Al,
> I think the "median" is a function, just like the "mean" is a function, not just a number.
> BTW, the derivative of abs(x^3) should be 3*x*abs(x) because the derivative can be derived as
> abs(x^3)/(x^3)*(3*x^2) = 3*abs(x)*x^2/x = 3*x*abs(x).
> I wish others could shed a light too? Thx.
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/faqs/resources/statalist-faq/
* http://www.ats.ucla.edu/stat/stata/ | {"url":"http://www.stata.com/statalist/archive/2012-12/msg00798.html","timestamp":"2014-04-16T19:04:58Z","content_type":null,"content_length":"8750","record_id":"<urn:uuid:4f8007ed-f465-4617-b1ca-d30f75263298>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00364-ip-10-147-4-33.ec2.internal.warc.gz"} |
Tom Hagstrom's Radiation Boundary Condition Page
This purpose of this page is to collect and disseminate useful information related to the accurate and efficient near-field truncation of the computational domain for the simulation of wave
propagation problems in the time domain. A multitude of methods have been proposed for this task; see for example the relatively recent review articles:
New results on absorbing layers and radiation boundary conditions , Topics in Computational Wave Propagation, M. Ainsworth et al eds., Springer-Verlag, 2003, 1-42.
Radiation boundary conditions for Maxwell's equations: a review of accurate time-domain formulations , J. Comput. Math., 25, 2007, 305-336.
Or the recent lecture:
A Lecture at the GAMM Workshop 2009
Current contents are limited to two techniques which can be used for problems such as the scalar wave equation, Maxwell's equations and the acoustic system in uniform, isotropic media. The first
involves compressions of the radiation boundary kernel arising in nonlocal formulations of exact conditions on special boundaries. The second involves optimal local boundary condition sequences
applicable on polygonal boundaries.
Exact formulas for radiation boundary conditions on special boundaries - planes, cylinders, and spheres - can be obtained by separation of variables. For the standard models these formulas all
involve a nonlocal operator of the form F^-1KF where F is a spatial harmonic transform (Fourier on the plane, spherical harmonics on the sphere) and K is a temporal convolution. The direct evaluation
of K is inefficient as it involves the entire time history of the solution on the boundary. However, a fast, low-memory approximate evaluation is achieved by replacing the convolution kernel by a sum
of exponentials and noting that convolution with an exponential kernel is equivalent to solving an ode and thus requires no global memory. It can be proven that the kernels arising from the scalar
wave equation and related models can be approximated to an accuracy ε using O(log 1/ε· log cT/λ) exponentials where T is the simulation time and λ is the wavelength. See:
Rapid evaluation of nonreflecting boundary kernels for time-domain wave propagation , SIAM J. Numer. Anal., 37, 2000, 1138-1164.
Nonreflecting boundary conditions for the time-dependent wave equation , J. Comput. Phys., 180, 2002, 270-296.
To use these compressed approximations one only needs to know the amplitudes and exponents in the sum-of-exponential approximations. You can access these here for ε=1E-6:
Sphere kernel
There are two drawbacks to the nonlocal conditions. The first is the need to use spatial harmonic transforms, which is some expense in 3+1 dimensions and involves some effort to couple with the
interior scheme. The second, and most important, is the restriction on the shape of the artificial boundary. Particularly for high-aspect-ratio scatterers it would be more efficient to bound the
computational domain by a box rather than a sphere. This can be done using local methods such as local radiation boundary condition sequences or perfectly matched absorbing layers. However, these
local methods can suffer severe accuracy degradation over time. To correct this defect we have introduced a new parametrization of local boundary condition sequences which we call Complete Radiation
Boundary Conditions (CRBCs). These involve involve inhomogeneous rational approximants to the transform of the exact planar kernel which interpolate it in the right half-plane. Boundary conditions
are built out of modified ``Higdon-type'' operators: a[j]d/dt+(1-a[j]^2)/(a[j] T) ±cd/dn. The cosines a[j] are determined by specifying the boundary condition order and the dimensionless parameter η=
δ/cT where T is the simulation time, c is the wavespeed, and δ is the minimal separation between the artificial boundary and any sources, scatterers, or other inhomogeneities. For details see the
Complete radiation boundary conditions: minimizing the long time error growth of local methods.
A table of parameters a[j] for boundary condition orders P=4,8,... (tolerances > 1E-8) and η=1E-2,...,1E-6 along with maximum values of the reflection coefficient may be accessed below. Note that the
number of cosines required by a condition of order P is 2P+2.
Optimal cosines for certain values of P and η.
These cosines are computed by the following MATLAB function which implements the Remez algorithm. Its inputs are η and P and the outputs are the cosines (a[j], j=1, ... , 2P+2) and the maximum of the
complex reflection coefficient. Note that the function may fail due to conditioning issues if P is chosen too large.
A MATLAB function for computing optimal cosines.
We note that direct applications to second-order formulations have so far used different parametrizations based on a combination of Gauss-Lobatto and Yarvin-Rokhlin quadrature nodes. For details see:
Radiation boundary conditions for time-dependent waves based on complete plane wave expansions, J. Comput. Appl. Math., to appear.
I greatfully acknowledge the contribution of many collaborators in this effort including Brad Alpert, Dan Givoli, Leslie Greengard and Tim Warburton. The work is currently suppoorted by the National
Science Foundation via grant DMS-06010067 and has also been supported in part by ARO Grant DAAD19-03-1-0146, AFOSR Contract FA9550-05-1-0473, and the Israel-US Binational Science Foundation. Any
conlusions or recommendations expressed here are my own and do not necessarily reflect the views of NSF, ARO, AFOSR, BSF, or my collaborators. If you have any questions, comments, or suggestions
please contact me at:
thagstrom at smu dot edu
Last updated: January 5, 2009. | {"url":"http://faculty.smu.edu/thagstrom/rbcpac.html","timestamp":"2014-04-17T18:24:23Z","content_type":null,"content_length":"7393","record_id":"<urn:uuid:1c57ebfb-490f-4ee6-8879-4fc19da31ae2>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00313-ip-10-147-4-33.ec2.internal.warc.gz"} |
How to evaluate this integral problem..
June 3rd 2010, 08:46 PM #4
June 3rd 2010, 08:29 PM #3
Super Member
June 3rd 2010, 07:44 PM #2
June 3rd 2010, 06:56 PM #1
sorry, F(x)=2.. I've got the answer by drawing the graph... thanx to mr. fantastic and mr. ANDS!
If your problem is written correctly, you are really only dealing with one function; look at your limits of integration - it starts off at 1, and your two functions are actually "split" at x=1.
The only function that we care about is F(x)=2 (or is that -2?). This one should be pretty easy to do without even using calculus (although you should, just to show that you know how to get the
right answer).
I suggest you first draw the graph of f(x) and then use it to see how to set up the integrals.
(Even easier of course would be to shade the area represented by the integral and use simple geometry to get the answer).
How to evaluate this integration
f(x) = 2x , x<=1
f(x) - 2, x>1
$\int_{1}^{5} f(x) dx$
If your problem is written correctly, you are really only dealing with one function; look at your limits of integration - it starts off at 1, and your two functions are actually "split" at x=1. The
only function that we care about is F(x)=2 (or is that -2?). This one should be pretty easy to do without even using calculus (although you should, just to show that you know how to get the right
I've got the answer by drawing the graph... thanx to mr. fantastic and mr. ANDS! | {"url":"http://mathhelpforum.com/calculus/147685-how-evaluate-integral-problem.html","timestamp":"2014-04-16T15:08:54Z","content_type":null,"content_length":"40365","record_id":"<urn:uuid:180fabb6-0583-4168-b5d6-9d175e7b762d>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00482-ip-10-147-4-33.ec2.internal.warc.gz"} |
How to Calculate Future Value
Edit Article
Use the Standard Formula for Calculating Compound InterestUse a Spreadsheet Calculator to Calculate Future Value
Edited by IngeborgK, Amy, Jeff
Provided that the variables are known, solving for the future value of an investment can be accomplished in a few, simple steps. Creating a future value spreadsheet calculator can simplify the
process by reducing the potential for human error. A spreadsheet also can help you determine the future value of multiple investments by entering new variables into corresponding fields. Another
advantage to using a spreadsheet calculator is that it can more easily account for periodic payments than a simple, compound interest formula. This article provides step-by-step instructions for
using the standard formula for solving for future value, and how to create a spreadsheet calculator in order to automate the process.
Method 1 of 2: Use the Standard Formula for Calculating Compound Interest
1. 1
Use the compound interest formula to calculate the future value of an investment of a single deposit over a set period of time. The future value of an investment (FV) is equal to the present
value (PV) multiplied by 1 plus the interest rate (r) times the amount of time (t) in years (represented as an exponent). Plug the variables into the following equation and solve for FV to
determine the future value of an investment: FV=PV(1+r)^t.
□ Follow the example to solve for FV. $10,000 invested for 10 years at an interest rate of 9% will yield a future value of $23673.64: FV=10,000(1+.09)^10. Insert your own variables to determine
the future value of a given investment.
Method 2 of 2: Use a Spreadsheet Calculator to Calculate Future Value
1. 1
Enter the column headings and data labels. Type "Future Value Calculator" in cell A1, "Payment Periods" in cell A2, "Payments Made" in cell A3, "Number of Years" in A4, "Interest Rate" in cell
A5, "Present Value" in cell A6, and "Future Value" in cell A7.
2. 2
Format the width of column A. Position the mouse pointer above the column heading, in between the line separating columns A and columns B. Click and drag the separator to double the width of
column A.
3. 3
Format the column heading and table borders.
□ Select cells A1 and B1, and press the "Merge cells" button on the formatting toolbar.
□ Click in cell A1, drag to select cells A1 through B7, and click the "Borders" button. Select the "All borders" option from the pull-down menu.
□ Click in cell A1 again. Click the "Center text" button, and then the "Bold text" button on the formatting toolbar.
4. 4
Format the data labels. Click in cell B5 and click the percent (%) button in the "Quick formatting" menu on the formatting toolbar. The data will display as a percentage.
□ Select cells B7 and B8 together, and click the Currency ($) button in the "Quick formatting" menu on the formatting toolbar. The data will display as a dollar amount.
5. 5
Enter the future value equation into the spreadsheet calculator. Click in cell B8 and type the following formula: =FV(B5,B4,B3,-B6,B2). When using spreadsheet software programs other than
Microsoft Excel, use a semicolon instead of a comma to separate the values in the formula (=FV(B5;B4;B3;-B6;B2).
6. 6
Test the future value calculator to ensure functionality. Enter 40 in cell B2, and 40 again in cell B3 to represent the payment periods and the payments made. Enter 10 in cell B4, .09 in cell B5,
and 10,000 in cell B6. The future value result should be $23,011.23. If the future value in cell B7 does not read $23,011.23, confirm that the formula has been entered correctly.
Article Info
Categories: Investments and Trading
Recent edits by: Amy, IngeborgK
Thanks to all authors for creating a page that has been read 5,807 times.
Was this article accurate? | {"url":"http://www.wikihow.com/Calculate-Future-Value","timestamp":"2014-04-17T13:14:01Z","content_type":null,"content_length":"61344","record_id":"<urn:uuid:33de24f2-e6e8-42cd-81c7-298b6113175c>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00610-ip-10-147-4-33.ec2.internal.warc.gz"} |
Summary: 2 Contextfree Grammars and Pushdown Automata
As we have seen in Exercise 15, regular languages are not even su#cient to cover wellbalanced
brackets. In order even to treat the problem of parsing a program we have to consider more
powerful languages. Our aim in this section is to generalize the concept of a regular language
to cover this example (and others like it), and to find ways of describing such languages. The
analogue of a regular expression is a contextfree grammar while finite state machines are
generalized to pushdown automata. We establish the correspondence between the two and
then finish by describing the kinds of languages that are not captured by these more general
2.1 Contextfree grammars
The question of recognizing wellbalanced brackets is the same as that of dealing with nesting
to arbitrary depths, which certainly occurs in all programming languages. To cover these
examples we introduce the following definition.
Definition 11 A contextfree grammar is given by the following:
. an alphabet # of terminal symbols, also called the object alphabet;
. an alphabet N of nonterminal symbols, the elements of which are also referred to as
auxiliary characters, placeholders or, in some books, variables, where N # # = #; 7
. a special nonterminal symbol S # N called the start symbol;
. a finite set of production rules, that is strings of the form R # # where R # N is a non
terminal symbol and # # (# # N) # is an arbitrary string of terminal and nonterminal | {"url":"http://www.osti.gov/eprints/topicpages/documents/record/692/0193268.html","timestamp":"2014-04-16T10:39:32Z","content_type":null,"content_length":"8672","record_id":"<urn:uuid:c5a82177-3769-4a91-9d28-87e6744bc765>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00567-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Help
June 29th 2012, 02:58 AM #1
Jun 2012
I've a problem, I have to find a vector v=[vx vy]' (column) such that:
where y=[y1 y2] (row) and x,y1,y2,vx,vy are a real scalar.
Does anyone know an analytic solution?
Re: y*v<x
Any vector?
Your equation can be written as vx*y1 + vy*y2 < x.
There are many solutions (unless y1=y2=0 and x<=0). In the general case, you can freely choose one component, for example vx=0. The remaining inequality vy*y2<x is satisfied for vy = x/y2 -1,
assuming y2 !=0.
June 29th 2012, 05:23 AM #2
Junior Member
Jun 2012
June 29th 2012, 05:42 AM #3
Jun 2012 | {"url":"http://mathhelpforum.com/advanced-algebra/200480-y-v-x.html","timestamp":"2014-04-18T00:35:37Z","content_type":null,"content_length":"29630","record_id":"<urn:uuid:11b85f3c-6a3f-42b2-bd87-89d3c167d9cd>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00650-ip-10-147-4-33.ec2.internal.warc.gz"} |
Homotopy Extension Property involving mapping cylinder
up vote 0 down vote favorite
Suppose we have a map $f:X\to Y$ and we form the mapping cylinder $M_f$. Hatcher claims that it is obvious that the pair $(M_f, X \cup Y)$ satisfies the homotopy extension property. Equivalently we
could find a retraction of $M_f \times I$ to $M_f\times \{0} \cup (X \cup Y)\times I$. I don't see how we can get this latter result, however.
at.algebraic-topology homotopy-theory
2 Have you looked at the proof of the HEP for subcomplexes of a CW-complex? You can adapt the proof directly, and write the extension explicitly. IMO math.stackexchange.com is a more appropriate
forum for this kind of question. – Ryan Budney Sep 3 '11 at 0:16
This result is in the basis of all homotopy theory, once you understand this you will be able to go directly to the abstract/algebraic homotopy literature ;-) – Fernando Muro Sep 3 '11 at 1:20
add comment
3 Answers
active oldest votes
Neil has given an explicit retraction. But it may be useful to note that you can obtain results like this from a combination of some "easier" facts:
• The pair $(I,\{0,1\})$ has the HEP.
• If $(L,K)$ has the HEP where $K$ and $L$ are locally compact Hausdorf, and if $Z$ is any space, then $(Z\times L, Z\times K)$ has the HEP.
up vote 4
down vote • If $(U,A)$ has the HEP, and $g:A\to B$ is any map, then $(V,B)$ has the HEP, where $V$ is the pushout of $U$ along $g$.
(I'll leave the proofs of these as an exercise; you only need the second one for $(L,K)=(I,\{0,1\})$ anyway.) Then note that $M_f$ can be obtained from $X\amalg Y$ by gluing it to a copy
of $X\times I$ along $X\times \{0,1\}$.
I always prefer proofs like this because they remind you that the thing you're trying to prove just boils down to some fact about the unit interval, or something similarly elementary.
– Dylan Wilson Sep 3 '11 at 20:20
Thank you Charles! This really helped me understand after proving these. In your second bullet point, though, is locally compact Hausdorff necessary? Just take $id \times r$ where is
$r$ is the assumed retraction. Hatcher also says any space $Z$ will work and he mentions no assumptions on $(L,K)$ except that $K$ will be closed under Hausdorff conditions. Thanks
again. – Kyle Sep 4 '11 at 20:00
Kyle: it may be the condition I gave is too strong (or maybe it is not really the right condition at all). Here is what I am worried about with your proof: you want to know that the
product of $Z$ with $L\times 0\cup K\times I$ is homeomorphic to $(Z\times L\times 0)\cup (Z\times K\times I)$. In general, taking products with a fixed space does not commute with
colimits in Top. – Charles Rezk Sep 5 '11 at 15:38
I'm not sure this is an issue here. Since the domain of $id\times r : Z\times (L\times I) \to Z\times (L\times I)$ is certainly continuous being the product of continuous maps and our
topology being product topology. Then we can just compute that the image is what we want and we get this is actually a retraction. I'll have to think about if those spaces are
homeomorphic or not. At a quick glance it would seem you're trying to get at that $Z\times L \times I$ is homeomorphic to $Z\times (L\times I)$ and then the subspaces were looking at
are the same sets. Thanks again for commenting :) – Kyle Sep 8 '11 at 4:37
err sorry i meant to say since $id\times r$ is certainly continous – Kyle Sep 8 '11 at 4:38
add comment
I'll assume you want the convention where $M_f$ is $(X\times I)\cup Y$ with $(x,0)$ attached to $f(x)$. Now $M_f\times I=(X\times I^2)\cup(Y\times I)$ with $(x,0,t)$ attached to $(f(x),t)
$. We want to retract this onto the space $$ Q=(M_f\times\{0\})\cup(((X\times\{1\})\cup Y)\times I) $$ Note that $X\times\{0\}\times I$ gets identified with part of $Y\times I$ and so is
contained in $Q$. Thus $Q=(X\times U)\cup(Y\times I)$, where
up vote 2 $$ U=(\{0,1\}\times I)\cup (I\times\{0\}), $$ and again $(x,0,t)$ is attached to $(f(x),t)$. Now let $r$ be a retraction from $I\times I$ onto $U$, say by radial projection from the point
down vote $(1/2,1)$. We can then fit $1\times r:X\times I^2\to X\times U$ together with the identity map on $Y\times I$ to get the required retraction of $M_f\times I$ onto $Q$.
add comment
This question is answered by Chapter 7 "Cofibrations", Example 2 on p. 280 of my book `Topology and groupoids' with full proof. In fact it was in the first (1968) edition of this
book, published by McGraw Hill.
Other things in that Chapter are a gluing theorem for homotopy equivalences, the exact sequence of a fibration of groupoids, ....
up vote 1 down
vote In other Chapters you will find the Phragmen-Brouwer property, the Jordan Curve Theorem, covering morphisms of groupoids, the fundamental groupoid of an orbit space, ...
See http://www.bangor.ac.uk/~mas010/topgpds.html
add comment
Not the answer you're looking for? Browse other questions tagged at.algebraic-topology homotopy-theory or ask your own question. | {"url":"http://mathoverflow.net/questions/74406/homotopy-extension-property-involving-mapping-cylinder","timestamp":"2014-04-18T08:41:16Z","content_type":null,"content_length":"67096","record_id":"<urn:uuid:13eda90c-2c0f-4f62-b4dc-a970b9ddd3d9>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00611-ip-10-147-4-33.ec2.internal.warc.gz"} |
Greenwood, IL Math Tutor
Find a Greenwood, IL Math Tutor
...I earned a Master's degree in Literature and Rhetoric with a perfect 4.0 and achieved secondary teacher certification in English. My bachelor's degree is also in English. I have been teaching
college composition for more than six years.
17 Subjects: including algebra 1, algebra 2, grammar, geometry
...Please feel free to contact me if you have further questions.In addition to having studied math through college-level Calculus, most of my tutoring at the high school I work at has been for
math. My first two years at the school were as an academic support teacher in which I helped students with...
20 Subjects: including prealgebra, algebra 1, ACT Math, Spanish
...I have been substitute teaching grades K-12 for about 4 years now. I was raised in a very large family (2nd oldest out of 6 kids). Aside from tutoring I continue to enjoy helping others as an
Assistant Scoutmaster, Knights of Columbus member, and musician. Please do not hesitate to contact me if you would like some help!
59 Subjects: including algebra 1, ACT Math, algebra 2, chemistry
...I have been teaching for 13 years and have taught Pre-Algebra, Algebra, Geometry, Algebra II, Pre-Calculus, and Calculus. I can also tutor in Trigonometry and Discrete Mathematics. The year I
taught AP Calculus 16 out of 17 students (94.11%) passed the AP calculus exam; 3 students received scor...
14 Subjects: including discrete math, algebra 1, algebra 2, calculus
...Basically, I have been a student of mathematics for my Master's in mathematics. My subjects at graduation level have been English and mathematics. I have been tutoring students at all levels
of schooling and even at college level.
14 Subjects: including statistics, probability, algebra 1, algebra 2
Related Greenwood, IL Tutors
Greenwood, IL Accounting Tutors
Greenwood, IL ACT Tutors
Greenwood, IL Algebra Tutors
Greenwood, IL Algebra 2 Tutors
Greenwood, IL Calculus Tutors
Greenwood, IL Geometry Tutors
Greenwood, IL Math Tutors
Greenwood, IL Prealgebra Tutors
Greenwood, IL Precalculus Tutors
Greenwood, IL SAT Tutors
Greenwood, IL SAT Math Tutors
Greenwood, IL Science Tutors
Greenwood, IL Statistics Tutors
Greenwood, IL Trigonometry Tutors
Nearby Cities With Math Tutor
Belshaw, IN Math Tutors
Brunswick, IN Math Tutors
Central Park, IL Math Tutors
Creston, IN Math Tutors
Dune Acres, IN Math Tutors
East Hazel Crest, IL Math Tutors
La Grange Highlands, IL Math Tutors
Lake Dalecarlia, IN Math Tutors
Lakes Of Four Seasons, IN Math Tutors
North Hayden, IN Math Tutors
Oldtown, IL Math Tutors
Palmer, IN Math Tutors
Phoenix, IL Math Tutors
South Suburban, IL Math Tutors
Thornton, IL Math Tutors | {"url":"http://www.purplemath.com/Greenwood_IL_Math_tutors.php","timestamp":"2014-04-17T11:14:51Z","content_type":null,"content_length":"24011","record_id":"<urn:uuid:173e182b-d673-4527-b395-dd606d1b58e1>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00017-ip-10-147-4-33.ec2.internal.warc.gz"} |
Calculate Jensen's Alpha with Excel
Calculate Jensen’s Alpha with Excel
Jensen’s Alpha is a risk-adjusted performance benchmark that tells you how by much the returns of an actively managed portfolio are above or below market returns.
Originating in the late 1960s, Jensen’s Alpha (often abbreviated to Alpha) was developed to evaluate the skill of active fund managers in stock picking.
• A positive Alpha means that a portfolio has beaten the market, while a negative value indicates underperformance
• A fund manager with a negative alpha and a beta greater than one has added risk to the portfolio but has poorer performance than the market
Careful stock picking and financial engineering means that investors can add alpha to a portfolio without adversely affecting beta.
According to the Capital Asset Pricing Model, Alpha is defined by this equation
alpha = r[s] – [r[f] + β (r[b] - r[f])]
where r[s] is the expected portfolio return, r[f] is the risk-free rate, β is the portfolio beta, and r[b] is the market return. Beta describes the volatility of the portfolio with respect to that
of the wider market, and is calculated with this equation
The market return is usually described by the expected return of an index fund, like the FTSE or S&P500.
Calculate Alpha with Excel
Thse steps describe how you can calculate Alpha with Excel (there’s a link to download the tutorial spreadsheet at the bottom). The screegrabs describe the formulae used in the spreadsheet.
Step 1: Put the returns of your portfolio and the benchmark index into Excel, and calculate the average returns
Step 2. Define your risk free rate. If the returns specified in Step 1 are monthly returns, then your risk free rate has to be on a monthly basis.
Step 3. Calculate the portfolio Beta, and then the Alpha.
Download Excel Spreadsheet to Calculate Jensen’s Alpha with Excel
11 Responses to "Calculate Jensen’s Alpha with Excel"
1. The F9 cell Alpha formula is wrong : it should be “B20-E6-F8*(C20-E6)” rather than “B20-E6+F8*(C20-E6)”
□ I’ve made the appropriate corrections.
Thank you for your eagle-eyed correction!
☆ Also, you should correct the CAPM formula for alpha: instead of “alpha = rs – rf + β (rb – rf)”, please write “alpha = rs – [rf + β (rb – rf)]“.
○ Done! Thank you.
2. Isn’t this a monthly alpha calculation? How would you annualize?
□ You annualize a monthly Jensen’s Alpha with this calculation:
(1+alpha)^12 – 1
3. Your calculation is based on the assumption that the risk free rate was the same for every period which is unlikely to be the case. How would you modify the calculation to take into account
varying risk free rates over the time period covered?
4. Given the monthly and yearly calculation does this then mean a daily alpha would be(1+alpha)^250-1?
5. If I have data for 9 years worth of back testing, how do I calculate the average annual alpha? Here are my inputs:
Beta: 0.88
Portfolio Return: 925%
Risk Free Return: 16.76%
Market Return:16.13%
Is calculating alpha over such a long period accepted?
6. Is it possible to find Alpha through a regression, if so how? I need to find if alpha is significant, t-stat will do. | {"url":"http://investexcel.net/jensens-alpha-excel/","timestamp":"2014-04-20T01:06:39Z","content_type":null,"content_length":"38151","record_id":"<urn:uuid:73832ec2-f1c0-4d10-a823-725612ea8946>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00649-ip-10-147-4-33.ec2.internal.warc.gz"} |
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
What is the remainder when (2x4 – 3x3 + x2 – 3x + 2) ÷ (x – 2) ?
• 9 months ago
• 9 months ago
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/updates/51d08fa1e4b08d0a48e3dc32","timestamp":"2014-04-20T00:57:20Z","content_type":null,"content_length":"58566","record_id":"<urn:uuid:4d901678-cc39-439b-abbc-ac00f8a2300e>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00099-ip-10-147-4-33.ec2.internal.warc.gz"} |
Reflexive, Transitive, Symmetric
October 22nd 2008, 11:23 AM #1
Oct 2008
Reflexive, Transitive, Symmetric
What are naturally occuring examples of relations that satisfy two of the following properties, but not the third: symmetric, reflexive, and transitive.
Hello, terr13!
So far, I have two of the examples . . .
What are naturally occuring examples of relations that satisfy two of the
following properties, but not the third: Reflexive, Symmetric, and Transitive?
Let $\circ$ = "is a brother of"
. . (The first is male, and both parties are children of the same parents.)
Reflexive: . $x \circ x$
. . $x\text{ is a brother of himself . . . True.}$
Symmetric: . $\text{If }x \circ y\text{, then }y \circ x$
. . $\text{If }x\text{ is a brother of }y\text{, then }y\text{ is a brother of }x.$
Not necessarily true . . . $y\text{ could be a {\bf sister} of }x.$
Transitive: . $\text{If }x \circ y\text{ and }y \circ z\text{, then }x \circ z$
. . $\text{If }x\text{ is a brother of }y\text{ and }y\text{ is a brother of }z\text{ , then }x\text{ is a brother of }x\text{ . . . True}$
The relationship $\circ$ is reflective and transitive, but not symmetric.
Let $\star$ = "knows" (is acquainted with).
Reflexive: . $x \star x$
. . $x\text{ knows himself.}\quad\hdots$ .True.
Symmetric: . $\text{If }x \star y\text{, then }y \star x$
. . $\text{If }x\text{ knows }y\text{, then }y\text{ knows }x\quad\hdots$ .True
Transitive: . $\text{If }x \star y\text{ and }y \star z\text{, then }x \star z$
. . $\text{If }x\text{ knows }y\text{ and }y\text{ knows }z\text{, then }x\text{ knows }z.\quad\hdots$ .not necessarily true
The relationship $\star$ is reflexive and symmetric, but not transitive.
Thanks for the fast reply, but the one I still have the most trouble with is finding one that is not reflexive. For not symmetric, I was thinking of using $\leq$. The problem I have with non
reflexive is if we say the relation is !, and we have x!y and y!x, if x!y, and y!z, then x!z. But if we look at those two, we can use the symmetric relation in the transitive one and say if x!y,
and y!x, then x!x, which proves reflexiveness.
Is the brother of is not reflexive
Let $\circ$ = "is a brother of"
. . (The first is male, and both parties are children of the same parents.)
Reflexive: . $x \circ x$
. . $x\text{ is a brother of himself . . . True.}$
This is not standard English usage. If a mother and father have three children, all male, and you ask one of them "How many brothers do you have?", he will answer "Two" not "Three".
Symmetric and transitive but not reflexive
Thanks for the fast reply, but the one I still have the most trouble with is finding one that is not reflexive. For not symmetric, I was thinking of using $\leq$. The problem I have with non
reflexive is if we say the relation is !, and we have x!y and y!x, if x!y, and y!z, then x!z. But if we look at those two, we can use the symmetric relation in the transitive one and say if x!y,
and y!x, then x!x, which proves reflexiveness.
The last sentence is fallacious. Maybe there is an x for which x!y is false for all y. Example: the usual definition of "divides" on all integers requires that m|n if there is a unique integer q
for which n = qm. The result is that m|m for every nonzero integer, but 0 does not divide 0, so the relation is not reflexive.
An easier way to come up with counterexamples is to look at small finite relations. For this problem, try the relation R on {1,2} defined by 2R2 (and nothing else).
Last edited by SixWingedSeraph; May 20th 2009 at 02:27 PM. Reason: Added title
More about symmetric and transitive but not reflexive
This sort of relation is called a "partial equivalence relation" and is a big deal in theoretical computer science. You can start learning about it from Wikipedia here:
Partial equivalence relation - Wikipedia, the free encyclopedia
October 22nd 2008, 12:45 PM #2
Super Member
May 2006
Lexington, MA (USA)
October 22nd 2008, 03:36 PM #3
Oct 2008
May 20th 2009, 02:12 PM #4
May 2009
Saint Paul, Minnesota, USA
May 20th 2009, 02:19 PM #5
May 2009
Saint Paul, Minnesota, USA
May 20th 2009, 02:24 PM #6
May 2009
Saint Paul, Minnesota, USA | {"url":"http://mathhelpforum.com/discrete-math/55140-reflexive-transitive-symmetric.html","timestamp":"2014-04-21T10:06:24Z","content_type":null,"content_length":"51089","record_id":"<urn:uuid:38a29a5c-5cb2-4468-b4f4-ba417a6bf903>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00161-ip-10-147-4-33.ec2.internal.warc.gz"} |
Supersaturation problem for color-critical graphs
The Library
Supersaturation problem for color-critical graphs
Pikhurko, Oleg and Yilma, Z. B. Supersaturation problem for color-critical graphs.
Full text not available from this repository.
The \emph{Tur\'an function} $\ex(n,F)$ of a graph $F$ is the maximum number of edges in an $F$-free graph with $n$ vertices. The classical results of Tur\'an and Rademacher from 1941 led to the study
of supersaturated graphs where the key question is to determine $h_F(n,q)$, the minimum number of copies of $F$ that a graph with $n$ vertices and $\ex(n,F)+q$ edges can have. We determine $h_F(n,q)$
asymptotically when $F$ is \emph{color-critical} (that is, $F$ contains an edge whose deletion reduces its chromatic number) and $q=o(n^2)$. Determining the exact value of $h_F(n,q)$ seems rather
difficult. For example, let $c_1$ be the limit superior of $q/n$ for which the extremal structures are obtained by adding some $q$ edges to a maximal $F$-free graph. The problem of determining $c_1$
for cliques was a well-known question of Erd\H os that was solved only decades later by Lov\'asz and Simonovits. Here we prove that $c_1>0$ for every {color-critical} $F$. Our approach also allows us
to determine $c_1$ for a number of graphs, including odd cycles, cliques with one edge removed, and complete bipartite graphs plus an edge.
Actions (login required) | {"url":"http://wrap.warwick.ac.uk/49776/","timestamp":"2014-04-16T07:48:20Z","content_type":null,"content_length":"31832","record_id":"<urn:uuid:8a17b786-c0ca-4d10-8799-b26dbd4f530d>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00186-ip-10-147-4-33.ec2.internal.warc.gz"} |
Kirsten Garnier-Frudden's Math Classes
Helpful Math Websites:
Kahn AcademyKahn Academy offers hundreds of YouTube Videos in multiple subjects. If there is a specific concept you need extra practice on just type it into the search bar and you will be able to
watch a video lecture as well as practice the concept. This is a great website to also become familiar with the new common core standards.
Teacher TubeTeacher Tube is just like YouTube, but has many educational videos on a variety of subjects.YayMath
This site has multiple videos from Algebra 1, Geometry, and Algebra 2. Most videos also have an online quiz and worksheet attached. Videos are created by a high school math teacher who makes topics
interesting by dressing up in costumes each day. It's quite hilarious!
Common Core Standards | {"url":"http://coremath.weebly.com/","timestamp":"2014-04-16T04:12:12Z","content_type":null,"content_length":"17162","record_id":"<urn:uuid:5c55ebe8-e2ad-4b36-b8d4-93065ff5a383>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00630-ip-10-147-4-33.ec2.internal.warc.gz"} |
You are running into precision limits. The numbers you are dealing with are
very small and your doing a lot of multiplications and trig calculations to
create intermediate results. Each intermediate result has to be rounded to
fit into the 8 byte double. The fact that the robotstix doesn't give you
results below 1.6 nautical miles shows that you're hitting these limits and
your intermediate results are being rounded.
Additionally, while the Amtel double is 8 bytes, its only an 8-bit
microcontroller vs. the 32/64-bit desktop you have. If you're pushing
precision limits this is probably what gives the difference between the
robostix and your desktop. As for the differences between your final
results, (2.9 NM vs 2.5 NM) the differences are only big because the units
are nautical miles. Considering how small your numbers are before the final
calculation its not surprising that they could differ by a few tenths from
machine to machine.
To get around this there are a few things you can do. The first it to try
to rearrange the math to either cancel out operations that aren't necessary
or to avoid creating results that can't be represented. For example, I
could do this:
a = 1 / 3;
b = 21 * a;
however the calculation in 'a' will be 0.3333..., an number which simply
cannot be represented on any computer. However, if I combined those steps
to divide by 3 directly and avoid storing an intermediate value:
b = 21 / 3;
there is a defined, finite result that can be easily stored with no loss of
Another thing you can do is work in the smallest units possible. For
example, if I wanted to write a program that calculated the position of a
spacecraft, I might choose AU (1 astronomical unit == distance between the
earth and the sun) as my unit of measure. This gives me good results when
calculating my distance to nearby planets, (e.g. Mars is 1.29 AU away)
however if I'm trying to dock with a satelite this unit way to large, (e.g.
"we are 0.000000000006684 AU and closing").
In your case if you care about tenths of miles, then you probably want to
work in a smaller unit like feet or inches. I don't know how this changes
your calculations but working with a smaller unit can help you avoid
amplifying the 'noise' of precision loss.
Finally, what you're doing has probably already been done many times
before. Try to find a real code example (non-textbook) of this calculation
someplace where somebody has already considered the limitations of computer
On 5/12/07, Jon Keller <jon@...> wrote:
> I've asked this same question around some of the avr lists but am having
> troubles getting an answer so I thought I may as well try this list as
> well, sorry if its a little off topic.
> I'm trying to calculate the distance between two gps coordinates on the
> robostix and am running into some weird issues. I'm using the code below
> for the calc. When I run this same code within perl and C on my standard
> workstation I get the correct values, which for the coordinates below are
> "NMiles: 2.5005995199488 SMiles: 2.87763854023694 SKm: 4.63111031889908"
> However when running the same code over the robostix I get the following
> results.
> "NMiles: 2.9072017670 SMiles: 3.3455481529 SKm: 5.3841376305"
> Obviously theres quite a big difference in the results, could anyone
> shed some light on why? Another very strange problem I've noticed is
> that if I make the two coords closer together, the results on the
> robostix stop decreasing when they hit a certain value, for instances
> the nautical miles result stops at 1.678 no matter how close the coords
> get to each other. Any help is greatly appreciated, Cheers.
> -Jon
> double lon1 = 174.681821;
> double lon2 = 174.629821;
> double lat2 = deg2rad(-36.728889);
> double lat1 = deg2rad(-36.728889);
> double dlon = lon2 - lon1;
> if (dlon > 180) {
> dlon -= 360;
> }
> if (dlon < -180) {
> dlon += 360;
> }
> double rdlon = deg2rad(dlon);
> double cosdist = sin(lat1) * sin(lat2) + cos(lat1) * cos(lat2) *
> cos(rdlon);
> double rdist = acos(cosdist);
> double nautical_miles = rad2deg(rdist) * 60;
> double statue_miles = fabs((rad2deg(rdist) * 60.) * 1.15077945);
> double statue_km = fabs((rad2deg(rdist) * 60.) *
> 1.8520000031807997);
> print_double("Distance - NMiles: %.2lf SMiles: %.2lf SKm:
> %.2lf\n", nautical_miles, statue_miles, statue_km);
> -------------------------------------------------------------------------
> This SF.net email is sponsored by DB2 Express
> Download DB2 Express C - the FREE version of DB2 express and take
> control of your XML. No limits. Just data. Click to get it now.
> http://sourceforge.net/powerbar/db2/
> _______________________________________________
> gumstix-users mailing list
> gumstix-users@...
> https://lists.sourceforge.net/lists/listinfo/gumstix-users | {"url":"http://sourceforge.net/p/gumstix/mailman/message/16748978/","timestamp":"2014-04-20T08:16:58Z","content_type":null,"content_length":"27713","record_id":"<urn:uuid:5c95dacb-b7ab-496d-90b8-e5d39651dc0c>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00611-ip-10-147-4-33.ec2.internal.warc.gz"} |
Does a photon observe other photons moving past it at the speed of light?
They say that the speed of light is the same for all inertial observers, but what if the observer is a particular photon within a laser beam? Each photon in a laser beam would see all the other
photons passing it at speed c, therefore the laser beam would instantly break up. How do you solve this conundrum?
The answer is that a photon can't count as an observer. Time slows down for objects that move at close to the speed of light. For an object moving *at* the speed of light, time would grind to a halt.
So how fast you clock something moving past you if you can't measure time turns out to be a nonsensical question.
Still Curious?
Get More 'Curious?' with Our New PODCAST:
Related questions:
More questions about The Theory of Relativity: Previous | Next
How to ask a question:
If you have a follow-up question concerning the above subject, submit it here. If you have a question about another area of astronomy, find the topic you're interested in from the archive on our site
menu, or go here for help.
Main Page | About Us | For Teachers | Astronomy Links | Ask a Question | View a Random Question | Our Podcast
Table 'curious.Referrers' doesn't existTable 'curious.Referrers' doesn't exist
URL: http://curious.astro.cornell.edu/question.php?number=574
This page has been accessed 20748 times since September 16, 2003.
Last modified: September 16, 2003 11:28:20 PM
Legal questions? See our
copyright, disclaimer and privacy policy
Ask an Astronomer
is hosted by the
Astronomy Department
Cornell University
and is produced with
Warning: Your browser is misbehaving! This page might look ugly. (Details) | {"url":"http://curious.astro.cornell.edu/question.php?number=574","timestamp":"2014-04-21T07:39:25Z","content_type":null,"content_length":"12136","record_id":"<urn:uuid:7438c98f-f058-4a34-9a6f-af35ea69f7f9>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00377-ip-10-147-4-33.ec2.internal.warc.gz"} |
Physics: Buoyancy and Archimedes' Principle Video | MindBites
Physics: Buoyancy and Archimedes' Principle
About this Lesson
• Type: Video Tutorial
• Length: 12:53
• Media: Video/mp4
• Use: Watch Online & Download
• Access Period: Unrestricted
• Download: MP4 (iPod compatible)
• Size: 138 MB
• Posted: 07/01/2009
This lesson is part of the following series:
Physics (147 lessons, $198.00)
Physics: Fluids (13 lessons, $13.86)
Physics: Fluid Statics (9 lessons, $6.93)
This lesson was selected from a broader, comprehensive course, Physics I. This course and others are available from Thinkwell, Inc. The full course can be found at http://www.thinkwell.com/student/
product/physics. The full course covers kinematics, dynamics, energy, momentum, the physics of extended objects, gravity, fluids, relativity, oscillatory motion, waves, and more. The course features
two renowned professors: Steven Pollock, an associate professor of Physics at he University of Colorado at Boulder and Ephraim Fischbach, a professor of physics at Purdue University.
Steven Pollock earned a Bachelor of Science in physics from the Massachusetts Institute of Technology and a Ph.D. from Stanford University. Prof. Pollock wears two research hats: he studies
theoretical nuclear physics, and does physics education research. Currently, his research activities focus on questions of replication and sustainability of reformed teaching techniques in (very)
large introductory courses. He received an Alfred P. Sloan Research Fellowship in 1994 and a Boulder Faculty Assembly (CU campus-wide) Teaching Excellence Award in 1998. He is the author of two
Teaching Company video courses: “Particle Physics for Non-Physicists: a Tour of the Microcosmos” and “The Great Ideas of Classical Physics”. Prof. Pollock regularly gives public presentations in
which he brings physics alive at conferences, seminars, colloquia, and for community audiences.
Ephraim Fischbach earned a B.A. in physics from Columbia University and a Ph.D. from the University of Pennsylvania. In Thinkwell Physics I, he delivers the "Physics in Action" video lectures and
demonstrates numerous laboratory techniques and real-world applications. As part of his mission to encourage an interest in physics wherever he goes, Prof. Fischbach coordinates Physics on the Road,
an Outreach/Funfest program. He is the author or coauthor of more than 180 publications including a recent book, “The Search for Non-Newtonian Gravity”, and was made a Fellow of the American Physical
Society in 2001. He also serves as a referee for a number of journals including “Physical Review” and “Physical Review Letters”.
About this Author
2174 lessons
Founded in 1997, Thinkwell has succeeded in creating "next-generation" textbooks that help students learn and teachers teach. Capitalizing on the power of new technology, Thinkwell products prepare
students more effectively for their coursework than any printed textbook can. Thinkwell has assembled a group of talented industry professionals who have shaped the company into the leading provider
of technology-based textbooks. For more information about Thinkwell, please visit www.thinkwell.com or visit Thinkwell's Video Lesson Store at http://thinkwell.mindbites.com/.
Thinkwell lessons feature a star-studded cast of outstanding university professors: Edward Burger (Pre-Algebra through...
Recent Reviews
~ mventura
~ mventura
I've got two little objects here. They weigh about the same. I'm going to put them both into this glass of colored water. And when I put this one in, it floats. And when I put this one in, it sinks.
So, what's the story here? They weighed the same. Everybody has a pretty decent intuition about this, although you usually think that light things float and heavy things sink, and that's not what's
going on here. What's going on is closely related to that. It's that the object, which was less dense, is floating and the object, which was more dense, is sinking.
So I'd like to quantify that, and in doing so, we're going to come up with a formula, which really describes the forces of fluids on objects in fluids. And that's really important in lots of physical
applications if you want to understand the physics of boats or icebergs or any situation whatsoever where you have one object inside of a fluid.
So let's try to analyze what's going on here, and I'm just going to do it using Newton's laws. I've got some fluid. And let me imagine that I've got an object with mass, M sub object (M[O]),
submerged entirely. And it's got height (h) and area (a). It's just a little cube. And let me think about the forces acting on it. It's in a fluid, so there's pressure. And the pressure on the sides
is equal - and of course, I'm pointing my fingers. Pressure doesn't point. Forces point. The pressure is a number on these sides and the force is perpendicular to the object. So there are equal
forces on the side. There are not equal forces on top and bottom - not arising from the fluid only - not just arising from the pressure. Because, remember, this object has weight.
And so let's just try to separate out the forces arising from the fluid and the forces arising from gravity. Those are two distinct sets of forces. So, what are the forces arising only from the
fluid? Let me save gravity for later. We'll add that in. But right now, I'm just going to consider the upward force, which arises from the pressure below. And that's just pressure is force per area.
So force is pressure times area. So we're going to get P[1]A down, where P[1] is the pressure at the top. And P[2]A up, where P[2] is the pressure at the bottom. So this is level 1 and the bottom is
level 2. Now, I know how the pressure in a fluid is related to depth. Pressure at level 2 is pressure at level 1 plus times g times h. And let's be careful, because there are two densities in this
What's the density here? It's not the density of the object. I'm talking about the pressure in the fluid. And so that's the density of the fluid. And let me be very careful when I write this down.
And I have P[2] is P[1 ]plus the density of the fluid times g times h, and all that times area . So these are the two forces - the down force and the up force - from the fluid. So if I were just to
add these as vectors - which really means taking this one and subtracting that one, because they're in opposite directions - I will have the net force of the fluid on this object. You'll notice, I
have a P[1]A[ ]up and I'm subtracting this P[1]A down. So I end up with a net upward force from the fluid along - of [fluid] g. And then what's left over - h times area - that's just the volume of
this object.
So let me give that net force of the fluid alone on the object a name. I'm going to call it the buoyant force, because the water, or the fluid, is somehow buoying up this object. Now I've got to be
very careful when I write this formula down. This is what I just derived. And I derived it for an object, which was entirely submerged. If you want to deal with the most general case, you could have
an object, which is partially submerged. And then, what's the appropriate volume? Just think through this proof and convince yourself it's only the volume of the object under the fluid level. So
people called that V[displaced]. V[displaced] represents the amount - you know what? That's a funny word. In this case, if you stick an object with mass, M[O], and volume, V[O], in the fluid, fluids
are incompressible. So you're shoving some water aside in order to put the object in there. You're displacing some of the water. And how much are you displacing? In this case, you are simply
displacing the volume of the object. And if you only put it halfway under, you're only displacing half of the volume of the object. So I'm trying to be careful to clearly write the volume of water
displaced or fluid displaced, rather than the volume of the object itself, because this is the correct formula.
This was first discovered over 2,000 years ago by Archimedes, in Greece - a very brilliant fellow. Archimedes phrased it in words rather than in this formula. He said the buoyant force on an object
will be the weight of the fluid which is displaced. So that's the same thing, because what's the weight of something? It's the mass times g. So what Archimedes is saying is the mass of the fluid
displaced is density of the fluid times the volume of the fluid displaced. So it's a neat idea. It's a really useful formula. Because any time you put an object into a fluid - look what's mattering
here is not what the object is made of. It's what the fluid is. It's the fluid which is pushing on the object, and that's what this buoyant force is.
So having this formula, now we can answer the question that I started off with. How do you decide if an object is going to sink or not? If it's going to sink, the force of gravity on it, which I've
been neglecting up to now, is going to be bigger than the buoyant force. There are only two forces. Buoyant force is the net force up. Gravity is the net force down. And so if gravity wins, it's
going to sink. Now let's just think about these formulas. The force of gravity on the object is the mass of the object times g. And the mass of the object, M[O], is the density of the object times
the volume of the object times g. So if it's going to sink, that's got to be greater than the buoyant force, which is the density of the fluid times g times the volume displaced. And if we're
sinking, how much water are we displacing - how much fluid? Well, if it's sinking, then it's totally underwater, and so it's displacing the entire volume of the object. So in this case, it is safe to
go ahead and use V[displaced] = V[O]. And then we see the g's cancel and the V[O] cancels. The condition for sinking is density of object has to be greater than the density of the fluid in which it's
immersed - very simple criterion for whether something sinks or floats. And it is not a question of who's heavier. You can be heavier, but still have less density. That depends on both your weight
and your - I should be careful. Density is not weight. It's mass per volume. So you can have a massive object, and as long as it's got a big volume, mass per volume can be small and it will float.
A lovely example, which everybody loves - a helium balloon - why is this floating? Well, the simple answer is because the density of the balloon must be less than the density of the air and so the
buoyant force on the balloon is larger than the downward force. If you thought about this without using Archimedes' principle, you might use the following logic. A balloon is made of rubber, and
before it's inflated, the rubber is massive and gravity pulls it down to the table. Rubber of a balloon doesn't just float. Now I fill it up with helium. And helium is massive. It's got a density, so
I'm just adding mass to that balloon. So I started off with something that was massive. I added mass. Why the heck does it float? And the answer is, yes, indeed, there's plenty of mass here. But
there's less mass in this volume than there would be mass of air in this volume. And so up it goes.
Archimedes invented this principle in the context of a very famous problem and the problem was this. Let's just focus your attention on the picture here. There's a crown. Archimedes was in Greece and
he was working with a king of a local city-state leader who had received a crown as a gift and wanted to know whether it was gold or whether they were lying to him and just pretending it was gold.
And he didn't want to melt it down. He wanted to figure out, without destroying this beautiful crown, whether it was made of gold. This is 2,200 years ago - primitive technology. So how are you going
to figure out whether something's made of gold? You know what the density of gold is - even 2,200 years ago. So if you could just figure out the mass and the volume, you divide them, and you'll know
what the density is.
Well, it's easy to find the mass. You just hang it from a scale. So that's little force diagram is here to show you. Here's the crown. There's a tension in the string up and the weight of the crown
down and g - that's it. And so T[1] - Mg = 0. So this little scale here is going to read the weight of the crown, because the scale reads the tension in the rope. So it's easy to figure out the
weight, and you divide by g. So you do know the mass of the crown. How are you going to find the volume of a funny-shaped crown? This was the puzzle, 2,200 years ago, which Archimedes solved. The
story goes that he sat down into his tub one day, pensing over this little problem, and the water spilled out, because it was filled to the brim. And he realized that this was a clever way of
measuring volume. If you displace water, you're going to be able to figure out the volume of the object you've put in the water by just looking to see how much is the volume of the water you've
displaced. So you could have done that with the crown - just put it into a pot filled to the brim with water and seen how much volume of water spilled out and that would tell you the volume of the
crown. In principle, that's all you need to do.
In practice, that's a tough experiment, because there's this funny meniscus thing with water and it's got to be filled to the brim and then it's dribbling all over the place. It's just hard to
measure that volume of water displaced. So Archimedes came up with a really clever idea, and what he did was he just weighed it and then he immersed it in a bucket of water. It's now totally
underwater. So the force diagram has changed. There's still Mg down. You can't hide from gravity, even when you're underwater. There's still going to be some tension, but it's less. P[2] - the new
tension in the string - is less, because there's a new force in the problem. You have an object submerged in water. There's a buoyant force. F[B]. And so if you measure the tension - that's easy;
that's the scale reading - and you already knew Mg, you can deduce the buoyant force. And remember, what's the formula for the buoyant force? Density of fluid, which we know, times g times the volume
displaced - that's the volume of the crown. So this is actually a very accurate measure of the volume of the crown and now you can test for the density.
The story goes that Archimedes was so excited by coming up with this idea, that he jumped out of the tub stark naked and ran down the streets yelling, "Eureka! Eureka!" I've found it. I've found it.
Archimedes principle. It's old. It's derivable, by Newton's second law. And it's very useful. It tells you what is the force of a fluid on an object? We don't have to draw all these pressures and
forces and everything anymore. We just know immediately what's the buoyant force. We can use this formula.
Let me just show you one last demo which I just think is remarkable. We take a bucket of water here. And I'm going to grab an ordinary 10-pound bowling ball. This is a real bowling ball. And what do
you suppose is going to happen if I put it in the water? The whole question is: Is the density of a bowling ball greater than or less than the density of water? And the amazing thing is that water -
1000 kg/m^3 - is very dense. In fact, it's denser than a bowling ball. Isn't it just so great? Bowling balls float because they're less dense than water.
Fluid Statics
Buoyancy and Archimedes' Principle Page [3 of 3]
Get it Now and Start Learning
Embed this video on your site
Copy and paste the following snippet:
Link to this page
Copy and paste the following snippet: | {"url":"http://www.mindbites.com/lesson/4575-physics-buoyancy-and-archimedes-principle","timestamp":"2014-04-18T23:56:57Z","content_type":null,"content_length":"66349","record_id":"<urn:uuid:99ea4a19-4387-42d2-b697-44a9d05650c1>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00593-ip-10-147-4-33.ec2.internal.warc.gz"} |
arrangement puzzle
April 11th 2007, 10:00 AM #1
Junior Member
Feb 2007
arrangement puzzle
Nine men?Cantor, Cauchy, Descartes, Euler, Fermat, Gauss, Leibniz, Newton, and Pascal?play the nine positions on a baseball team (pitcher, catcher, first base, second base, third base, short
stop, right field, center field, and left field). Determine the position of each player using the following information:
a. Descartes and Cauchy each lost $20 playing poker with the pitcher.
b. Fermat is taller than Newton and shorter than Gauss, but each of these weighs more than the first baseman.
c. The third baseman lives across the corridor from Pascal in the same apartment building.
d. Cantor and the outfielders play bridge in their spare time.
e. Gauss, Cantor, Cauchy, the right fielder, and the center fielder are bachelors; the rest are married.
f. Of Euler and Newton, one plays an outfield position.
g. The right fielder is shorter than the center fielder.
h. The third baseman is the brother of the pitcher?s wife.
i. Leibniz is taller than the first-, second-, and third-basemen, the shortstop, the pitcher and catcher, except for Pascal, Descartes, and Euler.
j. The third baseman, the shortstop, and Fermat made $300 betting on horse races.
k. The second baseman is engaged to Cantor?s sister.
l. The second baseman beat Pascal, Cauchy, Fermat, and the catcher at cards.
m. Euler lives in the same building as his own sister but dislikes the catcher.
n. Euler, Cauchy, and the shortstop lost $200 each playing slots.
o. The catcher and his wife have three daughters, and the third basemen and his wife have two sons, but Leibniz is being sued for divorce.
Nine men?Cantor, Cauchy, Descartes, Euler, Fermat, Gauss, Leibniz, Newton, and Pascal?play the nine positions on a baseball team (pitcher, catcher, first base, second base, third base, short
stop, right field, center field, and left field). Determine the position of each player using the following information:
a. Descartes and Cauchy each lost $20 playing poker with the pitcher.
b. Fermat is taller than Newton and shorter than Gauss, but each of these weighs more than the first baseman.
c. The third baseman lives across the corridor from Pascal in the same apartment building.
d. Cantor and the outfielders play bridge in their spare time.
e. Gauss, Cantor, Cauchy, the right fielder, and the center fielder are bachelors; the rest are married.
f. Of Euler and Newton, one plays an outfield position.
g. The right fielder is shorter than the center fielder.
h. The third baseman is the brother of the pitcher?s wife.
i. Leibniz is taller than the first-, second-, and third-basemen, the shortstop, the pitcher and catcher, except for Pascal, Descartes, and Euler.
j. The third baseman, the shortstop, and Fermat made $300 betting on horse races.
k. The second baseman is engaged to Cantor?s sister.
l. The second baseman beat Pascal, Cauchy, Fermat, and the catcher at cards.
m. Euler lives in the same building as his own sister but dislikes the catcher.
n. Euler, Cauchy, and the shortstop lost $200 each playing slots.
o. The catcher and his wife have three daughters, and the third basemen and his wife have two sons, but Leibniz is being sued for divorce.
Cantor != cf, rf, lf, sb, p, c, tb
Cauchy != p, cf, rf, lf, sb, ss, c, tb
Descartes != p, fb, sb, tb, ss, c, (of)
Euler != fb, sb, tb, ss, p, c, (of)
Fermat != fb, tb, ss, sb
Gauss != fb, cf, rf, lf, p, c, tb
Leibniz != fb, sb, tb, ss, p, c, (of)
Newton != fb, cf, rf, lf
Pascal != tb, fb, sb, ss, p, c
pitcher =
catcher =
first base = cauchy
second base = gauss
third base =
short stop = cantor
right field = leibniz, euler, descartes
center field = leibniz, euler, descartes
left field = leibniz, euler, descartes
There might be a mistake in here. Apparently Leibniz, Euler, Descartes AND Pascal can only be out fielders. That's 4 people for 3 positions.
May 3rd 2007, 04:05 PM #2 | {"url":"http://mathhelpforum.com/math-challenge-problems/13579-arrangement-puzzle.html","timestamp":"2014-04-16T08:36:59Z","content_type":null,"content_length":"35222","record_id":"<urn:uuid:325c14ff-37f6-432f-a168-a38903d257c9>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00039-ip-10-147-4-33.ec2.internal.warc.gz"} |
Measurement & Experimentation Laboratory
Measurement & Experimentation Laboratory
This course will serve as your introduction to working in an engineering laboratory. You will learn to gather, analyze, interpret, and explain physical measurements for simple engineering systems in
which only a few factors need be considered. This experience will be crucial to your success in analyzing more complicated systems in subsequent coursework and in the practice of mechanical
We frequently encounter measurement systems in our everyday lives. Consider the following examples:
1. The many gauges found on the control panel of a motor vehicle indicate vehicle speed, engine coolant temperature, transmission setting, cabin temperature, engine speed, and oil
pressure—amongst many other measurements.
2. A routine visit to a physician often entails several measurements of varying complexity—internal temperature, blood pressure, internal appearance, heart rate, respiration rate, and tissue
texture, amongst many, many more.
3. The experienced cook may use several measurements to successfully “cook until done”—for example, he or she might measure internal temperature, external coloration, external temperature and
exposure time, internal coloration, aroma, and texture.
Any one of these measurement systems may require substantial attention to detail. Consider the elaborate ritual of procedure that occurs next time you have your blood pressure measured in a routine
physical examination. Or perhaps observe the careful baker measuring the temperature in the final stages of baking. You might ask: “What type of thermometer is used? How large is the probe? What
is the response time of the probe (how long do we have to let it equilibrate for each measurement)? What is the accuracy of the measurement? What is the precision of the measurement? Where in the
product are the measurements taken? How many measurements are taken? How are the measurements recorded? And finally, what possible actions might be taken as a result of those measurements?”
The primary purpose of this course is not to make you an expert at all types of measurements important to mechanical engineering, but rather to expose you to the use and analysis of a few such
techniques so that you may readily adapt new techniques as appropriate in subsequent coursework and in your engineering career. Each section of this course is accompanied by hands-on or virtual
exercises. The units of this course are intended to stand alone, but you may find it worthwhile to revisit previous sections and exercises after completing later sections of the course.
Welcome to
ME301: Measurement & Experimentation Laboratory
. General information about this course and its requirements can be found below.
Course Designer: Dr. Steve Gibbs Primary Resources:
This course comprises a range of different free, online materials. However, the course makes primary use of the following:
Requirements for Completion:
In order to complete this course, you will need to work through each unit and all of its assigned material. All units build on previous units, so it will be important to progress through the course
in the order presented.
Note that you will only receive an official grade on your Final Exam. However, in order to adequately prepare for this exam, you will need to work through the assessments at the end of each unit in
this course.
In order to
this course, you will need to earn a 70% or higher on the Final Exam. Your score on the exam will be tabulated as soon as you complete it. If you do not pass the exam, you may take it again.
Time Commitment:
This course should take you a total of approximately 111 hours to complete. Each unit includes a time advisory that lists the amount of time you are expected to spend on each subunit and assignment.
These time advisories should help you plan your time accordingly. It may be useful to take a look at the time advisories before beginning this course in order to determine how much time you have over
the next few weeks to complete each unit. Then, you can set goals for yourself. For example, Unit 1 should take you approximately 19 hours to complete. Perhaps you can sit down with your calendar and
decide to complete Subunit 1.1 (a total of 4 hours) on Monday night, Subunit 1.2 (a total of 6 hours) on Tuesday and Subunit 1.3 and 1.4 (3 hours) on Wednesday night, etc.
It is extremely important that you give each assignment the amount of reading and review necessary to grasp the main points and lines of enquiry. Also, on completing the assessments, take a moment to
consider how the materials you have just studied relate to the topics covered in previous sections of the course.
This course features a number of Khan Academy™ videos. Khan Academy™ has a library of over 3,000 videos covering a range of topics (math, physics, chemistry, finance, history and more), plus over
300 practice exercises. All Khan Academy™ materials are available for free at
Upon successful completion of this course, the student will be able to:
• Interpret and use scientific notation and engineering units to describe physical quantities
• Present engineering data and other information in graphical and/or tabular format
• Use automated systems for data acquisition and analysis for engineering systems
• Work in teams for experiment design, data acquisition, and data analysis
• Use elementary concepts of physics to analyze engineering situations and data
• Summarize and present experimental design, implementation, and data in written format
• Use new technology and resources to design and perform experiments for engineering analysis
In order to take this course you must:
√ Have access to a computer.
√ Have continuous broadband Internet access.
√ Have the ability/permission to install plug-ins or software (e.g., Adobe Reader or Flash).
√ Have the ability to download and save files and documents to a computer.
√ Have the ability to open Microsoft files and documents (.doc, .ppt, .xls, etc.).
√ Be competent in the English language.
√ Have read the
Saylor Student Handbook.
Preliminary Information
• Introductory Materials
Prior to working through the bulk of this course, please spend some time acquainting yourself with the subject of this course by using the following resources.
Time Advisory show close
□ Reading: Simon-Fraser University: Stephen Lower’s Chem1 General Chemistry Virtual Textbook: “Getting Started: Stuff You Should Know Before Delving Too Far into Chemistry”
Link: Simon-Fraser University: Stephen Lower’s Chem1 General Chemistry Virtual Textbook: “Getting Started: Stuff You Should Know Before Delving Too Far into Chemistry” (PDF)
Instructions: Please skim the preliminary chapter of this text in order to familiarize yourself with the common measurements and systems of units used. Pay particular attention to Sections 3:
“Energy, Heat, and Temperature: An Introduction,” 4: “Units and Dimensions,” and 5: “The Meaning of Measure.” You will learn more about the details of the use of such units in later sections
of the course.
Terms of Use: This work is licensed under a Creative Commons Attribution-NonCommercial 2.5 Generic License. It is attributed to Stephen Lower and can be found in its original form here.
□ Reading: National Institute of Standards and Technology (NIST): Dr. W. J. Youden’s “Experimentation and Measurement”
Link: National Institute of Standards and Technology (NIST): Dr. W. J. Youden’s “Experimentation and Measurement”(PDF)
Also available in:
Instructions: Click the link for the PDF Experimentation and Measurement under the heading “Calibration Related Publications.” Read the Forward, the Preface, and the first three Chapters.
This is friendly, relaxing reading. You may be concerned that the material appears out-of-date, but part of the purpose of the reading is to acquaint you with some recent historical context
for experiment and measurement techniques.
Terms of Use: This material is in the public domain.
Expand All Resources Collapse All Resources
□ Unit 1: Scientific Notation, Data Analysis, and Experimental Error
This unit consists of a review of some basic concepts you may remember from courses in mathematics and experimental science. You may skim through the material, but you will need to be precise
in later work about the nomenclature used for reporting errors and statistics. The reference immediately below is a handbook for engineering statistics; it is not meant to be read from start
to finish but rather to be used as a reference for specific problems at hand. You may wish to familiarize yourself with the nomenclature and organization of the handbook.
Time Advisory show close
Learning Outcomes show close
☆ Reading: NIST/SEMATECH’s e-Handbook of Statistical Methods (2010)
Link: NIST/SEMATECH’s e-Handbook of Statistical Methods(2010) (PDF)
Also available in:
Instructions: Use this handbook as a resource throughout the course. Peruse the introductory material (“How To Use this Handbook” and “Tools and Aids”) at this stage in order to
facilitate later use.
Terms of Use: This material is in the public domain.
□ 1.1.1 Scientific Notation
□ 1.1.2 Significant Figures
☆ Reading: Connexions: Sunil Kumar Singh’s “Significant Figures”
Link: Connexions: Sunil Kumar Singh’s “Significant Figures” (PDF)
Also available in:
EPub Format
Instructions: Read these notes and pay particular attention to the effect of mathematical operations upon significant figures. For example, consider how many significant figures might be
in the result of the operation 1.23(4.4/6,873 +2.0).
Terms of Use: This work is licensed under a Creative Commons Attribution 2.0 Generic License. It is attributed to Sunil Kumar Singh and can be found in its original form here.
□ 1.2.1 Introduction to Statistics
☆ Reading: National Institute of Standards and Technology (NIST): Dr. W. J. Youden’s Experimentation and Measurement
Link: National Institute of Standards and Technology (NIST): Dr. W. J. Youden’s “Experimentation and Measurement”(PDF)
Also available in:
Instructions: Click the link for the PDF Experimentation and Measurement under “Calibration Related Publications.” Read Chapters 4 (Typical Collections of Measurements) and 5 (Mathematics
of Measurement).
Terms of Use: This material is in the public domain.
□ 1.2.2 Mean
☆ Reading: NIST/SEMATECH’s e-Handbook of Statistical Methods (2010): “Measures of Location”
Link: NIST/SEMATECH’s e-Handbook of Statistical Methods (2010): “Measures of Location” (PDF)
Also available in:
Instructions: Read the linked section above on descriptors of averages. Note that there are other descriptors (besides the mean and median) that may be useful. As you read, you may wish
to consider hypothetical cases in which the median might be more useful than the mean.
Terms of Use: This material is in the public domain.
□ 1.2.3 Variance and Standard Deviation
☆ Reading: NIST/SEMATECH’s e-Handbook of Statistical Methods (2010): “Measures of Scale”
Link: NIST/SEMATECH’s e-Handbook of Statistical Methods (2010): “Measures of Scale” (PDF)
Also available in:
Instructions: Read the linked section above, which presents descriptors of width. The most commonly used descriptors are variance and standard deviation.
Terms of Use: This material is in the public domain.
□ 1.2.4 Skewness and Higher Moments
□ 1.3 The Normal or Gaussian Distribution
☆ Reading: NIST/SEMATECH’s e-Handbook of Statistical Methods (2010): “Normal Distribution”
Link: NIST/SEMATECH’s e-Handbook of Statistical Methods (2010): “Normal Distribution” (PDF)
Also available in:
Instructions: Read the linked section on the properties of the Normal or Gaussian distribution. Pay attention to the significance of the mean and standard deviation to the position and
width of the distribution.
Terms of Use: This material is in the public domain.
□ 1.4 Sources of Error
□ 1.5 Error Propagation
□ 1.6 Parameter Estimates and Confidence Intervals
□ 1.7.1 Calculation of Mean, Standard Deviation, and Variance
☆ Reading: College of St. Benedict and Saint John’s University: “Descriptive Statistics”
Link: College of St. Benedict and Saint John’s University: “Descriptive Statistics” (PDF)
Instructions: Read the linked section above. Perform the calculations of statistics as prompted. You may wish to experiment with peculiar distributions exhibiting distinctive shapes (such
as bimodal, triangular, or square distributions) and compare them with other distributions.
Terms of Use: The material above has been reposted with permission for educational use by Thomas W Kirkman. It can be viewed in its original form here.
□ 1.7.2 Linear Least-Squares Estimates of Slope and Intercept
☆ Reading: Yale University Department of Statistics’ “Inference in Linear Regression”
Link: Yale University Department of Statistics’ “Inference in Linear Regression” (HTML)
Instructions: This section caters to those with an abstract, mathematical bent. You can develop some appreciation for the concepts involved by skimming the text and studying the graphs as
examples rather than by trying to understand the details of the mathematical analysis. Note that this material may be more appealing after you have some experience using the methods.
Terms of Use: Please respect any copyright and terms of use displayed on the webpage above.
□ 1.7.3 Nonlinear Regression
☆ Web Media: University of Colorado’s PHET Curve-Fitting Demonstration Package: “Curve Fitting”
Link: University of Colorado’s PHET Curve-Fitting Demonstration Package: “Curve Fitting” (Adobe Flash)
Instructions: Drag data points from the data-point-bin onto the graph. Adjust the error bars. Examine the best-fit curves for different situations. You should play with this exercise to
get an intuitive feel for how data and error bars affect the best fit curves. Please also try to fit data with different types of curves (for example, lines and higher order polynomials).
By playing around with this online demo, you can quickly obtain experience that can be easily applied to concrete situations.
Terms of Use: This material is licensed under the GNU General Public License v2.0. It is attributed to PhET Interactive Simulations, University of Colorado and the original version can be
found here.
□ Unit 1 Assessment
☆ Assessment: The Saylor Foundation's "ME301: Unit 1 Quiz"
Link: The Saylor Foundation’s “ME301: Unit 1 Quiz”
Instructions: Please complete the linked assessment.
You must be logged into your Saylor Foundation School account in order to access this exam. If you do not yet have an account, you will be able to create one, free of charge, after
clicking the link. This quiz should require less than one hour to complete.
□ Unit 2: Graphical and Tabular Data Presentation
Standard styles for presenting data in graphical and tabular form have evolved over time for efficiency of data communication. Contemporary computer tools for data presentation allow us to
use such styles in order to generate graphics. Even with these tools, the user must still determine the most appropriate format for conveying the desired information and appropriately
labeling the graphic. This unit will introduce several formats for concise data presentation that will be useful in subsequent work.
The information presented in this unit should supplement your coursework in ME304: Engineering Communication and ME101: Introduction to Mechanical Engineering. For complex graphics, you
should refer to ME101: Introduction to Mechanical Engineering, but one message bears repeating: title and caption all graphics, label all axes, provide a key for all symbols, and specify
whether error bars are one or two standard deviations.
You probably have access to several utilities for creating graphical presentations of data. The list below contains topics that you should explore in your own graphical data analysis utility.
As a start, you may wish to read or refer to one or more of the following. After you have skimmed these resources, you should review them to make sure that you have a grasp of the following
concepts. You should practice using each of the ideas below in a computer environment of your choice.
Time Advisory show close
Learning Outcomes show close
☆ Reading: NIST/SEMATECH’s e-Handbook of Statistical Methods (2010): “A Gallery of Graphical Techniques”
Link: NIST/SEMATECH’s e-Handbook of Statistical Methods (2010): “A Gallery of Graphical Techniques” (PDF)
Also available in:
Instructions: If in need of inspiration for graphical analysis and presentation of data, it may be helpful to refer to the linked material above. Many of the plotting techniques are
useful for very specific experimental situations. You may wish to skim through a few of the descriptions to get a feel for the types of plots that other people have used.
Terms of Use: This material is in the public domain.
☆ Reading: The Evil Tutor: Markus Weichselbaum (University of Western Australia)’s “How Not to Create Graphs and Figures”
Link: The Evil Tutor: Markus Weichselbaum (University of Western Australia)’s “How Not to Create Graphs and Figures” (HTML)
Instructions: Read this (tongue-in-cheek) document. Although many of the tips may seem obvious, it is important to have such a list of common errors in mind when creating a graph for a
document in the late stages of preparation, when all participants are tired and the thinking may not be clear.
Terms of Use: Please respect the copyright and terms of use displayed on the webpage above.
☆ Reading: Prince George’s Community College: S. Sinex and B. Gage’s “Using Excel for Handling, Graphing, and Analyzing Scientific Data”
Link: Prince George’s Community College: S. Sinex and B. Gage’s “Using Excel for Handling, Graphing, and Analyzing Scientific Data” (PDF)
Instructions: Scroll down to about halfway down the page and download the “Excel(2003)” file under “Data Handling and Analysis….” Read this pamphlet and learn to manipulate data in a
spreadsheet in order to create a graph.
Terms of Use: Please respect the copyright and terms of use displayed on the webpage above.
□ Unit 2 Assessment
☆ Assessment: The Saylor Foundation's "ME301: Unit 2 Quiz"
Link: The Saylor Foundation's "ME301: Unit 2 Quiz"
Instructions: Please complete the linked assessment.
You must be logged into your Saylor Foundation School account in order to access this exam. If you do not yet have an account, you will be able to create one, free of charge, after
clicking the link.
□ Unit 3: Electrical Circuits and Measurements
A review of basic circuit principles and measurements is included here for completeness, but you should refer to PHYS102: Introduction to Electromagnetism for a discussion of the fundamental
physics. In this unit, you will review elementary concepts of circuit analysis for utility in signal transduction, conditioning, and measurement.
Note: Much of the material in Units 3 and 4 are interdependent and refer you to discussion available in the resource “All About Circuits”; you are encouraged to peruse that resource in full
at your own pace if you do are not familiar enough with concepts in electrical engineering to understand the material as it is presented here. In addition, many of the sections in “All About
Circuits” contain example problems that you should follow and review for particular emphasis.
Note: There is a review quiz for both Units 3 and 4 at the end of Unit 4.
Time Advisory show close
Learning Outcomes show close
□ 3.1 Standards and Units
☆ Reading: All About Circuits: “Volume 1, Chapter 1: Basic Concepts of Electricity”
Link: All About Circuits: “Volume 1, Chapter 1: Basic Concepts of Electricity” (PDF)
Instructions: Read Chapter 1. You may skim this material if you are already familiar with it, but it is light, non-mathematical reading. You may benefit from the analogy of fluid flow and
electrical flow. To view as a PDF, click the PDF link in the top right corner.
Terms of Use: This material has been released under the terms of the Design Science License.
□ 3.2.1 Resistors
☆ Reading: All About Circuits: “Volume 1, Chapter 2: OHM’s LAW”
Link: All About Circuits: “Volume 1, Chapter 2: OHM’s LAW” (PDF)
Instructions: Read Chapter 2. Again, you may skim the chapter if you are already familiar with the concepts. You may wish to practice with calculations of power, current, voltage, and
resistance. To view as a PDF, click the PDF link in the top right corner.
Terms of Use: This material has been released under the terms of the Design Science License.
□ 3.2.2 Inductors
□ 3.2.3 Capacitors
☆ Reading: All About Circuits: “Volume 2, Chapter 4: Reactance and Impedance – Inductive”
Link: All About Circuits: “Volume 2, Chapter 4: Reactance and Impedance -- Inductive” (PDF)
Instructions: Read Chapter 4. You may find the section on “Capacitor Quirks” mildly amusing. The text does not emphasize the units, but you may wish to look at the size of typical
capacitors you might find in electrical appliances. To view as a PDF, click the PDF link in the top right corner.
Terms of Use: This material has been released under the terms of the Design Science License.
□ 3.3.1 Circuit Analysis
□ 3.3.2 Current Relationships
□ 3.3.3 Voltage Relationships
□ 3.4 Elementary Measurements
□ 3.5.1 Diodes
□ 3.5.2 Signal Amplifiers
□ 3.5.3 Filters
☆ Reading: All About Circuits: “Volume 2, Chapter 8: Filters”
Link: All About Circuits: “Volume 2, Chapter 8: Filters” (PDF)
Instructions: Read the linked section and ask yourself why analog filters are still useful in modern, digital systems. To view as a PDF, click the PDF link in the top right corner.
Terms of Use: This material has been released under the terms of the Design Science License.
□ 3.6.1.1 Voltmeter Usage
□ 3.6.1.2 Ohmmeter Usage
□ 3.6.1.3 Ammeter Usage
□ 3.6.1.4 Other Simple Experiments
□ 3.6.2 Inductors and Capacitors in AC Circuits
□ Unit 4: Computer Assisted Data Acquisition
Current data acquisition methods often employ electronic signal transduction and digital recording for subsequent analysis. Since these methods are so widespread and since errors in
inappropriate implementation may manifest differently than they would if you were to make the same mistake reading a meter, you should understand some of the details of the process and some
of the common artifacts of inappropriate implementation. Consider, for example, the differences between an artifact observed in imperfect digital versus imperfect analog imagery or audio
Note: Much of the material in Units 3 and 4 are interdependent and refer you to material in “All About Circuits”; you are encouraged to peruse that resource in full at your own pace if you
are not familiar enough with concepts in electrical engineering to understand the material as it is presented here.
Time Advisory show close
Learning Outcomes show close
□ 4.1 Analog Signal Processing
☆ Reading: All About Circuits: “Volume 1, Chapter 9: Analog and Digital Signals”
Link: All About Circuits: “Volume 1, Chapter 9: Analog and Digital Signals” (PDF)
Instructions: This section should provide you with enough background information to understand analog signals in general and pneumatic and electrical signals in particular. The analogy
between fluid and electrical systems may appeal to those with practical fluid mechanics experience. To view as a PDF, click the PDF link in the top right corner.
Terms of Use: This material has been released under the terms of the Design Science License.
□ 4.1.1 Signal Waveforms
☆ Reading: All About Circuits: “Volume 2, Chapter 1: Basic AC Theory” and “Volume 2, Chapter 7: Mixed-Frequency Signals”
Link: All About Circuits: “Volume 2, Chapter 1: Basic AC Theory” (PDF) and “Volume 2, Chapter 7: Mixed-Frequency Signals” (PDF)
Instructions: These sections introduce alternating current (AC) signals. The material is rich; you may wish to revisit the sections after the initial study. The main idea is that steady
signals permit the communication of only one piece of information: the amplitude of that signal. By combining that amplitude with measurements of time, we can communicate a new piece of
information at each new time. Schemes for encoding information into signal amplitude and time variation can be quite complex; take, for example, the different communication protocols for
radio, television, cellular telephone, etc. In order to understand these technologies, you must have a firm grasp of the underlying physics and mathematics of time-varying signals.
You should first skim the above sections and flag any topics or symbols that you do not understand. You may then revisit these sections after you have completed both Units 3 and 4 in
their entirety. To view as a PDF, click the PDF link in the top right corner.
Terms of Use: This material has been released under the terms of the Design Science License.
□ 4.1.2 Voltage and Current Signal Systems
☆ Reading: All About Circuits: “Volume 1, Chapter 9: Current Signal Systems” and “Voltage Signal Systems”
Link: All About Circuits: “Volume 1, Chapter 9: Current Signal Systems” and “Voltage Signal Systems” (PDF)
Instructions: Read these sections; they introduce the ideas and circuit symbols for ideal current and voltage sources and present background material on their utility and non-idealities.
At a minimum, upon completing this reading, you should be familiar with the symbols involved for future study. To view as a PDF, click the PDF link in the top right corner.
Terms of Use: This material has been released under the terms of the Design Science License.
□ 4.1.3 Filtering
☆ Reading: All About Circuits: “Volume 2, Chapter 8: Filters”
Link: All About Circuits: “Volume 2, Chapter 8: Filters”(PDF)
Instructions: Read the linked chapter on filters. The text is sufficiently self-explanatory and contains an introductory discussion. You might wish to keep in mind the following
questions: How can digital signal processors be used to build filters? Is it necessary to use analog filtering devices? To view as a PDF, click the PDF link in the top right corner.
Terms of Use: This material has been released under the terms of the Design Science License.
□ 4.1.4 Bridge Circuits
□ 4.2 Digitization
☆ Reading: All About Circuits: “Volume 4, Chapter 13: Introduction to Digital-Analog Conversion”
Link: All About Circuits: “Volume 4, Chapter 13: Introduction to Digital-Analog Conversion” (PDF)
Instructions: Read the linked section above for an introduction to relevant terminology. You should understand the relationship between analog signals, digital signals, and binary
numbers. To view as a PDF, click the PDF link in the top right corner.
Terms of Use: This material has been released under the terms of the Design Science License.
☆ Reading: All About Circuits: “Volume 4, Chapter 13: Practical Considerations of ADC Circuits”
Link: All About Circuits: “Volume 4, Chapter 13: Practical Considerations of ADC Circuits” (PDF)
Instructions: Read the linked section. Focus on the two topics (i.e. the headings for subunits 4.2.1 and 4.2.2) listed below. For example, you might keep the following questions in mind:
What is the percent linear resolution in a 24-bit binary encoding? How often must sound be sampled in order to accurately represent 20 kHz signals to the ear? To view as a PDF, click the
PDF link in the top right corner.
Terms of Use: This material has been released under the terms of the Design Science License.
□ 4.3 Common Artifacts from Improper Digitization
Note: The most common artifacts from digitization occur as a result of inappropriate sampling rate and/or signal amplitude. In principle, these problems are similar to, for example, seeing
only every hundredth frame in a video presentation or having the volume knob up way too high on an audio presentation. There are, however, many other subtle artifacts which may emerge upon
close scrutiny. You may be familiar with the vast difference in the sorts of problems that occur with digital television or cell phone transmission versus analog transmission. Likewise,
problems may manifest differently depending upon the processing that is used to get from the original digital signal to the result.
For a more detailed theoretical discussion of this topic, you may refer to Unit 15 of ME205: Numerical Methods for Engineers. (HTML)
□ 4.4 Tutorial for a Commercial Data Acquisition System
☆ Reading: National Instruments’ LabVIEW Tutorials: “LabVIEW Basics”
Link: National Instruments’ LabVIEW Tutorials: “LabVIEW Basics” (HTML and Adobe Flash)
Instructions: Familiarize yourself with the capabilities of this virtual instrumentation environment. This is an example of common functionality for such a system in a laboratory. If you
have access to a similar system, take the time to acquaint yourself with it.
Terms of Use: This material has been released under the terms of the Design Science License.
□ Units 3 and 4 Assessment
☆ Assessment: The Saylor Foundation's "ME301: Units 3 and 4 Quiz"
Link: The Saylor Foundation's "ME301: Units 3 and 4 Quiz"
Instructions: Please complete the linked assessment.
You must be logged into your Saylor Foundation School account in order to access this exam. If you do not yet have an account, you will be able to create one, free of charge, after
clicking the link.
□ Unit 5: Measurements of Linear Dimension
The measurement of length is as fundamental to mechanical engineering as it is to everyday life (consider the variety of length scales we use on a day-to-day basis: the hand, finger, foot,
rod, nose, and hair!) Coupled with other information, length measurements can yield complex geometric information. In this unit, you will learn about a few tools that enable us to precisely
measure lengths and related quantities over vastly different length scales. Many more tools are available than can be described here.
Time Advisory show close
Learning Outcomes show close
□ 5.1 Units and Standards
□ 5.2 Calipers
Note: Calipers are claw-like devices used for measuring linear dimension. They are particularly useful for measuring the outer diameters of cylindrical or round objects or the internal
diameters of pipes and the like. They are one of the few instruments that still makes use of Vernier scales.
☆ Reading: University of Toronto: David Harrison’s “Reading a Vernier Caliper”
Link: University of Toronto: David Harrison’s “Reading a Vernier Caliper”(PDF)
Instructions: Read this discussion of the Vernier scale and test your knowledge with the Java applet. If you have access to a set of calipers, you may wish to practice with measuring the
thickness of a series of nominally identical coins.
Terms of Use: This work is licensed under a Creative Commons Attribution 2.5 Taiwan License. It is attributed to David Harrison and can be found in its original form here.
□ 5.3 The Sine Bar
☆ Web Media: Wisc-ONLINE: Barbara Anderegg’s “SINE BAR”
Link: Wisc-ONLINE: Barbara Anderegg’s “SINE BAR” (Adobe Flash)
Instructions: View this slide show, which demonstrates the use of a sine bar for measuring angles. The sine bar is often used in conjunction with gauge blocks in machine shops for precise
manufacturing of equipment. You may wish to review the trigonometry involved in the use of sine bars.
Terms of Use: Please respect the copyright and terms of use displayed on the webpage above.
□ 5.4 Instrumentation
☆ Activity: The Saylor Foundation’s Instrumentation Activity
Instructions: Modern instrumentation is capable of precise and accurate measurements of lengths over many length scales. The most current information about commercially available
instrumentation is readily accessible via informational advertisements on YouTube. For each of the following items (5.4.1-5.4.5 listed below), review at least one such advertisement and
answer the following questions:
1. What are the technical capabilities of the instrument?
2. What is the cost of the instrument?
3. What is the level of training required to operate and obtain meaningful data from the instrument?
4. What types of systems are amenable to study by the instrument?
In your search, you may find many other types of instrumentation for similar purposes with slightly different names. The list bellow will help you get you started.
Example: Type “profilometer” into the YouTube search window and peruse the resulting product videos.
□ Unit 5 Assessment
☆ Assessment: The Saylor Foundation's "ME301: Unit 5 Quiz"
Link: The Saylor Foundation's "ME301: Unit 5 Quiz"
Instructions: Please complete the linked assessment.
You must be logged into your Saylor Foundation School account in order to access this exam. If you do not yet have an account, you will be able to create one, free of charge, after
clicking the link.
□ Unit 6: Time Measurements
Throughout history, we have marked time by the motion of objects in the sky that indicate the passage of hours, days, months, and years. More accurate measures of time gradually emerged in
response to the demands of navigation, commerce, communications, and curiosity. Today, atomic clocks operate with a time resolution of one part in 10^15.
In this unit, you will learn about standards of time measurement, the limits of human reaction times, and the practical limits of precise time measurement via readily available,
computer-based sensors.
Time Advisory show close
Learning Outcomes show close
□ 6.1 Human Reaction Times
Note: When performing any physical measurement, you must know the limits of the measurement system you are using. Here, we will briefly explore the limits of human reaction times.
□ 6.1.1 Dropping Meter Stick Exercise
Instructions: Estimate the time required for a person to let go of a meter stick and grasp it again by measuring the distance that the meter stick falls under the acceleration of gravity. You
will need to use Newton’s laws and some simple calculations to determine that d= at^2/2 where d = the distance dropped, t= the reaction time, and a= the acceleration of gravity. Repeat the
measurement several times for several different individuals. Calculate statistics (e.g. means, standard deviations).
□ 6.1.2 Flashing Light Exercise
Instructions: Estimate human reaction time by performing a computer- based reaction time test (i.e. press a mouse button when the light flashes). Compute the mean reaction time for an
individual and the standard deviation of reaction times for that individual. Compare that reaction time with the one determined from the meter-stick experiment.
☆ Web Media: Human Benchmark’s Reaction Time Test
Link: Human Benchmark’s Reaction Time Test (HTML and Adobe Flash)
Instructions: Perform the reaction time test at least five times. Calculate the mean and standard deviation. How does this time differ from the one calculated in 5.1.1.
Terms of Use: Please respect the copyright and terms of use displayed on the webpage above.
□ 6.2 Clock History and Mechanisms
☆ Reading: NIST’s History of Time and Frequency: “History of Timekeeping Devices”
Link: NIST’s History of Time and Frequency: “History of Timekeeping Devices” (PDF)
Instructions: Read the linked section above and the links contained therein. Use the Internet Time Service link to explore how the time is set on your computer.
Terms of Use: This material is in the public domain.
□ Unit 6 Assessment
☆ Assessment: The Saylor Foundation's "ME301: Unit 6 Quiz"
Link: The Saylor Foundation's "ME301: Unit 6 Quiz"
Instructions: Please complete the linked assessment.
You must be logged into your Saylor Foundation School account in order to access this exam. If you do not yet have an account, you will be able to create one, free of charge, after
clicking the link.
□ Unit 7: Force, Torque, and Pressure Measurements
Force, torque, and pressure measurements can be related by temporal and geometric coupling. Consider the schematic of a see-saw balance. The relative masses of objects M1 and M2 can be
determined by the torques they exert about point P at different distances (L1 and L2) from that point under the acceleration of gravity g. Many more sophisticated geometries and sensing
arrangements can be coupled to allow measurements of related quantities. In this unit, you will review some of the common configurations for such measurements.
Time Advisory show close
Learning Outcomes show close
□ 7.1.1 Units and Standards
☆ Reading: National Physics Laboratory (UK): “SI Unit of Force”
Link: National Physics Laboratory (UK): “SI Unit of Force” (HTML)
Instructions: Read the linked section above and familiarize yourself with commonly used units of force. For example, how is a dyne related to an ounce of force?
Terms of Use: Please respect the copyright and terms of use displayed on the webpage above.
□ 7.1.2 Inference of Mass from Weight
□ 7.1.3 Strain or Deflection Measurements
☆ Reading: All About Circuits: “Volume 1, Chapter 9: Strain Gauges”
Link: All About Circuits: “Volume 1, Chapter 9: Strain Gauges” (PDF)
Instructions: Read this section and consider the following issues: Why does the resistance of the strain gauge depicted in the resource cartoon increase under tension? How might strain
measurements be confounded by changes in temperature? Might you design a strain gauge to work by measuring changes in capacitance? To view as a PDF, click the PDF link in the top right
Terms of Use: This material has been released under the terms of the Design Science License.
□ 7.2.1 Units and Standards
□ 7.2.2 Static Versus Dynamic Pressure
Note: These concepts arise from Bernoulli’s equation. They are not to be confused with Gauge and Absolute pressures. Gauge pressure is the system pressure minus some reference (atmospheric
☆ Reading: NASA Glenn Research Center’s “Bernoulli’s Equation” and “Pitot-Static Tube”
Link: NASA Glenn Research Center’s “Bernoulli’s Equation” (PDF) and “Pitot-Static Tube” (PDF)
Instructions: Read the two web pages linked above and consider the following issues: Do static and dynamic pressures have the same units? What is the origin of the terminology? How would
you use measurements of both to determine the speed of an airplane?
Terms of Use: This material is in the public domain.
□ 7.2.3 Barometers and Manometers
☆ Reading: Georgia State University Hyperphysics Pages: “Fluid Pressure Measurement”
Link: Georgia State University Hyperphysics Pages: “Fluid Pressure Measurement” (HTML)
Instructions: Read the linked page above and those following as interested. Consider the following questions during your reading: What is the difference between a barometer and a
manometer? Why is mercury often used as the fluid in a manometer? Why might one use another fluid? You may wish to play with the applet to consider the effects of fluid properties on the
observed measurements.
Terms of Use: Please respect the copyright and terms of use displayed on the webpage above.
□ 7.2.4 Pressure Transducers
☆ Reading: National Instruments’ Guide for Pressure Measurements: “Measuring Pressure with Pressure Sensors”
Link: National Instruments’ Guide for Pressure Measurements: “Measuring Pressure with Pressure Sensors” (HTML)
Instructions: Read the first three sections in the linked material above, entitled “What is Pressure?”, “The Pressure Sensor,” and “Pressure Measurement.” What factors influence the
time-response of a pressure transducer?
Terms of Use: Please respect the copyright and terms of use displayed on the webpage above.
□ 7.3 Torque Measurements
☆ Web Media: Khan Academy’s “Introduction to Torque”
Link: Khan Academy’s “Introduction to Torque” (YouTube)
Also available in:
iTunes U
Instructions: This video should be a review of concepts you learned in physics coursework. You may refer to previous or subsequent videos in the series if you need additional exposure.
Make sure you understand appropriate units for torque.
Terms of Use: This video is licensed under a Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported License. It is attributed to the Khan Academy.
☆ Activity: The Saylor Foundation’s Dynamometers Activity
Link: The Saylor Foundation’s Dynamometers Activity
Instructions: Perform the same exercise on YouTube for dynamometers as you did for length measurement devices in Section 5.4 of this course.
□ Unit 7 Assessment
☆ Assessment: The Saylor Foundation’s “ME301: Unit 7 Quiz”
Link: The Saylor Foundation’s “ME301: Unit 7 Quiz”
Instructions: Please complete the linked assessment.
You must be logged into your Saylor Foundation School account in order to access this exam. If you do not yet have an account, you will be able to create one, free of charge, after
clicking the link.
□ Unit 8: Temperature Measurements
Temperature control is fundamental to most chemical, biological, and mechanical processes. In order to determine which temperature sensor or transducer is appropriate for a given situation,
you must consider a number of factors, including operating environment and desired temporal and measurement sensitivity. In this unit, you will review common temperature scales and the
characteristics of commonly-used temperature measuring devices.
Time Advisory show close
Learning Outcomes show close
□ 8.1 Temperature Scales
☆ Reading: NASA: Cryogenics and Fluids Branch of the Goddard Space Flight Center’s “Temperature Scales and Absolute Zero”
Link: NASA: Cryogenics and Fluids Branch of the Goddard Space Flight Center’s “Temperature Scales and Absolute Zero” (HTML)
Instructions: Read the text and then calculate the following: room and body temperature in degrees Fahrenheit, Centigrade, Kelvin, and Rankine. What is the meaning of negative absolute
Terms of Use: Please respect the copyright and terms of use displayed on the webpage above.
□ 8.2 Expansion Thermometers
Note: You are probably familiar with the liquid-in-glass expansion thermometer, although its use is declining with time. You may have encountered one in a chemistry laboratory, in cooking
candy, or for a body temperature measurement. Another type of expansion thermometer makes use of the differential expansion of two metals and is hence called a bimetallic expansion
thermometer. These have been used in devices like thermostats, in which some mechanical action is required as a function of temperature.
☆ Reading: University of California – Riverside: Beverly Lynds’ “About Temperature”
Link: University of California – Riverside: Beverly Lynds’ “About Temperature” (PDF)
Instructions: Read the linked section above. The discussion is wide-ranging, but is particularly useful for understanding the physics of liquid and gas expansion thermometers.
Terms of Use: The material above has been reposted with permission for educational use by Beverly T. Lynds. It can be viewed in its original form here.
☆ Reading: Rice University: Al Van Helden’s “History of the Thermometer”
Link: Rice University: Al Van Helden’s “History of the Thermometer” (PDF)
Instructions: This section is interesting for its historical content and commentary. In particular, note that Galileo revived Greek technology for temperature measurement.
Terms of Use: The material above has been reposted with permission for educational use by Al Van Helden. It can be viewed in its original form here.
☆ Reading: Simon-Fraser University: Stephen Lower’s Chem1 General Chemistry Virtual Textbook: “Energy, Heat, and Temperature”
Link: Simon-Fraser University: Stephen Lower’s Chem1 General Chemistry Virtual Textbook: “Energy, Heat, and Temperature” (PDF)
Instructions: Read section 3: “Temperature and Its meaning.”
Terms of Use: This work is licensed under a Creative Commons Attribution-NonCommercial 2.5 Generic License. It is attributed to Stephen Lower and can be found in its original form here.
☆ Reading: University of Michigan’s Wiki Pages: “Temperature Sensors”
Link: University of Michigan’s Wiki Pages: “Temperature Sensors” (PDF)
Instructions: You may use this resource as a quick reference for terms that you do not understand and as a survey for many other temperature measurement methods. There is an accompanying
video link embedded in the wiki pages (HTML).
Terms of Use: This work is licensed under a Creative Commons Attribution 3.0 Unported License. It is attributed to the University of Michigan and can be viewed in its original form here.
□ 8.3 Thermistors and Resistance Temperature Detectors
☆ Reading: Bucknell University: Professor Mastascusa’s “Thermistors”
Link: Bucknell University: Professor Mastascusa’s “Thermistors” (HTML)
Instructions: Read the linked section above. Pay attention to the thermistor’s temperature range and power usage. Why would you choose a thermistor over other devices?
Terms of Use: Please respect the copyright and terms of use displayed on the webpage above.
□ 8.4 Thermocouples
☆ Reading: Bucknell University: Professor Mastascusa’s “Tempreature Sensor: The Thermocouple”
Link: Bucknell University: Professor Mastascusa’s “Temperature Sensor: The Thermocouple”(HTML)
Instructions: Read the text linked above and consider the following questions: How is temperature related to thermocouple voltage? What types of metals are used for thermocouples? Over
what temperature ranges are thermocouples appropriate?
Terms of Use: Please respect the copyright and terms of use displayed on the webpage above.
☆ Reading: All About Circuits: “Volume 1, Chapter 9: Thermocouples”
Link: All About Circuits: “Volume 1, Chapter 9: Thermocouples” (PDF)
Instructions: Read the linked section above, which is particularly useful for understanding the importance of the reference junction to thermocouple operation. To view as a PDF, click the
PDF link in the top right corner.
Terms of Use: This material has been released under the terms of the Design Science License.
□ 8.5 Dynamics of Sensors
☆ Reading: Bucknell University: Professor Mastascusa’s “Sensor Dynamics”
Link: Bucknell University: Professor Mastascusa’s “Sensor Dynamics” (HTML)
Instructions: Read the linked section above. How do you expect the response time of the sensor to scale with the size of the sensor? You will make use of this knowledge in Exercise 8.6.
Terms of Use: Please respect the copyright and terms of use displayed on the webpage above.
□ 8.6 Exercise: Dynamics of a Temperature Measurement
Instructions: Select a thermometer and record information concerning the origins and type of the device. Some common types of kitchen thermometers would be suitable. Tabulate and plot the
temperature reading versus time after removing the device from ice water and placing it in boiling water or room temperature water. Make sure that the thermometer has equilibrated with the
ice water before removing it and that the volumes of the water baths are much larger than that of the sensor. How long does it take the thermometer to reach 95% of its change in reading?
Repeat the measurements. For each trial:
1. Plot temperature as a function of time.
2. Plot (T(t)-T[final])/(T[initial]-T[final]) versus time, where T[final] is the temperature of the hot water, T[initial] is the temperature of the cold water, and T(t) are the temperature
3. Plot the logarithm of (T(t)-T[final])/(T[initial]-T[final]) versus time.
4. What is the slope of the curve in 3?
5. What is the response time of your thermometer?
□ Unit 8 Assessment
☆ Assessment: The Saylor Foundation’s “ME301: Unit 8 Quiz”
Link: The Saylor Foundation’s “ME301: Unit 8 Quiz”
Instructions: Please complete the linked assessment.
You must be logged into your Saylor Foundation School account in order to access this exam. If you do not yet have an account, you will be able to create one, free of charge, after
clicking the link.
□ Unit 9: Dynamic Measurements and Control
In this unit, you will implement what you learned in previous units to make dynamic measurements and use these measurements in order to control a piece of equipment. In lieu of performing the
actual exercise, you will write a detailed procedure and submit hypothetical data for two of the following topics. The topics are intentionally open-ended so that you have the freedom to
define the problem as your resources permit. Each report should consist of the following elements.
9a. Introduction: Explain the exercise in the broader context of mechanical engineering.
9b. Purpose: Briefly explain the specific, limited objectives of the exercise.
9c. Equipment: Where possible, list commercially-available equipment, complete with all available specifications.
9d. Procedure
9e. Theory: Outline the physical basis for the measurements and the details of data analysis.
9f. Hypothetical Data: Generate hypothetical data, complete with estimated errors.
9g. Analysis of Data and Presentation of Results
9h. Recommendations for Future Experimenters
Time Advisory show close
Learning Outcomes show close
□ 9.1 Dynamic Strain Measurements
Instructions: Design an apparatus that can observe the oscillations that occur when a cantilevered beam is suddenly loaded with a weight. You may use commercially available strain gauges.
□ 9.2 Accelerometry
Instructions: Design an apparatus to measure the acceleration that occurs when a stationary object is suddenly hit by a moving object (e.g. a mass swinging from a pendulum).
□ 9.3 Temperature Control of a Light Bulb
Instructions:An incandescent light bulb dissipates most of its energy as heat. By adjusting the duty cycle of the bulb (i.e. by turning it on and off to control the amount of heat
dissipated), one can control the temperature of the surface of the bulb. It is your task to choose an appropriate temperature sensor and design a feedback control algorithm to keep the
surface temperature at a set point by adjusting the amount of time that the bulb is lit versus dark.
□ Final Exam
☆ Final Exam: The Saylor Foundation's ME301 Final Exam
Link: The Saylor Foundation's ME301 Final Exam
Instructions: You must be logged into your Saylor Foundation School account in order to access this exam. If you do not yet have an account, you will be able to create one, free of
charge, after clicking the link. | {"url":"http://www.saylor.org/courses/me301/","timestamp":"2014-04-16T10:10:50Z","content_type":null,"content_length":"182421","record_id":"<urn:uuid:1169a225-cb4f-4aaa-9a50-27112fdf432c>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00007-ip-10-147-4-33.ec2.internal.warc.gz"} |
Zentralblatt MATH
Publications of (and about) Paul Erdös
Zbl.No: 606.05005
Autor: Salamon, Peter; Erdös, Paul
Title: The solution to a problem of Grünbaum. (In English)
Source: Can. Math. Bull. 31, No.2, 129-138 (1988).
Review: The paper characterizes the set of all possible values for the number of lines determined by n points for n sufficiently large. For \binom{k}{2} \leq (n-k), the lower bound of Kelly and Moser
for the number of lines in a configuration with n-k collinear points is shown to be sharp and it is shown that all values between M[max](k) and M[max](k) are assumed with the exception of M[max]-1
and M[max]-3. Exact expressions are obtained for the lower end of the continuum of values leading down from \binom{n}{2}-4. In particular, the best value of c = 1 is obtained in Erdös' previous
expression cn^3/2 for this lower end of the continuum.
Reviewer: P.Salamon
Classif.: * 05A15 Combinatorial enumeration problems
05B25 Finite geometries (combinatorics)
51E20 Combinatorial structures in finite projective spaces
Keywords: connecting lines; lines determined by points
© European Mathematical Society & FIZ Karlsruhe & Springer-Verlag
│Books │Problems │Set Theory │Combinatorics │Extremal Probl/Ramsey Th. │
│Graph Theory │Add.Number Theory│Mult.Number Theory│Analysis │Geometry │
│Probabability│Personalia │About Paul Erdös │Publication Year│Home Page │ | {"url":"http://www.emis.de/classics/Erdos/cit/60605005.htm","timestamp":"2014-04-19T14:33:47Z","content_type":null,"content_length":"3977","record_id":"<urn:uuid:b55cc114-ae34-4733-9681-09e9e47698a8>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00208-ip-10-147-4-33.ec2.internal.warc.gz"} |
16 search hits
The Top-Dog Index: A New Measurement for the Demand Consistency of the Size Distribution in Pre-Pack Orders for a Fashion Discounter with Many Small Branches (2008)
Sascha Kurz Jörg Rambau Jörg Schlüchtermann Rainer Wolf
We propose the new Top-Dog-Index, a measure for the branch-dependent historic deviation of the supply data of apparel sizes from the sales data of a fashion discounter. A common approach is to
estimate demand for sizes directly from the sales data. This approach may yield information for the demand for sizes if aggregated over all branches and products. However, as we will show in a
real-world business case, this direct approach is in general not capable to provide information about each branchs individual demand for sizes: the supply per branch is so small that either the
number of sales is statistically too small for a good estimate (early measurement) or there will be too much unsatisfied demand neglected in the sales data (late measurement). Moreover, in our
real-world data we could not verify any of the demand distribution assumptions suggested in the literature. Our approach cannot estimate the demand for sizes directly. It can, however,
individually measure for each branch the scarcest and the amplest sizes, aggregated over all products. This measurement can iteratively be used to adapt the size distributions in the pre-pack
orders for the future. A real-world blind study shows the potential of this distribution free heuristic optimization approach: The gross yield measured in percent of gross value was almost one
percentage point higher in the test-group branches than in the control-group branches.
The Integrated Size and Price Optimization problem (2012)
Miriam Kießling Sascha Kurz Jörg Rambau
We present the Integrated Size and Price Optimization Problem (ISPO) for a fashion discounter with many branches. Based on a two-stage stochastic programming model with recourse, we develop an
exact algorithm and a production-compliant heuristic that produces small optimality gaps. In a field study we show that a distribution of supply over branches and sizes based on ISPO solutions is
significantly better than a one-stage optimization of the distribution ignoring the possibility of optimal pricing.
On the minimum diameter of plane integral point sets (2007)
Sascha Kurz Alfred Wassermann
Since ancient times mathematicians consider geometrical objects with integral side lengths. We consider plane integral point sets P, which are sets of n points in the plane with pairwise integral
distances where not all the points are collinear. The largest occurring distance is called its diameter. Naturally the question about the minimum possible diameter d(2,n) of a plane integral
point set consisting of n points arises. We give some new exact values and describe state-of-the-art algorithms to obtain them. It turns out that plane integral point sets with minimum diameter
consist very likely of subsets with many collinear points. For this special kind of point sets we prove a lower bound for d(2,n) achieving the known upper bound n^{c_2loglog n} up to a constant
in the exponent.
On the characteristic of integral point sets in $\mathbb{E}^m$ (2005)
Sascha Kurz
We generalise the definition of the characteristic of an integral triangle to integral simplices and prove that each simplex in an integral point set has the same characteristic. This theorem is
used for an efficient construction algorithm for integral point sets. Using this algorithm we are able to provide new exact values for the minimum diameter of integral point sets.
Maximal integral point sets over Z^2 (2008)
Sascha Kurz Andrey Radoslavov Antonov
Geometrical objects with integral side lengths have fascinated mathematicians through the ages. We call a set P={p(1),...,p(n)} in Z^2 a maximal integral point set over Z^2 if all pairwise
distances are integral and every additional point p(n+1) destroys this property. Here we consider such sets for a given cardinality and with minimum possible diameter. We determine some exact
values via exhaustive search and give several constructions for arbitrary cardinalities. Since we cannot guarantee the maximality in these cases we describe an algorithm to prove or disprove the
maximality of a given integral point set. We additionally consider restrictions as no three points on a line and no four points on a circle.
Lotsize optimization leading to a p-median problem with cardinalities (2007)
Constantin Gaul Sascha Kurz Jörg Rambau
We consider the problem of approximating the branch and size dependent demand of a fashion discounter with many branches by a distributing process being based on the branch delivery restricted to
integral multiples of lots from a small set of available lot-types. We propose a formalized model which arises from a practical cooperation with an industry partner. Besides an integer linear
programming formulation and a primal heuristic for this problem we also consider a more abstract version which we relate to several other classical optimization problems like the p-median
problem, the facility location problem or the matching problem.
Integral point sets over Z_n^m (2007)
Axel Kohnert Sascha Kurz
There are many papers studying properties of point sets in the Euclidean space or on integer grids, with pairwise integral or rational distances. In this article we consider the distances or
coordinates of the point sets which instead of being integers are elements of Z_n, and study the properties of the resulting combinatorial structures.
Integral point sets over finite fields (2007)
Sascha Kurz
We consider point sets in the affine plane GF(q)^2 where each Euclidean distance of two points is an element of GF(q). These sets are called integral point sets and were originally defined in
m-dimensional Euclidean spaces. We determine their maximal cardinality I(GF(q),2). For arbitrary commutative rings R instead of GF(q) or for further restrictions as no three points on a line or
no four points on a circle we give partial results. Additionally we study the geometric structure of the examples with maximum cardinality.
Inclusion-maximal integral point sets over finite fields (2007)
Michael Kiermaier Sascha Kurz
We consider integral point sets in affine planes over finite fields. Here an integral point set is a set of points in $GF(q)^2$ where the formally defined Euclidean distance of every pair of
points is an element of $GF(q)$. From another point of view we consider point sets over $GF(q)^2$ with few and prescribed directions. So this is related to Redeis work. Another motivation comes
from the field of ordinary integral point sets in Euclidean spaces. In this article we study the spectrum of integral point sets over $GF(q)^2$ which are maximal with respect to inclusion. We
give some theoretical results, constructions, conjectures, and some numerical data.
Enumeration of integral tetrahedra (2007)
Sascha Kurz
We determine the numbers of integral tetrahedra with diameter d up to isomorphism for all d<=1000 via computer enumeration. Therefore we give an algorithm that enumerates the integral tetrahedra
with diameter at most d in O(d^5) time and an algorithm that can check the canonicity of a given integral tetrahedron with at most 6 integer comparisons. For the number of isomorphism classes of
integral 4x4 matrices with diameter d fulfilling the triangle inequalities we derive an exact formula. | {"url":"http://opus.ub.uni-bayreuth.de/opus4-ubbayreuth/solrsearch/index/search/searchtype/authorsearch/author/Sascha+Kurz/start/0/rows/10/languagefq/eng/sortfield/title/sortorder/desc","timestamp":"2014-04-17T10:00:24Z","content_type":null,"content_length":"50371","record_id":"<urn:uuid:b9a816b3-7230-4c5c-a110-7bd4ae91dd18>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00042-ip-10-147-4-33.ec2.internal.warc.gz"} |
equation describing the same line
Re: equation describing the same line
Subtract 3x from both sides.
Subtract 4y from both sides.
Put the 0 on the other side.
Which matches the second one.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof. | {"url":"http://www.mathisfunforum.com/viewtopic.php?id=19402","timestamp":"2014-04-18T05:59:04Z","content_type":null,"content_length":"14369","record_id":"<urn:uuid:ad19174a-58b2-4996-9250-9ca62423f6bb>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00367-ip-10-147-4-33.ec2.internal.warc.gz"} |
Posts by
Posts by liz
Total # Posts: 992
So I found the Ka for HPO4^2- which is 2.2x10^-13 Then i found HPO42- on right side of the table and found the ka value which is 6.2x10^-8. Then, I used kb=kw/ka to and got 1.61x10^-7 Since kb>ka the
reaction: HPO4^2- + H2O <--> H2PO4- + OH- would be the one most like...
I don't really understand what k2 and k3 means.. To find the predominant reaction, could I compare the ka and kb values for HPO4^2-?
I answered that, but the teacher marked me wrong.. so I don't understand why? The predominant reaction should be HPO4^2- acting as an acid right?
What is the equilibrium constant expression for the predominant equilibrium in HPO4^2- (aq)?
Nevermind, I think I understand it now. Thanks.
c) Ca (s) reacts and produces Co2(g) Can you explain to me why it's b please?
which of the following occurs when a sample of 0.1M HNO3 is tested? a) phenolphthalein turns pink b) bromthymol blue turns yellow c) Ca9s) reacts and produces CO2 (g) d) Na (s) reacts and rpoduces
NO2 (g)
12th Grade Calculus
Find d^2y/dx^2 by implicit differentiation. x^(1/3) + y^(1/3) = 4 I know that first you must find the 1st derivative & for y prime I got 1/3x^(-2/3) + 1/3y^(-2/3) dy/dx = 0 Then for dy/dx I got dy/dx
= [-1/3x^(-2/3)] / [1/3y^(-2/3)] I think that from here I would use the quoti...
The high rate of eating disorders in the United States is mainly attributed to social factors such as media, cultural influences, and family factors. My questions is should it be, The high rate of
eating disorders in the United States IS/ARE mainly attributed?
Suppose two dice (one red, one green) are rolled. Consider the following events. A: the red die shows 5. B: the numbers add to 4. C: at least one of the numbers is 3. D: the numbers do not add to 10.
Express the event "The numbers do not add to 4." in symbols. 1 B D ...
f two indistinguishable dice are rolled, what is the probability of the event {(3, 3), (2, 3), (5, 3)}? 1 What is the corresponding event for a pair of distinguishable dice? 2 {(3, 3), (2, 3), (5,
3)} {(3, 3), (2, 3), (5, 3), (3, 5), (3, 2)} {(3, 3), (2, 2), (5, 5), (3, 5), (3...
math probabilty
Find the (theoretical) probability of the given event, assuming that the coins are distinguishable and fair, and that what is observed are the faces uppermost. Six coins are tossed; the result is at
most one head.
Complete the following probability distribution table and then calculate the stated probabilities. Outcome a b c d e Probability 0.1 1Your answer is correct. 0.43 0.1 0.27 (a) P({a, c, e}). 2Your
answer is correct. (b) P(E ∪ F), where E = {a, c, e} and F = {b, c, e}, P(E...
Math Probability
Find the (theoretical) probability of a given event, assuming that the dice are distinguishable and fair, and that what is observed are numbers uppermost. Two dice are rolled; the numbers add to 7. 1
A 1-L flask is filled with 1.45 g of argon at 25 degrees C. A sample of ethane vapor is added to the same flask until the total pressure is 1.35 atm. What is the partial pressure of argon in the
which of the following genres is defined by form rather than theme: tragedy, sonnet, comedy, satire, or drama.
what is the answer to this; Alvan said that if 2 triangles are not congruent, then at least one of the 3 sides of one triangle is not congruent to the corresponding side of the other triangle. do you
agree with alvan? justify your answer.
The issue that discusses whether development is more a function of biology or more a function of the environment is the ______________ controversy.
I'm thinking Gd (element 64).
A poker hand consists of five cards from a standard deck of 52. Find the number of different poker hands of three of a kind (three of one denomination, one of another denomination, and one of a
third). 1
A bag contains 3 red marbles, 2 green ones, 1 lavender one, 2 yellows, and 4 orange marbles. How many sets of four marbles include all the red ones? 1Your answer is incorrect.
help me find subjects and verbs in college sentences
Find velocity of electron emitted by metal whose threshold frequency is 2.1*10^14 and when exposed to visible light wavelength of 5.09*10^-7
Funny, I just got to this problem in my physics homework. Use the Conservation of angular momentum idea, L=Iw. You can find this with your given initial values. Then, you use that L in a new
equation, except with the I final and w final (which you want to ultimately find). Use...
bus math
Meg's pension plan is an annuity with a guaranteed return of 7% interest per year (compounded monthly). She would like to retire with a pension of $20000 per month for 20 years. If she works 28 years
before retiring, how much money must she and her employer deposit per mon...
bus math
Determine the selling price of a 15-year, 4.725% bond, with $1000 maturity value, with a yield of 4.735%. (Assume twice-yearly interest payments. Round your answer to the nearest cent.)
Cathy owes six overdue movies to her local video store. Since each of Cathy s overdue videos is a comedy, only comedies are currently overdue at Cathy s local video store. Which one of the following
premises, if added to the statements above, would allow the conclusi...
where does the 4.9 come from?
Math Approximation fractions
I really don't understand how to determine which number is greater and which one is smaller can anyone help auorlda ?
PHYSICS ---HELP!
A laser beam is directed at the Moon, 380,000 km from Earth. The beam diverges at an angle of 6.2×10−5 rad. What diameter spot will it make on the Moon? Express your answer using two significant
Urban Community College is planning to offer courses in Finite Math, Applied Calculus, and Computer Methods. Each section of Finite Math has 40 students and earns the college $40,000 in revenue. Each
section of Applied Calculus has 40 students and earns the college $60,000, wh...
Find the intersection of the line through (0, 1) and (4.4, 2) and the line through (1.9, 3) and (5.3, 0). (Round your answers to the nearest tenth.) (x, y) =
You operate a gaming Web site, where users must pay a small fee to log on. When you charged $4 the demand was 510 log-ons per month. When you lowered the price to $3.50, the demand increased to 765
log-ons per month. (a) Construct a linear demand function for your web site and...
A horizontal force of 20 N acted on a 10 kg mass at rest. The force accelerated the mass over a rough surface so that the mass had a speed of 6.0 m/s after 5.0 seconds. How much friction was acting
on the mass?
Find the dimensions of a square cardboard box (open top) that holds 100 cubic inches and is 4 inches deep?
college pre-calc
how do you know if there are holes in a rational function?
Physics Help!!
Three identical masses of 500 kg each are placed on the x axis. One mass is at x_1 = -11.0 cm, one is at the origin, and one is at x_2 = 37.0 cm. What is the magnitude of the net gravitational force
F_grav on the mass at the origin due to the other two masses? Take the gravita...
a patrol of timing paths each vehicle needs to travel the first half of the distance at a speed of 140 km / h. if the speed limit is 80km / h. which should be the highest average speed of the car in
the second half of the section, to avoid being fined?
a car crosses the street ABC. the segment AB = average speed of 60km / h for 2 hours, the length BC = average speed of 90km / h for 1 h. the average speed of car travel in AC is? the route is a
straight line
business math
How long, to the nearest year, will it take an investment to triple if it is continuously compounded at 16% per year?
After several drinks, a person has a blood alcohol level of 200 mg/dL (milligrams per deciliter). If the amount of alcohol in the blood decays exponentially, with one fourth being removed every hour,
find the person's blood alcohol level after 2 hours.
A bacteria culture starts with 1,000 bacteria and doubles in size every 2 hours. Find an exponential model for the size of the culture as a function of time t in hours. f(t) = 1
Physics Help!!
Suppose the kinetic energy of an object moving is 15 J and the potential energy is 25 J at some point in time. Then I measure the potential energy some time later and find it to be 10 J. What is the
new kinetic energy?
I think I'm doing the same lab as you. So what I did was find the new concentration of nitric acid in the solution (because by adding it to water, you've just diluted ur concentration). I got .167.
Since nitric acid is a strong acid, it dissociates completely, right? S...
f(x) = 4 e**(5 x); f(x)=A b**x
After several drinks, a person has a blood alcohol level of 200 mg/dL (milligrams per deciliter). If the amount of alcohol in the blood decays exponentially, with one fourth being removed every hour,
find the person's blood alcohol level after 2 hours.
After several drinks, a person has a blood alcohol level of 200 mg/dL (milligrams per deciliter). If the amount of alcohol in the blood decays exponentially, with one fourth being removed every hour,
find the person's blood alcohol level after 2 hours.
if AX= kX k belongs to real numbers A=|4 -2| X=|x| |-2 4| |y| and x is not equal to cero, and cero is not equal to y, find k
if AX = kX, k belongs to reals where A=|4 -2| <-(is a matrix) X=|x|<-(matrix) |-2 4| |y| and x is not equal to cero, cero is not equal to y, find k
Use exponential regression to model the price P(t) as a function of time t since 1994. Include a sketch of the points and the regression curve. (Round the coefficients to 3 decimal places.)
After several drinks, a person has a blood alcohol level of 200 mg/dL (milligrams per deciliter). If the amount of alcohol in the blood decays exponentially, with one fourth being removed every hour,
find the person's blood alcohol level after 2 hours.
for example of the stone dropped from a ballon and striking the ground and taking into account air resistance sketch a) a distance-time graph b) a velocity-time graph c)an acceleration-time graph
Find equations for exponential functions that pass through the given pair of points. (Round all coefficients to 4 decimal places if necessary.) (-2, 3) and (3, 4)
After several drinks, a person has a blood alcohol level of 200 mg/dL (milligrams per deciliter). If the amount of alcohol in the blood decays exponentially, with one fourth being removed every hour,
find the person's blood alcohol level after 2 hours.
A bacteria culture starts with 1,000 bacteria and doubles in size every 2 hours. Find an exponential model for the size of the culture as a function of time t in hours.
Obtain exponential functions in the form f(t) = Aert, if f(t) is the value after t years of a $9,000 investment depreciating continuously at an annual rate of 8.5%. f(t) = 1
physics! center of mass
a long non uniform board of length 8 m and mass m = 12 kg is suspended by two ropes if the tensions in the ropes are mg/3 (on left) and 2mg/3 (on right) what is the location of the board's center of
mass thank you!
physics! center of mass
a long non uniform board of length 8 m and mass m = 12 kg is suspended by two ropes if the tensions in the ropes are mg/3 (on left) and 2mg/3 (on right) what is the location of the board's center of
mass thank you!
langauge arts
PASSAGE: How did one village bring disaster on itself? On a morning in early spring, 1873, the people of Oberfest left their houses and took refuge in the town hall. No one knows why precisely. A
number of rumors had raced through the town during recent weeks, were passed on a...
3 is right! Thank you!! But how'd you get the answer?
Im still confused. Wouldnt each side just be 0.33 m then?
How would i determine that with only one value?
The corners of a square lie on a circle of diameter 0.33 m. Each side of the square has length L. Find L.
PHI103: Informal Logic
eating candy will help his daughter improve in areas other than math
PHI103: Informal Logic
Barney s lawsuit against Pinetree Café is unwarranted.
17 + 1 = 9(s) solve for s 18 = 9(s) s = 2
what do you divide 83 and 100 with to get to 5/6
business hs
positive attitude manners
English fallacies
my assignment is to find quotes from thimas jefferson's query 14 and label them according to fallacies and i just dont get it. some quotes include "Comparing them by their faculties of memory,
reason, and imagination, it appears to me, that in memory they are equal to...
Medical Billing and Coding
Ok thank you This is what I had wrote: Healthcare, as it exists today, is an ever escalating, competitive process whereby providers and institutions thrive financially. If universal health care
coverage were made available to all people, the competition among the providers and...
Medical Billing and Coding
Explain why the lack of universal health care coverage can raise health care costs.
Medical Billing and Coding
I have found the answer thank you to all who helped
Medical Billing and Coding
The investigational new drug (IND) application contains which of the following information?
12th grade - Math
Explain. 1.) To solve x(x-5)=0 either factor may be equal to zero. 2.) | 8-15i | = 17 3.) The solution for x^2 = -2 cannot be found graphically. [any help or explanation is greatly appreciated]
12th grade - Math
Identify the system as consistent, inconsistent or dependent. Explain your choice. 1.) 3x - 2y = 12 6x - 4y = 24 2.) x= -5 y= 4
A unit of area often used in measuring land areas is the hectare, defined as 104 m2. An open-pit coal mine consumes 72 hectares of land, down to a depth of 29 m, each year. What volume of earth, in
cubic kilometers, is removed in this time?
Solve for the indicated variable p I don't know what you do to solve it. A=P+Prt
Physics - Uniform circular motion
centripetal acceleration? v^2 / r = ac
I actually used that formula and got that answer (35.625 m/s) but does that take into account the speed of the police car? (or is that a red herring)
I actually used that formula and got that answer (35.625 m/s) but does that take into account the speed of the police car? (or is that a red herring)
A speeding car is pulling away from a police car. The police car is moving at 30 m/s. The radar gun in the police car emits an electromagnetic wave with a frequency of 20.0 x 10^9 Hz. The wave
reflects from the speeding car and returns to the police car where the frequency is ...
A 8.1 kg stone is at rest on a spring. The spring is compressed 12 cm by the stone. The stone is then pushed down an additional 27 cm and released. To what maximum height (in cm) does the stone rise
from that position?
Substitute -21 into -x, which gives you -(-21). Two negatives make a positive, therefore -x = 21.
14 to 49
What do you see as the purpose of health insurance? Should there be limits on the amount of health care provided? Word count met? If yes, what criteria should we use to ration health care? If no, how
should health care be financed so that everyone has access?
writing call
In what ways are full-sentence outlines more beneficial than topic outlines? o Explain why it may or may not be simpler to write your paper instead of first creating a full-sentence outline. o What
steps will you take to turn your outline into the body of your rough draft?
university of phoenix
I need to write a paper on the immigration law SB1070 in one week.
I need to see if the paragraph I wrote is Ok?
An enclosed cylinder has 3 moles of gas with a volume of 60 L and a temperature of 400 K. What is the pressure inside the container? Round to the nearest tenth. Don't forget the units.
If you were balancing a chemical equation that contained the substance sodium nitrate, NaNO3, composed of a sodium ion, Na+1, and a nitrate ion, (NO3)-1, what number or numbers could you change in
order to balance the equation?
Find the measure of angle DBC if the measure of angle ABD is 36 degrees and the measure of angle ABC is 74 degrees.
Fe + 2HCl = FeCl2 + H2 A piece of metallic iron (10 moles) was dissolved in concentrated hydrochloric acid. The reaction formed hydrogen gas and iron chloride. How many grams of HCl were consumed?
Don't forget the units.
#1 is c.) socialism, communisn
Find the Pythagorean triple for m = 14
list and describe the four steps necessary to establish the proper ICD-9-CM code.
how many grams of Fe2O3 are formed when 16.7g of Fe reacts completely with oxygen?
who gets to decide if you go to summer school?
Trade between the two countries is relatively modest when compared to trade with their immediate continental neighbours, but still significant. France is Canada's seventh largest trading partner
overall, and the third largest in Europe. Annual bilateral trade between the t...
Pages: <<Prev | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | Next>> | {"url":"http://www.jiskha.com/members/profile/posts.cgi?name=liz&page=6","timestamp":"2014-04-18T08:46:01Z","content_type":null,"content_length":"29608","record_id":"<urn:uuid:9b108e7c-1bb3-41fa-b087-5a5477314c5b>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00270-ip-10-147-4-33.ec2.internal.warc.gz"} |
Learn how to layout simple transitions
This tutorial will show you how to layout a 14/10 to 14/8 transition with the bottom flat and total length will be 12".
Above, The first thing you need to do is cut a piece of steel 12" wide and approx.. 36" long. We need to consider the way we need to connect the fitting to the other duct. We are going to use S-Slip
& drive connections on this one.These connectors require 1" so we need to scribe a line along the top and bottom at 1" This will give us the required allowance.
We will make this transition in 2 piece, The one piece will contain the bottom and 2 sides. Now we know the duct work is 14" wide so we will start by Making 2 lines (A) 14" apart from each other (
Keeping them somewhat centered on the steel ) you'll find out that if you don't you will not have enough steel to layout the sides
Now that you have the bottom layout done, you need to determine the sides and you know that 1 side is 10" x 14"( 14" being the bottom and 10" being the sides)so make a mark 10" from line (A) to line
(C). Do this on both sides as I did above.
Now that you have the 14" x 10" done you need to do the same thing for the 8" side (On the top of the plan) Mark your line 8" from line (A) to line (B) on both sides.
Now you need to connect points (C) & (B) and you may start to see the fold out view of the bottom and the 2 sides...
Now that we have the 2 sides and bottom we need to consider how we will connect the top. There are several different methods. The 2 most common are the Pittsburgh and the snap lock. Most Pittsburgh
machines require 1" allowance and the lock former requires 1 5/16". We will say we are going to use the Pittsburgh. So we need to add a line ( D,E) parallel to lines(B,C) out by 1"
Once this is done you can cut along lines (D,C) and notch in about 1" as shown above.
Here ( above ) you should have something that looks like this once you cut out the sides that you don't need.
The next thing you need to do is cross brake all three sides (This helps reduce noise in the ductwork) You should cross brake any duct larger than 8".
To make the top of the fitting you must find the true length, being line (B,C) plus the 2" extra for the s-slips. This piece will be 14"wide plus the allowance for the 1/4" or size you need depending
on what type of connection your using. (c) 2001 The Sheetmetal Shop. Reproduced with kind permission.
Copyright 2001 TheSheetmetalShop.Com | {"url":"http://sheetmetalworld.com/sheet-metal-news/fabrication-tutorials/22-sheet-metal-tutorials/5963-learn-how-to-layout-simple-transitions","timestamp":"2014-04-18T13:06:50Z","content_type":null,"content_length":"19269","record_id":"<urn:uuid:7d1b92b9-4fc6-417f-afce-5a9bb655d697>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00095-ip-10-147-4-33.ec2.internal.warc.gz"} |
, 1993
"... ion elimination Definition 3.1. A monoidal category where every object has a commutative comonoid structure is said to be semi-cartesian. An action category is a K\Omega -category with a
distinguished admissible commutative comonoid structure on every object. A semi-cartesian category is cartesi ..."
Cited by 21 (9 self)
Add to MetaCart
ion elimination Definition 3.1. A monoidal category where every object has a commutative comonoid structure is said to be semi-cartesian. An action category is a K\Omega -category with a
distinguished admissible commutative comonoid structure on every object. A semi-cartesian category is cartesian if and only if each object carries a unique comonoid structure, and such structures
form two natural families, \Delta and !. The naturality means that all morphisms of the category must be comonoid homomorphisms. In action categories, the property of semi-cartesianness is fixed as
structure: on each object, a particular comonoid structure is chosen. This choice may be constrained by some given graphic operations, with respect to which the structures must be admissible. The
proof of proposition 2.6 shows that such structures determine the abstraction operators, and are determined by them. This is the essence of the equivalence of action categories and action calculi. As
the embodiment of 2...
"... Quantum algorithms are sequences of abstract operations, performed on non-existent computers. They are in obvious need of categorical semantics. We present some steps in this direction,
following earlier contributions of Abramsky, Coecke and Selinger. In particular, we analyze function abstraction i ..."
Cited by 2 (2 self)
Add to MetaCart
Quantum algorithms are sequences of abstract operations, performed on non-existent computers. They are in obvious need of categorical semantics. We present some steps in this direction, following
earlier contributions of Abramsky, Coecke and Selinger. In particular, we analyze function abstraction in quantum computation, which turns out to characterize its classical interfaces. Some quantum
algorithms provide feasible solutions of important hard problems, such as factoring and discrete log (which are the building blocks of modern cryptography). It is of a great practical interest to
precisely characterize the computational resources needed to execute such quantum algorithms. There are many ideas how to build a quantum computer. Can we prove some necessary conditions? Categorical
semantics help with such questions. We show how to implement an important family of quantum algorithms using just abelian groups and relations. | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=8035117","timestamp":"2014-04-20T21:23:03Z","content_type":null,"content_length":"15019","record_id":"<urn:uuid:f2b74012-273e-4ab8-b3f3-fb2e9fc19b67>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00102-ip-10-147-4-33.ec2.internal.warc.gz"} |
the first resource for mathematics
Global existence of small solutions to the Davey-Stewartson and the Ishimori systems.
(English) Zbl 0827.35120
Summary: We study the initial-value problems for the Davey-Stewartson systems and the Ishimori equations. Elliptic-hyperbolic and hyperbolic-elliptic cases were treated by the inverse scattering
techniques. Elliptic-elliptic and hyperbolic-elliptic cases were studied without the use of the inverse scattering techniques. Existence of a weak solution to the Davey- Stewartson systems for the
elliptic-hyperbolic case was also obtained in [J. M. Ghidaglia and J. C. Saut, Nonlinearity 3, No. 2, 475- 506 (1990; Zbl 0727.35111)] with a smallness condition on the data in ${L}^{2}$ and a
blow-up result was also obtained for the elliptic-elliptic case. By using the sharp smoothing property of solutions to the linear Schrödinger equations the local existence of a unique solution to the
Davey-Stewartson systems for the elliptic-hyperbolic and hyperbolic- hyperbolic cases was established in [F. Linares and G. Ponce, Ann. Inst. Henri Poincaré, Anal. nonlinéaire 10, No. 5, 523-548
(1993; Zbl 0807.35136)] in the usual Sobolev spaces with a smallness condition on the data.
We prove the local existence of a unique solution to the Davey-Stewartson systems for the elliptic-hyperbolic and hyperbolic-hyperbolic cases in some analytic function spaces without a smallness
condition on the data. Furthermore we prove existence of global small solutions of these equations for the elliptic-hyperbolic and hyperbolic-hyperbolic cases in some analytic function spaces.
35Q55 NLS-like (nonlinear Schrödinger) equations
35D05 Existence of generalized solutions of PDE (MSC2000)
35E15 Initial value problems of PDE with constant coefficients
76B15 Water waves, gravity waves; dispersion and scattering, nonlinear interaction | {"url":"http://zbmath.org/?q=an:0827.35120","timestamp":"2014-04-18T05:45:08Z","content_type":null,"content_length":"22024","record_id":"<urn:uuid:26c90b9b-725b-4791-b361-a3c544d3e44d>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00582-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Computing Resources
Math Applications
The following programs are installed on all of the computers in the Math Center (BH 211A) and the Math Computer Lab (BH 215). Clicking on the program name will direct you to documentation or
tutorials for that program.
Gap 4r4 Geometer's Sketchpad 5 IBM SPSS Statistics 19
LaTeX Lindo 6.1 Maple 8
Mathematica 5.2 and 8 Matlab R2011a Minitab 16
Non Euclid R Spherical Easel
Math Computing on the Web
SAGE a free, open-source mathematics software system
Wolfram Alpha a mathematical search engine by the makers of Mathematica
More about LaTeX
LaTeX is the document preparation software used by most mathematicians (and many other scientists) to write journal articles and textbooks. It creats accurate and elegant renderings of any
mathematical expression you might wish to type. To create a document, you write a source file using a markup language similar to html, and then compile it into a PDF. While learning LaTeX may seem
daunting at first, there are many resources available to help you, and it is well worth the time to learn!
LaTeX is freely available on the web. If you would like to install it on your own computer, you will need to install both a TeX distribution and a text editor. The LaTeX Project site contains
information about how to download and install LaTeX for various operating systems. Texmaker is a free text editor.
The following introductory documents were created by Brian Schiller, one of our Math Fellows: | {"url":"http://www.wwu.edu/math/math_resources/computing.shtml","timestamp":"2014-04-16T05:47:47Z","content_type":null,"content_length":"14918","record_id":"<urn:uuid:9360b67c-c268-415d-be8c-4bc07622476c>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00648-ip-10-147-4-33.ec2.internal.warc.gz"} |
Geomtric sequence question
April 27th 2008, 11:49 PM #1
Junior Member
Mar 2008
Geomtric sequence question
Hey guys just started a new subject in maths today on geometric sequence
Any way here is the question I am having trouble with..
Find the value of x such that 3 - x; x; 2 - x are successive terms of a G.S and state the value of common ratio, r.
If $3-x\, ,x\, ,2-x$ are in GP, then
$x^2 = (3-x)(2-x)$
$\Rightarrow x^2 = 6 - 5x + x^2$
$\Rightarrow x = \frac65$
So the common ratio is $\frac{x}{3-x} = \frac{\frac65}{3-\frac65} = \frac{6}{15-6} = \frac23$
well geometric progression has the form of:
the first term:[tex] ar^0 = a
the second term:[tex] ar^1 = ar
the third term: $ar^2 = ar^2$
to obtain r we divide a term with its previous term,
$\frac{ar^n}{ar^{n-1}} = r$
$\frac{2ndterm}{1stterm} = \frac {ar}{a}$
and this is true through out the progression
$a_n = ..., ~3-x , ~x , ~2 - x,~...$
If this is a geometric sequence, every new value will be (previous . r).
$a_{n+1} = r\cdot a_n$
So, $2-x = x \cdot r$
$x = (3-x)\cdot r$
Then, we can write $r = \frac{2-x}{x} = \frac{x}{3-x}$
Now just solve it. Is it OK so far? This is the same as what isomorphism did, I only wanted to explain because I saw that you've just started this topic.
Now let's find a rule for these kinds of questions.
If $a$ is a term of a geometric sequence, the next terms will be $a\cdot r$ and $a \cdot r^2$.
$a,~a.r, ~a.r^2$
You can easily see that $(a)\cdot (a.r^2) = (a.r)^2$.
So, $a_{n-1} \cdot a_{n+1} = a_n^2$
Hello, smplease!
Find the value of $x$ such that $3 - x,\; x,\; 2 - x$ are successive terms of a G.S
State the value of common ratio, $r.$
From the definition of the common ratio, we have: . $\begin{array}{cccc}\dfrac{x}{3-x} &=& r & {\color{blue}[1]} \\ \\ [-3mm] \dfrac{2-x}{x} &=& r & {\color{blue}[2]} \end{array}$
Equate [1] and [2]: . $\frac{x}{3-x} \:=\:\frac{2-x}{x}\quad\Rightarrow\quad\boxed{ x \:=\:\frac{6}{5}}$
Substitute into [2]: . $r \:=\:\frac{2-\frac{6}{5}}{\frac{6}{5}}\quad\Rightarrow\quad\box ed{ r \:=\:\frac{2}{3}}$
April 28th 2008, 12:00 AM #2
April 28th 2008, 12:02 AM #3
Mar 2008
http://en.wikipedia.org/wiki/Malaysia now stop asking me where is malaysia...
April 28th 2008, 12:09 AM #4
April 28th 2008, 03:18 PM #5
Super Member
May 2006
Lexington, MA (USA) | {"url":"http://mathhelpforum.com/algebra/36342-geomtric-sequence-question.html","timestamp":"2014-04-21T11:22:39Z","content_type":null,"content_length":"47800","record_id":"<urn:uuid:cdc962a2-011b-4429-be74-dd87935d8423>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00133-ip-10-147-4-33.ec2.internal.warc.gz"} |
An Introduction to Modern Astrophysics 2nd Edition Chapter 26 Solutions | Chegg.com
(a) Volume of each star is,
Number density of stars in the disk is,
So, the fraction of the disk occupied is the product of the volume of each star and the number density of stars in it.
Use the equations (1) and (2) to get,
Remember the radius of an M-main sequence star is | {"url":"http://www.chegg.com/homework-help/an-introduction-to-modern-astrophysics-2nd-edition-chapter-26-solutions-9780805304022","timestamp":"2014-04-21T08:07:09Z","content_type":null,"content_length":"37827","record_id":"<urn:uuid:ca2ea5e8-f16d-4f3b-b98e-e6fbd68af4c6>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00393-ip-10-147-4-33.ec2.internal.warc.gz"} |
SPOJ.com - Problem AGSCHALL
SPOJ Problem Set (challenge)
11451. Aritho-geometric Series (AGS) (Challenge)
Problem code: AGSCHALL
Arithmetic and geometric Progressions are 2 of the well known progressions in maths.
Arithmetic progression(AP) is a set in which the difference between 2 numbers in constant. for eg, 1,3,5,7,9 .... In this series the difference between 2 numbers is 2.
Geometric progression(GP) is a set in which the ratio of 2 consecutive numbers is same. for eg, 1,2,4,8,16.... In this the ratio of the numbers is 2.
What if there is a series in which we multiply a(n) by 'r' to get a(n+1) and then add 'd' to a(n+1) to get a(n+2)...
For eg .. lets say d=1 and r=2 and a(1) = 1..
series would be 1,2,4,5,10,11,22,23,46,47,94,95,190 ......
We add d to a(1) and then multiply a(2) with r and so on ....
Your task is, given 'a' , 'd' & 'r' to find the a(n) term .
since the numbers can be very large , you are required to print the numbers modulo 'mod' - mod will be supplied int the test case.
first line of input will have number 't' indicating the number of test cases.
each of the test cases will have 2 lines
firts line will have 3 numbers 'a' ,'d' and 'r'
2nd line will have 2 numbers 'n' & 'mod'
a- first term of the AGS
d-the difference element
r - the ratio element
n- nth term required to be found
mod- need to print the result modulo mod
For each test case print "a(n)%mod" in a separate line.
Description - for the first test case the series is 1,2,4,5,10,11,22,23,46,47,94,95,190..
13th term is 190 and 190%7 = 1
Note - the value of a , d , r , n & mod will be less than 10^8 and more than 0.
for every series 2nd term will be a+d and third term will be (a+d)*r .. and so on ..
Added by: Devil D
Date: 2012-04-24
Time limit: 1s
Source limit: 10000B
Memory limit: 256MB
Cluster: Pyramid (Intel Pentium III 733 MHz)
Languages: BF C C# C++ 4.3.2 C++ 4.0.0-8 C99 strict LISP sbcl D FORT ICON ICK JAR JAVA JS LUA NEM NICE NODEJS PRLG SCALA SCM guile SCM qobi SED ST TCL WSPC
Resource: Own
hide comments
2013-04-13 18:58:01 (Tjandra Satria Gunawan)(曾毅昆)
it's hard to shorten the ~400B code from my old submission to ~200B @_@ take about 2 hours...
2013-04-13 18:58:01 Aditya Pande
nice problem
Last edit: 2012-10-23 09:00:34
2013-04-13 18:58:01 piyush agarwal
any tricky case please??
2013-04-13 18:58:01 demacek
Any reason for that strange language restriction?
people knowing python,ruby,C & perl have advantage .
Wanted to allow pascal people to have a chance
2013-04-13 18:58:01 numerix
Any reason for that strange language restriction?
people knowing python,ruby & perl have advantage . Wanted to allow C people to have a chance
Re( Xeronix ) : Then allow C only. Because i believe Go, pike, PHP users also have the advantages you are referring to.
Done .....
Last edit: 2012-04-27 05:39:11
2013-04-13 18:58:01 XeRoN!X
@Author, set Assessment type = minimise score.
Last edit: 2012-04-25 18:32:01 | {"url":"http://www.spoj.com/problems/AGSCHALL/","timestamp":"2014-04-20T08:14:43Z","content_type":null,"content_length":"22934","record_id":"<urn:uuid:939cdd5d-9227-4d37-9ac9-57629f433cf3>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00385-ip-10-147-4-33.ec2.internal.warc.gz"} |
Recharge Rates and Aquifer Hydraulic Characteristics for Selected Drainage Basins in Middle and East Tennessee
U.S. Geological Survey, Water-Resources Investigations Report 90-4015
by Anne B. Hoos
This report is available as a pdf below
Quantitative information concerning aquifer hydrologic and hydraulic characteristics is needed to manage the development of ground-water resources. These characteristics are poorly defined for the
bedrock aquifers in Middle and East Tennessee where demand for water is increasing. This report presents estimates of recharge rate, storage coefficient, diffusivity, and transmissivity for
representative drainage basins in Middle and East Tennessee, as determined from analyses of stream-aquifer interactions. The drainage basins have been grouped according to the underlying major
aquifer, then statistical descriptions applied to each group, in order to define area1 distribution of these characteristics.
Aquifer recharge rates are estimated for representative low, average, and high flow years for 63 drainage basins using hydrograph analysis techniques. Net annual recharge during average flow years
for all basins ranges from 4.1 to 16.8 in/yr (inches per year), with a mean value of 7.3 in. In general, recharge rates are highest for basins underlain by the Blue Ridge aquifer (mean value11.7 in/
yr) and lowest for basins underlain by the Central Basin aquifer (mean value 5.6 in/yr). Mean recharge values for the Cumberland Plateau, Highland Rim, and Valley and Ridge aquifers are 6.5, 7.4, and
6.6 in/yr, respectively.
Gravity drainage characterizes ground-water flow in most surficial bedrock aquifer in Tennessee. Accordingly, a gravity yield analysis, which compares concurrent water-level and streamflow
hydrographs, was used to estimate aquifer storage coefficient for nine study basins. The basin estimates range from 0.002 to 0.140; however, most estimates are within a narrow range of values, from
0.01 to 0.025. Accordingly, storage coefficient is estimated to be 0.01 for all aquifers in Middle and East Tennessee, with the exception of the aquifer in the inner part of the Central Basin, for
which storage coefficient is estimated to be 0.002.
Estimates of aquifer hydraulic diffusivity are derived from estimates of the streamflow recession index and drainage density for 75 drainage basins; values range from 3,300 to 130,000 ft^2/d (feet
squared per day). Basin-specific and site-specific estimates of transmissivity are computed from estimates of hydraulic diffusivity and specific-capacity test data, respectively. Basin-specific, or
areal, estimates of transmissivity range from 22 to 1,300 ft^2/d, with a mean of 240 ft^2/d In general, areal transmissivity is highest for basins underlain by the Cumberland Plateau aquifer (mean
value 480 ft^2/d) and lowest for basins underlain by the Central Basin aquifer (mean value 79 ft^2/d). Mean transmissivity values for the Highland Rim, Valley and Ridge, and Blue Ridge aquifer are
320,140, and 120 ft^2/d respectively. Site-specific estimates of transmissivity, computed from specific-capacity data from 118 test wells in Middle and East Tennessee range from 2 to 93,000 ft^2/d
with a mean of 2,600 ft^2/d Mean transmissivity values for the Cumberland Plateau, Highland Rim, Central Basin, Valley and Ridge, and Blue Ridge aquifers are 2,800,1,200, 7,800, 390, and 65Oft Id, | {"url":"http://pubs.usgs.gov/wri/wri90-4015/","timestamp":"2014-04-20T08:14:48Z","content_type":null,"content_length":"6540","record_id":"<urn:uuid:88803853-864f-425e-9dd4-0758004d625f>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00080-ip-10-147-4-33.ec2.internal.warc.gz"} |
Day-Trading 2.0 for small traders - Elite Trader
Jan 5th,
2008, 07:35 #1
Join Date: A couple of weeks ago I started a thread (“going back to the basics”) but I am moving it to elitetrader ‘cause I think a broader audience can enrich its content. Feel free to post
Oct 2006 any NON COMMERCIAL HONEST suggestions, ideas, questions or comments.
Posts: 359 The beginning …
“In the last few months its amazing how many posts I read of struggling scalpers trying “to learn” how to use LT rainbow (is a scalping method of multiple MA). All or at least most
of them share in my opinion one thing in common that it has nothing to do with the LT method: they don’t understand the basics of trading or even if they do, they underestimate its
Maybe the following posts are because I used to have these problems when I start trading (maybe I still have some?) or maybe because I need to socially reaffirm my beliefs. Whatever
the case, for me the following concepts are always useful (I hope also for someone else), especially for regaining objectivity when I start losing perspective after a couple of bad
trading days or after a round of outstanding trading weeks …
So let me start with 3 basic key concepts Direction, Timing and Momentum.
Jan 5th,
2008, #2
07:38 AM
Join Date: “Trading is just a probability game based on pattern recognition” (taken from italianfx email)
Oct 2006
If every potential trade is a probability game, the job for a trader/scalper is not forecast the future but to minimize risk using every available tool to find the best available
Posts: 359 scenario.
1. The importance of order in the analysis. In mathematics the order of the factors doesnt changes the result (5 x 8 = 40 and 8 x 5 = 40). However, when trading the order of analysis
change everything.
a. If you only focus in how to pull the trigger (“timing” i.e LT rainbow, Stoch Crossover, MA Crosses, CCI, etc ,etc ,etc ) you can have some results especially in trendy markets, but
you are doom to fail in the long run.
b. If you only focus in “direction” you will have 95% hit rate but only in your head ‘cause u will end up getting stopped out every single time, full of loses because whipsaws or
caught with million doubts every time a S/R or trendline is touch or broken.
But before analysing DIRECTION, TIMING and MOMENTUM...
2. Price moves in Waves. Regardless of the instrument and timeframe and despite of the market direction or its condition (in a trend or in a range) markets always move in waves. That’s
why the Elliott Wave Theory and its followers. But we scalpers are not interested in counting waves or forecasting the next possible move but in minimizing the probabilities of a bad
a.What we don’t know about waves:
i.The exact beginning or end of a wave
ii.The potential height of the next wave
b.What we know:
i. Where we have less probabilities for a successful trade regardless of the direction or condition of the market
c. Have u ever wonder why you end up trading the death lows or highs in a trend (in Rainbow terms even if is a clear trend and the spine is align and the flame just broke the last H
line…) … Your analysis might be right, the direction, condition and timing might be good but you are playing a low probability trade because you are forecasting that next wave is going
to be a lot deeper than the previous one when statistically this is not true.
3. The first factor for a scalper is to understand in which part of a wave is the market. Not using a statistical method but using common sense (sadly the least common of the senses).
Before you analyze trend S/R lines, Direction, Timing, Momentum and despite the drawdowns, etc a trader should ask himself in which part of the wave is the market. The only rule of
thumb is the later you enter a new wave the less probabilities you will have to be successful. (this a BP example in different timeframes smoothed with a LSMA for explicative
4. Look at the EUR/USD daily forex example below. If we have just traded without any further analysis the beginning of a new green wave we would have make millions. On the contrary if
you have waited until price breaks resistance you still will have make some pips (because is a big trend) but you are not playing high probability trades… I can post 1 millions of
example like this in any timeframe but my point is not that you should focus only on waves but that you, FIRST have to recognize where not to place a trade even if it follow the
direction of the market.
Jan 5th,
2008, 07:45 #3
Join Date: “Trading is just a probability game based on pattern recognition”
Oct 2006
Forgetting waves in the analysis definitively reduce the probability of good trades but is the analysis of direction where everything gets messed up.
Posts: 359
1. Markets are always going in a direction even if your timeframe is showing choppy action. The problem arises because the analysis of direction MUST be consistent with what, where
and when are u trading. I used to make the mistake of ploting trendlines in a 5 sec chart that corresponded to 5 min S/R level (without knowing), there are even people that use hourly
and even daily trendlines for scalping 5 secs charts. And you wonder why you aren’t consistent!!!
2. If you are in this forum is because +/- you consider yourself a scalper. I do not use the traditional LT rainbow for scalping, nevertheless I strongly adhere to LT principles which
are usually also forgotten in the analysis in favor of the “technicalities” of the rainbow. A quick reminder of 3 of them:
a. Identify the time and you are done for the day within 30-60min. meaning stop trying to catch every single move during a day, doing that will reduce a lot the probabilities of
placing good trades. In terms of direction: we only need to find the direction of these 30 or 60 minutes no more!!!
b. Identify the direction of the current move you see on the screen. For me, that’s the most important thing you will ever learn in scalping and sadly was the last thing I paid
attention to.
3. If “Trading is just a probability game based on pattern recognition” and you are +/-scalping (= meaning an average of 1 to 6 pips MAX per trade) and not any other type of trading,
a trader should recognize two very important but different concepts:
a. The direction for the current time (“macro” direction) and;
b. The direction of the current move (“wave” direction)
I ‘ll continue later
Jan 5th, 2008, 08:58 AM #4
Join Date: Dec 2007 Good post.
Location: Atlanta, GA What are LT Rainbows exactly?
Posts: 194 I
Jan 5th, 2008, #5
09:23 AM
Join Date: Oct Icarus5,
According to his author is a method for scalping very fast timeframes. In general you plot many Weighted Moving Averages (10,20,30 to 240 WMA) and you use Horizontal and Vertical
Posts: 359 Grids to trade... However, for me it is just a visual aid (I dont use it anymore...).
If you are intereseted, this is his original thread
Because of its popularity now he has his own forum:
Quote from Icarus5:
Good post.
What are LT Rainbows exactly?
Jan 5th, 2008, #6
09:33 AM
Join Date: Dec Very nice, thank you.
2007 **
Another question (just to make sure I understand were you are coming from).
Atlanta, GA Do you advocate scalping in the direction of a "trend"?
Posts: 194 For intra-day moves (mainly financial indice) there is a lot of back-and-forth in price action. And while trends do develop, there is also a substantial amount of money that can
be made from, say, having a dual confirmation type of technique and catching the ebb-and-flow of the waves.
Conduct Rules Privacy Policy Sitemap Copyright © 2014, Elite Trader. All rights reserved.
WHILE YOU'RE HERE, TAKE A MINUTE TO VISIT SOME OF OUR SPONSORS:
Futures Trading & Clearing Professional Trading Firm Direct Access Trading Direct Access Trading
Proprietary Trading Education Trading Software Provider Trading Software Provider Spread Trading Instruction
Futures and FX Trading Forex Trading Services Trading Software Provider Direct Access Trading
Professional Equities Trading Futures, Options & FX Trading Currency Trading System Building & Backtesting
Futures Trading Software Pro Gateway to World Markets Option Trading & Education Trading Software Provider
Professional Trading Analytics Direct Access Trading Futures Trade Execution Platform Trading Systems Provider | {"url":"http://www.elitetrader.com/vb/showthread.php?threadid=113456&perpage=6&pagenumber=1","timestamp":"2014-04-18T05:31:02Z","content_type":null,"content_length":"52687","record_id":"<urn:uuid:a0210ddd-d55b-45ca-839e-7c78f5ce58f2>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00543-ip-10-147-4-33.ec2.internal.warc.gz"} |
Maintainer diagrams-discuss@googlegroups.com
Paths in two dimensions are special since we may stroke them to create a 2D diagram, and (eventually) perform operations such as intersection and union.
Constructing path-based diagrams
stroke :: Renderable (Path R2) b => Path R2 -> Diagram b R2Source
Convert a path into a diagram. The resulting diagram has the names 0, 1, ... assigned to each of the path's vertices.
See also stroke', which takes an extra options record allowing its behavior to be customized.
Note that a bug in GHC 7.0.1 causes a context stack overflow when inferring the type of stroke. The solution is to give a type signature to expressions involving stroke, or (recommended) upgrade GHC
(the bug is fixed in 7.0.2 onwards).
stroke' :: (Renderable (Path R2) b, Atomic a) => StrokeOpts a -> Path R2 -> Diagram b R2Source
A variant of stroke that takes an extra record of options to customize its behavior. In particular:
• Names can be assigned to the path's vertices
StrokeOpts is an instance of Default, so stroke' with { ... } syntax may be used.
strokeT :: Renderable (Path R2) b => Trail R2 -> Diagram b R2Source
A composition of stroke and pathFromTrail for conveniently converting a trail directly into a diagram.
Note that a bug in GHC 7.0.1 causes a context stack overflow when inferring the type of stroke and hence of strokeT as well. The solution is to give a type signature to expressions involving strokeT,
or (recommended) upgrade GHC (the bug is fixed in 7.0.2 onwards).
data StrokeOpts a Source
A record of options that control how a path is stroked. StrokeOpts is an instance of Default, so a StrokeOpts records can be created using with { ... } notation.
vertexNames :: [[a]]
Atomic names that should be assigned to the vertices of the path so that they can be referenced later. If there are not enough names, the extra vertices are not assigned names; if there are too
many, the extra names are ignored. Note that this is a list of lists of names, since paths can consist of multiple trails. The first list of names are assigned to the vertices of the first trail,
the second list to the second trail, and so on.
The default value is the empty list.
Inside/outside testing
isInsideWinding :: P2 -> Path R2 -> BoolSource
Test whether the given point is inside the given (closed) path, by testing whether the point's winding number is nonzero. Note that False is always returned for open paths, regardless of the winding
isInsideEvenOdd :: P2 -> Path R2 -> BoolSource
Test whether the given point is inside the given (closed) path, by testing whether a ray extending from the point in the positive x direction crosses the path an even (outside) or odd (inside) number
of times. Note that False is always returned for open paths, regardless of the number of crossings.
newtype Clip Source
Clip tracks the accumulated clipping paths applied to a diagram. Note that the semigroup structure on Clip is list concatenation, so applying multiple clipping paths is sensible. The clipping region
is the intersection of all the applied clipping paths.
getClip :: [Path R2]
Typeable Clip
Semigroup Clip
AttributeClass Clip
Transformable Clip
clipBy :: (HasStyle a, V a ~ R2) => Path R2 -> a -> aSource
Clip a diagram by the given path:
• Only the parts of the diagram which lie in the interior of the path will be drawn.
• The bounding function of the diagram is unaffected. | {"url":"http://hackage.haskell.org/package/diagrams-lib-0.3/docs/Diagrams-TwoD-Path.html","timestamp":"2014-04-21T03:38:14Z","content_type":null,"content_length":"17739","record_id":"<urn:uuid:5a34e41c-0b1a-42cf-a767-8b371c76e06f>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00160-ip-10-147-4-33.ec2.internal.warc.gz"} |
Mathematical Reasoning for Elementary Teachers 4th Edition | 9780321286963 | eCampus.com
List Price: [S:$142.67:S]
In Stock Usually Ships in 24 Hours.
Questions About This Book?
Why should I rent this book?
Renting is easy, fast, and cheap! Renting from eCampus.com can save you hundreds of dollars compared to the cost of new or used books each semester. At the end of the semester, simply ship the book
back to us with a free UPS shipping label! No need to worry about selling it back.
How do rental returns work?
Returning books is as easy as possible. As your rental due date approaches, we will email you several courtesy reminders. When you are ready to return, you can print a free UPS shipping label from
our website at any time. Then, just return the book to your UPS driver or any staffed UPS location. You can even use the same box we shipped it in!
What version or edition is this?
This is the 4th edition with a publication date of 1/1/2006.
What is included with this book?
• The Used copy of this book is not guaranteed to inclue any supplemental materials. Typically, only the book itself is included.
• The Rental copy of this book is not guaranteed to include any supplemental materials. You may receive a brand new copy, but typically, only the book itself.
The fourth edition of Mathematical Reasoning has an increased focus on professional development and connecting the material from this class to the elementary and middle school classroom. The authors
have provided more meaningful content and pedagogy to arm readers with all the tools that they will need to become excellent elementary or middle school teachers. Thinking Critically. Sets and Whole
Numbers. Numeration and Computation. Number Theory. Integers. Fractions and Rational Numbers. Decimals and Real Numbers. Algebraic Reasoning and Representation. Statistics: The Interpretation of
Data. Probability. Geometric Figures. Measurement. Transformations, Symmetries, and Tilings. Congruence, Constructions, and Similarities. For all readers interested in mathematical reasoning for
elementary teachers.
Table of Contents
Thinking Critically
Some Surprising Tidbits
An Introduction to Problem Solving
Pólya's Problem-Solving Principles
More Problem-Solving Strategies
Additional Problem-Solving Strategies
Sets and Whole Numbers
Sets and Operations on Sets
Sets, Counting, and the Whole Numbers
Addition and Subtraction of Whole Numbers
Multiplication and Division of Whole Numbers
Numeration and Computation
Numeration Systems Past and Present
Nondecimal Positional Systems
Algorithms for Adding and Subtracting Whole Numbers
Algorithms for Multiplication and Division of Whole Numbers
Mental Arithmetic and Estimation
Getting the Most Out of Your Calculator
Number Theory
Divisibility of Natural Numbers
Tests for Divisibility
Greatest Common Divisors and Least Common Multiples
Codes and Credit Card Numbers: Connections to Number Theory
Representation of Integers
Addition and Subtraction of Integers
Multiplication and Division of Integers
Clock Arithmetic
Fractions and Rational Numbers
The Basic Concepts of Fractions and Rational Numbers
The Arithmetic of Rational Numbers
The Rational Number System
Decimals and Real Numbers
Computations with Decimals
Ratio and Proportion
Algebraic Reasoning and Representation
Algebraic Expressions and Equations
Graphing Functions in the Cartesian Plane
Statistics: The Interpretation of Data
The Graphical Representation of Data
Measures of Central Tendency and Variability
Statistical Inference
Empirical Probability
Principles of Counting
Theoretical Probability
Geometric Figures
Figures in the Plane
Curves and Polygons in the Plane
Figures in Space
The Measurement Process
Area and Perimeter
The Pythagorean Theorem
Surface Area and Volume
Transformations, Symmetries, and Tilings
Rigid Motions and Similarity Transformations
Patterns and Symmetries
Tilings and Escher-like Designs
Congruence, Constructions, and Similarities
Congruent Triangles
Constructing Geometric Figures
Similar Triangles
Manipulatives in the Mathematics Classroom
Graphing Calculators
A Brief Guide to The Geometer's Sketchpad
Answers to Selected Problems
Mathematical Lexicon
Index and Pronunciation Guide
Table of Contents provided by Publisher. All Rights Reserved. | {"url":"http://www.ecampus.com/mathematical-reasoning-elementary-teachers/bk/9780321286963","timestamp":"2014-04-17T01:07:39Z","content_type":null,"content_length":"52960","record_id":"<urn:uuid:f5cc8bcc-884c-4b09-a9c2-4380e26f8d03>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00109-ip-10-147-4-33.ec2.internal.warc.gz"} |
Northbrook Prealgebra Tutor
Find a Northbrook Prealgebra Tutor
My name is Melissa and I love helping students achieve success in Mathematics. During my time at Carroll University, I tutored for 3 of my 4 years, both on and off campus. During my final year
there, I was tutoring the majority of the courses we offered, on top of being the student supervisor of the Math Commons, our tutoring center.
11 Subjects: including prealgebra, calculus, geometry, algebra 1
...Other topics in which I am well versed are formulation of proofs, which is a major component of most discrete math courses, as well as introductory logic. I've been programming ever since I was
a child (1990 or so, I was 9 years old). I began programming in GWBASIC, and graduated to more complex...
22 Subjects: including prealgebra, calculus, computer programming, ACT Math
...Definitions, Postulates, Theorems, and Proofs meets the world of polygons and circles. By now, you know you can figure out answers, but do you know *why* those answers are right? Can you break
it down and provide evidence at each step?
14 Subjects: including prealgebra, geometry, ASVAB, GRE
...Before that I spent 4 years working as the head tutor at a non profit designed to prepare students for college, master entrance exams, and excel academically. I am available for tutoring across
the board and for career planning, interview preparation, and public speaking. Best,NnekaI am a certified and insured 200 RYT yoga instructor through the Yoga Alliance.
33 Subjects: including prealgebra, reading, geometry, biology
...I have my bachelors in engineering and I can help you improve your grades and even score better in an exam. I have worked with high school students as well as college students. I have helped
students excel in various exams.
14 Subjects: including prealgebra, geometry, GRE, algebra 1 | {"url":"http://www.purplemath.com/northbrook_prealgebra_tutors.php","timestamp":"2014-04-19T05:19:51Z","content_type":null,"content_length":"24174","record_id":"<urn:uuid:14f7a51f-e785-4f03-91da-be5888953917>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00328-ip-10-147-4-33.ec2.internal.warc.gz"} |
Bayesian model checking using tail area probability
- SOCIOLOGICAL METHODOLOGY 1995, EDITED BY PETER V. MARSDEN, CAMBRIDGE,; MASS.: BLACKWELLS. , 1995
"... It is argued that P-values and the tests based upon them give unsatisfactory results, especially in large samples. It is shown that, in regression, when there are many candidate independent
variables, standard variable selection procedures can give very misleading results. Also, by selecting a singl ..."
Cited by 253 (19 self)
Add to MetaCart
It is argued that P-values and the tests based upon them give unsatisfactory results, especially in large samples. It is shown that, in regression, when there are many candidate independent
variables, standard variable selection procedures can give very misleading results. Also, by selecting a single model, they ignore model uncertainty and so underestimate the uncertainty about
quantities of interest. The Bayesian approach to hypothesis testing, model selection and accounting for model uncertainty is presented. Implementing this is straightforward using the simple and
accurate BIC approximation, and can be done using the output from standard software. Specific results are presented for most of the types of model commonly used in sociology. It is shown that this
approach overcomes the difficulties with P values and standard model selection procedures based on them. It also allows easy comparison of non-nested models, and permits the quantification of the
evidence for a null hypothesis...
- Stat. Sinica , 1996
"... Abstract: In applications, statistical models are often restricted to what produces reasonable estimates based on the data at hand. In many cases, however, the principles that allow a model to
be restricted can be derived theoretically, in the absence of any data and with minimal applied context. We ..."
Cited by 9 (2 self)
Add to MetaCart
Abstract: In applications, statistical models are often restricted to what produces reasonable estimates based on the data at hand. In many cases, however, the principles that allow a model to be
restricted can be derived theoretically, in the absence of any data and with minimal applied context. We illustrate this point with three well-known theoretical examples from spatial statistics and
time series. First, we showthatan autoregressive model for local averages violates a principle of invariance under scaling. Second, we showhowtheBayesian estimate of a strictly-increasing time
series, using a uniform prior distribution, depends on the scale of estimation. Third, we interpret local smoothing of spatial lattice data as Bayesian estimation and show why uniform local smoothing
does not make sense. In various forms, the results presented here have been derived in previous work � our contribution is to draw out some principles that can be derived theoretically, even though
in the past they may have been presented in detail in the context of speci c examples. Key words and phrases: ARMA, Bayesian statistics, conditional autoregression, image, scaling, sieve, spatial
smoothing, spatial statistics, time series.
- TEST , 1998
"... Recent computational advances have made it feasible to fit hierarchical models in a wide range of serious applications. If one entertains a collection of such models for a given data set, the
problems of model adequacy and model choice arise. We focus on the former. While model checking usually addr ..."
Cited by 8 (0 self)
Add to MetaCart
Recent computational advances have made it feasible to fit hierarchical models in a wide range of serious applications. If one entertains a collection of such models for a given data set, the
problems of model adequacy and model choice arise. We focus on the former. While model checking usually addresses the entire model specification, model failures can occur at each hierarchical stage.
Such failures include outliers, mean structure errors, dispersion misspecification, and inappropriate exchangeabilities. We propose another approach which is entirely simulation based. It only
requires the model specification and that, for a given data set, one be able to simulate draws from the posterior under the model. By replicating a posterior of interest using data obtained under the
model we can "see" the extent of variability in such a posterior. Then, we can compare the posterior obtained under the observed data with this medley of posterior replicates to ascertain whether the
former is in agr...
- Sociological Methodology , 1994
"... Introduction Raftery's paper addresses two important problems in the statistical analysis of social science data: (1) choosing an appropriate model when so much data are available that standard
P-values reject all parsimonious models; and (2) making estimates and predictions when there are not enou ..."
Cited by 3 (1 self)
Add to MetaCart
Introduction Raftery's paper addresses two important problems in the statistical analysis of social science data: (1) choosing an appropriate model when so much data are available that standard
P-values reject all parsimonious models; and (2) making estimates and predictions when there are not enough data available to fit the desired model using standard techniques. For both problems, we
agree with Raftery that classical frequentist methods fail and that Raftery's suggested methods based on BIC can point in better directions. Nevertheless, we disagree with his solutions because, in
principle, they are still directed off-target and only by serendipity manage to hit the target in special circumstances. Our primary criticisms of Raftery's proposals are that (1) he promises the
impossible: the selection of a model that is adequate for specific purposes without consideration of those purposes; and (2) he uses the same limited tool for model averaging as for model selection,
, 1996
"... In the linear model with unknown variances, one can often model the heteroscedasticity as var(y i ) = oe 2 f(w i ; `); where f is a fixed function, w i are the "weights" for the problem and ` is
an unknown parameter (f(w i ; `) = w \Gamma` i is a traditional choice). We show how to do a fully B ..."
Cited by 1 (0 self)
Add to MetaCart
In the linear model with unknown variances, one can often model the heteroscedasticity as var(y i ) = oe 2 f(w i ; `); where f is a fixed function, w i are the "weights" for the problem and ` is an
unknown parameter (f(w i ; `) = w \Gamma` i is a traditional choice). We show how to do a fully Bayesian computation in this simple linear setting and also for a hierarchical model. The full Bayesian
computation has the advantage that we are able to average over our uncertainty in ` instead of using a point estimate. We carry out the computations for a problem involving forecasting U.S.
Presidential elections, looking at different choices for f and the effects on both estimation and prediction. 1 Introduction In both the econometrics and statistics literature, a standard way to
model heteroscedasticity in regression is through a parametric model for the unequal variances, as described in many places, e.g. Amemiya (1985), Greene (1990), Judge et al. (1985), Carroll & Ruppert
(1988). M...
"... Introduction Markov chain simulation, and Bayesian ideas in general, allow a wonderfully flexible treatment of probability models. In this chapter, we discuss two related ideas: (1) checking the
fit of a model to data, and (2) improving a model by adding substantively meaningful parameters. Model i ..."
Add to MetaCart
Introduction Markov chain simulation, and Bayesian ideas in general, allow a wonderfully flexible treatment of probability models. In this chapter, we discuss two related ideas: (1) checking the fit
of a model to data, and (2) improving a model by adding substantively meaningful parameters. Model improvement by expansion is also an important technique in assessing the sensitivity of inferences
to untestable assumptions. We illustrate both these methods with an example of a mixture model fit to experimental data from psychology using the Gibbs sampler. Any Markov chain simulation is
conditional on an assumed probability model. As the applied chapters of this book illustrate, these models can be complicated and generally rely on inherently unverifiable assumptions. From a
practical standpoint, then, it is important to explore how inferences of substantive interest depend on the assumptions, and to test the assumptions where possible. 0.2 Model checking using posterior | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=1959698","timestamp":"2014-04-23T08:23:43Z","content_type":null,"content_length":"28231","record_id":"<urn:uuid:894df3b7-29bd-4cec-9c62-6845ef63424b>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00330-ip-10-147-4-33.ec2.internal.warc.gz"} |
Freed-Witten anomaly
String theory
Critical string models
Extended objects
Topological strings
Special and general types
Special notions
The effective bosonic exponentiated action functional of the type II superstring sigma-model for open strings ending on D-branes has three factors:
1. the higher holonomy of the background B-field over the string 2-dimensional worldsheet;
2. the ordinary holonomy of the Chan-Paton bundle on the D-brane along the boundary of the string;
3. the Berezinian path integral over the fermions.
Each single contribution is in general not a globally well defined function on the space of string configurations, instead each is a section of a possibly non-trivial line bundle over the
configuration space (the last one for instance of the Pfaffian bundle). Therefore the total action functional is a section of the tensor product of these three line bundles.
The non-triviality of this tensor product line bundle (as a line bundle with connection) is the Freed-Witten-Kapustin quantum anomaly. The necessary conditions for this anomaly to vanish, hence for
this line bundle to be trivializable, is the Freed-Witten anomaly cancellation condition.
More precisely, the naive holonomy of an ordinary Chan-Paton principal connection would be globally well defined. But in order to cancel the anomaly contribution from the other two factors, one may
take the Chan-Paton bundle to be a twisted bundle, the twist being the B-field restricted to the brane. Then its holonomy becomes anomalous, too, but there are then interesting configurations where
the product of all three anomalies cancels. This refined argument has been made precise by Kapustin, and so one should probably speak of the Freed-Witten-Kapustin anomaly cancellation.
We interpret the Freed-Witten-Kapustin mechanism in terms of push-forward in generalized cohomology in topological K-theory interpreted in terms of KK-theory with push-forward maps given by dual
morphisms between Poincaré duality C*-algebras (based on Brodzki-Mathai-Rosenberg-Szabo 06, section 7, Tu 06):
Let $i \colon Q \to X$ be a map of compact manifolds and let $\chi \colon X \to B^2 U(1)$ modulate a circle 2-bundle regarded as a twist for K-theory. Then forming twisted groupoid convolution
algebras yields a KK-theory morphism of the form
$C_{i^\ast \chi}(Q) \stackrel{i^\ast}{\longleftarrow} C_{\chi}(X) \,,$
with notation as in this definition. By this proposition the dual morphism is of the form
$C_{\frac{1}{i^\ast \chi \otimes W_3(T Q)}}(Q) \stackrel{i_!}{\longrightarrow} C_{\frac{1}{\chi \otimes W_3(T X)}}(X) \,.$
If we redefine the twist on $X$ to absorb this “quantum correction” as $\chi \mapsto \frac{1}{\chi \otimes W_3(T X)}$ then this is
$C_{i^\ast \chi\frac{W_3(i^\ast T X)}{W_3(T Q)}}(Q) \stackrel{i_!}{\longrightarrow} C_{\chi}(X) \,,$
where now we may interpret $\frac{W_3(i^\ast \tau_X)}{W_3(\tau_Q)}$ as the third integral Stiefel-Whitney class of the normal bundle $N Q$ of $i$ (see Nuiten).
Postcomposition with this map in KK-theory now yields a map from the $i^\ast \chi \otimes W_3(N Q)$-twisted K-theory of $Q$ to the $\chi$-twisted K-theory of $X$:
$i_! \colon K_{\bullet + W_3(N Q) + i^\ast \chi}(Q) \to K_{\bullet +\chi} \,.$
If we here think of $i \colon Q \hookrightarrow X$ as being the inclusion of a D-brane worldvolume, then $\chi$ would be the class of the background B-field and an element
$[\xi] \in K_{\bullet + W_3(N Q) + i^\ast \chi}(Q)$
is called (the K-class of) a Chan-Paton gauge field on the D-brane satisfying the Freed-Witten-Kapustin anomaly cancellation mechanism. (The orginal Freed-Witten anomaly cancellation assumes $\xi$
given by a twisted line bundle in which case it exhibits a twisted spin^c structure on $Q$.) Finally its push-forward
$[i_! \xi] \in K_{\bullet + \chi}(X)$
is called the corresponding D-brane charge.
The special case where the class of the restriction of the B-field to the D-brane equals the third integral Stiefel-Whitney class of the D-brane was discussed in
The generalization to the case that the two classes differ by a torsion class was considered in
Aspects of the interpretation of this by push-forward in generalized cohomology in twisted K-theory are formalized in
and section 10 of
• Matthew Ando, Andrew Blumberg, David Gepner, Twists of K-theory and TMF, in Robert S. Doran, Greg Friedman, Jonathan Rosenberg, Superstrings, Geometry, Topology, and $C^*$-algebras, Proceedings
of Symposia in Pure Mathematics vol 81, American Mathematical Society (arXiv:1002.3004)
(which discusses twists as (infinity,1)-module bundles).
The formulation by postcomposition with dual morphisms in KK-theory which we use above is based on the observations in section 7 of
and generalized to equivariant KK-theory in
A clean formulation and review is provided in
• Kim Laine, Geometric and topological aspects of Type IIB D-branes, Master thesis (arXiv:0912.0460)
In (Laine) the discussion of FW-anomaly cancellation with finite-rank gauge bundles is towards the very end, culminating in equation (3.41).
A discussion from the point of view of higher geometric quantization or extended prequantum field theory is at the end of
Lecture notes along these lines are in Lagrangians and Action functionals – 3d Chern-Simons theory of
The KK-theory-description of the FEK anomaly used above is discussed in | {"url":"http://ncatlab.org/nlab/show/Freed-Witten+anomaly","timestamp":"2014-04-20T03:18:01Z","content_type":null,"content_length":"60357","record_id":"<urn:uuid:c49ae271-c576-481c-b29f-18f4c43d1be9>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00583-ip-10-147-4-33.ec2.internal.warc.gz"} |
help with ti-89 calc
July 19th 2011, 04:15 AM #1
Jan 2011
help with ti-89 calc
is there a way to plug this into the ti-89?
Find G'(x) if g(x) = x^2 / 2x^3+x+1
Find F'(x) if f(x) = Sqr root ( 5x^2+1)
theres a small line in front of the g/f on my practice problems.
Re: help with ti-89 calc
Do you want your calculator calculate derivatives for you? Or? ...
I don't think the ti-89 can do that but I think it's able to calculate a derivative in a specific point, for example when you need the gradient of the tangent line in the point x=a to the curve.
Re: help with ti-89 calc
July 19th 2011, 06:58 AM #2
July 19th 2011, 07:34 PM #3
Grand Panjandrum
Nov 2005 | {"url":"http://mathhelpforum.com/calculators/184812-help-ti-89-calc.html","timestamp":"2014-04-17T18:55:45Z","content_type":null,"content_length":"35910","record_id":"<urn:uuid:fbcc969f-4ee3-43f2-bebb-f91c23499c47>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00226-ip-10-147-4-33.ec2.internal.warc.gz"} |
Paul Turán
Born: 18 August 1910 in Budapest, Hungary
Died: 26 September 1976 in Budapest, Hungary
Click the picture above
to see three larger pictures
Previous (Chronologically) Next Main Index
Previous (Alphabetically) Next Biographies index
Paul Turán's parents were Aranha Beck and Béla Turán. Paul Turán (or Turán Pál in Hungarian) was the eldest son having two brothers and one sister. The family were Jewish and so had to survive
through exceedingly difficult times, suffering discrimination and then violent anti-Semitism. Paul was a brilliant pupil at secondary school in Budapest, showing at this stage his remarkable
mathematical abilities.
Turán entered Pázmány Péter University of Budapest already showing his potential for research. Erdős writes [7]:-
We first met at the University of Budapest in September 1930 and immediately discovered our common interest in number theory.
In 1933 Turán was awarded his diploma which qualified him to teach mathematics and science, and he continued working for his doctorate. His first paper was published in 1933 and his next two papers,
both published in 1934, were very significant. One was A problem in the elementary theory of numbers which appeared in the American Mathematical Monthly. It was significant in being Turán's first
joint work with Erdős. The second was On a theorem of Hardy and Ramanujan which was published in the Journal of the London Mathematical Society. It was not the result which Turán proved here that was
significant, for he proved a result which had been known since 1917, namely that almost all integers n have asymptotically log log n prime factors. Rather it was the method of proof which, although
it does not use probabilistic terminology, in fact became one of the foundations of probabilistic number theory.
His Ph.D. was supervised by Fejér, and Turán was awarded the degree in 1935. His thesis On the number of prime divisors of integers, written in Hungarian, had been published in 1934 and contained his
new proof of the theorem of Hardy and Ramanujan referred to above. Even at this early stage he had built up an impressive international reputation and had seven papers in print by the end of 1935,
three of which had appeared in the Journal of the London Mathematical Society. One might have expected that this brilliant young mathematician would have easily found a university position. However,
this was far from the case since the severe discrimination against him because of his Jewish origins meant that he could not even obtain a post as a school teacher. In order to support himself
financially, and give himself the chance to continue his mathematical researches, he had to make a living as a private mathematics tutor. By the end of 1938, five years after his first paper
appeared, he had sixteen papers in print in internationally important journals world-wide. At last he managed to get a position as a school teacher when, in 1938, he was appointed as an assistant
teacher of mathematics at the Hungarian Rabbinical Training School in Budapest.
Not only was 1938 significant in that Turán now at least had employment, but it was also the year in which he had his most fruitful mathematical idea. Erdős writes in [7]:-
Probably the most important, most enduring and most original of Turán's results are in his power sum method and its applications. I was there when it originated in 1938. Turán mentioned these
problems and told me that they were not only interesting in themselves but their positive solution would have many applications. Their importance first of all is that they lead to interesting
deep problems of a completely new type; they have quite unexpectedly surprising consequences in many branches of mathematics - differential equations, numerical algebra, and various branches of
function theory.
In fact Turán invented the power sum method while investigating the zeta function and he first used the method to prove results about the zeros of the zeta function. Later Turán and S Knapowski [3]:-
... investigated the distribution of primes in the reduced residue classes mod k. ... The power sum method proved to be the unique procedure for investigating this problem up to now. Their
results in this field were published in nearly 20 papers and were called comparative number theory by the authors.
If times had been extremely hard for Turán up to 1938, then any appearance that they were about to get better was short lived for soon they became far worse. Turán had grown up during the years of
World War I which had proved a time of great hardship. After the war ended, the Treaty of Trianon of 1921 saw Hungary's territory reduced to about one third of its previous size. As events moved
towards World War II, Hungarian foreign policy looked towards Germany and Italy as allies who could help them to restore their lost territory. After the German invasion of Poland which began World
War II, Hungary was not involved at first but was still greatly influenced by Nazi policies. In 1940 Turán was sent to a labour camp, and he was in and out of various forced labour camps throughout
the war. This proved an horrific experience but, as we remark below, perhaps in the end his life was saved because of it. Alpár writes [3]:-
But not even [the labour camp] could stop his mathematical activity. In every situation, making use of the smallest opportunity, he carried on his research without books and journals, missing the
company of colleagues, jotting down his ideas and results on scraps of paper. Several of his new ideas, problems and now famous theorems, originate from that period. As G Alexits has written
about him: "When the fascist barbarism forced him to pull electric wires on poles he defended himself against the malevolent oppression be dealing with his mathematical ideas. Once he told me: "I
got my best ideas while pulling wires, because then I could be alone and nobody noticed that I was thinking."
Erdős, who had begun corresponding regularly with Turán from 1934, initially was able to get some contact with him in the labour camps. Erdős wrote to his father, Lajos Erdős, in Budapest, who then
wrote to Turán, copying out the relevant parts of his son's letters. Remarkably, even some of these letters have survived and they are reproduced in English translation in [20], but there is no
record of any correspondence between June 1941 and Spring 1945. We note that Vera T Sós, the author of [20], was Turán's wife and he wrote a number of joint papers with her. Another remarkable fact
is that extremal graph theory, an area which Turán founded, was one of the "best ideas" that he had while in the labour camps.
In 1941 Germany attacked Russia and Hungary supported them. After the Russian resistance was far greater than expected, Hungary mobilised all its forces to support the German offensive on Russia. The
Hungarian forces suffered a crushing defeat at Voronezh in western Russia in January 1943. In March 1944 Hungary fully cooperated with Nazi aims and Jews were forced to wear a yellow star, robbed of
their property, and forced into ghettos as in other Nazi-occupied areas. Except for the Jews in the forced-labour camps, like Turán, others were sent to the gas chambers of German concentration
camps. Turán's two brothers and his sister all died during the war. It is estimated that 550,000 of Hungary's 750,000 Jews were killed during the war. Turán was liberated from the labour camp in 1944
and was able to resume teaching at the Hungarian Rabbinical Training School in Budapest.
After World War II ended Turán was appointed as a Privatdozent at the Eötvös Lóránd University of Budapest (it had formerly been called the Pázmány Péter University of Budapest). Hungary signed a new
peace treaty in Paris on 10 February 1947, which restored the Trianon frontiers. Before this, however, Turán was able to make international contacts which let him visit Denmark for six months, then
the Institute for Advanced Study at Princeton for six months, in 1947. On his return to Hungary he was elected to the Hungarian Academy of Sciences in 1948, and received the Kossuth Prize from the
Hungarian government in the same year. In 1949 he was appointed to the Chair of Algebra and Number Theory at Eötvös Lóránd University of Budapest, a position he held until his death. From 1955 he was
Head of the Complex Function Theory Department in the Mathematical Institute of the Hungarian Academy of Sciences.
Erdős in [7] describes events just before his death:-
... in July 1976, at the meeting on combinatorics at Orsay in Paris, V T Sós (Mrs Turán) gave me the terrible news (which she had known for six years) that Paul had leukaemia. She told me that I
should visit him as soon as possible and that I should be careful in talking to him because he did not know the true nature of his illness. My first reaction was to say that perhaps he should
have been told ... She said that Paul loved life too much and with a death sentence hanging over him would not be able to live and work very well. ... I am now fairly sure that her decision was
right, since he clearly never tried to find out the true nature of his illness. in fact a few days before his death [his wife] and their son George (also a mathematician) tried to persuade him to
dictate some parts of his book to Halász or Pintz. he refused saying "I will write it when I feel better and stronger". Unfortunately he never had the chance. Fortunately his book was finished by
his students G Halász and J Pintz ...
The book mentioned here is On a new method of analysis and its applications which was published in 1984. Bob Odoni wrote a review:-
In 1953 the author published a book, A new method of analysis and its applications ... giving a systematic account of his methods for estimating "power sums", which he had developed (1941-53)
into a versatile and powerful technique with numerous applications to Diophantine approximations, zero-free regions for the Riemann zeta function and the error term in the prime number theorem,
and to problems in other parts of classical analysis. As regards the latter, Turán found new approaches to such topics as quasi-analytic classes, Fabry's gap theorem and the theory of lacunary
series, amongst others. The book was revised (with improved estimates) in a second edition, but this had a limited mathematical audience since it was only available in Chinese. In 1959 Turán
embarked on the preparation of a new, greatly expanded version of the book. Constant rewriting became necessary in the light of the new improvements and applications, and, at the time of his
death in 1976, the project had still not been completed to Turán's total satisfaction. The book under review represents the culmination of all this work ...
Odoni ends his review with this tribute to Turán's mathematics:-
In the opinion of the reviewer this book renders a great service to mathematicians working in a wide area of classical analysis, particularly analytic number theorists; Turán's methods are still
of great relevance in current research, and it is particularly gratifying to have all this material within the confines of a single volume. The book is a fitting tribute to Turán's remarkable
achievements in analysis, and the editors of the manuscript deserve high praise for their efforts in bringing it to publication.
We have mentioned some of Turán's mathematics above. However, it is impossible to do justice to the huge amount of work which he did, publishing around 150 papers. We mention, however, his work on
statistical group theory, much of which was undertaken jointly with Erdős. Of course conjugacy classes of the symmetric group S[n] on n letters are characterized by partitions of n, so the connection
with number theory is clear. Most questions discussed by Turán and Erdős on this topic concern the distribution of the order of random elements of the symmetric group S[n] . In some of the problems
they considered, all permutations are taken to be equally probable, some others are about the set of conjugacy classes, all equally probable. Turán and Erdős also proved that in a group of order n,
at least n log log n of the n^2 pairs of elements commute.
A mathematician who served under Turán in Budapest described him as:-
... outstanding in analytic number theory but not a good manager of a department.
However he did outstanding work for both the Hungarian Academy of Sciences, serving on numerous committees. He also served the János Bolyai Mathematical Society in many ways including a time as
president. Another major contribution made by Turán was his editing of the papers of Rényi and Fejér which is the main point made by Askey in the article [4]. Askey writes:-
I have used the Fejér papers often. Turán's editing was remarkable. He commented on many of the papers, setting them in context and telling what happened to the ideas Fejér introduced.
But this is only a part of the editorial work Turán undertook, being on the editorial boards of Acta Arithmetica, Archiv für Mathematik, Analysis Mathematica, Compositio Mathematica, Journal of
Number Theory, and essentially all Hungarian mathematical journals.
Turán received many honours in addition to the honours which we mentioned above. He received the Kossuth Prize from the Hungarian government for a second time in 1952. He also received the Szele
Prize from the János Bolyai Mathematical Society in 1975 for creating scientific schools. He was also elected a member of the American Mathematical Society, the Austrian Mathematical Society, and the
Polish Mathematical Society.
A special issue of Acta Mathematica devoted to Paul Turán was published in 1980.
Article by: J J O'Connor and E F Robertson
Click on this link to see a list of the Glossary entries for this page
List of References (22 books/articles)
Mathematicians born in the same country
Previous (Chronologically) Next Main Index
Previous (Alphabetically) Next Biographies index
History Topics Societies, honours, etc. Famous curves
Time lines Birthplace maps Chronology Search Form
Glossary index Quotations index Poster index
Mathematicians of the day Anniversaries for the year
JOC/EFR © November 2004 School of Mathematics and Statistics
Copyright information University of St Andrews, Scotland
The URL of this page is: | {"url":"http://www-groups.dcs.st-and.ac.uk/~history/Biographies/Turan.html","timestamp":"2014-04-16T13:12:23Z","content_type":null,"content_length":"27236","record_id":"<urn:uuid:e13f2bbc-6cd2-43fb-8127-9bdbbcc96a95>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00111-ip-10-147-4-33.ec2.internal.warc.gz"} |
Eigenvalues of Sum of non-singular matrix and diagonal matrix
up vote 1 down vote favorite
Suppose $D={\rm diag}(d_i)$ is a diagonal matrix with all diagonal entries $d_i=\pm 1$. This implies $D^2=I$. Suppose $A$ is a non-singular Hermitian matrix. If we know that $A+A^{-1}+D$ has rational
eigenvalues, what can we say about eigenvalues of $A$?
matrices linear-algebra sp.spectral-theory
add comment
1 Answer
active oldest votes
If the question suggests that then $A$ also should have rational eigenvalues, then it is easy to produce counterexamples. For, say, $$ A=\begin{pmatrix} 1 & 1 \cr 1 & 2 \end{pmatrix}, \;
up vote 4 D=\begin{pmatrix} 1 & 0 \cr 0 & 1 \end{pmatrix} $$ we obtain $A+A^{-1}+D=3D+D=4D$, which has integer eigenvalues. The eigenvalues of $A$ are $\frac{3\pm \sqrt{5}}{2}$.
down vote
add comment
Not the answer you're looking for? Browse other questions tagged matrices linear-algebra sp.spectral-theory or ask your own question. | {"url":"http://mathoverflow.net/questions/132812/eigenvalues-of-sum-of-non-singular-matrix-and-diagonal-matrix","timestamp":"2014-04-21T04:34:07Z","content_type":null,"content_length":"49263","record_id":"<urn:uuid:da1b3591-ff89-449a-a731-b1b283872f00>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00633-ip-10-147-4-33.ec2.internal.warc.gz"} |
How To Draw A Straight Line Graph
Straight line graph is the most basic graph any math student must learn how to draw.
The general equation for a straight line is given by
y = mx + c."m" is the gradient and "c" is the value of the intercept the line makes with the vertical axis.
Let's start with an example
y = (5/4) x + 2 .
The intercept that the straight line will make with the vertical axis is "+2" indicated by the red dot in Diagram 1.
Diagram 1
Next, we noticed that the gradient is m = (5/4). This meant that the vertical change is 5 units upwards (positive number) for a horizontal change of 4 units in the positive (right-going) direction.
Let's find the other spot on the straight line. (Diagram 2)
Diagram 2
The other spot is identified after moving from the first red dot 5 units up and 4 units right. The unit movement is based on the gradient (m = 5/4) set in the equation.
Note: To draw a straight line graph, we only need 2 points.
With the 2 points identified, we are now ready to draw the line connecting the 2 red points.
Diagram 3
To summarise, we just need to follow 3 steps:
1. Identify the intercept (c) from the equation and indicate it on the vertical axis
2. Find the other point using the gradient (m) as a guide with the point in step 1 as start reference
3. Connect the 2 points identified to obtain the straight line graph
This is as simple as A B C !
Another example:
y = -2 x + 7
" and the
", meaning, a drop of 2 units for 1 unit movement to the right. The graph is shown below (Diagram 4).
Diagram 4
Let's celebrate!
9 comments:
still confused pls help
Just focus on 2 items; the gradient and y-intercept. The gradient is the ratio of the number of units in the vertical to horizontal drawn. A good starting point to drawn the 2 points needed to
form a straight line is the y-intercept. This is a point directly on the y-axis (with x=0). After which, move the units from this starting point by the amount stated in the value of the gradient.
This will let you get the second point needed. With the 2 points, simply connect them up with a straight line. :-)
What's the gradient mean?! In simple terms.....
In short, a gradient is a number that tells one how steep is the slope.
Example: A number of 10 is steeper than a number of 3.
You can imagine a car going up a slope.
With a slope of 10, the car needs more "power" to go up the slope while a slope with a 3 will requires a much lesser "power" that the fomer slope with 10.
why do we drop 2 units for 1 unit movement to the right? im lost?
In reply to peter_rulz_10,
there are 2 movements in plotting a marker (or point in a graph).
They are the vertical and horizontal movement or direction. They are represented as the y-axis and x-axis respectively.
2 units drop meant a movement downwards along the y-axis and at the same time another movement going to the right along the x-axis of 1 unit.
==> These motions enable one to create the new marker (or point) for the straight line drawing.
what would you do if you are given the data in form of a word puzzle. So say they just tell you a car has travelled a certain distance in a certain amount of time plot this as a graph showing the
How would you work out the gradient in the first place and how could you plot this
My suggestion is to plot the time taken on the x-axis(horizontal axis) with the distance travelled on the y-axis (vertical axis). With the given data of time and distance, you should be able to
get one marker on the graph. The other marker needed to create the straight line will be the origin (0,0). Joining these 2 markers will be the straight line. Gradient can be gotten by m =
(distance / time taken).
However, if the objective of the question is to find the speed of the car, you can simply use speed = distance travelled / time taken without plotting the graph.
There are many tools in maths, selecting the appropriate ones is the real target behind learning maths. | {"url":"http://mathsisinteresting.blogspot.com/2008/08/how-to-draw-straight-line-graph.html","timestamp":"2014-04-20T11:31:05Z","content_type":null,"content_length":"109684","record_id":"<urn:uuid:2e3ad159-e521-4db3-b501-7086f6907bdf>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00273-ip-10-147-4-33.ec2.internal.warc.gz"} |
Patent US4608649 - Differential cascode voltage switch (DCVS) master slice for high efficiency/custom density physical design
Cross reference is made to a number of patent applications each having a common filing date and a common assignee with the present application. These patent applications are as follows:
U.S. patent application Ser. No. 508,440, filed June 27, 1983 entitled "Differential Cascode Current Switch (DCCS) Master Slice for High Efficiency/Custom Density Physical Design" having as an
inventor J. W. Davis.
U.S. patent application Ser. No. 508,454, June 27, 1983, now abandoned, entitiled "A Differential CMOS Logic Circuit with an Efficient Load Means" having as inventors J. W. Davis and N. G. Thoma.
U.S. patent application Ser. No. 508,393, filed June 27, 1983 entitled "Field Effect Transistor (FET) Cascode Current Switch (FCCS)" having as inventors J. W. Davis and F. D. Jones.
This invention relates to Very Large Scale Integrated (VLSI) microelectronic circuits and more particularly to a more efficient utilization of master slices in the production of such circuits.
The evolution of microelectronics has been discussed in many books and articles. For example, in the Scientific American book entitled "Microelectronics", published in 1977 by W. H. Freeman and Co.
the book-publishing affiliate of Scientific American, a variety of individual articles address the nature of microelectronic elements, their design and fabrication particularly in the form of Large
Scale Integrated Circuits, their applications, and their impact for the future.
The IBM Journal of Research and Development has had a number of articles from time to time concerning various microelectronics technologies of this nature which are included in the May 1981 issue:
"VLSI Circuit Design", the May 1982 issue: "Packaging Technology" and the September 1982 issue: "Semiconductor Manufacturing Technology".
Complementary Metal Oxide Semiconductor (CMOS) technologies are of special interest in the present patent application. Other Metal Oxide Semiconductor (MOS) technologies are set forth in the
Scientific American book. A number of other manufacturing techniques of basic interest are also described, in the Scientific American book. As one example, on page 42 thereof Large Scale Integrated
circuits may be produced by computer control by proceeding through a number of steps including the use of optical techniques for generating topological patterns.
Other items of interest to microelectronics fabrication in particular, including master slice layout, logic cell layout and arrangements for achieving high density in such circuits include the
Technical Disclosure Bulletin article number PO 141,578 entitled "Cascode Decoder" by J. E. Gersbach and J. K. Shortle, published September 1965, Vol. 8, No. 4 at pp. 642-643 which concerns closely
controlled input voltages, differentially connected to the inputs of a cascode decoder thereby providing high speed operation with minimum power dissipation.
Technical Disclosure Bulletin article number FI871-0895 entitled "Bipolar FET High-Speed Logic Switch" by R. D. Lane, published May 1972, Vol. 14, No. 12 at pp. 3684-3685 relating to the high speed
operation of both positive and negative transitions at low power in a bipolar transistor current switch circuit by the provision of a pair of cross-connected field-effect transistor (FET) loads for
the current switch bipolar transistors.
Technical Disclosure Bulletin article number MA877-0019 entitled "Merged Transistor Logic Cell for Logic Master Slice Layout" by H. R. Gates published March 1978, Vol. 20, No. 10 at p. 4013. This
article relates to a logic cell layout which minimizes channel blocking for an integrated injection logic master slice array.
Technical Disclosure Bulletin article number GE878-0041 entitled "Integrated Logic Cell Array Layout" by K. Helwig published January 1980, Vol. 22, No. 8A, pp. 3258-3259 which concerns the density
and/or wiring capabilities of a merged transistor logic (MTL) array that can be improved by placing MTL cells in the X and Y directions adjacent to a common injector region.
Technical Disclosure Bulletin article number FR876-0335 entitled "Generation of Mask Layout from Topological Equations" by B. Vergnieres published December 1980, Vol. 23, No. 7A, pp. 2833-2835
provides for the combination of manual design and automatic design automation which together provides the greatest flexibility, rapidity and density for manufactured integrated circuits.
Technical Disclosure Bulletin article number EN880-0261 entitled "Cascode Parity Circuit" by E. L. Carter and H. T. Ward published August 1981, Vol. 24, No. 3, pp. 1705-1706 providing for a
customized cascode current switch circuit which facilitates parity generation with fewer logic stages than conventional circuits.
U.S. Pat. No. 3,233,223 to F. K. Buelow et al which provides for a high speed trigger having an output after a single transistor delay.
U.S. Pat. No. 3,446,989 to F. G. Allen et al having the provision of a multiple level integrated semiconductor logic circuit connected and operative to control a bistable element.
U.S. Pat. No. 3,475,621 to A. Weinberger relating to high-density integrated circuit arrangements for generating complex logical functions that include combinational and sequential logic.
U.S. Pat. No. 3,760,190 to C. W. Hannaford using a multiple input latching circuit which responds to the satisfaction of any one or more of a plurality of predetermined input signal conditions by the
production of an output signal persisting, until reset, irrespective of any change in the input signals.
U.S. Pat. No. 3,978,329 to C. R. Baugh et al relating particularly to digital logic circuits for performing digital arithmetic functions.
U.S. Pat. No. 4,176,287 to J. J. Remedi concerns digital decoders for decoding digital signals and especially a CMOS decoder capable of providing one or more of n decoded outputs.
U.S. Pat. No. 4,249,193 to J. Balyoz et al concerning an improved masterslice design technique including structure, wiring and method of fabricating thereby providing improved Large Scale Integrated
U.S. Pat. No. 4,295,149 to J. Balyoz et al which utilizes improved LSI semiconductor design structures thereby enabling increased density and optimized performance of semiconductor devices, circuits
and part number functions.
In the production of microelectronic circuitry heretofore a generalized approach has been the use of macro logic elements. In usual practice, the macro logic elements have been maintained in a
library for access as needed for circuit production. Usually a number of the macro elements have been combined in order to produce a finished chip or logic product. The macro libraries have to be
individually designed until a critical number of library entities are arrived at before a generalized logic machine can be efficiently implemented using them. The design of a full set might take many
years to complete thus resulting in difficulty in designing an efficient desired finished product. Further, the physical images used during circuit production and derived from the macros referred to
are further constrained as a result of the macro concepts.
The primary objective of the present invention is to overcome inherent limitations imposed heretofore due to use of macro concepts in DCVS circuit fabrication.
In accordance with the present invention, all macro logic elements utilized for microelectronic logic fabrication are decomposed into their most primitive elements. This enables design of equivalent
logic in a more generalized compact fashion.
In the practice of the present invention, significant advantages are realized by greater flexibility in circuit design, timesaving, and elimination of the complications encountered with the
previously used macro concepts. Further advantages result from the use of a small closed macro set all of which simplifies or enhances the automation of processes required to achieve the end logic
In the preferred embodiment described, a topological physical design is utilized for the support of a Differential Cascode Voltage Switch circuit/logic technology in an Automated Placement-Wiring
environment. This physical entity takes the form of a "brickwall" set of transistors in a Master Slice image.
For a better understanding of the present invention, together with other and further advantages and features thereof, reference is made to the description taken in connection with the accompanying
drawings, the scope of the invention being pointed out in the appended claims.
Referring to the drawings:
FIG. 1A illustrates a master slice in its entirety utilizing Differential Cascode Voltage Switch (DCVS) technology while FIG. 1B represents a portion of the master slice, more specifically the upper
right corner.
FIG. 2A illustrates the production steps in fabricating a master slice and FIG. 2B illustrates the steps in personalizing a master slice into an end product.
FIG. 3 shows a three level differential cascode current switch logic tree.
FIG. 4 illustrates an example of a three-high cascode tree in Complementary Metal Oxide Semiconductor (CMOS) field effect transistor (FET) technology.
FIGS. 5A, 5B and 5C serve as a symbolic representation of the cascode tree of FIG. 3 and FIG. 4.
FIGS. 6A and 6B illustrate a flow diagram representing the DCVS design methodology.
FIG. 7 illustrates a digital machine, diagramed in terms of logic groups.
FIG. 8 represents algorithmic determination of the longest path delay through a logic group.
FIG. 9 illustrates a logic group primary output to generic logic map output vector comparison.
FIGS. 10A, 10B and 10C represent a hierarchical placement approach starting at a very high machine level passing through an intermediate stage and resolving to the microblock placement.
FIG. 11 is a symbolic representation of the master slice shown in FIGS. 1A and 1B.
FIG. 12 is an exploded view at the device level of a portion of the master slice in FIGS. 1A and 1B.
FIG. 13 is a symbolic view of the wiring channels for the master slice.
FIG. 14 is a combination, actual and symbolic representation of one of the technology microblocks comprising a differential pair of transistors.
FIG. 15 represents the application of the technology microblocks including the microblock of FIG. 14 on the complete master slice symbolic image of FIG. 11.
FIG. 16 represents an example of a set of microblocks placed and interconnected in order to form a section of a computer function.
The following abbreviations are occasionally used herein:
______________________________________Abbreviation Definition______________________________________A, B, C, D, E, Q Boolean Variable LabelsA21, A22, C21-R21 Machine Diagram Logic Group LabelsBDLC/S Basic Design Language Control/Structure - a high level hardware descriptive and modelling languageBDLS Basic Design Language Structurea logic block interconnect languageDCVS Differential Cascode Voltage SwitchDOR Differential OR, DCVS microblockFET Field Effect Transistor, a term for either an MOS or a CMOS transistorGL/1 Graphic Language 1 - a vector format graphics design languageIDL Interactive Design Language - essentially same function as BDLC/S______________________________________
Described here is a design concept which enables cascode logic to become a generalized logic technology. This design concept is based on dealing with the technology in its most granular form in
describing a product's logical and physical essence. This provides the basis for dealing with the technology in a software intensive environment providing the high productivity necessary for
generalizing the logic technology. Exploiting this design methodology in a VLSI master slice environment yields high productivity in both the logical and physical design phases of the product.
A VLSI master slice 1 supporting the differential cascode voltage switch (DCVS) CMOS embodiment of cascode logic is illustrated in FIGS. 1A and 1B. As indicated in FIG. 1A, master slice 1 is
comprised of two general areas, an input/output cell perimeter 2 with off-chip driver receiver circuits and a cascode logic array 3. A portion 4 of the cascode logic array, will be described later,
in connection with FIG. 12. An enlarged view of the upper right corner of master slice 1 is shown in FIG. 1B. This illustrates the input/output cell perimeter 2, logic array 3, a test area including
test points 7 (only a few so designated) and power connector points 8. A primary attribute of fabricating a logical product in a master slice environment is an abbreviated production time. This
reduced processing time is accomplished by virtue of the fact that the master slices are preprocessed through a number of the fabricating steps and stockpiled in a substock. Functional products need
only to be processed through the remaining, reduced number of fabrication steps.
FIG. 2A illustrates the actual fabrication of a master slice for placement into substock. This fabrication consists of six stages designated 11 through 16. The fabrication process begins with an
oxidized semiconductor electronic quality wafer, such as silicon wafer stage 11. Wafers of other composition may be utilized depending on the technology. This silicon wafer is further processed to
produce devices of both N and P polarity to form Complementary Metal Oxide Semiconductor array. For further information concerning a process of this nature reference is made to the article entitled
"HMOS-CMOS--A Low-Power High Performance Technology" having Ken Yu et al as authors that appeared in the IEEE Journal of Solid State Circuits, Vol. SC-16, No. 5, October 1981, pp. 454-459. Such a
process has a two orthogonal wiring plane capability. At the conclusion of stage 16, the vertical structures of all the active components on the wafer are complete. Upon completion of stage 16, the
master slice wafers are stock piled in substock available for personalization into logic products. What remains to be processed are the contact holes and interconnection of the passive and active
devices to complete the physical design of a logical function.
The final processing or personalization phase begins with retrieval of the required number of master slices from substock and proceeding through the interconnection phases of the process at stage 20,
FIG. 2B. Contact holes are opened for the active and passive components, stage 21 and interconnected with metal wiring on multiple planes interconnected further with conducting interplane vias in
stage 22. Processing continues with the bonding of pads stage 23 to the logic service terminals probe testing of the completed personalized wafer in stage 24, sectioning the wafer in stage 25 into
die and sorting out good die in stage 26. The final phase involves assembling the sorted good die from stage 26 into a package stage 27, sealing that package in stage 28 and processing that package
through a final test stage 29. The foregoing procedures result in reduced processing time for each product in that only stages 20 through 29 must be performed to complete the definition of each
product designed against the master slice in substock.
A software intensive design system supporting both bipolar and FET master slice versions of cascode logic is described here having both high productivity and abbreviated processing time. A short
tutorial on cascode logic is now presented in order to establish a further basis for understanding the description of the preferred embodiment set forth herein.
Design System, DCCS Technology, and Cascode Logic Circuitry
Cascode logic has superior power-performance attributes compared to other logic circuit technologies, but a variety of factors have prevented it from becoming a generalized logic technology. A
software intensive design system supporting both Bipolar and FET master slice versions of cascode logic is described here having high productivity characteristics necessary for the general support of
the technology.
Cascode Logic is easily understood through the examination of a Bipolar embodiment, the Differential Cascode Current Switch, DCCS. DCCS is a differentially coupled current mode logic family, as its
name implies, that exhibits the inherent speed of emitter coupled logic while distributing its circuit power over an expansive logic function. FIG. 3 illustrates a three level DCCS logic with logic
elements R and I positive variables A, B, and C and negative variables A·, B·, and C·. A constant current source sets the unit of power the DCCS tree will consume in performing its designed logical
function. Logical operations are accomplished through selectively steering the tree current through various paths within the tree to one of two binary output summation points. Current steering is
accomplished by applying differential logic signals to each differential set of transistors in the tree, selecting the devices that will allow current to pass. The path indicated in FIG. 3 by arrows,
represents the resultant current path through the tree when the variable `A` and `B` are true, or at a positive potential and variable `C` is false, or at a negative potential.
The DCCS tree of FIG. 3, is represented in symbolic form by FIGS. 5A, 5B and 5C. The actual tree configuration is directly derived from the three variable truth table in FIG. 5A. An output vector `Q`
is developed from the complete conditional table of three variables. Since the problem involves three variables, it suggests the use of a three high DCCS tree, FIG. 5B. The output vector `Q` is
transposed across the top of the full three level tree expansion. The `Q` output vector across the top of the tree associates each tree output point with either the `1` or `0` conditional path. The
convention is that current flow propagates a logical `0`. Connecting each output point to its respective termination point, yields a three level cascode tree configured to provide the `Q` output
vector response to the various input conditions. Further examination of the tree configuration reveals apparent redundancy represented by the "X" blocks in FIG. 5B, and `don't care` states that may
be exploited in order to minimize the tree. This minimized result is illustrated by FIG. 5C. The example suggests that any Boolean problem concerned with three variables may be solved by a single
three high DCCS tree. This can be further demonstrated by developing various other output vectors from the conditional table given and configuring the tree solutions in the same manner as the
example. It now follows that any `n` variable Boolean expression may be solved by a single `n` high DCCS tree.
These conclusions demonstrate the functional power of cascode logic and convey a similar message for the electrical attributes of the technology. This functional power is achieved in the speed and
within the power dissipation of a single level logic gate. FIG. 4 illustrates an example of a three high cascode tree implemented in CMOS FET technology with elements P and N. These implementations
have exactly the same functional and electrical attributes, within their respective technological arenas. Given the obvious power of the cascode logic technology, it is an ideal candidate in which to
implement VLSI machines.
To realistically pursue VLSI machine designs, it is necessary to have a method of dealing with the problem as a whole in a highly productive design cycle. Pre-VLSI machine design methodologies
allowed for the partitioning of the machine into `independent` elements of the size and complexity a designer could deal with at a primitive level. This partitioning could be somewhat crude and still
not greatly impact the overall machine primitive level design. Partitioning was essentially limited and dictated by what could be supported on the less dense integrated chips of that era. With the
advent of VLSI and upward of 10,000 logic gates (or more) available on a single piece of silicon, the problem must be approached differently. Optimization of both the high level structure and
primitive implementation is necessary to fully realize the potential of VLSI. To this end the Cascode Logic Design methodology illustrated by the flow diagram of FIGS. 6A and 6B entitled "DCCS Design
Methodology" has been developed.
As can be seen from the diagram, the design system is driven from a high level modeling facility. The design system is presently compatible with IDL and BDLC/S as a means of developing and simulating
high level machine models. A strategic high level modeling tool comprising a high level language and compiler (High Level Design and Simulation, FIG. 6A) is utilized to model and simulate the high
level machine, and also to optimize it. All these tools essentially provide the same service with varying degrees of efficiency, that is, to allow the machine architect to deal with the entire
problem. These high level tools all interface with the Cascode Logic Design System through generic descriptions of the various machine functional partitions.
These generic descriptions take the form of logic maps or functional truth tables that represent the complete set of functional conditions the primitive logic must eventually emulate. The high level
tool actually passes a set of generic logic maps to the design system, each representing a machine sub-function or partition. These generic logic maps are similar to PLA pictures in that they
represent a set of output vectors dependent upon the conditions of a set of input variables. The design system deals with the set of logic groups as a synchronous sequential machine, such that each
performs a designated machine sub-function in a time spaced relationship to other logic groups or machine sub-functions.
A diagram illustrating a machine configuration in terms of logic groups is given by FIG. 7 entitled "Machine Diagram". An LSSD latch is appended to the input of each logic group and becomes the
mechanism for clocking the system. Each logic group is clocked, completes its assigned tasks and passes its output to the next sequential partition during the next clock phase. The LSSD latches, as
well as other machine register resources, are passed into the design system as `Black Box` elements. The high level modeling tool also passes some extraneous information along with the machine
description: driver/receiver types, switching groups, I/O preassignment, and a `machine` level objective function. The objective function sets the relative priority of the system optimization
parameters: delay, power and area. This objective function is applied to all the machine partitions or logic groups during the first pass through the design system as the high level generic tables
are transformed into DCCS cascode tree networks.
The generic logic maps, representing clocked logic groups, are decomposed into DCVS tree networks by a decomposition algorithm (Decompose into Cascode Logic, FIG. 6A) described in the article
entitled "The Decomposition and Factorization of Boolean Expressions" by R. K. Brayton and Curt McMullen, IEEE ISCAS 1982 (Institute of Electrical and Electronic Engineers International Symposium on
Circuits and Systems). This algorithm works upon the machine one logic group at a time. The algorithm minimizes each logic group output vector Boolean expression and decomposes it into a set of
expressions that can be implemented in a cascode logic network of trees. The algorithm must be tree height sensitive, and allows the user to specify the desired maximum tree height. This algorithm
has its own internal objective function, to find a solution with the fewest number of trees having the fewest number of devices. The algorithm does not consider signal level translators or early-late
signal arrival and performs variable level assignment within each tree to effect minimum number of devices. The algorithm therefore yields only a partial cascode logic solution that further be
manipulated to arrive at an optimal logic network solution.
The decomposition result is further manipulated by an optimization phase in the design flow. This phase of the design system assumes that the decomposition result is an optimal solution for the
parameters it considered. Based on this assumption, it's an objective of this design phase to minimally effect the decomposition result. The task performed here, for a Bipolar implementation, is to
complete the DCCS description by the inclusion of the necessary signal level translators (Introduce Trans. Bipolar Optimize Logic, FIG. 6A). An emitter follower is assigned to each logic group
primary input at the LSSD latch in anticipation of heavy signal loading, by the Register Resource Module. An internal objective function of this algorithm is to introduce the fewest number of power
consuming emitter follower level translators as possible. The algorithm assumes that all primary input variables from the logic group LSSD latches are available from any signal level with no
additional emitter follower cost. Considering these translators, the algorithm strives to maximize primary input level assignment requiring translation and minimize the translation of internal trees.
Statistics are collected from the decomposition result as to the variable level assignment for optimal trees, trees having a minimum number of devices. These statistics are used to establish an
optimal level assignment for each network variable. A simplified example might be variable `A` found assigned to signal level `4` for twenty of its twenty-five occurrences in the logic group being
optimized. This would suggest an objective function for the level assignment of variable `A` to signal level `4` in order to have minimum effect on the optimal tree solution from decomposition. The
algorithm would also attempt to resolve the level assignment for variable `A` to signal level `4` in the remaining five trees in order to enhance the physical design concerning wire routing from the
translator. The algorithm attempts to anticipate the physical design phase of the design flow and relates multiple signal routing from a single point source, the input SRL, as a detrimental wiring
impact. The algorithm deals with loading on variables in a similar way. If a particular variable load exceeds the specified maximum from a single emitter follower, a second is introduced into the
solution. The algorithm utilizes this circumstance to give the variable two single level assignments with no additional cost over the load requirement and again attempts to anticipate the physical
design phase and minimize the wiring impact. The algorithm attempts to associate two distinct groups of trees, as independent as possible, with the two emitter followers. The heuristic measures the
tree interdependence as the number of common or shared variables in each tree and asserts that trees sharing many variables will be placed in close proximity and trees with only a single variable in
common will not be placed in close proximity. The algorithm, by this grouping, has attempted to decongest the wiring demand by diverting the point source to two different destinations. The only
remaining tool this segment of optimization can employ to accomplish translation and not increase the emitter follower cost is in-line translation. These translators are devices in-line with the
cascode logic tree output terminal that translate the tree output signal level itself. These devices are subject to their own set of technology rules, such as tree height and loading, which are
scrutinized by the algorithm before their application. This segment of the design flow provides an optimal Bipolar cascode logic solution for a machine level problem.
At this point in the design flow, both the FET and Bipolar solutions are ready to be optimized against the user specified machine objective function. The FET decomposition result is still intact
since it doesn't require signal level translators. The cascode logic levels within a single FET tree are all driven with the same signal swing. The Bipolar solution is considered optimal in that is
reflects the minimum number of emitter follower translators, least effecting the original decomposition result. The machine objective function is passed into the design system along with the complete
generic machine description. The objective function is in the form of a set of numeric weights resolving the priority for optimization of three parameters: machine speed, machine power dissipation,
and the number of devices necessary to implement the design. These numeric weights are applied as constants in a set of simultaneous equations represented in a linear optimization model. The
decomposition solution, after translator introduction for Bipolar, serves as a basis for further parametric optimization directed by the machine objective function. It represents the optimal solution
for two of the three optimization parameters: number of devices and units of power. This solution does not necessarily represent an optimal primary output delay. The optimization model is then
employed in determining how to optimize discrete logic group primary outputs in order to effect a uniform logic group delay pattern.
The delay algorithm determines the worst case delay for each tree in the logic, based on technology delay equations and the worst case path through the tree. The worst case path through a tree is
defined as the path incurring the maximum number of intermediate nodes through the maximum number of cascode levels. These worst case delays are listed for each tree in the logic group. The primary
output networks are interrogated as graphs by a routing algorithm to determine the longest path delays for each network. The algorithm considers early-late signal ordering and cascode level delay
dependencies in its computation of the primary output delay. These delays are in turn listed as the path delays of the logic group primary outputs. The algorithm identifies the primary output with
the longest delay and interrogates its tree network further to reflect legal logical conditions. A legal logic condition is defined to be the subset of all possible Boolean variable combinations that
can actually occur in the defined logic network without logical contradiction. These are logical conditions that can occur simultaneously, such as resolving a single variable condition to either true
or false. This validated delay is now offered as the primary output delay and is compared to the remaining logic group primary output delays in order to determine if it is still the longest path
delay. If it is still the longest path, the logic group interrogation is complete, and its delay is equated with the longest path delay. If another primary output is found to now exceed the desired
delay, it is interrogated for legal condition and processing continues until a longest path delay is found. This sequence is illustrated in FIG. 8 by finding the longest legal primary output path.
The logic group is completely characterized at this point in the design flow with an accounting for devices and power units and distribution of the primary output delays. It's an objective at this
point to force the primary output delays into a uniform distribution with some specified criteria defining the bounds of the distribution. The longest path delays are modified to meet this uniformity
criteria through variable level reassignment and tree powering, as directed by the costing optimization model. The machine objective function will vary this interactive modification depending on the
associated cost of the parameters that effect change compared against the initial objective function. The completion of this phase represents an optimized primitive level design emulating the high
level machine model and ready for further processing to the point of user review.
The machine diagram, such as illustrated in FIG. 7, which includes for example, blocks CZ1, AZ1-AZ4 and RZ1, is updated with the various logic group longest path delays and interrogated for clock
generation. This algorithm (Generate clocks, FIG. 6A) determines the fastest rate at which the time spaced logic groups can be clocked. The algorithm resolves the clock specification to the clock
rate, the number of clock phases and a logic to phase relationship. Upon determining the clock rate and phase requirement, the algorithm completes the definition of the clock generator `Black Box`
(essentially an LSSD clock ring) and appends the clock generation logic to the primitive logic. The complete machine primitive logic expansion, including the various `Black Box` elements are encoded
in a logic descriptive language. This level of the design serves as the basis for primitive logic validation and further processing to realize a physical design.
Logic validation (Validate Primitive Logic in conjunction with Logic Model Functional Patterns, FIG. 6A) is essentially accomplished by comparing the primitive level design to the high level generic
logic map that inspired it. The first level of validation is between the logic group machine diagram and high level model machine structure. This validation is accomplished by constructing a machine
diagram from the high level model in terms of the various generic logic map elements. The algorithm associates each logic group with its high level generic logic map equivalent and determines that
both machine diagrams represent the same configuration. The next validation phase is to insure that the high level generic logic maps have been mapped with integrity to the primitive level. This is
accomplished by formulating the Boolean output expressions for each logic group primary output and comparing it to generic logic map output vector it was designed to emulate. This logic group primary
output to generic logic map output vector comparison is illustrated in FIG. 9. As can be seen in the figure, the algorithm must formulate Boolean output expressions for each of the trees contributing
to the primary output expression and derive the final expression. The algorithm must associate each logic group with its generic logic map equivalent and further resolve an association between the
generic logic map output vectors and the logic group primary outputs. The example illustrated in FIG. 9 relates the formulation of logic group primary output `Q` and its comparison to a generic logic
map output vector `F1`. At the conclusion of this logic validation phase the primitive logic is known to accurately reflect the simulated high level model. This validation coupled with the extensive
timing analysis of the optimization phase constitutes primitive level design that can be committed to a physical design upon user review (User Review, FIG. 6A) and acceptance.
The physical design phase is driven from a BDLS master file derived from the validated encoded primitive logic description. This same BDLS master file also supports the LSSD Rules and TPG phase of
the design flow, and to this end represents the LSSD latch elements in predefined macro form. These macros must be expanded to their microblock level before proceeding into the placement phase of the
physical design. This expanded BDLS also supports the physical to logical audit of the final design. The expanded BDLS is transformed into a Master Logic List (MLL) in order to interface with the
placement phase of the design flow.
Placement is based on a hierarchical methodology (Hierarchical Placement, FIG. 6B) that resolves the machine level placement problem to the cascode logic microblocks, differential pairs of devices,
load devices, current sink devices, etc. The placement task is performed with a routine described in an article entitled "Optimization by Simulated Annealing" by S. Kirkpatrick, C. D. Gelatt, Jr.,
and N. P. Vecchi published May 1983 in the magazine Science. Conceptually the placement algorithm attempts to find an optimal physical placement for the cascode logic microblocks described by the MLL
file. This yields a placement where each microblock position has been optimized to minimize the subsequent wiring problem. The cascode trees are now free for elements accentuating the wiring phase.
Placement is derived hierarchically from the logic group machine diagram, through a pseudo tree phase to the actual cascode logic microblocks, illustrated by FIGS. 10A, 10B and 10C. The placement
algorithm is a Monte Carlo algorithm and therefore benefits greatly through seeding the placement with a reduced number of objects, in terms of improving its efficiency. The first phase of this
seeding is based on resolving a relative placement for each logic group in the machine diagram, LG1 through LG9 in FIG. 10A. FIGS. 10A, 10B and 10C represent a hierarchial approach to the ultimate
placement of the microblocks configuring an entire chip level physical function and utilization of the master slice concept for the support of the Differential Cascode Voltage Switch Technology. As
indicated, FIG. 10A illustrates a number of machine logic groups LG1 through LG9 placed against the complete master slice logic array reference 3.
Each logic group is associated dimensionally with its primitive logic demand in terms of the number of trees and number of microblocks it encompasses. The chip image is completely resolved with
either component demand of logic groups or uncommitted areas such as BA1 through BA4 of FIG. 10A. The placement must deal with fewer than one hundred objects at this phase and can resolve a placement
relatively quickly. This phase of placement is followed by a tree placement phase. Essentially, the logic group pseudo boundary is displaced by a cascode tree pseudo boundary. The sets of cascode
trees, TS1 through TS9 in FIG. 10A are unpacked into the multiplicity of cascode trees, TS2-T1, TS2-T2 etc. in FIG. 10B. The placement algorithm now attempts to resolve placement of these cascode
trees that make up the logic groups. The tree placement has been relatively resolved during the logic group placement phase such that further resolution will take place within some bounded space less
than that of the full image. This phase of placement is also relatively fast in that the algorithm is concerned with fewer than fifteen hundred objects. The final placement phase eliminates the
cascode tree pseudo boundary and allows the algorithm to resolve a final placement for tree microblock elements. The multiplicity of cascode trees, TS2-T1, TS2-T2 etc. in FIG. 10B, are further
resolved to the microblock level, TS2-T2/MB1, TS2-T2/MB2 etc. in FIG. 10C. This phase again resolves the microblock placement within a relatively small bounded space but requires the algorithm to
deal with many more objects. This is obviously the most time consuming placement phase, but yields the final microblock placement against which the wiring algorithm is exercised.
The wiring algorithm (Hierarchical Wiring, FIG. 6B) employed to wire this microblock placement is an enhanced Lee algorithm. The Lee algorithm is described in the article entitled "An algorithm for
Path Connections and Its Applications, IRE Trans. on Electronic Computers, September 1961, pp.346-365. Also see the article entitled "The Interconnection Problem--A Tutorial" by David W. Hightower of
Bell Telephone Laboratories, Inc. published in Proceedings, 10th Design Automation Workshop, June 1973. This algorithm is a maze runner algorithm with `rip-up/lay-down` capability. A `rip-up/lay-down
` capability consists of identifying a blocking network, removing the network, wiring the previously failed network, and then re-wiring the original networks that were removed in order to make the
changes. The algorithm makes provision to avoid `rip-up/lay-down` loops through assignment of channel ownership to the newly interconnected wire. During the course of wiring the microblock placement
the algorithm may find, in a high wire demand area, the LST it is attempting to route a wire to is completely blocked by `owned` channels. This circumstance constitutes a failed net. At this point
the wiring program halts execution and recalls the placement algorithm. The placement algorithm then regions the chip image into bounding rectangles and attempts to identify the regions in the
minimum spanning tree path for the failed net. Next, the placement algorithm proceeds to redistribute the microblocks within the minimum spanning tree path regions. This redistribution is scheduled
to a particular `annealing schedule` and proceeds until its conclusion. The algorithm is scoring the chip image and the region boundaries during this phase and improves the original wire demand
across these boundaries to overcome the congestion causing the failed net. Upon completion of redistribution, this new placement is passed back to the wiring program which continues wiring, including
rewiring of the nets affected by the redistribution. This process continues until all failed nets have been overcome and the placement is completely wired. The design is then audited (Physical
Validation Rules/TPG, FIG. 6B) for physical to logical correspondence, technology groundrule violations and its correspondence to the TPG file. At the conclusion of these audits, the design can be
submitted to manufacturing (Mask/Fabrication, FIG. 6B, and Testing, FIG. 6B, in conjunction with Logic Model Hardware Validation, FIG. 6B).
The machine design flow from the users' perspective has been to model and simulate the machine to satisfaction in a high level environment. The machine design is then submitted for cascode logic
transformation. The user reviews primitive level implementations in both Bipolar and FET, makes a technology choice, and either optimizes the design further or commits it to physical design. This
sequence culminates in a software description of the desired hardware design and serves as an input for actual physical fabrication.
This design system represents a design environment that allows the machine architect to totally control the end product. It further reduces the machine development cycle to such an extent that it
affords the architect unprecedented opportunity to actually resolve an optimal machine design.
The master slice designed to exploit the aforementioned design concepts and the cascode logic technology is illustrated in symbolic form in FIG. 11. As noted in connection with FIG. 1A or 1B master
slice 1 is comprised of two general areas, an input/output cell perimeter 2 and a DCVS logic array area 3. The DCVS logic array area is essentially a brickwall set of MOS transistors arranged into
device rails 6 delimited by power bussing 5. The device rails are further illustrated by the DCVS logic array exploded portion 4 in FIG. 12. FIG. 12 illustrates device level detail of a master slice
and represents a complete device structure requiring only personalization contact holes and interconnect metal in order to form a functional logic product. A portion of the device rail 6 in FIG. 12
is illustrated in FIG. 13. The device rail structure includes power rails 5a to VDD and 5b to Ground. These devices are a centralized "brickwall", a minimally spaced array consisting of five
consecutive rows of discrete logic devices 36 vertically arranged above the current source row. These devices are then followed by a row of complementary output transistors elements 37 located
adjacent to power rail 5a. Representations of areas where contacts may be opened during personalization are shown for the logic devices as 35. Also illustrated in FIG. 13 are the device interconnect
wiring channels, vertical channels 38 and horizontal channels 39, utilized to interconnect the active components of the DCVS logic array into a logical function. The physical design is accomplished
through the placement and interconnection of DCVS primitive microblocks against this generalized logic array area. One such microblock is illustrated in FIG. 14 by reference 45. Microblock 45 is
illustrated with the available wiring channels in the vertical direction 38 and horizontal direction 39 utilized for interconnection with other such microblocks to form a complete physical
interconnect of a logical product. The Microblock 45 in FIG. 14 is comprised of a set of symbolic outer rectangular shapes encompassing the actual mask level contact holes necessary for
personalization of these devices.
Application of microblocks to the physical image is further illustrated by FIG. 15. FIG. 15 represents the application of a load device microblock 37a against the background image of the master slice
represented by physical symbolic target 37. The symbolic target 37 has been overlayed with load element microblock 37a completing the device vertical structure to now include the contact holes
necessary for interconnecting this device with other devices in the logic network. FIG. 15 also illustrates the application of logic microblock 45 against the physical symbolic target reference 36.
The essence of this master slice design concept exploiting cascode logic at its most granular microblock level is provided by FIG. 16. FIG. 16 represents a section of a computer processor dataflow
that has been described logically at the microblock level. This microblock description is used to ascertain the appropriate placement for each individual microblock weighing its impact upon the
various other microblocks of the circuit solution and the impact of the other microblocks upon it. The notion is to derive wiring freeways between devices exactly and precisely where required based
on the interrelationship of all the microblocks configuring the solution.
Thus, a significant aspect of this invention is the discovery that differential cascode voltage switch can be dealt with both logically and physically at this microblock level. As has been described
and illustrated, a logical description of an eventual logic product can be derived in term of this very granular microblock level. This logical description can be transposed and dealt with at this
same microblock level into a physical design.
While a preferred embodiment of the invention has been illustrated and described, it is to be understood that there is no intention to limit the invention to the precise construction herein disclosed
and the right is reserved to all changes and modifications coming within the scope of the invention as defined in the appended claims. | {"url":"http://www.google.com/patents/US4608649?dq=7,134,016","timestamp":"2014-04-24T01:07:11Z","content_type":null,"content_length":"108946","record_id":"<urn:uuid:d6f07fc7-d104-4fa0-8d41-0fab3642c66c>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00161-ip-10-147-4-33.ec2.internal.warc.gz"} |
Tips on Computing with Big Data in R
December 26, 2013
By Joseph Rickert
by Joseph Rickert
The Revolution R Enterprise 7.0 Getting started Guide makes a distinction between High Performance Computing (HPC) which is CPU centric, focusing on using many cores to perform lots of processing on
small amounts of data, and High Performance Analytics (HPA), data centric computing that concentrates on feeding data to cores, disk I/O, data locality, efficient threading, and data management in
RAM. The following collection of tips for computing with big data is an abbreviated version of the Guide’s discussion of the HPC and HPA considerations underlying the design of Revolution R
Enterprise 7.0 and RevoScaleR, Revolution’s R package for HPA computing.
1 - Upgrade your hardware
It doesn’t hurt to state the obvious: bigger is better. In general, memory is the most important consideration. Getting more cores can also help, but only up to a point since R itself can generally
only use one core at a time internally. Moreover, for many data analysis problems the bottlenecks are disk I/O and the speed of RAM, making it difficult to efficiently use more than 4 or 8 cores on
commodity hardware.
2 - Upgrade your software
R allows its core math libraries to be replaced. Doing so can provide a very noticeable performance boost to any function that make use of computational linear algebra algorithms. Revolution R
Enterprise links in the Intel Math Kernel Libraries.
3 - Minimize copies of the data
R does quite a bit of automatic copying. For example, when a data frame is passed into a function a copy of the data is made if the data frame is modified, and putting a data frame into a list also
automatically causes a copy to be made. Moreover, many basic analysis algorithms, such as lm and glm, produce multiple copies of a data set as the computations progress. Memory management is
4 - Process data in chunks
Processing data a chunk at a time is the key to being able to scale computations without increasing memory requirements. External memory algorithms load a manageable amount of data into RAM, perform
some intermediate calculations, load the next chunk and keep going until all of the data has been processed. Then, the final result is computed from the set of intermediate results. There are several
CRAN packages including biglm, bigmemory, ff and ffbase that either implement external memory algorithms or help with writing them. Revolution R Enterprise’s RevoScaleR package takes chunking
algorithms to the next level by automatically taking advantage of the computational resources to run its algorithms in parallel.
5 - Compute in parallel across cores or nodes
Using all of the available cores and nodes is key to scaling computations to really big data. However, since data analysis algorithms tend to be I/O bound when data cannot fit into memory, the use of
multiple hard drives can be even more important than the use of multiple cores. The CRAN package foreach provides easy-to-use tools for executing R functions in parallel on both on a single computer
and across multiple computers. The foreach() function is particularly useful for “embarrassingly parallel” computations that do not involve communication among different tasks.
The statistical functions and machine learning algorithms in the RevoScaleR package are all Parallel External Memory Algorithm’s (PEMA’s). They automatically take advantage of all of the cores
available on a machine or on a cluster (including LSF and Hadoop clusters.)
6 - Take advantage of integers
In R, the two choices for “continuous” data are numeric, an 8 byte (double) floating point number and integer, a 4 byte integer. There are circumstances where storing and processing integer data can
provide the dual advantages using less memory and decreasing processing time. For example, when working with integers, a tabulation is generally much faster than sorting and gives exact values for
all empirical quantiles. Even when you are not working with integers scaling and converting to integers can produce fast and accurate estimates of quantiles. As an example, if the data consists of
floating point values in the range from 0 to 1,000, converting to integers and tabulating will bound the median or any other quantile to within two adjacent integers. Then interpolation can get you
even closer approximation.
7 - Store data efficiently
You will want to store big data so that it can efficiently accessed from disk. The use of appropriate data types can save both storage space and access time. Take advantage of integers and, when you
can, store data in 32-bit floats not 64-bit doubles. A 32-bit float can represent 7 decimal digits of precision, which is more than enough for most data, and it takes up half the space of doubles.
Save the 64-bit doubles for computations.
8 - Only read the data needed
Even though a data set may have many thousands of variables, typically not all of them are being analyzed at one time. By reading from disk just the actual variables and observations you will use in
analysis, you can speed up the analysis considerably.
9 - Avoid loops when transforming data
Loops in R can be very slow compared with R’s core vector operations which are typically written in C, C++ or Fortran, compiled languages that execute much quicker than the R interpreter.
10 - Use C, C++, or Fortran for critical functions
One R’s great strengths is its ability to integrate easily with other languages, including C, C++, and Fortran. You can pass R data objects to other languages, do some computations, and return the
results in R data objects. The CRAN package Rcpp,for example, makes it easy to call C and C++ code from R.
11 - Process data transformations in batches
When working with small data sets, it is common to perform data transformations one at a time. For instance, one line of code might create a new variable, and subsequent lines perform additional
transformations with each transformation requiring a pass through the data. To avoid the overhead of making multiple passes over a large data set write chunking algorithms that apply all of the
transformations to each chunk. RevoScaleR’s rxDataStep() function is designed for one pass processing by permitting multiple data transformations to be performed on each chunk.
12 - User row-oriented data transformations where possible
When writing chunking algorithms, try to avoid algorithms that cross chunk boundaries. In general, data transformations for a single row of data should not be dependent on values in other rows. The
key idea is that a transformation expression should give the same result even if only some of the rows of data are in memory at one time. Data manipulations requiring lags can be done but require
special handling.
13 - Handle categorical variables efficiently and with care
Working with categorical or factor variables in big data sets can be challenging. For starters, not all of the factor levels may be represented in a single chunk of data. Using R’s factor() function
in a transformation on a chunk of data without explicitly specifying all of the levels that are present in the entire data set might cause you to end up with incompatible factor levels from chunk to
chunk. Also, building models with factors having hundreds of levels may cause hundreds of dummy variables to be created that really eat up memory. The functions in the RevoScaleR package that deal
with factors minimize memory use and do not generally explicitly create dummy variables to represent factors.
14 - Be aware of 0utput with the same number of rows as your input
Most analysis functions return a relatively small object of results that can easily be handled in memory. Occasionally, however, output will have the same number of rows as the data: when computing
predictions and residuals for example. In order for this to scale, you will want the output written out to a file rather than kept in memory.
15 - Think Twice Before Sorting
Sorting is by nature a time-intensive operation. Do what you can to avoid sorting a large data set. Use functions that compute estimates of medians and quantiles and look for implementations of
popular algorithms that avoid sorting. For example, the RevoScaleR function rxDTree() avoids sorting by working with histograms of the data rather that with the raw data itself.
for the author, please follow the link and comment on his blog:
daily e-mail updates
news and
on topics such as: visualization (
), programming (
Web Scraping
) statistics (
time series
) and more...
If you got this far, why not
subscribe for updates
from the site? Choose your flavor:
, or | {"url":"http://www.r-bloggers.com/tips-on-computing-with-big-data-in-r/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+RBloggers+%28R+bloggers%29","timestamp":"2014-04-20T07:12:52Z","content_type":null,"content_length":"45152","record_id":"<urn:uuid:f51c80de-b34d-47ed-bc74-4d2c12a11fad>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00202-ip-10-147-4-33.ec2.internal.warc.gz"} |
Los Gatos Calculus Tutor
Find a Los Gatos Calculus Tutor
I tutored all lower division math classes at the Math Learning Center at Cabrillo Community College for 2 years. I assisted in the selection and training of tutors. I have taught algebra,
trigonometry, precalculus, geometry, linear algebra, and business math at various community colleges and a state university for 4 years.
11 Subjects: including calculus, statistics, algebra 2, geometry
...My specialty is in Microeconomics, but I am very familiar with all the major aspects of free-market economic theory, including Macroeconomics, Econometrics, Money & Banking and International
Economics. I have strong Financial background/experience: I am a Chartered Financial Analyst (Level I), I...
22 Subjects: including calculus, geometry, accounting, statistics
I graduated from UCLA with a math degree and Pepperdine with an MBA degree. I have taught business psychology in a European university. I tutor middle school and high school math students.
11 Subjects: including calculus, statistics, geometry, Chinese
...My past students have gone on to be doctors, lawyers, and even teachers like myself. My method of choice for tutoring is hands on one on one tutoring with the students doing active learning
where they are applying what they learn to real life situations. They can apply this knowledge so that they can see future applications.
103 Subjects: including calculus, Spanish, reading, writing
...Most students have tremendous improvement over the course of tutoring in a matter of weeks. Other area of interest are or areas I can teach Social Science, computer skills like MS word, MS
office, MS Power point, C language. Interested in teaching Bollywood dancing.
15 Subjects: including calculus, statistics, algebra 1, geometry | {"url":"http://www.purplemath.com/Los_Gatos_calculus_tutors.php","timestamp":"2014-04-21T12:41:06Z","content_type":null,"content_length":"23940","record_id":"<urn:uuid:ba341774-289d-4ded-a30c-555678c195c8>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00556-ip-10-147-4-33.ec2.internal.warc.gz"} |
Examples of functional pseudo-code
Nic McPhee
University of Minnesota, Morris
Here are examples of using a functional pseudo-code to express some of the algorithms in Chapter 2 of An invitation to computer science by Schneider and Gersting.
In all these examples I assume that the pseudo-code in the text has been modified to return values instead of printing them. If that's the only change I've made to the pseudo-code, then I've not
repeated it here. If I've made more substantial changes to the text's pseudo-code, then I've included the algorithm in both the book's pseudo-code and this functional pseudo-code.
[Euclid's GCD algorithm] [Factorial] [Average miles per gallon (version 1)] [Average miles per gallon (version 2)] [Sequential search] [Find largest] [Pattern matching]
I've rewritten their GCD algorithm to use a while loop instead of that annoying GOTO in Step 3.
1. Get 2 positive integers as input. Call the larger I and the smaller J.
2. Divide I by J and call the remainder R.
3. While R is not 0
1. Reset I to the value of J
2. Reset J to the value of R
3. Reset R to the remainder of dividing the new I by the new J.
4. End of While
5. Return J.
In the functional version I'll assume the first argument (i) is greater than the second (j).
gcd i j
= j, if r = 0
= gcd j r, if r != 0
r = i mod j
The example of factorials doesn't appear in the book, but they are defined at the beginning of Lab 3, and are sufficiently similar to GCDs that I thought I'd include them.
First, using the book's pseudo-code:
1. Input n
2. Set p to 1
3. While n is not equal to 0
1. Reset p to p * n
2. Reset n to n - 1
4. End of While
5. Return p
Then using the functional pseudo-code:
factorial n
= 1, if n = 0
= n * factorial (n-1), otherwise
average_mpg gallons_used starting_mileage ending_mileage
= avg
distance_driven = ending_mileage - starting_mileage
avg = distance_driven / gallons_used
One could make a strong argument that one shouldn't perform the "good mileage" check in the same routine that computes the mileage, as it makes this routine much less useful on other contexts.
Arguably this algorithm should simply compute the average miles per gallon and return the result, leaving it to another algorithm (or individual) to decide what's good or bad. Similarly, the third
version of this algorithm (Figure 2.5, page 34) suffers from over complication.
Given that caveat, I'll go ahead and do version two just because it illustrates conditionals again. Just remember that this isn't a very good way to organize your problem solving.
average_mpg gallons_used starting_mileage ending_mileage
= (avg, "You're getting good gas mileage"), if avg > 25
= (avg, "You're not getting good gas mileage"), otherwise
distance_driven = ending_mileage - starting_mileage
avg = distance_driven / gallons_used
There are two distinct approaches to this problem depending on how we want to represent the inputs, and I'll provide both versions. In the first version we'll have two separate inputs: The list of
names, and the list of telephone numbers. In the second version there will be a single input, namely a list of pairs of names and numbers. Which one of the two you'd prefer would depend largely on
the way in which your data is stored when you call this algorithm. I personally prefer the second (list of pairs) version, because you can't accidentally get the two lists out of synch, but if what
you've got is two separate lists, the first one might make more sense.
First, the version with two separate lists:
sequential_search name [] [] = error "Name not found"
sequential_search name (n:ns) (t:ts)
= t, if name = n
= sequential_search name ns ts, otherwise
Then the version with a single list of pairs:
sequential_search name [] = error "Name not found"
sequential_search name ((n, t):pairs)
= t, if name = n
sequential_search name pairs, otherwise
find_largest [] = error "Empty list not allowed"
find_largest (x:xs) = do_find_largest x xs
do_find_largest largest_so_far [] = largest_so_far
do_find_largest largest_so_far (x:xs)
= do_find_largest largest_so_far xs, if largest_so_far > x
= do_find_largest x xs, otherwise
match [] pattern = False
match (t:ts) pattern
= (prefix_match (t:ts) pattern) OR (match ts pattern)
prefix_match text [] = True
prefix_match [] pattern = False
prefix_match (t:ts) (p:ps)
= prefix_match ts ps, if t = p
= False, otherwise
The views and opinions expressed in this page are strictly those of the page author. The contents of this page have not been reviewed or approved by the University of Minnesota. | {"url":"http://facultypages.morris.umn.edu/~mcphee/Courses/General_1200_materials/Functional_pseudo_code.shtml","timestamp":"2014-04-19T17:26:30Z","content_type":null,"content_length":"6916","record_id":"<urn:uuid:2d02da09-d55d-4152-9885-a5ff6661f4bd>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00028-ip-10-147-4-33.ec2.internal.warc.gz"} |
Thermal Safety Margins for Servomotors
Build in a thermal safety margin to handle servomotors that heat up more than theoretical models predict when the torque cranks up.
Authored by:
Richard Welch, Jr.
Consulting Engineer
Oakdale, Minn.
Edited by Leland Teschler
Key points:
• The relationship between intermittent and continuous torque is important when evaluating servomotor thermal qualities.
• Temperature in servomotors rises much more quickly than predicted by conventional thermal model, particularly under high-torque demands.
Example brush and brushless servomotor data sheets with peak, continuous torque values: www.exlar.com/prod_SLM_ST_curves.html)
More info on motor thermal qualities: R. Welch, Continuous, Dynamic, and Intermittent Thermal Operation in Electric Motors, www.smma.org/motor_college_thermal.htm (52-page Tutorial Book available
from welch022@tc.umn.edu)
S. Noodleman & B. Patel, Duty Cycle Characteristics for DC Servo Motors, paper TOD-73-30, IEEE/IAS Conference, Oct. 9-12, 1972, Philadelphia, Pa.
Underwriters Laboratories, UL 1446 – Systems of Insulating Materials – General, tinyurl.com/35grzb8
R. Welch, Why a Temperature Sensor Won’t Always Protect a Servomotor From Overheating, Machine Design Magazine, February 4, 2010, tinyurl.com/3776lue
Motor-temperature switch placement, tinyurl.com/3x4by7u
Motors spec’d with four-parameter thermal models, www.micromo.com/uploadpk/2607_SR_IE2-16_FTB.pdf, www.micromo.com/uploadpk/4490B_4490_BS_MIN.pdf
Motion-system designers frequently crank pretty hard on servomotors. To get the highest possible performance, they’ll often command the servomotor to put out the maximum peak torque that its maker
allows. However, servomotor electrical windings can overheat rapidly and even burn up when this happens. Consequently, a servomotor needs a hot-spot temperature safety margin. This margin is defined
as the difference between the winding maximum allowable hot-spot temperature and its maximum continuous winding temperature. Stated mathematically,
T[sm] = T[hs] – T[max]
where T[sm]= hot-spot temperature safety margin; T[hs]= maximum hot-spot temperature; and T[max]= maximum continuous-winding temperature, all in degrees.
Manufacturers normally publish values for each motor’s T[max], along with the corresponding maximum continuous current and torque output, plus the ambient conditions (drive electronics, ambient
temperature, amount of forced cooling, heat-sinking method, and so forth). One needn’t worry about hot spots so long as the motor never exceeds its maximum continuous current value and ambient
conditions don’t deviate from those the manufacturer specifies. However, that’s not the way a servomotor typically operates. Servomotors more often are commanded to produce a dynamic motion profile
that contains one or more time intervals during which the motor must output peak torque exceeding its maximum continuous value. Hence, the manufacturer also specifies a peak-torque value for each
servomotor. Depending on the motor model, the peak-to-continuous torque ratio typically ranges between 2:1 and 5:1, though one brand of brush dc servomotor carries a 7.2:1 ratio.
It’s normal for a servomotor to put out peak torque exceeding its maximum continuous value. But overheating can be a problem if it stays in this condition for too long. So during times of peak-torque
output, the motor duty cycle must be less than 100%. The more the peak-torque value exceeds the maximum continuous value, the lower the allowable duty cycle.
For over 50 years, servomotors have been characterized thermally by what’s generally called the two-parameter thermal model. One generally finds manufacturers publishing one value for the motor
winding-to-ambient thermal resistance, R[th] (°C/W), plus the corresponding thermal time constant, τ (seconds). This information permits calculating the motor’s thermal capacitance, C[th] (J/°C),
using the following equation:
C[th] τ/R[th]
This two-parameter thermal model lets motor manufacturers and users size and select the right motor. Many motor manufacturers have developed sizing programs employing this model that are publicly
available. However, I have yet to find a single manufacturer willing to size a competitor’s motor. Hence, motor users generally must size and compare competing brands themselves to make valid
Frequently, the first step in the sizing process is to completely specify the dynamic-motion profile, along with specifying the ambient conditions in which the motor will operate.
Next, in combination with the motor’s engineering specifications, one determines the peak torque and velocity the motor must exhibit during the most demanding time interval in the motion profile.
This information becomes the peak operation point on the motor’s combined intermittent and continuous torque-speed curves, as shown in the accompanying figure.
A necessary requirement is that this peak operation point lies within the boundary of the intermittent torque-speed curve. Otherwise, the motor-drive combination in question will lack enough torque,
velocity, and/or power for the application.
Finally, one calculates the root-mean-square (rms) torque and velocity for the entire motion profile from the two-parameter thermal model in combination with the time-averaged power dissipation
technique. This rms-operation point goes onto the combined torque-speed curves visible in the accompanying figure. If the rms-operation point lies outside the boundary of the continuous torque-speed
curve, then it is an absolute certainty the motor will overheat.
Conversely, the graph tells us that so long as the rms-operation point lies within the boundary of the continuous torque-speed curve, this particular motor will not overheat and it’s okay to use.
However, extensive research has proven this last statement is NOT always true. In the real world of servomotors, it’s entirely possible the winding maximum allowable hot-spot temperature is actually
exceeded in direct violation of UL 1446, despite operating in the safe part of the curve. Designers who depend on the two-parameter thermal model won’t realize this is happening.
This simple, two-parameter thermal model is still used extensively to calculate dynamic-winding temperature during all possible modes of servomotor operation. But experimental measurement shows it’s
NOT particularly accurate in calculating dynamic-winding temperature when the motor uses more than its maximum continuous current. A much-more accurate four-parameter thermal model has been developed
to overcome this inaccuracy. The basic problem with the two-parameter model is it assumes the entire motor, and every component in it including the windings, has the same dynamic operating
temperature. Actual measurements show this isn’t true. In fact, measurements reveal that within the motor and even within the winding itself there can be temperature differences that the
two-parameter model simply doesn’t account for.
Furthermore, there can be as much as a 50°C temperature difference between the motor winding and its outermost surface area, depending on motor size and operating temperature. This difference can’t
be ignored. A higher order [i.e., 4, 6, 8,… parameter] thermal model allows for temperature gradients in the motor. The winding can have its own dynamic-operating temperature, thermal resistance, and
thermal time constant. These can differ from those of the rest of the motor. Research has shown the four-parameter thermal model is accurate enough to explain all the measured temperature data. And
it’s rather easy to obtain the four parameter values for the model.
As shown in the accompanying figure, the winding temperature calculated by the four-parameter model initially rises faster than in the two-parameter model. However, both curves converge at the rated
130°C maximum continuous-winding temperature. This feature is consistent between these two models with the continuous power-dissipation rating.
It is useful to compare the calculated temperature rise between the two-parameter and four-parameter models while the motor is producing 4× peak torque, corresponding to 16× power dissipation in the
winding. (Torque developed by a servomotor rises linearly with current, while the power dissipation in its electrical winding rises as current squared, I2R.)
An accompanying figure depicts the case of 4× peak-torque output, specified for many servomotors, corresponding to 16× power dissipation. The four-parameter model shows that the winding temperature
rises from its initial 25°C to its rated 130°C value in only 12 sec. The two-parameter model takes longer to respond. It predicts the winding temperature should be less than 55°C at 12 sec.
Experiments show this winding temperature is in error. All in all, a significant temperature error that is clearly unacceptable accompanies the use of the two-parameter thermal model in calculating
dynamic winding temperature when peak torque exceeds the 1× value.
Several motor manufacturers proudly claim their servomotors are recognized under the UL 1004 and/or CSA 22.2/100 standards by Underwriters Laboratories and the Canadian Standards Authority,
respectively. As part of the UL/CSA recognition process, the insulation system for the motor’s electrical winding must comply with the UL 1446 Insulation System Standard. That standard says the class
of the insulation on the winding determines the winding’s maximum allowable hot-spot temperature. To comply with UL 1446, the winding must have a hot-spot temperature rating at least equal to the
maximum continuous-winding temperature. To ensure the motor complies with UL 1446 and to make sure the winding can’t possibly overheat, manufacturers often place a temperature sensor/switch inside
the motor. The sole purpose of this temperature sensor/switch is to tell the drive when the winding approaches its maximum allowable hot-spot temperature. The drive responds by shutting off the power
to the motor. However, there are at least three practical reasons why this overtemperature protection scenario doesn’t always work.
A point to note is that even the four-parameter model isn’t perfect. Though it allows the winding to have its own dynamic operating temperature, it still assumes the entire winding is at one
temperature. Measurements at different points in the winding show this is not true. Nevertheless, the four-parameter model is accurate enough to show why a servomotor must have a hot-spot temperature
safety margin while at peak-torque output.
Most servomotor manufacturers still perform all their motor-sizing and dynamic-winding-temperature calculations using the two-parameter thermal model. (I have found only one that uses the
four-parameter model.) Thus motor users have no choice but to use the two-parameter model in calculating dynamic-winding temperature unless they measure the four parameter values themselves. These
measurements are relatively straightforward.
Sizing the optimum motor for the application begins with the process of defining the dynamic-motion profile and the ambient conditions. Next, the designer determines the candidate motor’s
rms-operation point and notes it on the continuous torque-speed curve. The motor will certainly overheat if this rms-operation point lies outside the boundary of the motor continuous torque-speed
curve. The only way to use the particular motor under investigation would be to modify the motion profile and/or change the ambient conditions. Conversely, both motor manufacturers and conventional
calculations will predict the motor won’t overheat if the rms-operation point lies within the boundary of the motor’s continuous torque-speed curve.
However, the four-parameter model predicts the winding heats more quickly and hits a higher temperature than the two-parameter model shows. In fact, the motor exceeds its maximum
continuous-temperature value while at peak torque. Furthermore, the sensor/switch can’t always react fast enough to prevent this high temperature.
Thus, the electrical-winding insulation must have a maximum hot-spot temperature value exceeding the maximum continuous-winding temperature. The greater the safety margin for the hot-spot
temperature, the better the protection. For example, all Exlar SLM servomotors have a 130°C maximum continuous-winding temperature. Their windings insulation system is rated Class H, which provides a
180°C maximum allowable hot-spot temperature. This gives a 180 – 130 = 50°C hot-spot safety margin.
In addition, all SLM servomotors have a 2:1 peak-to-continuous torque rating. This combined with their hot-spot safety margin gives excellent thermal protection during times of peak-torque output. In
contrast, many other servomotors have a hot-spot safety margin of 15°C or less (some are zero). It is also common to find peak-to-continuous torque ratios ranging between 3:1 up to 5:1.
Over time, different authors have suggested varying figures of merit for selecting high-performance servomotors. But from a motor user’s perspective, the single-most important figure of merit is one
that shows how much output to expect from a servomotor for the longest time period while remaining compliant with UL 1446. A hot-spot temperature safety margin provides this kind of feedback. So far,
a 50°C margin is the highest value that I’ve been able to find.
Discuss this Article 0
Post new comment | {"url":"http://machinedesign.com/archive/thermal-safety-margins-servomotors","timestamp":"2014-04-18T03:22:04Z","content_type":null,"content_length":"101142","record_id":"<urn:uuid:7c519e44-7a48-4aaa-b604-4cd7dfd36ad7>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00223-ip-10-147-4-33.ec2.internal.warc.gz"} |
verify stokes theoremfor vector point function A= (2x-y)i-(2)j where s is hamisphere x^2+y^2+z^2=1 - Homework Help - eNotes.com
verify stokes theoremfor vector point function A= (2x-y)i-(2)j where s is hamisphere x^2+y^2+z^2=1
The Stoke's theorm needs to express (2x-y)i-(yz2)j-(y2z)k as the curl of a vector field F.
Use curl formula:
curl F=`[[i,j,k],[(del)/(del x),(del)/(del y), (del)/(del z)],[F_x,F_y,F_z]]` = `( (del F_z)/(del y) - (del F_y)/(del z) , (del F_x)/(del z) - (del F_z)/(del x) , (del F_y)/(del x) - (del F_x)/(del
Put `(del F_z)/(del y) - (del F_y)/(del z) ` = 2x - y
`(del F_y)/(del x) - (del F_x)/(del y) = 0 =gt F_y = F_x = 0 =gt F_z = 2x - y`
`` `oint` 2x - y dz = 0 because z is not changing over the boundary curve of the hemisphere.
Join to answer this question
Join a community of thousands of dedicated teachers and students.
Join eNotes | {"url":"http://www.enotes.com/homework-help/verify-stokes-theoremfor-vector-point-function-2x-299192","timestamp":"2014-04-21T15:49:36Z","content_type":null,"content_length":"25088","record_id":"<urn:uuid:e0cd8acd-5453-4ed6-a27b-6ae45e0c266d>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00056-ip-10-147-4-33.ec2.internal.warc.gz"} |
Tessellations with Java
By Roseann Krane, Monte Vista HS, Danville, CA, and
students: Jefferson Ng, Mike Carns, and Amber Bullington, and
Sherida Hare, Bellport HS, Brookhaven, NY, editor
"Understanding symmetry is essential to the understanding of Escher art and understanding symmetry involves a familiarity with the movements that mathematicians call transformations." (Jill and
Walter Britton)
Click to view the "Step by Step Tessellations" for a detailed explanation on tessellations. Click to print the "Step by Step Tessellations" for a detailed explanation on tessellations.
• Tessellation - A tessellation is a pattern of one or more shapes, completely covering a two-dimensional plane. Picture a puzzle with pieces that have the same shape and size that fit together
perfectly as in a square. (A circle would not be a shape for a tessellation because circles cannot fit together in a puzzle.) A tessellation is a pattern that is repeated. If we take squares and
layer them on the floor, we can cover the entire floor with them, without any space leftover.
• Plane - Imagine a plane to be a tabletop with no thickness that continues infinitely in all directions.
• Translation - If we move a figure to a new location by sliding it a fixed distance in a fixed direction, the motion is called a translation.
• Rotation - If we move a figure to a new location by turning it about a fixed axis, the motion is referred to as a rotation or a turn. Center of Rotation - The point or axis about which a figure
is rotated is called the center of rotation. The angle through which the figure turns is called the angle of rotation.
• Reflection - If we move a figure to a new location by flipping it about a fixed line the motion is called a reflection or flip. The fixed line about which the figure is flipped is called the line
of reflection. (It is called a reflection because if a mirror were placed along a line the transformed figure would coincide with the mirror image of the figure in its original location.)
• Glide Reflection - This final transformation combines the motions of reflection and translation to move a figure to its new location.
1. By flipping it about a fixed line.
2. By sliding it at a fixed distance in a direction parallel to that line.
3. The motion is called a glide reflection.
Click on the following for a detailed explanation on: Student Exercises
1. Modify the paint function to create a tessellation of triangles.
2. Modify the paint function to create a tessellation of hexagons.
3. Modify the paint function to create a tessellation of octagons and squares.
Links about tessellations: | {"url":"http://dimacs.rutgers.edu/~rkrane/tessell.html","timestamp":"2014-04-19T04:20:16Z","content_type":null,"content_length":"4746","record_id":"<urn:uuid:3cac738a-c448-43f5-a7be-ae01f85966b5>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00402-ip-10-147-4-33.ec2.internal.warc.gz"} |
The Bean Experiment
Introduction to Hypothesis testing
Barry Sinervo
Null versus Alternative Hypotheses
Read Through the Following Procedure.
Form groups of 4-5 and perform a selection experiment of sorts. First you might ask what the heck do beans have to do with behavior? Sinervo is loosing it. We shall soon see.
Count out a pile of 200 black beans.
Then designate one member of the group as the bean picker. The other members of the group can weigh out the piles of beans as they are collected, and then record and graph the data for the group.
The bean picker will pick approximately 5-10 between their thumb, index, and forefinger and gently roll around the beans and let some beans spill out until only one bean remains. Take this bean and
set it aside in "lot 1". Repeat this procedure with the remaining pile of beans 9 more times so that you end up with 10 beans in lot 1.
Set aside lot 1 and have some of the members weigh out lot 1, while the bean pickers continues making lots 2, 3, ..., 20 by the same procedure. Each lot has 10 beans in it.
Null versus Alternative Hypotheses
Let us first set up a null hypothesis. What pattern of bean size would you predict if you were randomly drawing beans from the pile as a function of size.
Recall the manner in which you collected data on beans.
Let us now set up an alternative hypothesis. What kinds of patterns might you predict if the drawing was in some way non-random. Describe mechanisms that might explain these patterns. You might in
fact have one or more alternative hypotheses to test. You should try to describe at least two hypotheses, and two mechanisms. Discuss the null and alternative hypothesis amongst the members of the
Analyzing the data
Graph the data that your group obtained on bean size as a function of lot# (e.g., 1 is the first lot, 2 is the second lot, etc.). If some member of the group knows how to use a simple graphing
program it might be useful to plot the data on the computer (and get copies of the graphs to other members of the group). If some member of the group knows how to run a simple linear regression, even
better! Run a regression line through the data.
What is the pattern of size change that is found in your samples of beans from the first draw to the last draw?
Is this pattern statistically significant? Do we reject or accept our null hypothesis?
How can you explain this pattern in light of your alternative hypotheses?
Everyone should obtain an regression (linear trendline) and an R2 value to report with your data. Talk to your TAs if you don't know how to do this. If you email your R2 value and sample size to your
TA, they can provide you with a p-value, so that you can assess the significance of your data.
The Lab Report
The Big Picture Question. What sort of implications does this simple experiment have for the origins of Agriculture and cultivated plants? Refer to the paper by C. Heiser (available on reserve in the
library) in your discussion of the been experiment. Now make some brief observations on the behavior of a living creature during its feeding behavior. Make a simple hypothesis concerning either:
1. how variation in the creatures movements might get it into trouble with some kind of predator (natural selection on it) or
2. how the creatures own foraging might select for subtle differences in the food that it is feeding on (natural seletion on the prey).
Include your hypotheses concerning the creatures behaviors in the discussion. Make a simple statement about the "unconcious aspects of natural selection" in your lab report discussion.
Lab report (1 page with 1 page figure) is in section next week.
Include the answers to all of the above Big Picture questions in the form of a standard scientific report (see the Bean Lab Report Grading web page for more details, as what is listed below is only a
brief summary):
1. Introduction where you discuss your hypotheses. These hypotheses should be discussed near the end of the introduction. You should begin your intro with the big concepts and work down to the small
concepts (e.g., your specific hypotheses). The big concepts would be: how does this simple-minded experiment relate to the real world.
2. Methods where you discuss in very succinct terms what you did. Only include enough detail such that a protege following in your footsteps could repeat this monumental experiment.
3. Results should describe the patterns. Words and graphics are integrated. You must report statistical tests in the Results. Describe any statistically-signficant patterns.
4. Discussion includes a consideration of your hypotheses and proposed mechanisms. Also, discuss the big picture question as the last paragraph in the Discussion.
Be succinct and to the point when writing your lab report! | {"url":"http://bio.research.ucsc.edu/~barrylab/classes/animal_behavior/TUTLAB.DIR/SELNBEAN.HTM","timestamp":"2014-04-17T16:24:26Z","content_type":null,"content_length":"18973","record_id":"<urn:uuid:f2bf8ecd-a796-4fb8-94fd-98fb51aaa4eb>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00183-ip-10-147-4-33.ec2.internal.warc.gz"} |
search results
Expand all Collapse all Results 1 - 8 of 8
1. CMB Online first
Helicoidal Minimal Surfaces in a Finsler Space of Randers Type
We consider the Finsler space $(\bar{M}^3, \bar{F})$ obtained by perturbing the Euclidean metric of $\mathbb{R}^3$ by a rotation. It is the open region of $\mathbb{R}^3$ bounded by a cylinder with
a Randers metric. Using the Busemann-Hausdorff volume form, we obtain the differential equation that characterizes the helicoidal minimal surfaces in $\bar{M}^3$. We prove that the helicoid is a
minimal surface in $\bar{M}^3$, only if the axis of the helicoid is the axis of the cylinder. Moreover, we prove that, in the Randers space $(\bar{M}^3, \bar{F})$, the only minimal surfaces in the
Bonnet family, with fixed axis $O\bar{x}^3$, are the catenoids and the helicoids.
Keywords:minimal surfaces, helicoidal surfaces, Finsler space, Randers space
Categories:53A10, 53B40
2. CMB 2013 (vol 57 pp. 225)
Small Flag Complexes with Torsion
We classify flag complexes on at most $12$ vertices with torsion in the first homology group. The result is moderately computer-aided. As a consequence we confirm a folklore conjecture that the
smallest poset whose order complex is homotopy equivalent to the real projective plane (and also the smallest poset with torsion in the first homology group) has exactly $13$ elements.
Keywords:clique complex, order complex, homology, torsion, minimal model
Categories:55U10, 06A11, 55P40, 55-04, 05-04
3. CMB 2012 (vol 56 pp. 709)
Universal Minimal Flows of Groups of Automorphisms of Uncountable Structures
It is a well-known fact, that the greatest ambit for a topological group $G$ is the Samuel compactification of $G$ with respect to the right uniformity on $G.$ We apply the original description by
Samuel from 1948 to give a simple computation of the universal minimal flow for groups of automorphisms of uncountable structures using Fraïssé theory and Ramsey theory. This work generalizes
some of the known results about countable structures.
Keywords:universal minimal flows, ultrafilter flows, Ramsey theory
Categories:37B05, 03E02, 05D10, 22F50, 54H20
4. CMB 2011 (vol 56 pp. 434)
Some Remarks on the Algebraic Sum of Ideals and Riesz Subspaces
Following ideas used by Drewnowski and Wilansky we prove that if $I$ is an infinite dimensional and infinite codimensional closed ideal in a complete metrizable locally solid Riesz space and $I$
does not contain any order copy of $\mathbb R^{\mathbb N}$ then there exists a closed, separable, discrete Riesz subspace $G$ such that the topology induced on $G$ is Lebesgue, $I \cap G = \{0\}$,
and $I + G$ is not closed.
Keywords:locally solid Riesz space, Riesz subspace, ideal, minimal topological vector space, Lebesgue property
Categories:46A40, 46B42, 46B45
5. CMB 2011 (vol 54 pp. 311)
Some Remarks Concerning the Topological Characterization of Limit Sets for Surface Flows
We give some extension to theorems of Jiménez López and Soler López concerning the topological characterization for limit sets of continuous flows on closed orientable surfaces.
Keywords:flows on surfaces, orbits, class of an orbit, singularities, minimal set, limit set, regular cylinder
Categories:37B20, 37E35
6. CMB 2003 (vol 46 pp. 632)
The Operator Amenability of Uniform Algebras
We prove a quantized version of a theorem by M.~V.~She\u{\i}nberg: A uniform algebra equipped with its canonical, {\it i.e.}, minimal, operator space structure is operator amenable if and only if
it is a commutative $C^\ast$-algebra.
Keywords:uniform algebras, amenable Banach algebras, operator amenability, minimal, operator space
Categories:46H20, 46H25, 46J10, 46J40, 47L25
7. CMB 2002 (vol 45 pp. 154)
On the Poisson Integral of Step Functions and Minimal Surfaces
Applications of minimal surface methods are made to obtain information about univalent harmonic mappings. In the case where the mapping arises as the Poisson integral of a step function, lower
bounds for the number of zeros of the dilatation are obtained in terms of the geometry of the image.
Keywords:harmonic mappings, dilatation, minimal surfaces
Categories:30C62, 31A05, 31A20, 49Q05
8. CMB 1999 (vol 42 pp. 104)
Instabilité de vecteurs propres d'opérateurs linéaires
We consider some geometric properties of eigenvectors of linear operators on infinite dimensional Hilbert space. It is proved that the property of a family of vectors $(x_n)$ to be eigenvectors
$Tx_n= \lambda_n x_n$ ($\lambda_n \noteq \lambda_k$ for $n\noteq k$) of a bounded operator $T$ (admissibility property) is very instable with respect to additive and linear perturbations. For
instance, (1)~for the sequence $(x_n+\epsilon_n v_n)_{n\geq k(\epsilon)}$ to be admissible for every admissible $(x_n)$ and for a suitable choice of small numbers $\epsilon_n\noteq 0$ it is
necessary and sufficient that the perturbation sequence be eventually scalar: there exist $\gamma_n\in \C$ such that $v_n= \gamma_n v_{k}$ for $n\geq k$ (Theorem~2); (2)~for a bounded operator $A$
to transform admissible families $(x_n)$ into admissible families $(Ax_n)$ it is necessary and sufficient that $A$ be left invertible (Theorem~4).
Keywords:eigenvectors, minimal families, reproducing kernels
Categories:47A10, 46B15 | {"url":"http://cms.math.ca/cmb/kw/minimal","timestamp":"2014-04-18T13:19:34Z","content_type":null,"content_length":"37312","record_id":"<urn:uuid:a23077e7-8c76-461a-825e-722377b162b4>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00470-ip-10-147-4-33.ec2.internal.warc.gz"} |
Sperner's Lemma
Copyright © University of Cambridge. All rights reserved.
If you try the Triangle Game you will probably soon suspect that it is not a fair game. Read on to find out the truth about it. Take a triangle ABC, labelled counterclockwise, and subdivide it into
smaller triangles in whatever way you like. Then label all the new vertices as follows:
• vertices along AB may be labelled either A or B, but not C
• vertices along BC may be labelled either B or C, but not A
• vertices along CA may be labelled either C or A, but not B
• vertices inside triangle ABC may be labelled A or B or C.
Now shade in every small triangle in the subdivision that has three different labels.
Use two different shadings to distinguish the triangles which have been labelled counterclockwise (i.e. in the same sense as triangle ABC) from the triangles which have been labelled clockwise (i.e.
in the sense opposite to that of as triangle ABC).
Then there will be exactly one more counterclockwise triangle than clockwise triangles. In particular, the number of shaded triangles will be odd.
This is Sperner's Lemma, named after its discoverer Emanuel Sperner, a 20th century German mathematician.
The term "lemma" may need explanation. It is used to describe a minor theorem which may not be of much interest in its own right, but plays an important role in some wider theory. Sperner's Lemma is
a key result in topology. However, the result is so readily stated, and its proof is so accessible and elegant, that Sperner's Lemma should really be elevated to the status of a Theorem.
The proof of Sperner's Lemma requires no more than simple counting.
The proof starts by putting labels inside every small triangle. If the endpoints of the triangle are the same, the edge is labelled 0. If the vertices are different, and in the counterclockwise sense
(the same sense as those of the outside triangle), label it 1. If the endpoints are different and in the clockwise sense (the opposite sense to that of the outside triangle, label it -1. Then add the
three edge numbers and write the sum in a little circle in the middle of the triangle.
There are four possible outcomes:
• If the vertices of the triangle are all different, and labelled counterclockwise, the edge numbers will all be 1 and the circled number in the centre of the triangle will be 3.
• If the endpoints are all different, and labelled clockwise, the edge numbers will all be -1 and the circled number in the centre of the triangle will be -3.
• If the vertices of the triangle are all the same, the edge numbers will all be 0 and the circled number in the centre of the triangle will be 0.
• If two vertices are the same and the third is different, one edge will be labelled 0, another 1 and the third -1. So the circled number in the centre of the triangle will be 0.
Look at edge AB of the original triangle. In moving from A to B, a number 1 indicates a change from A to B, the number -1 indicates a change from B to A, and the number 0 indicates no change. Since
the overall change is from A to B, the sum of the numbers along AB is 1. Similarly, the sum of the numbers along the edges BC and CA of the original triangle are each 1.
So the numbers along the outside edges of the large triangle add up to 3.
The numbers along the inside edges add up to zero, since an inside edge will either be labelled 0 on both sides, or 1 on one side and -1 on the other.
Thus the sum of all the edge labels is exactly 3.
Now the sum of all the edge labels must be the same as the sum of the circled numbers inside the small triangles. Thus the sum of the circled numbers is 3. Since the circled numbers are either 3 (for
a small triangle labelled ABC counterclockwise) or -3 (for a small triangle labelled ABC clockwise), the number of counterclockwise triangles must be exactly one more than the number of clockwise
And that is exactly what Sperner's Lemma predicted. | {"url":"http://nrich.maths.org/1383/index?nomenu=1","timestamp":"2014-04-18T23:39:31Z","content_type":null,"content_length":"7085","record_id":"<urn:uuid:b8cc4b8e-7728-42dd-846d-df02f14b2971>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00568-ip-10-147-4-33.ec2.internal.warc.gz"} |
Cooperstock &Tieu's most recent paper
I'm still wading through the paper, but I remain skeptical. A key section, I think, is near eq 30:
This is the key equation. The complexity of this velocity expression as computed by observers external to the distribution of matter is in very sharp contrast to the simplicity of the proper velocity
form [itex]\beta = \sqrt{F/r}[/itex] as witnessed by local observers. However, it is the former that's relevant for astronomical observers.
(emphasis in original).
While I can appreciate that the expression for dr/dt in the particular coordinate system that C&T chose might look more complicated, if they are correct in their approximations there is no reason it
should be numerically significantly different from the simpler expression they computed for a local observer.
Furthermore, conceptually, the external observer won't actually be measuring dr/dt. He'll rather be measuring some redshift factor, z, or possibly some sort of "apparent angular width". Let's assume,
however, that he measures z for simplicity. This is what people usually measure when they measure rotation curves. One doesn't actually take some radar measurement to measure distance as a function
of time and, even if one did this, it still wouldn't be exactly the same thing as a measure of the r coordinate, it would only be approximately the same. So let's calculate what we actually measure,
It's worth noting that we're measuring radial velocities in this problem, and orbital velocities in the more usual case of rotation curves. This doesn't have a major effect, except that we have to
distinguish between different portions of the cloud by some means other than angular position.
Then if the local velocity at the edge of the infalling cloud is sqrt(2m/r) (see the text near eq 8) which I'll quote in part:
The geodesic solution for dr/dt and the metric coefficeints g_00 and g_11 of (1) are used to evalutate the proper radial velocity.
This equal sqrt(2m/r) in magnitude for particles released from rest at infinity and is seen to approach 1, the speed of light, as r approaches 2m. .... However, for asymptotic observers who reckon
radial distance and time increments as dr and dt, the measured velocity is
[tex]\frac{dr}{dt} = -(1-\frac{2m}{r}) \sqrt{\frac{2m}{r}} \hspace{1 in} (8)[/tex]
But, as I noted, our observer at infinity will not actually measure 8)., What he'll (probably) actually measure is the redshift, z. We can compute z by finding the redshift due to the local velocity
relative to a local stationary observer multiplied by the gravitational redshift from the local stationary observer to the distant stationary observer:
[tex]1+z = \left( \frac{1}{\sqrt{g_{00}}} \right) \left( \sqrt{\frac{1+\sqrt{\frac{2m}{r}}}{1 - \sqrt{\frac{2m}{r}}}} \right) = \frac{1+\sqrt{\frac{2m}{r}}}{1-\frac{2m}{r}}
where the first term is the gravitational redshift, with g_00 = 1-2m/r, and the second term is the relativistic doppler shift due to the local velocity. The algebraic simplification is done by
multiplying the numerator and denominator inside the square root by (1+sqrt(2m/r)) and simplifying.
Comparing this formula for 1+z to 8), we see that as we would expect, z-> infinity in the strong field case. If we series expand our expression for 1+z, we find that
[tex]z+1 \approx 1 + \sqrt{\frac{2m}{r}} +\left(\sqrt{\frac{2m}{r}}\right)^2 [/tex]
and the second term can be ignored for small enough v, being quadratic in the square root.
We should also note that if we series expand the formula for relativistic doppler shift:
[tex]z + 1 = \sqrt{\frac{1-v}{1+v}}[/tex] in a series for small v, we get [itex]z+1 = 1 - v + v^2/2[/itex]
which explains why measuring z is equivalent to measuring v for small z (or small v).
The most important point is that the local velocity determines the local redshift (from our infalling observer to a local stationary observer), and that the redshift is multiplicative, so that the
total redshift to our distant observer is the local doppler redshift multiplied by the gravitational redshift from our local observer to our distant observer.
Thus, if the gravitational redshift is negligible because we are in a weak field, all the redshift must be due to the local velocity. Thus playing games with the coordinates can't make the distant
velocity different from the local velocity unless we have significant gravitational redshift - but it has been assumed by C&T that we do not have significant gravitational redshift.
Redoing the analysis in terms of 'z' instead of coordinates puts the problem in slightly closer touch to what is happening physically IMO, and to my mind it makes it very clear that the local
redshift must be the same as the redshift at infinity if we ignore the gravitational component of the redshift. | {"url":"http://www.physicsforums.com/showthread.php?t=203200","timestamp":"2014-04-21T09:45:02Z","content_type":null,"content_length":"41184","record_id":"<urn:uuid:d96a4a6d-386c-454a-9184-6654614d9001>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00593-ip-10-147-4-33.ec2.internal.warc.gz"} |
Light, up to 11Light, up to 11
Whew! Back from a very successful wedding and honeymoon, moving into a new apartment, writing thank-you notes, and all the fun jazz that comes with being newly married.
But hey, we’ve got to get this blog cranking again at some point, and now’s as good a time as any. We’ll kick things back off with a letter from a reader,
Scott writes in with a question:
If you have a cubic meter of nothing but highly condensed photons, what would the upper limit on its energy density be? (If there is even a limit.)
Classically there’s no theoretical limit on the field strength, though radiation pressure will eventually be too large for any physical box to contain. It would be an entertaining calculation to look
at the stress-strain curves of something like solid steel to see just how much light we might be able to stuff into a magic 100% reflective box before we broke the yield strength.
But the physical reality in which we live is not an entirely classical one, and Scott’s question in fact presupposes that we’re dealing with the full quantized description of photons of light.
Quantum Electrodynamics (QED for short) is a tricky subject that I’m actually in the midst of learning formally as part of the required Standard Model class. We don’t need to actually do hard-core
QED to answer the question though, an order-of magnitude estimate will work fine.
It’ll turn out that if you stuff enough energy into the vacuum you’ll eventually start creating matter (electron/positron pairs in this type of circumstance) via Einstein’s famous E = mc^2
relationship. In nuclear weapons we’re used to seeing the m turn into E very dramatically, but of course the other direction works just as well. Get enough E and you’ll start making m.
We might estimate that if we let m be the mass of the electron, that if we stuff E = mc^2 worth of electromagnetic energy into one Compton wavelength of the electron that we might have a good
estimate for the maximum amount of energy we can fit into a given volume before we start pair production and therefore don’t have (just) a box full of light anymore.
The Compton wavelength of the electron is:
Which works out to be about 3.96e-13 meters. We might take the cube of this to get the “Compton volume” of the electron, which is about 5.76e-38 cubic meters. The E=mc^2 energy of the electron works
out to be about 8.19e-14 joules, so the total energy density of works out to be, drumroll…
1.4e24 joules/meter^3.
Which is pretty hefty. We’re talking “giant asteroid impact per cubic meter” hefty. But how does that compare to the light intensity we can generate with current laser technology?
Well, we’ve calculated an energy density, not an intensity. Intensity is watts per square meter – or if you prefer, power per area. We calculated energy per volume. So we have to do a bit of unit
conversion. If we imagine that we turn on a flashlight for 1 second, we’ve created a column of light with a length of about 186,000 miles containing a total energy equal to the power of the
flashlight times 1 second. The relationship between energy density and intensity is thus (E/V)*c = I, where I is the intensity, watts per meter^2. Which is good because the units work out. It comes
out to be something like 4.2e32 watts per square meter, or about 4.2e28 W/cm^2.
Right now our best lasers are generally in the 10^20 W/cm^2 range, so we have a ways to go before we can start stuffing boxes to their limits with photons.
Still, when you’re observing interactions between light fields and electrons that are already moving relativistically fast you can actually get these wacky QED effects at reachable intensities. Chad
wrote a bit about this a while back, in fact.
Will we ever get to the point when we just can’t make our lasers any more intense without turning them into particle beams? Well, I’m not holding my breath. But it would be cool.
What the heck, one more picture from the reception:
1. #1 raghuvir jha ( india ) September 24, 2010
it is very good to find you again here. i speculating that you may not come so quickly. but you are very professional. you are looking very smart and handsome in the above picture. god was very
kind on you , he given you both look and mind.
now i come into right track ,i have a very silly doubt, how can you confine a photon into a box? as far as i know, no sooner photon creates it moves with the very high speed. i have another
doubt, suppose there is nothing but a single photon or single electron in the universe that is to say a isolated photon then what will be the behaviour of photon ? will it be in rest or will it
move with 1,86000 mile per sec?
2. #2 Jesse September 24, 2010
Physics Buzz had a post about the maximum strength of lasers about a month back while back. They claimed that new calculations show pair production will occur in current generation lasers like
the ELI and XFEL.
3. #3 Jesse September 24, 2010
“about a month back while back”
Apparently I can’t edit my posts. Ah well. I’m sure you get the idea.
4. #4 Uncle Al September 24, 2010
A 1 gigatesla field has 4×10^24 J/m^3, E/c^2 mass density 1000 times that of lead. Don’t drop the bottle – same EOS as Ideal Gas.
Real photons are inconvenient for not staying put. Separated charge is a pain in the patootie for sparking: ionizing a gas fill, cold emission from surfaces, and atomic nuclei with Z greater than
137 (to first order. Rather more than that with QED corrections). The choice way to put ten pounds of photons in a five pound vacuum bag is magnetic field.
Magnetars pull ~10 gigateslas and above – more than enough to foment intense vacuum birefringence and spontaneous pair formation,
The hard work is done. All one need do is look.
5. #5 Carl Brannen September 24, 2010
Congratulations on becoming married!
The steel box problem depends on the thickness of the steel, of course. But if you assume the thickness is proportionate to the diameter, the maximum pressure you can contain is approximately
equal to the tensile strength of the steel. Say 40,000 psi.
Light carries momentum proportional to its energy. So it doesn’t matter what frequency light you use, the maximum energy for 40,000 psi will be the same. I’ll leave it to you to work ou the
I’m looking at going to grad school next fall and I’m thinking that my primary objective will be UT Austin. I take the Physics GREs in two weeks.
6. #6 Anonymous September 24, 2010
what if several dozen lasers were focused into a single beam?
would that increase the power?
7. #7 Tercel September 26, 2010
I’m still convinced that Uncle Al is some sort of spam-bot that spits out pseudo-coherent text related to the post on which it is commenting. I mean seriously, what the hell sort of sentence
structure is that? It’s certainly not a human.
8. #8 complex field September 26, 2010
@ #6 — don’t cross the beams….. | {"url":"http://scienceblogs.com/builtonfacts/2010/09/23/light-up-to-11/","timestamp":"2014-04-18T18:20:08Z","content_type":null,"content_length":"54444","record_id":"<urn:uuid:bba2f5e7-20b4-4bd9-9ff5-5bede15b7531>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00533-ip-10-147-4-33.ec2.internal.warc.gz"} |
Decidability of matrix algebra
up vote 6 down vote favorite
Take multi-sorted first-order logic with equality, complex scalars, 1xn vectors, nx1 vectors, nxn matrices, addition and multiplication for each pair of sorts they make sense for, and hermitian
transpose (which is conjugation on scalars). Is it decidable what sentences are [true for all n]? (there are 4 sorts, what sentences are true simultaneously for all n)
(For each particular n, it is decidable by interpreting in a real ordered field.)
What if we also add real scalars and ≤ for them?
lo.logic ra.rings-and-algebras
Ricky, I think you need to restate the question without multisorted logic: There is no quantification over "sorts"! For example, how about the simpler question: "Is the theory of all $n\times n$
matrix rings with complex coefficients decidable?" – SJR Jul 30 '10 at 3:22
What there is, is quantification within "sorts". The answer to the simpler question is yes by interpreting in a real ordered field. – Ricky Demer Jul 30 '10 at 3:34
3 A related question would be whether there is a 0-1 law for your 4-sorted statements as the dimension increases. In other words, is it true that for every statement about matrices, vectors &
scalars, the truth is the same in all sufficiently high dimensions? A simple example: matrix multiplication is commutative in dimension 1, but not in all higher dimensions. Do you know if such a
phenomenon holds for all statements? – Joel David Hamkins Jul 30 '10 at 3:57
1 The scalars 0 and 1 are definable, as is any algebraic number, so they don't have to be in the language, but you can freely use them anyway. The same for the identity matrix and the 0 matrix. –
Joel David Hamkins Aug 2 '10 at 1:03
2 The question is undecideable: see the answers for this Math Overflow question: mathoverflow.net/questions/34186/… – Peter Shor Aug 2 '10 at 21:40
show 9 more comments
3 Answers
active oldest votes
The second problem (where real scalar variables and the comparison relation are also allowed) is equivalent to the first problem. Here is a standard argument showing this:
• A complex scalar variable z can be restricted to real values by requiring $z=\bar{z}$.
• The comparison x≤y can be replaced by $\exists z.\ x+z\bar{z}=y$, where z is a fresh complex scalar variable.
up vote 1 down
vote accepted Back to the original question, the following paper may be related (or even answer your question) but I do not have enough knowledge to understand the content completely. Mihai
Putinar: Undecidability in a Free *-Algebra, preprint, April 2007, http://www.ima.umn.edu/preprints/apr2007/2165.pdf.
add comment
If you want to determine truth in this language with real or complex entries, then Yes. All this is expressible in the language of real-closed fields, simply by using components, and is
therefore expressible in the complete theory of $\langle\mathbb{R},+,\cdot,0,1,\lt\rangle$, which is decidable by Tarski's theorem on real-closed fields. For example, quantifying over $n\
times 1$ vectors is just $n$ quantifiers over reals (or $2n$ if you want complex numbers).
You mentioned that for each particular $n$, it is decidable by interpreting in the real-closed field, but my point is that this algorithm is uniform in $n$, and so you get a full decision
up vote 3 procedure for the multi-sorted logic. That is, given a sentence in the multi-sorted language, we can tell which sorts are quantified over, and so we know how to translate it into a question
down vote about real-closed fields, which we can then answer. (I assume that you use a set-up as usual in the multi-sorted logic where each sort gets its own variables and quantifiers.)
If you intend to interpret it over the rationals, then No, since even the $1$-dimensional ring theory of $\langle\mathbb{Q},+,\cdot,0,1,\lt\rangle$ is not decidable, as the integers are
definable there, and so you can express the halting problem.
"this algorithm is uniform in n, and so you get a full decision procedure for the multi-sorted logic." How does this follow? I see that we have an algorithm to take a sentence and n, and
determine whether the sentence is true for n, but what I'm looking for is an algorithm to take a sentence and determine whether it is [true for all n]. – Ricky Demer Jul 30 '10 at 3:08
I see. I had interpreted your question as to whether or not you could decide the truth of any particular sentence in the multi-sorted logic. But you want to determine whether an unsorted
statement holds true for each sort. – Joel David Hamkins Jul 30 '10 at 3:14
Actually, could you clarify your precise question? You seem to have a handful of sorts for each $n$: matrices, col vectors, row vectors, scalars. So the sentences that you want to decide
should each have quantifiers and variables over matrices, scalars, column vectors and row vectors, of unspecified size? But then you want to decide which such sentences are true
regardless of the dimension? – Joel David Hamkins Jul 30 '10 at 3:25
Correct. (editing question now) – Ricky Demer Jul 30 '10 at 3:29
This answer was answering the version of the question where you have different sorts for each n. But the OP intends to have only four sorts, and wants to know whether the statement holds
in all dimensions. – Joel David Hamkins Aug 30 '10 at 15:52
add comment
Although Peter Shor gave a proof of the undecidability (as he stated in a comment to the current question), here is another proof. An advantage of this proof is that it gives the
undecidability of a very restricted version of the problem.
In an answer to my question, Agol told me that the following problem (which I called the Finite-Dimensional Word Problem for Groups (FWP) in the question) is undecidable by a result of
Slobodskoi [Slo81].
Instance: A finite presentation of a group G and an element w of G as a product of generators and their inverses.
Question: Does every matrix representation of G map w to the identity matrix?
(The result in [Slo81] does not literally talk about this problem, but the result there implies the undecidability of this problem. See the answer by Agol linked above and also the
discussion linked from my question.)
up vote 1
down vote This problem can be easily translated into a special case of the current problem, which shows that the problem in question is undecidable even if we only allow a sentence of the form:
where I, X, X[1], …, X[n] are matrix variables and P[1](X[1],…,X[n]), …, P[m](X[1],…,X[n]), Q(X[1],…,X[n]) are products of one or more variables in X[1], …, X[n] in some order with
repetitions allowed. In particular, the problem is undecidable even if we do not allow scalar variables, vector variables, addition or conjugate transpose!
[Slo81] A. M. Slobodskoi. Unsolvability of the universal theory of finite groups. Algebra and Logic, 20(2):139–156, March 1981. http://www.springerlink.com/content/x880g1x17754hq83/
add comment
Not the answer you're looking for? Browse other questions tagged lo.logic ra.rings-and-algebras or ask your own question. | {"url":"http://mathoverflow.net/questions/33879/decidability-of-matrix-algebra","timestamp":"2014-04-16T16:16:16Z","content_type":null,"content_length":"71746","record_id":"<urn:uuid:4fd3951e-fbd8-417c-87ba-461babea800d>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00014-ip-10-147-4-33.ec2.internal.warc.gz"} |
Lexicographic order
From Encyclopedia of Mathematics
An order on a direct product
of partially ordered sets Partially ordered set), where the set of indices Totally well-ordered set), defined as follows: If
means that, for some
The lexicographic order is a special case of an ordered product of partially ordered sets (see [3]). The lexicographic order can be defined similarly for any partially ordered set of indices [1]),
but in this case the relation on the set Order (on a set)).
A lexicographic product of finitely many well-ordered sets is well-ordered. A lexicographic product of chains is a chain.
For a finite
in the definition of a product of order types of totally ordered sets.
The lexicographic order is widely used outside mathematics, for example in ordering words in dictionaries, reference books, etc.
[1] G. Birkhoff, "Lattice theory" , Colloq. Publ. , 25 , Amer. Math. Soc. (1973)
[2] K. Kuratowski, A. Mostowski, "Set theory" , North-Holland (1968)
[3] L.A. Skornyakov, "Elements of lattice theory" , A. Hilger (1977) (Translated from Russian)
[4a] G. Cantor, "Beiträge zur Begründung der transfiniten Mengenlehre I" Math. Ann. , 46 (1895) pp. 481–512
[4b] G. Cantor, "Beiträge zur Begründung der transfiniten Mengenlehre II" Math. Ann , 49 (1897) pp. 207–246
[5] F. Hausdorff, "Grundzüge der Mengenlehre" , Leipzig (1914) (Reprinted (incomplete) English translation: Set theory, Chelsea (1978))
The question of which totally ordered sets [a1]). The lexicographic order on
[a1] G. Debreu, "Theory of values" , Yale Univ. Press (1959)
How to Cite This Entry:
Lexicographic order. T.S. Fofanova (originator), Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=Lexicographic_order&oldid=13984
This text originally appeared in Encyclopedia of Mathematics - ISBN 1402006098 | {"url":"http://www.encyclopediaofmath.org/index.php/Lexicographic_order","timestamp":"2014-04-19T09:38:13Z","content_type":null,"content_length":"21424","record_id":"<urn:uuid:1184c98d-5a8b-4762-a26e-08576d13d342>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00052-ip-10-147-4-33.ec2.internal.warc.gz"} |
Solution to puzzle 59: Triangle inequality
A triangle has sides of length a, b, and c. Show that
Left inequality
The left side of the inequality is, in fact, true for all triples (a, b, c) of positive real numbers. We can prove it using the rearrangement inequality, stated below.
Let a[1] [2] [n] and b[1] [2] [n] be real numbers. For any permutation (c[1], c[2], ..., c[n]) of (b[1], b[2], ..., b[n]), we have:
a[1]b[1] + a[2]b[2] + ... + a[n]b[n] [1]c[1] + a[2]c[2] + ... + a[n]c[n] [1]b[n] + a[2]b[n−1] + ... + a[n]b[1],
with equality if, and only if, (c[1], c[2], ..., c[n]) is equal to (b[1], b[2], ..., b[n]) or (b[n], b[n−1], ..., b[1]), respectively.
That is, the sum is maximal when the two sequences, {a[i]} and {b[i]}, are sorted in the same way, and is minimal when they are sorted oppositely.
Now we apply the rearrangement inequality to suitably chosen sequences. Specifically, we will use the result that the sum is maximal when the two sequences are sorted in the same way.
Without loss of generality, assume a
Then the sequences {a, b, c} and
We twice rotate the second sequence, and apply the rearrangement inequality, to obtain:
Adding these two inequalities, and dividing by two, we get
We must also show that equality can occur, which is readily seen by setting a = b = c.
Right inequality
In order to prove the right inequality, we must use the fact that a, b, c are the sides of a triangle.
Let s = ½(a + b + c) be the semi-perimeter of the triangle.
In any triangle, a + b > c, and so a + b > s.
Adding the three inequalities, we get
Additional puzzle
The following inequality is due to Gheorge Eckstein.
Let a, b, x, y, z be positive real numbers. Show that:
Further reading
1. The Rearrangement Inequality by K. Wu and Andy Liu -- a tutorial that shows how to derive many other inequalities, such as Arithmetic Mean - Geometric Mean, Geometric Mean - Harmonic Mean, and
Cauchy-Schwartz, from the Rearrangement Inequality.
2. The left inequality is known as Nesbitt's Inequality.
Source: Traditional | {"url":"http://www.qbyte.org/puzzles/p059s.html","timestamp":"2014-04-17T21:25:32Z","content_type":null,"content_length":"8046","record_id":"<urn:uuid:bf946853-02c4-4ac4-b697-2c0cafbd607c>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00132-ip-10-147-4-33.ec2.internal.warc.gz"} |
Complex Analysis: Law of Exponents
September 11th 2011, 01:35 PM #1
Oct 2009
United States
Complex Analysis: Law of Exponents
Confirm that the law of exponents $z^{\lambda}z^{\mu} = z^{\lambda+\mu}$ is valid for all non-zero complex numbers $z$ and all complex exponents $\lambda$ and $\mu$. Give an example of complex
numbers $z, \lambda$ and $\mu$ for which $z^{\lambda\mu} eq z^{\lambda\mu}$.
My attempt at a solution follows:
$z^{\lambda} z^{\mu} = e^{\lambda Log(z)} e^{\mu Log(z)}$
$=e^{\lambda[ Log(z) + i Arg(z)]} + e^{\mu[Log(z)+i Arg(z)]}$
$=e^{(\lambda+\mu)[Log(z) + i Arg(z)]}$
$=e^{\lambda+\mu Log(z)}$
I am fairly certain this is at least on the way to being right, but I have no idea about a counterexample. Will you please look over it and see if I'm on the right track and then perhaps point me
in the direction to developing a counterexample? Thanks.
Follow Math Help Forum on Facebook and Google+ | {"url":"http://mathhelpforum.com/differential-geometry/187783-complex-analysis-law-exponents.html","timestamp":"2014-04-20T05:48:08Z","content_type":null,"content_length":"32362","record_id":"<urn:uuid:3ad2440e-d2ca-4e41-aca1-d6f6b7809924>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00073-ip-10-147-4-33.ec2.internal.warc.gz"} |
1. The inverse of a number, or a reciprocal, is 1 divided by the number; for example, the inverse of 8 is 1/8 and the inverse of 3/5 is 5/3.
2. The inverse of a function or a transformation is the function or transformation that the reverses the effect of the function or transformation. For example, the inverse of addition is
subtraction, and of clockwise rotation is anticlockwise rotation. For a function f(x), the function g(x) such that f(a) = b implies g(b) = a is described as the inverse of f(x). In practice, the
inverse function of f(x) is written f ^-1(x). For example, the inverse of f(x) = ax + b is f ^-1(x) = (x - b)/a, since f ^-1(ax + b) = (ax + b - b)/a = ax/a = x.
3. The inverse of an element of a set, or a number, with respect to a particular operation, is what has to be combined with the element or number in order to obtain that operation's identity element
. In other words, the inverse of element a is the element b such that a*b = e, where * is an algebraic operation and e is the identity element relative to the operation * of the set of which a
and b are members. For example, a is the identity element of real numbers relative to multiplication: hence if a.b = -1, a is the inverse of b, b the inverse of a, relative to the multiplication
of real numbers. (Moreover, a and b are reciprocals in this case.)
Inverse of a proposition
For a proposition h c (read h implies c), the proposition not-h c is described as its inverse.
Inverse trigonometric function
For a function of the form y = sin x, the function of the form x = sin^-1 y (read as "x is the angle whose sine is y") is described as its inverse. This may also be written as x = arc sin y.
Related category | {"url":"http://www.daviddarling.info/encyclopedia/I/inverse.html","timestamp":"2014-04-16T04:18:10Z","content_type":null,"content_length":"8743","record_id":"<urn:uuid:ff8f2c3b-c544-4a92-a541-83d9b9d3ee0d>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00228-ip-10-147-4-33.ec2.internal.warc.gz"} |
Princeton Township, NJ Prealgebra Tutor
Find a Princeton Township, NJ Prealgebra Tutor
...I have been tutoring mathematics and physics for the past ten years. I like to make learning math and physics fun for the students I tutor, giving them examples to which they can relate. My
goal is to get them excited about studying these subjects.
12 Subjects: including prealgebra, physics, calculus, geometry
...I also have experience teaching grammar at the high school level. I try to teach my students easy ways to remember rules and provide plenty of practice to help them become better writers.
Before studying English, I majored in economics for a year.
12 Subjects: including prealgebra, reading, English, writing
I am currently a college student at Rutgers in New Brunswick, who will graduate within the year. My major is Mathematics with a minor in Computer Science. While at school, I have taken extensive
coursework in not only computer science and mathematics but also engineering.
11 Subjects: including prealgebra, calculus, algebra 1, algebra 2
...Princeton Ave. In November 2013 I took the Praxis II Mathematics exam and scored a 150 on the exam. NJ passing score was 135.
14 Subjects: including prealgebra, geometry, ASVAB, algebra 1
...If you have any questions that haven't been answered above, please feel free to ask me! I look forward to hearing from you! Most secondary level math subjects rely on skills learned in algebra
9 Subjects: including prealgebra, calculus, geometry, algebra 1
Related Princeton Township, NJ Tutors
Princeton Township, NJ Accounting Tutors
Princeton Township, NJ ACT Tutors
Princeton Township, NJ Algebra Tutors
Princeton Township, NJ Algebra 2 Tutors
Princeton Township, NJ Calculus Tutors
Princeton Township, NJ Geometry Tutors
Princeton Township, NJ Math Tutors
Princeton Township, NJ Prealgebra Tutors
Princeton Township, NJ Precalculus Tutors
Princeton Township, NJ SAT Tutors
Princeton Township, NJ SAT Math Tutors
Princeton Township, NJ Science Tutors
Princeton Township, NJ Statistics Tutors
Princeton Township, NJ Trigonometry Tutors | {"url":"http://www.purplemath.com/Princeton_Township_NJ_prealgebra_tutors.php","timestamp":"2014-04-17T01:32:59Z","content_type":null,"content_length":"24252","record_id":"<urn:uuid:4c40df48-6d38-489f-b3f5-957a8dc777d1>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00306-ip-10-147-4-33.ec2.internal.warc.gz"} |
Efficient Localized Routing for Wireless Ad Hoc Networks
- IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS , 2003
"... Several localized routing protocols guarantee the delivery of the packets when the underlying network topology is a planar graph. Typically, relative neighborhood graph (RNG) or Gabriel graph
(GG) is used as such planar structure. However, it is well-known that the spanning ratios of these two grap ..."
Cited by 49 (8 self)
Add to MetaCart
Several localized routing protocols guarantee the delivery of the packets when the underlying network topology is a planar graph. Typically, relative neighborhood graph (RNG) or Gabriel graph (GG) is
used as such planar structure. However, it is well-known that the spanning ratios of these two graphs are not bounded by any constant (even for uniform randomly distributed points). Bose et al. [11]
recently developed a localized routing protocol that guarantees that the distance traveled by the packets is within a constant factor of the minimum if Delaunay triangulation of all wireless nodes is
used, in addition, to guarantee the delivery of the packets. However, it is expensive to construct the Delaunay triangulation in a distributed manner. Given a set of wireless nodes, we model the
network as a unit-disk graph (UDG), in which a link uv exists only if the distance kuvk is at most the maximum transmission range. In this paper, we present a novel localized networking protocol that
constructs a planar 2.5-spanner of UDG, called the localized Delaunay triangulation (LDEL), as network topology. It contains all edges that are both in the unit-disk graph and the Delaunay
triangulation of all nodes. The total communication cost of our networking protocol is Oðn log nÞ bits, which is within a constant factor of the optimum to construct any structure in a distributed
manner. Our experiments show that the delivery rates of some of the existing localized routing protocols are increased when localized Delaunay triangulation is used instead of several previously
proposed topologies. Our simulations also show that the traveled distance of the packets is significantly less when the FACE routing algorithm is applied on LDEL, rather than applied on GG.
- In Proceedings of the 21st International Parallel and Distributed Processing Symposium (IPDPS 2007 , 2007
"... In this paper, we propose the design of VoroNet, an objectbased peer to peer overlay network relying on Voronoi tessellations, along with its theoretical analysis and experimental evaluation.
VoroNet differs from previous overlay networks in that peers are application objects themselves and get iden ..."
Cited by 20 (3 self)
Add to MetaCart
In this paper, we propose the design of VoroNet, an objectbased peer to peer overlay network relying on Voronoi tessellations, along with its theoretical analysis and experimental evaluation. VoroNet
differs from previous overlay networks in that peers are application objects themselves and get identifiers reflecting the semantics of the application instead of relying on hashing functions. This
enables a scalable support for efficient search in large collections of data. In VoroNet, objects are organized in an attribute space according to a Voronoi diagram. VoroNet is inspired from the
Kleinberg’s small-world model where each peer gets connected to close neighbours and maintains an additional pointer to a long-range neighbour. VoroNet improves upon the original proposal as it deals
with general object topologies and therefore copes with skewed data distributions. We show that VoroNet can be built and maintained in a fully decentralized way. The theoretical analysis of the
system proves that routing in VoroNet can be achieved in a poly-logarithmic number of hops in the size of the system. The analysis is fully confirmed by our experimental evaluation by simulation. 1 | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=3995552","timestamp":"2014-04-23T11:23:29Z","content_type":null,"content_length":"17384","record_id":"<urn:uuid:f2a03c15-6549-412b-9544-c1d1370e815c>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00337-ip-10-147-4-33.ec2.internal.warc.gz"} |
Symbolic Logic I Syllabus (Spring 2012)
Ch. 1 - Basic Concepts (weeks 1 - 2)
• 1.1. Arguments, Premises, and Conclusions
• 1.2. Recognizing Arguments
• 1.3. Deduction and Induction
• 1.4. Validity, Truth, Soundness, Strength, Cogency
• 1.5. Argument Forms: Proving Invalidity
Ch. 6 - Propositional Logic (weeks 3 - 8)
• 6.1. Symbols and Translation
• 6.2. Truth Functions
• 6.3. Truth Tables for Propositions
• 6.4. Truth Tables for Arguments
• 6.5. Indirect Truth Tables
• 6.6. Argument Forms and Fallacies
Spring Recess (week 9)
• March 19 - 25 - no class meetings
Ch. 7 - Natural Deduction
• 7.1 and 7.2 - Rules of Implication
• 7.3 and 7.4 - Rules of Replacement
• 7.5. Conditional Proof
• 7.6. Indirect Proof
• 7.7. Proving Logical Truths
Ch. 8 - Predicate Logic
• 8.1. Symbols and Translation
• 8.2. Using the Rules of Inference
• 8.3. Change of Quantifier Rule
• 8.4. Conditional and Indirect Proof
• 8.5. Proving Invalidity
• 8.6 Relational Predicates and Overlapping Quantifiers
• 8.7. Identity
Course Description
This is a rigorous introduction to the principles, methods, strengths and weaknesses of formal logical analysis. Students learn how to demonstrate that a coherent proposition follows from a given set
of such propositions and why such inferences are logically consistent or valid. Topics include: basic concepts of deductive logic, rules of derivation, techniques of formal proof in propositional and
predicate logic. The course satisfies area B5 of the GE program; 3 units.
Required course text: A Concise Introduction to Logic (2012) by Patrick Hurley, 11/e - only this edition will suffice. Students MUST buy the or rent the book, digital versions will not work.
• The entire papeback book itself is appx. $140 and available from the publisher here.
• [DEL:Chapters of the book are available online in PDF form from the publisher.:DEL] DO NOT USE eCHAPTER PDFs.
• [DEL:Access to an eBook version of the text for 6 months is appx. $80:DEL]. DO NOT USE THE eBOOK.
WARNING: DO NOT USE digital versions of the text, these have proven to be cumbersome and do not allow sufficient printing of content. If you do purchase a digital version of the text, you will not be
permitted to use it in class with any electronic device. Instead, students MUST bring hardcopies of relevant chapters with them to every class, because we will work through the exercises extensively.
Print them out from the digital form you have access to and bring these to class. If you do not bring the relevant chapters to class each meeting, then you will not have access to material discussed
in class and you will also be unable to take some in-class quizzes.
Assignments, Grades and Attendance
• EIGHT graded efforts of equal value comprise your course grade: SIX in-class unannounced quizzes (12 pts. each), [DEL:TWO:DEL] ONE online quiz (the midterm) scheduled and presented within SacCT
(13 pts.) and the final (13 pts.). The FINAL exam is an in-class quiz which will occur during finals week on 14 May 2012 at 12:45 p.m. Students may not re-take or make-up any quiz, absolutely, no
exceptions. There are plenty of points available so that one can miss a quiz and still do well in the course.
• There will be no special treatment. No one can take any quiz after it has closed. There is no extra work or credit offered and students cannot re-take or make-up any quiz, absolutely, no
exceptions. There isn't time for this and it is unfair to give people special consideration. There are plenty of points available so that one can miss a quiz and still do well in the course. See
the FAQ section 1 for a fuller rationale.
• Please keep track of your own grades via SacCT, I don't do grade checks, since you can do it for yourself.
• When and where is the final for this course? The FINAL exam is an in-class quiz which will occur during finals week on 14 May 2012 at 12:45 p.m.
• Students may NOT use phones, laptops, or recording devices during class meetings. They are unnecessary distractions and disrupt the class. Why? Here is my argument. Persistently disruptive
students will be warned, identified and dismissed.
• Here is my official grade-scale for ALL assignments:
□ 12 or above = A, 11 = A-, 10 = B+, 9 = B, 8 = B-, 7 = C+, 6 = C, 5 = C-, 3 = D, less than 3 = F
• How are grades assigned?
□ For each graded effort you will receive a numerical score which corresponds to a letter-grade on my grade-scale (above). Scores correspond to letter-grades NOT percentages.
• How do I determine your overall course grade?
□ There are 98 total points available. I add the scores you earn on all of the quizzes, divide this total by 7, then assign the letter-grade based on my grade-scale (above). For instance, if
one earns a total of 55 points, divide this by 7, the result is a 7.86 which corresponds to a C+ on my letter-grade scale. Thus, one receives a C+ for the course. Since rounding introduces
error, I will not round scores up or down. Overall course grade = total points earned divided by 7, then apply my official grade scale.
Students will be able to:
1. translate English sentences into the language of Propositional and Quantificational Logic;
2. apply Truth Table and Truth Tree methods to identify propositions (and sets of propositions) as tautologous, consistent, contradictory, equivalent, contingent or necessary and test logical
arguments (comprised of such propositions) for validity and soundness;
3. use Rules of Inference and Replacement to construct proofs that show that a formal argument is or is not deductively valid.
If you have a disability and require accommodations, you need to provide disability documentation to SSWD, Lassen Hall 1008, (916) 278-6955. Please discuss accommodation needs with me after class or
during my office hours early in the semester.
Review all academic responsibilities, definitions, sanctions and rights described here. | {"url":"http://www.csus.edu/indiv/m/merlinos/60syll.html","timestamp":"2014-04-18T03:00:16Z","content_type":null,"content_length":"9847","record_id":"<urn:uuid:035e1c9e-3834-45c9-a17c-514dff50a5ea>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00395-ip-10-147-4-33.ec2.internal.warc.gz"} |
How’s this for fortuitous timing: I’d literally just gone through this paper by Gentzkow and Kamenica yesterday, and this morning it was announced that Gentzkow is the winner of the 2014 Clark Medal!
More on the Clark in a bit, but first, let’s do some theory.
This paper is essentially the multiple sender version of the great Bayesian Persuasion paper by the same authors (discussed on this site a couple years ago). There are a group of experts who can
(under commitment to only sending true signals) send costless signals about the realization of the state. Given the information received, the agent makes a decision, and each expert gets some utility
depending on that decision. For example, the senders might be a prosecutor and a defense attorney who know the guilt of a suspect, and the agent a judge. The judge convicts if p(guilty)>=.5, the
prosecutor wants to maximize convictions regardless of underlying guilt, and vice versa for the defense attorney. Here’s the question: if we have more experts, or less collusive experts, or experts
with less aligned interests, is more information revealed?
A lot of our political philosophy is predicated on more competition in information revelation leading to more information actually being revealed, but this is actually a fairly subtle theoretical
question! For one, John Stuart Mill and others of his persuasion would need some way of discussing how people competing to reveal information strategically interact, and to the extent that this
strategic interaction is non-unique, they would need a way for “ordering” sets of potentially revealed information. We are lucky in 2014, thanks to our friends Nash and Topkis, to be able to nicely
deal with each of those concerns.
The trick to solving this model (basically every proof in the paper comes down to algebra and some simple results from set theory; they are clever but not technically challenging) is the main result
from the Bayesian Persuasion paper. Draw a graph with the agent’s posterior belief on the X-axis, and the utility (call this u) the sender gets from actions based on each posterior on the y-axis. Now
draw the smallest concave function (call it V) that is everywhere greater than u. If V is strictly greater than u at the prior p, then a sender can improve her payoff by revealing information. Take
the case of the judge and the prosecutor. If the judge has the prior that everyone brought before them is guilty with probability .6, then the prosecutor never reveals information about any suspect,
and the judge always convicts (giving the prosecutor utility 1 rather than 0 from an acquittal). If, however, the judge’s prior is that everyone is guilty with .4, then the prosecutor can mix such
that 80 percent of criminals are convicted by judiciously revealing information. How? Just take 2/3 of the innocent people, and all of the guilty people, and send signals that each of these people is
guilty with p=.5, and give the judge information on the other 1/3 of innocent people that they are innocent with probability 1. This is plausible in a Bayesian sense. The judge will convict all of
the folks where p(guilty)=.5, meaning 80 percent of all suspects are convicted. If you draw the graph described above with u=1 when the judge convicts and u=0 otherwise, it is clear that V>u if and
only if p<.5, hence information is only revealed in that case.
What about when there are multiple senders with different utilities u? It is somewhat intuitive: more information is always, almost by definition, informative for the agent (remember Blackwell!). If
there is any sender who can improve their payoff by revealing information given what has been revealed thus far, then we are not in equilibrium, and some sender has the incentive to deviate by
revealing more information. Therefore, adding more senders increases the amount of information revealed and “shrinks” the set of beliefs that the agent might wind up holding (and, further, the
authors show that any Bayesian plausible beliefs where no sender can further reveal information to improve their payoff is an equilibrium). We still have a number of technical details concerning
multiplicity of equilibria to deal with, but the authors show that these results hold in a set order sense as well. This theorem is actually great: to check equilibrium information revelation, I only
need to check where V and u diverge sender by sender, without worrying about complex strategic interactions. Because of that simplicity, it ends up being very easy to show that removing collusion
among senders, or increasing the number of senders, will improve information revelation in equilibrium.
September 2012 working paper (IDEAS version). A brief word on the Clark medal. Gentzkow is a fine choice, particularly for his Bayesian persuasion papers, which are already very influential. I have
no doubt that 30 years from now, you will still see the 2011 paper on many PhD syllabi. That said, the Clark medal announcement is very strange. It focuses very heavily on his empirical work on
newspapers and TV, and mentions his hugely influential theory as a small aside! This means that five of the last six Clark medal winners, everyone but Levin and his relational incentive contracts,
have been cited primarily for MIT/QJE-style theory-light empirical microeconomics. Even though I personally am primarily an applied microeconomist, I still see this as a very odd trend: no prizes for
Chernozhukov or Tamer in metrics, or Sannikov in theory, or Farhi and Werning in macro, or Melitz and Costinot in trade, or Donaldson and Nunn in history? I understand these papers are harder to
explain to the media, but it is not a good thing when the second most prominent prize in our profession is essentially ignoring 90% of what economists actually do.
“Finite Additivity, Another Lottery Paradox, and Conditionalisation,” C. Howson (2014)
If you know the probability theorist Bruno de Finetti, you know him either for his work on exchangeable processes, or for his legendary defense of finite additivity. Finite additivity essentially
replaces the Kolmogorov assumption of countable additivity of probabilities. If Pr(i) for i=1 to N is the probability of event i, then the probability of the union of all i is just the sum of each
individual probability under either countable of finite additivity, but countable additivity requires that property to hold for a countably infinite set of events.
What is objectionable about countable additivity? There are three classic problems. First, countable additivity restricts me from some very reasonable subjective beliefs. For instance, I might
imagine that a Devil is going to pick one of the integers, and that he is equally likely to predict any given number. That is, my prior is uniform over the integers. Countable additivity does not
allow this: if the probability of any given number being picked is greater than zero, then the sum diverges, and if the probability any given number is picked is zero, then by countable additivity
the sum of the grand set is also zero, violating the usual axiom that the grand set has probability 1. The second problem, loosely related to the first, is that I literally cannot assign
probabilities to some objects, such as a nonmeasurable set.
The third problem, though, is the really worrying one. To the extent that a theory of probability has epistemological meaning and is not simply a mathematical abstraction, we might want to require
that it not contradict well-known philosophical premises. Imagine that every day, nature selects either 0 or 1. Let us observe 1 every day until the present (call this day N). Let H be the hypothesis
that nature will select 1 every day from now until infinity. It is straightforward to show that countable additivity requires that as N grows large, continued observation of 1 implies that Pr(H)->1.
But this is just saying that induction works! And if there is any great philosophical advance in the modern era, it is Hume’s (and Goodman’s, among others) demolition of the idea that induction is
sensible. My own introduction to finite additivity comes from a friend’s work on consensus formation and belief updating in economics: we certainly don’t want to bake in ridiculous conclusions about
beliefs that rely entirely on countable additivity, given how strongly that assumption militates for induction. Aumann was always very careful on this point.
It turns out that if you simply replace countable additivity with finite additivity, all of these problems (among others) go away. Howson, in a paper in the newest issue of Synthese, asks why, given
that clear benefit, anyone still finds countable additivity justifiable? Surely there are lots of pretty theorems, from Radon-Nikodym on down, that require countable additivity, but if the theorem
critically hinges on the basis of an unjustifiable assumption, then what exactly are we to infer about the justifiability of the theorem itself?
Two serious objections are tougher to deal with for de Finetti acolytes: coherence and conditionalization. Coherence, a principle closely associated with de Finetti himself, says that there should
not be “fair bets” given your beliefs where you are guaranteed to lose money. It is sometimes claimed that a uniform prior over the naturals is not coherent: you are willing to take a bet that any
given natural number will not be drawn, but the conjunction of such bets for all natural numbers means you will lose money with certainty. This isn’t too worrying, though; if we reject countable
additivity, then why should we define coherence to apply to non-finite conjunctions of bets?
Conditionalization is more problematic. It means that given prior P(i), your posterior P(f) of event S after observing event E must be such that P(f)(S)=P(i)(S|E). This is just “Bayesian updating”
off of a prior. Lester Dubins pointed out the following. Let A and B be two mutually exclusive hypothesis, such that P(A)=P(B)=.5. Let the random quantity X take positive integer values such that P(X
=n|B)=0 (you have a uniform prior over the naturals conditional on B obtaining, which finite additivity allows), and P(X=n|A)=2^(-n). By the law of total probability, for all n, P(X=n)>0, and
therefore by Bayes’ Theorem, P(B|X=n)=1 and P(A|X=n)=0, no matter which n obtains! Something is odd here. Before seeing the resolution of n, you would take a fair bet on A obtaining. But once n
obtains (no matter which n!), you are guaranteed to lose money by betting on A.
Here is where Howson tries to save de Finetti with an unexpected tack. The problem in Dubins example is not finite additivity, but conditionalization – Bayesian updating from priors – itself! Here’s
why. By a principle called “reflection”, if using a suitable updating rule, your future probability of event A is p with certainty, then your current probability of event A must also be p. By Dubins
argument, then, P(A)=0 must hold before X realizes. But that means your prior must be 0, which means that whatever independent reasons you had for the prior being .5 must be rejected. If we are to
give up one of Reflection, Finite Additivity, Conditionalization, Bayes’ Theorem or the Existence of Priors, Howson says we ought give up conditionalization. Now, there are lots of good reasons why
conditionalization is sensible within a utility framework, so at this point, I will simply point your toward the full paper and let you decide for yourself whether Howson’s conclusion is sensible. In
any case, the problems with countable additivity should be better known by economists.
Final version in Synthese, March 2014 [gated]. Incidentially, de Finetti was very tightly linked to the early econometricians. His philosophy – that probability is a form of logic and hence
non-ampliative (“That which is logical is exact, but tells us nothing”) – simply oozes out of Savage/Aumann/Selten methods of dealing with reasoning under uncertainty. Read, for example, what Keynes
had to say about what a probability is, and you will see just how radical de Finetti really was.
“At Least Do No Harm: The Use of Scarce Data,” A. Sandroni (2014)
This paper by Alvaro Sandroni in the new issue of AEJ:Micro is only four pages long, and has only one theorem whose proof is completely straightforward. Nonetheless, you might find it surprising if
you don’t know the literature on expert testing.
Here’s the problem. I have some belief p about which events (perhaps only one, perhaps many) will occur in the future, but this belief is relatively uninformed. You come up to me and say, hey, I
actually *know* the distribution, and it is p*. How should I incentivize you to truthfully reveal your knowledge? This step is actually an old one: all we need is something called a proper scoring
rule, the Brier Score being the most famous. If someone makes N predictions f(i) about the probability of binary events i occurring, then the Brier Score is the sum of the squared difference between
each prediction and its outcome {0,1}, divided by N. So, for example, if there are three events, you say all three will independently happen with p=.5, and the actual outcomes are {0,1,0}, your score
is 1/3*[(.5-1)^2+2*(.5-0)^2], or .25. The Brier Score being a proper scoring rule means that your expected score is lowest if you actually predict the true probability distribution. That being the
case, all I need to do is pay you more the lower your Brier Score is, and if you are risk-neutral you, being the expert, will truthfully reveal your knowledge. There are more complicated scoring
rules that can handle general non-binary outcomes, of course. (If you don’t know what a scoring rule is, it might be worthwhile to convince yourself why a rule equal to the summed absolute value of
deviations between prediction and outcome is not proper.)
That’s all well and good, but a literature over the past decade or so called “expert testing” has dealt with the more general problem of knowing who is actually an expert at all. It turns out that it
is incredibly challenging to screen experts from charlatans when it comes to probabilistic forecasts. The basic (too basic, I’m afraid) reason is that your screening rule can only condition on
realizations, but the expert is expected to know a much more complicated object, the probability distributions of each event. Imagine you want to use the following rule, called calibration, to test
weathermen: on days where rain was predicted p=.4, it actually does rain close to 40 percent of those days. A charlatan has no idea whether it will rain today or tomorrow, but after making a year of
predictions, notices that most of his predictions are “too low”. When rain was predicted with .6, it rained 80 percent of the time, and when predicted with .7, it rained 72 percent of the time, etc.
What should the charlatan do? Start predicting rain every day, to become “better calibrated”. As the number of days grows large, this trick gets the charlatan closer and closer to calibration.
But, you say, surely I can notice such an obviously tricky strategy. That implicitly means you want to use a more complicated test to screen the charlatans from the experts. And a famous result of
Foster and Vohra (which apparently was very hard to publish because so many referees simply didn’t believe the proof!) says that any test which passes experts with high probability for any
realization of nature as the number of predictions gets large can be passed by a suitably clever and strategic charlatan with high probability. And, indeed, the proof of this turns out to be a
straightforward application of an abstract minimax theorem proven by Fan in the early 1950s.
Back, now, to the original problem of this post. If I know you are an expert, I can get your information with a payment that is maximized when a proper scoring rule is minimized. But what if, in
addition to wanting info when it is good, I don’t want to be harmed when you are a charlatan? And further, what if only a single prediction is being made? The expert testing results mean that
screening good from bad is going to be a challenge no matter how much data I have. If you are a charlatan and are always incentivized to report my prior, then I am not hurt. But if you actually know
the true probabilities, I want to pay you according to a proper scoring rule. Try this payment scheme: if you predict my prior p, then you get a payment ε which does not depend on the realization of
the data. If you predict anything else, you get an expected payment based on a proper scoring rule, and that expected payment is greater than ε. So the informed expert is incentivized to report
truthfully (there is a straightforward modification of the above if the informed expert is not risk-neutral). How can we get the charlatan to always report p? If the charlatan has minmax preferences
as in Gilboa-Schmeidler, then the payoff is ε if p is reported no matter how the data realizes. If, however, the probability distribution actually is p, and the charlatan ever reports anything other
than p, then since payoffs are based on a proper scoring rule, in that “worst-case scenario” the charlatan’s expected payoff is less than ε, hence she will never report anything other than p due to
the minmax preferences. I wouldn’t worry too much about the minmax assumption, since it makes quite a bit of sense as a utility function for a charlatan that must make a decision what to announce
under a complete veil of ignorance about nature’s true distribution.
Final AEJ:Micro version, which is unfortunately behind a paywall (IDEAS page). I can’t find an ungated version of this article. It remains a mystery why the AEA is still gating articles in the AEJ
journals. This is especially true of AEJ:Micro, a society-run journal whose main competitor, Theoretical Economics, is completely open access.
“Immigration and the Diffusion of Technology: The Huguenot Diaspora in Prussia,” E. Hornung (2014)
Is immigration good for natives of the recipient country? This is a tough question to answer, particularly once we think about the short versus long run. Large-scale immigration might have bad
short-run effects simply because more L plus fixed K means lower average incomes in essentially any economic specification, but even given that fact, immigrants bring with them tacit knowledge of
techniques, ideas, and plans which might be relatively uncommon in the recipient country. Indeed, world history is filled with wise leaders who imported foreigners, occasionally by force, in order to
access their knowledge. As that knowledge spreads among the domestic population, productivity increases and immigrants are in the long-run a net positive for native incomes.
How substantial can those long-run benefits be? History provides a nice experiment, described by Erik Hornung in a just-published paper. The Huguenots, French protestants, were largely expelled from
France after the Edict of Nantes was revoked by the Sun King, Louis XIV. The Huguenots were generally in the skilled trades, and their expulsion to the UK, the Netherlands and modern Germany
(primarily) led to a great deal of tacit technology transfer. And, no surprise, in the late 17th century, there was very little knowledge transfer aside from face-to-face contact.
In particular, Frederick William, Grand Elector of Brandenburg, offered his estates as refuge for the fleeing Huguenots. Much of his land had been depopulated in the plagues that followed the Thirty
Years’ War. The centralized textile production facilities sponsored by nobles and run by Huguenots soon after the Huguenots arrived tended to fail quickly – there simply wasn’t enough demand in a
place as poor as Prussia. Nonetheless, a contemporary mentions 46 professions brought to Prussia by the Huguenots, as well as new techniques in silk production, dyeing fabrics and cotton printing.
When the initial factories failed, knowledge among the apprentices hired and purchased capital remained. Technology transfer to natives became more common as later generations integrated more tightly
with natives, moving out of Huguenot settlements and intermarrying.
What’s particularly interesting with this history is that the quantitative importance of such technology transfer can be measured. In 1802, incredibly, the Prussians had a census of manufactories, or
factories producing stock for a wide region, including capital and worker input data. Also, all immigrants were required to register yearly, and include their profession, in 18th century censuses.
Further, Huguenots did not simply move to places with existing textile industries where their skills were most needed; indeed, they tended to be placed by the Prussians in areas which had suffered
large population losses following the Thirty Years’ War. These population losses were highly localized (and don’t worry, before using population loss as an IV, Hornung makes sure that population loss
from plague is not simply tracing out existing transportation highways). Using input data to estimate a Cobb-Douglas textile production function, an additional percentage point of the population with
Huguenot origins in 1700 is associated with a 1.5 percentage point increase in textile productivity in 1800. This result is robust in the IV regression using wartime population loss to proxy for the
percentage of Huguenot immigrants, as well as many other robustness checks. 1.5% is huge given the slow rate of growth in this era.
An interesting historical case. It is not obvious to me how relevant this estimation to modern immigration debates; clearly it must depend on the extent to which knowledge can be written down or
communicated at distance. I would posit that the strong complementarity of factors of production (including VC funding, etc.) are much more important that tacit knowledge spread in modern
agglomeration economies of scale, but that is surely a very difficult claim to investigate empirically using modern data.
2011 Working Paper (IDEAS version). Final paper published in the January 2014 AER.
“Wall Street and Silicon Valley: A Delicate Interaction,” G.-M. Angeletos, G. Lorenzoni & A. Pavan (2012)
The Keynesian Beauty Contest – is there any better example of an “old” concept in economics that, when read in its original form, is just screaming out for a modern analysis? You’ve got coordination
problems, higher-order beliefs, signal extraction about underlying fundamentals, optimal policy response by a planner herself informationally constrained: all of these, of course, problems that have
consumed micro theorists over the past few decades. The general problem of irrational exuberance when we start to model things formally, though, is that it turns out to be very difficult to generate
“irrational” actions by rational, forward-looking agents. Angeletos et al have a very nice model that can generate irrational-looking asset price movements even when all agents are perfectly
rational, based on the idea of information frictions between the real and financial sector.
Here is the basic plot. Entrepreneurs get an individual signal and a correlated signal about the “real” state of the economy (the correlation in error about fundamentals may be a reduced-form measure
of previous herding, for instance). The entrepreneurs then make a costly investment. In the next period, some percentage of the entrepreneurs have to sell their asset on a competitive market. This
may represent, say, idiosyncratic liquidity shocks, but really it is just in the model to abstract away from the finance sector learning about entrepreneur signals based on the extensive margin
choice of whether to sell or not. The price paid for the asset depends on the financial sector’s beliefs about the real state of the economy, which come from a public noisy signal and the trader’s
observations about how much investment was made by entrepreneurs. Note that the price traders pay is partially a function of trader beliefs about the state of the economy derived from the total
investment made by entrepreneurs, and the total investment made is partially a function of the price at which entrepreneurs expect to be able to sell capital should a liquidity crisis hit a given
firm. That is, higher order beliefs of both the traders and entrepreneurs about what the other aggregate class will do determine equilibrium investment and prices.
What does this imply? Capital investment is higher in the first stage if either the state of the world is believed to be good by entrepreneurs, or if the price paid in the following period for assets
is expected to be high. Traders will pay a high price for an asset if the state of the world is believed to be good. These traders look at capital investment and essentially see another noisy signal
about the state of the world. When an entrepreneur sees a correlated signal that is higher than his private signal, he increases investment due to a rational belief that the state of the world is
better, but then increases it even more because of an endogenous strategic complementarity among the entrepreneurs, all of whom prefer higher investment by the class as a whole since that leads to
more positive beliefs by traders and hence higher asset prices tomorrow. Of course, traders understand this effect, but a fixed point argument shows that even accounting for the aggregate strategic
increase in investment when the correlated signal is high, aggregate capital can be read by traders precisely as a noisy signal of the actual state of the world. This means that when when
entrepreneurs invest partially on the basis of a signal correlated among their class (i.e., there are information spillovers), investment is based too heavily on noise. An overweighting of public
signals in a type of coordination game is right along the lines of the lesson in Morris and Shin (2002). Note that the individual signals for entrepreneurs are necessary to keep the traders from
being able to completely invert the information contained in capital production.
What can a planner who doesn’t observe these signals do? Consider taxing investment as a function of asset prices, where high taxes appear when the market gets particularly frothy. This is good on
the one hand: entrepreneurs build too much capital following a high correlated signal because other entrepreneurs will be doing the same and therefore traders will infer the state of the world is
high and pay high prices for the asset. Taxing high asset prices lowers the incentive for entrepreneurs to shade capital production up when the correlated signal is good. But this tax will also lower
the incentive to produce more capital when the actual state of the world, and not just the correlated signal, is good. The authors discuss how taxing capital and the financial sector separately can
help alleviate that concern.
Proving all of this formally, it should be noted, is quite a challenge. And the formality is really a blessing, because we can see what is necessary and what is not if a beauty contest story is to
explain excess aggregate volatility. First, we require some correlation in signals in the real sector to get the Morris-Shin effect operating. Second, we do not require the correlation to be on a
signal about the real world; it could instead be correlation about a higher order belief held by the financial sector! The correlation merely allows entrepreneurs to figure something out about how
much capital they as a class will produce, and hence about what traders in the next period will infer about the state of the world from that aggregate capital production. Instead of a signal that
correlates entrepreneur beliefs about the state of the world, then, we could have a correlated signal about higher-order beliefs, say, how traders will interpret how entrepreneurs interpret how
traders interpret capital production. The basic mechanism will remain: traders essentially read from aggregate actions of entrepreneurs a noisy signal about the true state of the world. And all this
beauty contest logic holds in an otherwise perfectly standard Neokeynesian rational expectations model!
2012 working paper (IDEAS version). This paper used to go by the title “Beauty Contests and Irrational Exuberance”; I prefer the old name!
Personal Note: Moving to Toronto
Before discussing a lovely application of High Micro Theory to a long-standing debate in macro in a post coming right behind this one, a personal note: starting this summer, I am joining the Strategy
group at the University of Toronto Rotman School of Management as an Assistant Professor. I am, of course, very excited about the opportunity, and am glad that Rotman was willing to give me a shot
even though I have a fairly unusual set of interests. Some friends asked recently if I have any job market advice, and I told them that I basically just spent five years reading interesting papers,
trying to develop a strong toolkit, and using that knowledge base to attack questions I am curious about as precisely as I could, with essentially no concern about how the market might view this.
Even if you want to be strategic, though, this type of idiosyncrasy might not be a bad strategy.
Consider the following model: any school evaluates you according to v+e(s), where v is a common signal of your quality and e(s) is a school-specific taste shock. You get an offer if v+e(s) is
maximized for some school s; you are maximizing a first-order statistic, essentially. What this means is that increasing v (by being smarter, or harder-working, or in a hotter field) and increasing
the variance of e (by, e.g., working on very specific topics even if they are not “hot”, or by developing an unusual set of talents) are equally effective in garnering a job you will be happy with.
And, at least in my case, increasing v provides disutility whereas increasing the variance of e can be quite enjoyable! If you do not want to play such a high-variance strategy, though, my friend
James Bailey (heading from Temple’s PhD program to work at Creighton) has posted some more sober yet still excellent job market advice. I should also note that writing a research-oriented blog seemed
to be weakly beneficial as far as interviews were concerned; in perhaps a third of my interviews, someone mentioned this site, and I didn’t receive any negative feedback. Moving from personal
anecdote to the minimal sense of the word data, Jonathan Dingel of Trade Diversion also seems to have had a great deal of success. Given this, I would suggest that there isn’t much need to worry that
writing publicly about economics, especially if restricted to technical content, will torpedo a future job search.
“The Explanatory Relevance of Nash Equilibrium: One-Dimensional Chaos in Boundedly Rational Learning,” E. Wagner (2013)
The top analytic philosophy journals publish a surprising amount of interesting game and decision theory; the present article, by Wagner in the journal Philosophy of Science, caught my eye recently.
Nash equilibria are stable in a static sense, we have long known; no player wishes to deviate given what others do. Nash equilibria also require fairly weak epistemic conditions: if all players are
rational and believe the other players will play the actual strategies they play with probability 1, then the set of outcomes is the Nash equilibrium set. A huge amount of work in the 80s and 90s
considered whether players would “learn” to play Nash outcomes, and the answer is by and large positive, at least if we expand from Nash equilibria to correlated equilibria: fictitious play (I think
what you do depends on the proportion of actions you took in the past) works pretty well, rules that are based on the relative payoffs of various strategies in the past work with certainty, and a
type of Bayesian learning given initial beliefs about the strategy paths that might be used generates Nash in the limit, though note the important followup on that paper by Nachbar in Econometrica
2005. (Incidentally, a fellow student pointed out that the Nachbar essay is a great example of how poor citation measures are for theory. The paper has 26 citations on Google Scholar mainly because
it helped kill a literature; the number of citations drastically underestimates how well-known the paper is among the theory community.)
A caution, though! It is not the case that every reasonable evolutionary or learning rule leads to an equilibrium outcome. Consider the “continuous time imitative-logic dynamic”. A continuum of
agents exist. At some exponential time for each agent, a buzzer rings, at which point they randomly play another agent. The agent imitates the other agent in the future with probability exp(beta*pi
(j)), where beta is some positive number and pi(j) is the payoff to the opponent; if imitation doesn’t occur, a new strategy is chosen at random from all available strategies. A paper by Hofbauer and
Weibull shows that as beta grows large, this dynamic is approximately a best-response dynamic, where strictly dominated strategies are driven out; as beta grows small, it looks a lot like a
replicator dynamic, where imitation depends on the myopic relative fitness of a strategy. A discrete version of the continuous dynamics above can be generated (all agents simultaneously update rather
than individually update) which similarly “ranges” from something like the myopic replicator to something like a best response dynamic as beta grows. Note that strictly dominated strategies are not
played for any beta in both the continuous and discrete time i-logic dynamics.
Now consider a simple two strategy game with the following payoffs:
Left Right
Left (1,1) (a,2)
Right (2,a) (1,1)
The unique Nash equilibrium is X=1/A. Let, say, A=3. When beta is very low (say, beta=1), and players are “relatively myopic”, and the initial condition is X=.1, the discrete time i-logic dynamic
converges to X=1/A. But if beta gets higher, say beta=5, then players are “more rational” yet the dynamic does not converge or cycle at all: indeed, whether the population plays left or right follows
a chaotic system! This property can be generated for many initial points X and A.
The dynamic here doesn’t seem crazy, and making agents “more rational” in a particular sense makes convergence properties worse, not better. And since play is chaotic, a player hoping to infer what
the population will play next is required to know the initial conditions with certainty. Nash or correlated equilibria may have some nice dynamic properties for wide classes of reasonable learning
rules, but the point that some care is needed concerning what “reasonable learning rules” might look like is well taken.
Final 2013 preprint. Big thumbs up to Wagner for putting all of his papers on his website, a real rarity among philosophers. Actually, a number of his papers look quite interesting: Do cooperate and
fair bargaining evolve in tandem? How do small world networks help the evolution of meaning in Lewis-style sender-receiver games? How do cooperative “stag hunt” equilibria evolve when 2-player stag
hunts have such terrible evolutionary properties? I think this guy, though a recent philosophy PhD in a standard philosophy department, would be a very good fit in many quite good economic theory
“Information Frictions and the Law of One Price,” C. Steinwender (2014)
Well, I suppose there is no surprise that I really enjoyed this paper by Claudia Steinwender, a PhD candidate from LSE. The paper’s characteristics are basically my catnip: one of the great
inventions in history, a policy question relevant to the present day, and a nice model to explain what is going on. The question she asks is how informational differences affect the welfare gains
from trade. In the present day, the topic comes up over and over again, from the importance of cell phones to village farmers to the welfare impact of public versus closed financial exchanges.
Steinwender examines the completion of the transatlantic telegraph in July 1866. A number of attempts over a decade had been made in constructing this link; the fact that the 1866 line was stable was
something of a surprise. Its completion lowered the time necessary to transmit information about local cotton prices in New York (from which much of the supply was sent) and Liverpool (where much of
the cotton was bought; see Chapter 15 of Das Kapital for a nice empirical description of the cotton industry at this time). Before the telegraph, steam ships took 7 to 21 days, depending on weather
conditions, to traverse the Pond. In a reduced form estimate, the mean price difference in each port, and the volatility of the price difference, fell; price shocks in Liverpool saw immediate
responses in shipments from America, and the prices there; exports increases and become more volatile; and similar effects were seen from shocks to ship speed before the telegraph, or temporary
technical problems with the line after July 1866. These facts come from amazingly well documented data in New York and UK newspapers.
Those facts are all well and good, but how to explain them, and how to interpret them? It is not at all obvious that information in trade with a durable good should matter. If you ship too much one
day, then just store it and ship less in the next period, right? But note the reduced form evidence: it is not just that prices harmonize, but that total shipments increase. What is going on? Without
the telegraph, the expected price tomorrow in Liverpool from the perspective of New York sellers is less variable (the conditional expectation conditions on less information about the underlying
demand shock, since only the two-week-old autocorrelated demand shock data brought by steamship is available). When high demand in Liverpool is underestimated, then, exports are lower in the era
before the telegraph. On the other hand, a low demand shock and a very low demand shock in Liverpool both lead to zero exports, since exporting is unprofitable. Hence, ignoring storage, better
information increases the variance of perceived demand, with asymmetric effects from high and low demand shocks, leading to higher overall exports. Storage should moderate the volatility of exports,
but not entirely, since a period of many consecutive high demand shocks will eventually exhaust the storage in Liverpool. That is, the lower bound on stored cotton at zero means that even optimal
cotton storage does not fully harmonize prices in the presence of information frictions.
Steinwender confirms that intuition by solving for the equilibrium with storage numerically; this is actually a pretty gutsy move, since the numerical estimates are quantitatively quite different
than what was observed in the data. Nonetheless, I think she is correct that we are fine interpreting these as qualitative comparative statics from an abstract model rather than trying to interpret
their magnitude in any way. (Although I should note, it is not clear to me that we cannot sign the relevant comparative statics just because the model with storage cannot be solved analytically in
its entirety…)
The welfare impact of information frictions with storage can be bounded below in a very simple way. If demand is overestimated in New York, then too much is exported, and though some of this cotton
is stored, the lower bound at zero for storage means that the price in Liverpool is still too high. If demand in underestimated in New York, then too little is exported, and though some stored cotton
might be sold, the lower bound on storage means that the price in Liverpool is still too low. A lower bound on the deadweight loss from those effects can be computed simply by knowing the price
difference between the UK and the US and the slopes of the demand and supply curves; in the case of the telegraph, this deadweight loss is on the order of 8% of the value of US cotton exports to the
UK, or equivalent to the DWL from a 6% tax on cotton. That is large. I am curious about the impact of this telegraph on US vis-a-vis Indian or Egyptian cotton imports, the main Civil War substitutes;
information differences must distort the direction of trade in addition to its magnitude.
January 2014 working paper (No IDEAS version).
“Dynamic Constraints on the Distribution of Stochastic Choice: Drift Diffusion Implies Random Utility,” R. Webb (2013)
Neuroeconomics is a slightly odd field. It seems promising to “open up the black box” of choice using evidence from neuroscience, but despite this promise, I don’t see very many terribly interesting
economic results. And perhaps this isn’t surprising – in general, economic models are deliberately abstract and do not hinge on the precise reason why decisions are made, so unsurprisingly neuro
appears most successful in, e.g., selecting among behavioral models in specific circumstances.
Ryan Webb, a post-doc on the market this year, shows another really powerful use of neuroeconomic evidence: guiding our choices of the supposedly arbitrary parts of our models. Consider empirical
models of random utility. Consumers make a discrete choice, such that the object chosen i is that which maximizes utility v(i). In the data, even the same consumer does not always make the same
choice (I love my Chipotle burrito bowl, but I nonetheless will have a different lunch from time to time!). How, then, can we use the standard choice setup in empirical work? Add a random variable n
(i) to the decision function, letting agents choose i which maximizes v(i)+n(i). As n will take different realizations, choice patterns can vary somewhat.
The question, though, is what distribution n(i) should take? Note that the probability i is chosen is just
P(v(i)+n(i)>=v(j)+n(j)) for all j
P(v(i)-v(j)>=n(i)-n(j)) for all j
If n are distributed independent normal, then the difference n(i)-n(j) is normal. If n are extreme value type I, the difference is logistic. Do either of those assumptions, or some alternative, make
Webb shows that random utility is really just a reduced form of a well-established class of models in psychology called bounded accumulation models. Essentially, you receive a series of sensory
inputs stochastically, the data adds up in your brain, and you make a decision according to some sort of stopping rule as the data accumulates in a drift diffusion. In a choice model, you might think
for a bit, accumulating reasons to choose A or B, then stop at a fixed time T* and choose the object that, after the random drift, has the highest perceived “utility”. Alternatively, you might stop
once the gap between the perceived utilities of different alternatives is high enough, or once one alternative has a sufficiently high perceived utility. It is fairly straightforward to show that
this class of models all collapses to max v(i)+n(i), with differing implications for the distribution of n. Thus, neuroscience evidence about which types of bounded accumulation models appear most
realistic can help choose among distributions of n for empirical random utility work.
How, exactly? Well, for any stopping rule, there is an implied distribution of stopping times T*. The reduced form errors n are then essentially the sample mean of random draws from an finite
accretion process, and hence if the rule implies relatively short stopping times, n will be fat-tailed rather than normal. Also, consider letting the difference in underlying utility v(i)-v(j) be
large. Then the stopping time under the accumulation models is relatively short, and hence the variance in the distribution of reduced form errors (again, essentially the sample mean of random draws)
is relatively large. Hence, errors are heteroskedastic in the underlying v(i)-v(j). Webb gives additional results relating to the skew and correlation of n. He further shows that assuming independent
normality or independent extreme value type I for the error terms can lead to mistaken inference, using a recent AER that tries to infer risk aversion parameters from choices among monetary
lotteries. Quite interesting, even for a neuroecon skeptic!
2013 Working Paper (No IDEAS version).
Is Theory Really Dead? 2013-14 Job Market Stars in Economics
Solely out of curiosity, I have been collecting data on the characteristics of economics job market “stars” over the past few years. In order to receive a tenure-track offer, an economist must first
be “flown out” to a university to give a talk presenting their best research. I define a star using an somewhat arbitrary cutoff based on flyouts reported publicly online – roughly, the minimum
cutoff would be a candidate who is flown out to, e.g., Chicago Booth, UCLA, Cornell and Toronto. 95%+ of the job candidates from prior years above that cutoff have been hired into what I would
consider a highly prestigious job, and hence are in a good place to influence the direction of the profession in the years to come.
It is widely recognized that the topics and methodologies of interest to young economists are a leading indicator of where economics might be heading. Overall, as Hamermesh pointed out in a JEL
article last year, there has been an enormous shift over the past couple decades towards empirical work, particularly work where the parameters of interest are either simple treatment effects from
observational or experimental data; this work is often called “reduced-form”, though that term traditionally had a very different meaning.
This trend does not hold among the top candidates this year. I find 42 candidates, from 21 universities including 6 outside the United States (LSE, CEMFI, EUI, Toulouse, Sciences Po, UCL) above the
“star” cutoff; this list omits junior candidates coming off extended (> 2 year) post-docs. I generally use self-reported field in the table below.
Job Market Stars by Field
Macro 8
Labor 6
Micro theory 5
IO 4
Econometrics 4
Applied Micro 4
Intl./Trade 3
Finance 3
Public 2
Development 1
History 1
Political Economy 1
In the following tables, I split papers up into pure theory and empirics, and then split the empirical papers into structural models (where the estimates of interest are parameters in a choice and/or
equilibrium-based economic model), “light theory” (where the main estimates are treatment effects whose interest is derived from a light model), and pure treatment effect estimation (where the work
is purely experimental, either in the lab or the field, or a reduced form estimate of some economic parameter).
Theory versus Empirics
Pure Theory 11
Empirical 31
of which
Structural 25
“Light theory” 4
Experimental/Reduced-form 2
Data Source if Empirical
Custom Data 11
Public Data 20
Finally, there seems to be a widespread belief that publications are necessary in order to be a top job market candidate. In the table below, “Top 5″ means AER, Econometrica, ReStud, QJE or JPE, and
R&R denotes a publicly divulged Revise & Resubmit. I include AER Papers & Proceedings as a publication, but omit all non-peer reviewed publications such as Fed or think tank journal articles.
Categories refer to the “best” publication should a candidate have more than one.
Publication History Among Stars
No Pubs or R&Rs 20
Sole-authored top five 1
Coauthored top five 4
Sole-authored top five R&R 1
Coauthored top five R&R 6
Sole-authored other pub 2
Coauthored other pub 5
Sole-authored other R&R 3
What is the takeaway here? I see three major ones. First, the market is fairly efficient: students from many schools beyond the Harvards and MITs of the world are able to get looks from top
departments. Second, publications are nice but far from necessary: less than 20% of the stars even have a sole-authored revise & resubmit, let alone an AER on their CV.
Third, and most importantly, theory is far from dead; indeed, purely applied economics appears to be the method going out of favor! Of the 42 star job market papers, 11 are pure theory, and 25
estimate structural models; in many of those papers, the theoretical mechanisms identified clearly trump the data work. Only 6 of the 42 could by any stretch be identified as reduced form or
experimental economics, and of those 6, 4 nonetheless include a non-trivial economic model to guide the empirical estimation. Given Hamermesh’s data, this is a major change (and indeed, it seems
quite striking even compared with the market five years ago!). | {"url":"https://afinetheorem.wordpress.com/author/afinetheorem/","timestamp":"2014-04-19T11:56:14Z","content_type":null,"content_length":"119636","record_id":"<urn:uuid:1942eb22-a1bf-4dab-a8cd-85ee8a3462da>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00178-ip-10-147-4-33.ec2.internal.warc.gz"} |
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
is naphthalene non-polar?
• one year ago
• one year ago
Best Response
You've already chosen the best response.
C - H atoms show an electronegativity difference of 0.4, thus this is treated as slightly polar bond. But concerning the structure of naphthalene (two benzene rings), it is a very stable molecule
and won't dissolve easily in water - thus it behaves mostly nonpolar.
Best Response
You've already chosen the best response.
compared to 2-naphthol, which has the stronger dispersion force?
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/updates/50746917e4b04aa3791ebb34","timestamp":"2014-04-19T22:27:19Z","content_type":null,"content_length":"30119","record_id":"<urn:uuid:efa216f7-1fe3-4b2b-b109-ba4d7778a6ba>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00447-ip-10-147-4-33.ec2.internal.warc.gz"} |
Correlation Analysis Between Dependent And Independent Variables Finance Essay
This chapter discusses on the results and analysis of this study based on the data collection from seventy Iranian firms in order to investigate the research hypotheses of this survey. The first
section of this chapter investigates the demographic variables using descriptive statistics. This part then used descriptive analysis to summarize the research variables including dependent and
independent variables. The chapter then proceeds with a discussion on regression analysis in order to fulfill the research hypotheses. The chapter finally ends with a correlation analysis between
dependent and independent variables. In order to perform statistical analysis, the SPSS version 16 was used.
Systematic Risk
The Table 4-1 summarized the descriptive statistics for systematic risk. According to Table 4-1, the mean and standard deviation for systematic risk are 1.1803 and 1.462.
Table 4: Descriptive Statistics for Systematic Risk
Std. Deviation
Systematic Risk
Valid N (listwise)
Account Profit Beta
The Table 4-2 summarized the descriptive statistics for Accounting Profit Beta. According to Table 4-2, the mean and standard deviation for Accounting Profit Beta are 0.4378 and 0.31267.
Table 4: Descriptive Statistics for Account Profit Beta
Std. Deviation
Account Profit Beta
Valid N (listwise)
Cash Flow Beta
The Table 4-3 summarized the descriptive statistics for Cash Flow Beta. According to Table 4-3, the mean and standard deviation for Cash Flow Beta are 0.3267 and 0.31233.
Table 4: Descriptive Statistics for Cash Flow Beta
Std. Deviation
Cash Flow Beta
Valid N (listwise)
Networking Capital
The Table 4-4 summarized the descriptive statistics for Networking Capital. According to Table 4-4, the mean and standard deviation for Networking Capital are 2.7233 and 1.34830E7.
Table 4: Descriptive Statistics for Networking Capital
Std. Deviation
Valid N (listwise)
Statistics for Assets
The Table 4-5 summarized the descriptive statistics for Assets. According to Table 4-5, the mean and standard deviation for Assets are 1.6858E6 and 5.23948E6.
Table 4: Descriptive Statistics for Assets
Std. Deviation
Valid N (listwise)
Testing Hypotheses Using Multiple Regressions
In this research study, it is assumed that there is no constant in regression equation. Hence, the constant was removed from multiple regression equation. The outputs of multiple regression modeling
were represented in Table 4-7, 4-8, and 4-9. The first step to examine the multiple regressions is to test normality assumption. According to Royston (1992), for samples with sizes between 3 and 2000
the Shapiro-Wilk test is suitable examination. In order to test normality assumption, standardized residuals were used. Since, the p-value for standardized residuals equals .2 (Under Shapiro-Wilk
test), which is more than 0.05 thus it can be concluded that the normality assumption was met. Hence, for these four independent variables multiple regression analysis can be done. The results were
represented in Table 4-6.
Table 4: Tests of Normality
Standardized Residual
a. Lilliefors Significance Correction
In this study, all Assets, Networking Capital, Cash Flow Beta, Account Profit Beta were regressed on Systematic Risk. As shown in table 4-7, the R-Square of this regression model equals 0.556. The
Durbin-Watson is of 1.995 drops between 1.5 and 2.5. This indicated that there is no autocorrelation problem among the error terms. Therefore, it approved that error terms were independent. According
to Table 4-9, the co linearity statistics represents that tolerance statistics for Account Profit Beta, Cash Flow Beta, Networking Capital, and Assets are all greater than 0.1, and VIF (Variation
Inflation Factors) for these variables are all lesser than 10. Hence, it can be stated that these variables have no multi co linearity problem. Therefore, the research hypothesis and regression
analysis strongly supported.
Table 4: Model Summary
R Squareb
Adjusted R Square
Std. Error of the Estimate
a. Predictors: Assets, Networking Capital, Account Profit Beta, Cash Flow Beta
b. For regression through the origin (the no-intercept model), R Square measures the proportion of the variability in the dependent variable about the origin explained by regression. This CANNOT be
compared to R Square for models which include an intercept.
c. Dependent Variable: Systematic Risk
d. Linear Regression through the Origin
According to Table 4-8, the ANOVA procedure provided F-value equals 20.626 (F=20.626) with p-value of 0.000 which is lesser than 0.05 (Sig=0.000<0.05). Hence, it is strongly supported that regression
modeling is significant and at least one of the predictors namely Account Profit Beta, Cash Flow Beta, Networking Capital, and Assets can be used to predict Systematic Risk.
Table 4: ANOVA
Sum of Squares
Mean Square
a. Predictors: Assets, Networking Capital, Account Profit Beta, Cash Flow Beta
b. This total sum of squares is not corrected for the constant because the constant is zero for regression through the origin.
c. Dependent Variable: Systematic Risk
d. Linear Regression through the Origin
Table 4: Coefficients
Unstandardized Coefficients
Standardized Coefficients
Collinearity Statistics
Std. Error
Account Profit Beta
Cash Flow Beta
Networking Capital
a. Dependent Variable: Systematic Risk
b. Linear Regression through the Origin
The results of Table 4-9 confirmed that there were two financial ratios including Account Profit Beta and Cash Flow Beta, which positively associated with Systematic Risk. As can be seen in Table
4-9, two predictors namely Account Profit Beta (B=1.284, Sig=0.003<0.05) and Cash Flow Beta (B=1.595, Sig=0.002<0.05) were all directly contributed in predicting Systematic Risk. As shown in Table
4-9, there is no significant relationship between Networking Capital (sig=0.974>0.05) and Assets (sig=0.062>0.05) with Systematic Risk. In addition, the results also indicated that Cash Flow Beta
with highest coefficient value is the most important variable to predict Systematic Risk.
In order to have a clear picture of multiple regression equation a stepwise method is recommended. The results of multiple regression using stepwise method are represented in Table 4-10, Table 4-11,
and Table 4-12.
In this section, all Assets, Networking Capital, Cash Flow Beta, Account Profit Beta were regressed on Systematic Risk using stepwise regression analysis. As shown in table 4-10, the R-Square for
model 2 of this regression model equals 0.531. The Durbin-Watson of 1.828 drops between 1.5 and 2.5. This indicated that there is no autocorrelation problem among the error terms. Therefore, it
approved that error terms were independent. According to Table 4-12, the co linearity statistics represents that tolerance statistics for Account Profit Beta, Cash Flow Beta are all greater than 0.1,
and VIF (Variation Inflation Factors) for these variables are all lesser than 10. Hence, it can be stated that these variables have no multi co linearity problem. Therefore, the research hypothesis
and regression analysis strongly supported.
Table 4: Model Summary for Stepwise Regression
R Squareb
Adjusted R Square
Std. Error of the Estimate
a. Predictors: Cash Flow Beta
b. For regression through the origin (the no-intercept model), R Square measures the proportion of the variability in the dependent variable about the origin explained by regression. This CANNOT be
compared to R Square for models, which include an intercept.
c. Predictors: Cash Flow Beta, Account Profit Beta
d. Dependent Variable: Systematic Risk
e. Linear Regression through the Origin
According to Table 4-11, under model 2, the ANOVA procedure provided F-value equals 38.544 (F=38.544) with p-value of 0.000 which is lesser than 0.05 (Sig=0.000<0.05). Hence, it is strongly supported
that regression modeling is significant and at least one of the predictors namely Account Profit Beta, Cash Flow Beta, Networking Capital, and Assets can be used to predict Systematic Risk.
Table 4: ANOVA for Stepwise Regression
Sum of Squares
Mean Square
a. Predictors: Cash Flow Beta
b. This total sum of squares is not corrected for the constant because the constant is zero for regression through the origin.
c. Predictors: Cash Flow Beta, Account Profit Beta
d. Dependent Variable: Systematic Risk
e. Linear Regression through the Origin
Table 4: Coefficient for Stepwise Regression
Unstandardized Coefficients
Standardized Coefficients
Collinearity Statistics
Std. Error
Cash Flow Beta
Cash Flow Beta
Account Profit Beta
a. Dependent Variable: Systematic Risk
b. Linear Regression through the Origin
The results of Table 4-12 confirmed that there were two financial ratios including Account Profit Beta and Cash Flow Beta, which positively associated with Systematic Risk. As can be seen in Table
4-12, two predictors namely Account Profit Beta (B=1.334, Sig=0.002<0.05) and Cash Flow Beta (B=1.678, Sig=0.001<0.05) were all directly contributed in predicting Systematic Risk. As shown in Table
4-12, there is no significant relationship between Networking Capital and Assets with Systematic Risk. In addition, the results also indicated that Cash Flow Beta with highest coefficient value is
the most important variable to predict Systematic Risk.
Correlation between Systematic Risk and Account Profit Beta
Table 4-13 represents the correlation between Systematic Risk and Account Profit Beta. According to the following table, the correlation between Systematic Risk and Account Profit Beta is 0.352 with
p-value of 0.003. Since, the p-value is lesser than 0.05 thus it can be stated that there is a positive significant relationship between Account Profit Beta and Systematic Risk.
Table 4: Correlation between Systematic Risk and Account Profit Beta
Systematic Risk
Account Profit Beta
Systematic Risk
Pearson Correlation
Sig. (2-tailed)
Account Profit Beta
Pearson Correlation
Sig. (2-tailed)
**. Correlation is significant at the 0.01 level (2-tailed).
Correlation between Systematic Risk and Cash Flow Beta
Table 4-14 represents the correlation between Systematic Risk and Cash Flow Beta. According to the following table, the correlation between Systematic Risk and Cash Flow Beta is 0.414 with p-value of
0.000. Since, the p-value is lesser than 0.05 thus it can be stated that there is a positive significant relationship between Cash Flow Beta and Systematic Risk.
Table 4: Correlation between Systematic Risk and Cash Flow Beta
Systematic Risk
Cash Flow Beta
Systematic Risk
Pearson Correlation
Sig. (2-tailed)
Cash Flow Beta
Pearson Correlation
Sig. (2-tailed)
**. Correlation is significant at the 0.01 level (2-tailed).
Correlation between Systematic Risk and Networking Capital
Table 4-15 represents the correlation between Systematic Risk and Networking Capital. According to the following table, the correlation between Systematic Risk and Networking Capital is 0.032 with
p-value of 0.793. Since, the p-value is greater than 0.05 thus it can be stated that there is no significant relationship between Networking Capital and Systematic Risk.
Table 4: Correlations between Systematic Risk and Networking Capital
Systematic Risk
Systematic Risk
Pearson Correlation
Sig. (2-tailed)
Pearson Correlation
Sig. (2-tailed)
Correlation between Systematic Risk and Assets
Table 4-16 represents the correlation between Systematic Risk and Assets. According to the following table, the correlation between Systematic Risk and Assets is 0.144 with p-value of 0.235. Since,
the p-value is greater than 0.05 thus it can be stated that there is no significant relationship between Assets and Systematic Risk.
Table 4: Correlation between Systematic Risk and Assets
Systematic Risk
Systematic Risk
Pearson Correlation
Sig. (2-tailed)
Pearson Correlation
Sig. (2-tailed)
In this chapter, the data were analyzed using descriptive statistics and multiple regressions. In order to carry out the multiple regressions the independent variables namely Account Profit Beta,
Cash Flow Beta, Networking Capital, and Assets were regressed against Systematic Risk. The basic assumptions regarding multiple regression analysis were performed. According to analysis in this
chapter, all the basic assumptions were met. As shown in this chapter, Accounting Profit Beta and Cash Flow Beta had significant positive relationship with Systematic Risk.
Share This Essay
Did you find this essay useful? Share this essay with your friends and you could win £20 worth of Amazon vouchers. One winner chosen at random each month.
Request Removal
If you are the original writer of this essay and no longer wish to have the essay published on the UK Essays website then please click on the link below to request removal:
Request the removal of this essay.
More from UK Essays
Need help with your essay?
We offer a bespoke essay writing service and can produce an essay to your exact requirements, written by one of our expert academic writing team. Simply click on the button below to order your essay,
you will see an instant price based on your specific needs before the order is processed: | {"url":"http://www.ukessays.com/essays/finance/correlation-analysis-between-dependent-and-independent-variables-finance-essay.php","timestamp":"2014-04-17T04:25:12Z","content_type":null,"content_length":"39553","record_id":"<urn:uuid:be323886-0603-4a83-aae6-54b3f1ac0622>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00114-ip-10-147-4-33.ec2.internal.warc.gz"} |
Figure 2: Examples of gamma distributions and associated survival functions. (a, b) The two parameters of the gamma distribution ( and ) permit description of a wide variety of distributions useful
for representing the probability density function for myosin crossbridge attachment time, . Notably, a single exponential results when (a), and a Gaussian distribution is approximated as increases.
(c, d) The survival function, , refers to the probability that an attached crossbridge will survive to time . The analytical relationship between the and is provided by (10a) and (10b). | {"url":"http://www.hindawi.com/journals/bmri/2011/592343/fig2/","timestamp":"2014-04-19T10:06:25Z","content_type":null,"content_length":"15110","record_id":"<urn:uuid:58c1bf52-30db-42d8-a8f4-a2f654727726>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00317-ip-10-147-4-33.ec2.internal.warc.gz"} |
Avondale Estates Geometry Tutor
...I have also prepared Spanish lessons to teach to a 2nd grade class in the past. I graduated as a Summa Cum Laude Honors graduate with a Science Bachelor's degree. I have taught myself how to
efficiently study and make mostly A's while in High School and in college.
29 Subjects: including geometry, chemistry, reading, Spanish
***Abigail has moved to a new location! Make sure to ask her for the new address! She does not do in-home tutoring.*** Current Availability as of 4/17/14: Thursday: 3, 5p Friday: 2, 3p Tuesday: 4,
5p Wednesday: 7p (this week only) You're probably trying to find a tutor who stands out from the rest.
22 Subjects: including geometry, reading, writing, calculus
...I am also very knowledgeable on the subject of the human body, from years of studying the MCAT. I am a tutor who takes teaching seriously. I want to make the best out of the time spent with my
29 Subjects: including geometry, chemistry, reading, physics
...I have a proven track record with student improvement and can guarantee better academic performance. I can meet in person or online and price is always negotiable. Although I approach each
student uniquely, I use a two step method to maximize academic success.
47 Subjects: including geometry, reading, Spanish, chemistry
...If you need to be better at any of these topics: -Essentials or Basics of Geometry -Reasoning and Proof -Segments and Angles -Parallel and Perpendicular Lines -Triangle Relationships -Congruent
Triangles -Properties of Triangles -Quadrilaterals -Similarity -Transformations -Polygons and Area -Sur...
6 Subjects: including geometry, algebra 1, algebra 2, trigonometry | {"url":"http://www.purplemath.com/avondale_estates_ga_geometry_tutors.php","timestamp":"2014-04-19T15:03:58Z","content_type":null,"content_length":"24125","record_id":"<urn:uuid:5579e6c5-ee7d-4900-beb1-d648436de5b6>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00004-ip-10-147-4-33.ec2.internal.warc.gz"} |
Inaccurate Area Under a Curve Calculations... What's wrong?!
October 22nd 2010, 12:24 PM #1
Oct 2010
Inaccurate Area Under a Curve Calculations... What's wrong?!
This question has been driving me nuts, and multiple colleagues have not been able to help. I expect its something simple, but the solution has not yet presented itself. I came across this issue
trying to write a spreadsheet that will help quickly identify limits to a physical issue… so far, no luck.
Key issue is: Calculating the area under a curve with an integral is inaccurate.
Take the following as an example. PLEASE HELP! Why is it wrong to use the integral???
If you look at the graph below, using the equation y = 50 cos2 (x) you can infer that the area under the curve would roughly be (and even exceed) a rectangle of 25x45 plus a triangle of 25h x
45b… therefore the approx Area = 25x45 + (1/2)(25x45) = 1687.5
If you use the integral of the equation: ½ (50)(x + sin(x)cos(x))
With the limit of 45 and 0, you get
Area = ½ (50)(45 + sin(45)cos(45))
= (25)(45 + (0.7071)(0.7071))
= (25)(45.5)
= 1137.5
To find exact area… Using a polynomial curve fit to the equation, and then integrating that between 0 and 45, I find an area of 1840.95
What is going on?... why is the method using an integral finding 1137.5 when the actual area under the curve is closer to 1840.95 ???
just to clarify, the base equation is y = 50 x [cosine squared of x] ... NOT the [cosine of 2x]
Last edited by mr fantastic; October 22nd 2010 at 01:00 PM. Reason: Merged posts.
$\displaystyle A = \frac{180}{\pi} \int_0^{\frac{\pi}{4}} 50\cos^2{x} \, dx \approx 1841$
Ahhhh, convert after integrating... Thanks Skeeter, big help
The derivative formulas, (sin x)'= cos(x) and (cos x)'= -sin x, and the integral formulas that are derived from them, are based on the limit formula $\lim_{x\to 0}\frac{sin x}{x}= 1$ and $\lim_{x
\to 0}\frac{1- cos x}{x}= 0$ are based on x being in radians.
October 22nd 2010, 01:49 PM #2
October 25th 2010, 06:26 AM #3
Oct 2010
October 25th 2010, 04:48 PM #4
MHF Contributor
Apr 2005 | {"url":"http://mathhelpforum.com/calculus/160632-inaccurate-area-under-curve-calculations-what-s-wrong.html","timestamp":"2014-04-19T23:09:08Z","content_type":null,"content_length":"43527","record_id":"<urn:uuid:8265561d-0f38-4688-bb0f-5e4090c06af3>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00374-ip-10-147-4-33.ec2.internal.warc.gz"} |
MathGroup Archive: December 2005 [00212]
[Date Index] [Thread Index] [Author Index]
Re: Re: Re: Types in Mathematica
• To: mathgroup at smc.vnet.net
• Subject: [mg62904] Re: [mg62872] Re: Re: Types in Mathematica
• From: Sseziwa Mukasa <mukasa at jeol.com>
• Date: Thu, 8 Dec 2005 00:04:45 -0500 (EST)
• References: <dn22fb$kum$1@smc.vnet.net> <200512070411.XAA23826@smc.vnet.net>
• Sender: owner-wri-mathgroup at wolfram.com
On Dec 6, 2005, at 11:11 PM, Steven T. Hatton wrote:
>> Fair enough but Head[{1,2,3}] is not Symbol.
> I would say that we have to be clear about what we mean by "Head
> [{1,2,3}]".
I meant the value of evaluating the expression.
> At one level, it is nothing but a sequence of characters. When
> Head[{1,2,3}] is evaluated, a downvalue is added to In
That's a side effect of the evaluator, and can be suppressed anyway.
I don't consider In and Out to be fundamental to the operation of
Mathematica. In fact their existences is one of the areas in which
Mathematica breaks with being purely functional but that's another
issue. Others like Jon Harrop probably have far more intelligent
things than me to say on that subject.
> A.9.2 in the 5.2 Mathematica Book says "A Mathematica expression
> internally
> consists of a contiguous array of pointers, the first to the head,
> and the
> rest to its successive elements." I find that statement very
> incomplete.
I don't know why, it simply betrays Mathematica's LISP like roots,
and as Daniel Lichtblau pointed out is to be read more as a model for
reasoning than as to how Mathematica is actually implemented. At any
rate in the purest forms of LISP that's all one has an atom, which is
defined as an indivisible S-expression and a list which is an S-
expression of two parts the first is called the head or car and the
second a pointer to the rest of the list. I believe I read somewhere
that Mathematica's representations of expressions are called M-
expressions and were actually the intended syntax for LISP but for
some reason S-expressions ended up being preferred.
> In any case, (I believe) the leaves of an expression will be atoms
> of one
> kind or another. The atoms are (AFAIK) the only entities in
> Mathematica
> which hold data.
Heads have values too, an atomic expression is simply one which you
can't take apart any further, but the atomic expression 1 has a head
in Mathematica and the head has a value. The fact that we display
and write 1 instead of Integer[1] is syntactic sugar as far as I'm
concerned because at that point we are typically more interested in
the interpreting Integer[1] as a value which is the result of some
computation which is really what one is interested in.
> there are clearly different types of atomic expressions.
Especially if you consider the heads of atoms as defining their type,
which is why I think that is a useful paradigm for reasoning about
types in Mathematica. The fact that Head[Sqrt[2]] isn't Sqrt as Jon
Harrop pointed out seems to me more an artifact of the evaluator than
a problem with Mathematica.
>> There are different kinds of statements in C++ of which expressions
>> are but one class (http://cpp.comsci.us/syntax/). There are no
>> different kinds of expressions in Mathematica all have the same
>> structure Head[Arguments...].
> How is that different from saying there are atomic expressions and
> composite
> expressions in Mathematica? Is 1 of the form Head[Arguments...]?
However the expression Integer[1] is represented internally, and for
efficiency's sake it's hard to imagine that it occupies multiple
words of machine storage, is irrelevant as to how one should reason
about Mathematica programs. It's also irrelevant that the Notebook
Interface displays it as 1 instead of Integer[1]. The question is
whether one can reason about a program correctly, it seems to me
ambiguity can be best avoided by understanding 1 to be a) atomic and
b) representing Integer[1].
>>> What confusion?
>> The confusion over exactly what constitutes a type in Mathematica.
> Since there is no formal definition, it is strictly a matter of
> opinion,
> and/or convention. Since there seems to be little in the way of
> commonly
> agreed upon convention, we are stuck with relying on opinion.
But as you pointed out yourself Wolfram documents exactly what
everything in Mathematica is in A.9.2, it's an expression. Some
expressions are atomic, and if you want to assign a "type" to the
atoms other than expression the best candidate I can see is the head.
>> You can't inspect the structure of the function in C++ because it's a
>> function definition and there is no facility for programmatically
>> inspecting the contents of a definition in C++. In Mathematica the
>> contents are stored as an expression in the DownValues which can be
>> modified as any other expression in Mathematica sans restrictions.
> But that really seems to rely on the fact that C++ is a compiled
> language,
> whereas Mathematica is interpreted.
I don't think that matters. The distinction that's important is
between statements and expressions, LISP like languages do not have a
statement type and it is that fact that allows "functions" to be
treated as a first class types.
> I believe there are important distinction between the way C++ and
> Mathematica
> support operator overloading.
I do too, but I don't think we agree on what the distinction is.
>> For the sake of brevity and in the correct context clarity, it helps
>> to refer to things like Plus[a_,b_] as a function or operator and For
>> [...] as a statement but the truth is in Mathematica they are both
>> expressions and equivalent in that sense. Making the distinction may
>> be useful to thinking about problems from the programmer's
>> perspective but there is nothing within Mathematica itself that
>> forces that distinction unlike in C++ where function+(a,b) and for
>> (...){} are different.
> I don't follow. I can write a function
> Vector3f plus(const Vecotr3f& v1, const Vecotr3f& v1) {
> return Vector3f(v1.x + v2.x, v1.y + v2.y, v1.z + v2.z);
> }
> and
> Vector3f operaotr+(const Vecotr3f& v1, const Vecotr3f& v1) {
> return plus(v1,v2);
> }
> Then write va + vb to add two Vector3f objects. How does that
> compare to
> overloading + in Mathematica?
It's roughly equivalent except in Mathematica I can programmatically
rewrite the rules for + whereas there is no such facility in C++. I
suppose you can consider that an artifact of the fact that C++ is
compiled (actually is that a requirement? http://root.cern.ch/root/
Cint.html) but I think the real reason for the difference is that
operator+ is a statement in C++ and there are no facilities for
programmatically altering statements.
> When trying to represent an ensamble of interacting physical
> objects, I find
> it useful to be able to maintain state. In OOP that is fairly
> straightforward. This is why I like development libraries such as
> OpenSceneGraph. I've done similar things with Mathematica, but it
> has not
> been quite as easy.
I've had similar problems, but usually I found that reorganization of
the problem either allowed the writing of expressions that could
compute a representation of state when necessary, and at no more
expense than maintaining a state variable, or obviated the need
altogether. When translating programs implemented in other languages
or interacting with such programs I've found that naively trying to
use that paradigm in Mathematica leads to difficult to maintain code
and often it's better to just rewrite the algorithm in a different
style. One example are divide and conquer algorithms for linear
algebra problems. Typically they are implemented with state
variables that keep track of indices of submatrices and operations
like multiplying a matrix against a submatrix of a larger one. I've
found that recasting the algorithm as functions which do basic
operations and encasing them in structures like FoldList, Nest etc.
which construct the final result from the submatrices leads to
clearer, if not necessarily more efficient, Mathematica code.
On the other hand, I've noticed a trend in OO graphics programs for
defining 3 and 4D vectors and matrices as classes and then
overloading +,-,* etc. which leads to inefficient code like
instead of
which is one of the problems that Blitz++ tries to address. As we've
both noticed Blitz++ addresses this problem by using C++ templates
(and the CPP) to effectively convert statements to expressions which
can be optimized for efficiency similar to the fundamental difference
between operator overloading in c++ and Mathematica.
> Part of the problem is that I really didn't understand
> Mathematica well when I started using things such as Dr. Maeder's
> Classes
> package.
I have heard a lot about Dr. Maeder's package on this mailing list
but I have not personally used it or investigated it much.
• References: | {"url":"http://forums.wolfram.com/mathgroup/archive/2005/Dec/msg00212.html","timestamp":"2014-04-16T04:53:16Z","content_type":null,"content_length":"43714","record_id":"<urn:uuid:280a3233-5196-4620-a124-fede1cbf3263>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00382-ip-10-147-4-33.ec2.internal.warc.gz"} |
HP codename, series Unknown
Type, Precision, Input mode Scientific, 12 BCD digits, exponent ±499, Algebraic or Reverse Polish Notation
Programmable Yes. Keystrokes, labels A-Z.
Performance Index 44
Memory 31277 bytes available by default. According to the manual the memory is shared by programs, formulas and variables.
26 number storage registers (A-Z), index register (i).
Interestingly, the free memory indication does not change when numbers are stored. So it seems the available bytes are available to programs and equations alone.
Assuming 32 kB of RAM 1491 bytes are used for number storage and internal data. There are 26 direct number registers (A-Z), the index register, LastX, 4 stack registers
and 6 statistics registers, in total 38 registers. Assuming 8 bytes per number this adds up to 304 bytes. So approximately 1 kB is used by the calculator internally.
Display LCD with 2 rows and 14 digits each, 5x7 pixel per digit, annunciators
Special features RPN and algebraic entry mode, equations, hex, binary and octal mode, equation solver, integration, statistics, complex functions (but not a complex stack), fractions,
unit conversions, many physical constants, program and formula checksums (used to verify transcriptions from printed listings).
Original Pricing, Production 61.95 Euro in 2007
Batteries 2x large button sized cells
Dimensions Length 15.8 cm, Width 8.3 cm, Height 1.6 cm
Links Manual: HP-33S Scientific Calculator User's Guide (PDF, English, 387 pages, 3rd edition, Nov 2004)
Manual: HP-33S Wissenschaftlicher Taschenrechner Benutzeranleitung (PDF, German, 408 pages, 2nd edition, Nov 2004)
HP-33s info on Finseth.com.
The Cosine bug.
Comments In the equation (EQN) and memory (MEM) menu it is possible to display the length of a program or an equation.
The "V" shaped keyboard together with the tilted labels and a wealth of commands make it very hard to find the right keys. In general, the design of this calculator
doesn't look very appealing to me. | {"url":"http://www.thimet.de/CalcCollection/Calculators/HP-33S/Contents.htm","timestamp":"2014-04-20T23:29:46Z","content_type":null,"content_length":"14955","record_id":"<urn:uuid:6431bca7-7865-4da8-91ae-b82ad7394811>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00547-ip-10-147-4-33.ec2.internal.warc.gz"} |
How to Calculate Internal Rate of Return
How big of a return do your investments REALLY make?
The answer is tougher to calculate than you might expect.
Let’s say Jim invests $1,000 on New Year’s Day. The money sits for a year. The following New Years, Jim checks his balance and sees that he now has $1,100.
Clearly, he’s earned an annual percentage yield of 10 percent. That’s obvious.
But what if Jim invested only $400 in January, $200 in April, $350 in August and $50 in December? His contribution throughout the year stills total $1,000. But if he has $1,100 in his account by the
following January 1, he would have earned more than 10 percent annualized.
“But he only invested $1,000 and now he has $1,100. Why is that more than 10 percent?”
Because of the time value of money. He’s only kept $400 in his investment account for the span of one calendar year. The rest of the money entered his account in bits and chunks throughout the year.
How can he calculate his annual return?
Enter: a calculation called the Internal Rate of Return, or IRR. This formula helps you adjust your returns based on the amount of time the money is invested, so you can decipher your annual
percentage yield.
Calculating it is simple: Just open Microsoft Excel. In one column, list an increment of time (like months of the year). In the next column, list your contributions. Then calculate your Internal Rate
of Return using a function called XIRR.
Let’s say you contribute $416 per month ($5,000 per year) into a retirement savings account.
On January 1 of the following year, you have a total balance of $5,300. You want to know what kind of return you had.
Step 1: List 12 months (Jan – Dec) in a column in Excel.
Step 2: List $416 after each month, in a separate column.
Step 3: In the final row, list January of the following year (in the months column), and list your $5300 total (in the amounts column).
Step 4: Use the XIRR function to calculate your return. | {"url":"http://budgeting.about.com/od/budget_definitions/a/How-To-Calculate-Internal-Rate-Of-Return.htm","timestamp":"2014-04-19T09:36:44Z","content_type":null,"content_length":"44798","record_id":"<urn:uuid:0749d131-a234-4dd3-8937-fa16d2e31372>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00422-ip-10-147-4-33.ec2.internal.warc.gz"} |
Hierarchy Problem
1. We know our QED is non-interacting with the interactions done by perturbation. This is because we still don't know a pure interacting QED. But when we do. We can solve directly without
perturbation. Would this make the Hierarchy Problem go away because you no longer have to deal with quadratic divergences which came from Perturbation technique or process? The pure interaction QED
won't have any perturbation and quadratic divergences, isn't it?
"Pure QED" probably doesn't exist mathematically (except in a sense I will discuss), because of the
Landau pole
. The sense in which QED does exist mathematically, is as a quantum field theory which is defined at energies less than the Landau pole.
But first let's talk about what sort of QFTs do exist mathematically, up to unlimited energies. There might be some simple examples in the mathematical literature, but physically the most interesting
is QCD, which is an "asymptotically free" theory. It is well-defined at high energies because the interaction grows weaker with high energy; the higher the energy goes, the more it resembles a "free
theory", a completely non-interacting theory.
Let's suppose that most or all of the truly well-defined interacting QFTs are like QCD - they are free at high energies, but at lower energies there are interactions. At lower energies, you may not
even be able to see the fundamental fields. In QCD, quarks and gluons are fundamental, but at low energies you only get mesons and baryons.
"QED" would then only exist as a low-energy approximate field theory (an "effective field theory"). But there might be an infinite number of "exact QFTs" which reduce to QED in some low energy range.
It would only be as you increased the energy that the electron would be revealed as composite, or some other details took over and made it deviate from pure QED.
The ability to define QFTs that only work within a certain range of energies means that it may be difficult to work out the true fundamental theory (because different high-energy QFTs can look the
same at low energies), but it has also allowed progress in particle physics to occur, even before we had a possible complete theory.
2. LHC hasn't detected or seen any hint of the Super partners (from Supersymmetry). If they won't ever be detected and the model not true. What then would solve the Hierarchy Problem (if this is
still retained in the pure interaction QED theory)?
Let's compare the meaning of the Landau pole problem for QED and the hierarchy problem for the standard model.
No-one believes that the world is described just by QED - there are other forces. So the question of whether pure QED is defined at ultra-high energies is a mathematical question.
On the other hand, the standard model does describe all the data. Unlike pure QED, experimentally it is a candidate to be the exact and total theory of the world. So if you want to treat the standard
model as the theory of everything, and not just an approximation, then the mathematical problems of the exact standard model are physical problems and not just mathematical ones.
However, there is a catch here. The standard model
without gravity
behaves in a certain way as you extrapolate upwards to infinite energies. But reality contains gravity, so really you need to be considering how standard model plus gravity behaves at high energies.
The standard view is that once you get to Planck-scale energies, particle interactions must include things like short-lived micro black holes. That is, when you collide, say, two protons at those
ultra-high energies, sometimes they will create a black hole which then evaporates via Hawking radiation, and in fact the Hawking radiation from the death of the micro black hole will be the "output"
of the proton-proton collision. Micro black holes aren't part of the standard model without gravity, so this energy scale represents the limit of the validity of the "standard model without gravity"
as an approximate description of physics.
In discussions of the effective field theories which provide approximate descriptions of physics up to a particular energy scale, you will find references to "bare mass", "renormalized mass",
"physical mass", and so on. These approximate theories contain parameters which are supposed to be mass, charge, etc, but if you then calculate the mass or charge that would be observed, you get
quantities which get larger and larger, the more you take into account short-range processes. In the continuum limit, the observed mass and charge would be infinite, which is experimentally wrong.
The "bare mass" is the mass parameter appearing in the basic equation, and then the calculated mass is the bare mass plus a huge correction.
The way people used to describe renormalization was to say that it involved assuming that the "bare mass", the mass parameter appearing in the basic equations, was a huge value which happened to
offset the quantum corrections. That is, experimentally the observed mass m of a particle is tiny; theory says the observed mass is the bare mass m_bare plus a huge quantum correction M_correction;
so therefore the bare mass must equal "observed mass minus the correction", i.e. m_bare = m - M_correction.
Even worse, the size of M_correction depends on how fine-grained you make your calculations. If you consider arbitrarily short-lived processes, M_correction ends up being infinite, so m_bare has to
be "m - infinity".
Later on, the renormalization group came to the rescue somewhat, by describing in detail how M_correction varies as a function of energy scale. You adopt the philosophy of effective field theory; you
say, of course the bare mass isn't actually "m_observed - infinity". What's really happening is that your approximate theory is incomplete, and at some high energy, new physical processes show up,
and change how the effective mass (charge, etc) varies with energy, so that the "bare" quantities are more reasonable.
(I should probably add that this informal discussion of renormalization may have been simplified to the point of error in some places. I think it gives the correct impression, but in reality you're
concerned with the Higgs field energy density, quantum corrections can be multiplicative rather than additive, and there's a whole universe of further technical details that I haven't bothered to
So let us now return to the possibility that the standard model plus gravity is the true theory of everything. Let us suppose that the micro black holes I mentioned are the only new addition to
particle physics that gravity introduces. Then this would be the place at which the philosophy of effective field theory runs out and we have to take seriously the parameters appearing directly in
the fundamental equations.
Now if it turned out that for the standard model plus gravity, M_correction is still absolutely huge (a planck-scale mass), that would be a problem, because it looks like m_Higgs is about 125 GeV
(and it's definitely true that the masses of the W and Z particles are a little less than 100 GeV). So the bare mass parameter appearing in the theory will have to be something like m_observed -
M_correction. That would be fine-tuning to about 1 part in 10^16, the magnitude of the difference between m_observed and M_correction.
This is what people want to avoid - theories in which there are fundamental parameters along the lines of "m_Higgs = 1.000000000000000125 Planck masses", with the "1" out the front disappearing when
the quantum corrections are taken into account, so that the observed mass is just .000000000000000125 Planck masses. This is just an example, the actual numbers appearing in a fine-tuned theory
wouldn't be so neatly decimal, but they would have a similar degree of artificiality.
So one way to avoid this is to have quantum corrections cancel themselves - there are negative and positive corrections and they mostly cancel out. Supersymmetry can give you that. Another way is to
have an asymptotically free theory like QCD, in which the "deconfinement scale" is not too far above 100 GeV. This might imply that the Higgs, at those higher energies, just comes apart into "preons"
or "subquarks", so the short-scale physics is completely different. This is the "technicolor" approach to the Higgs, and a lot of people seem to think it can't work for a Higgs at 125 GeV, but I
think a few other assumptions are going into this dismissal.
Supersymmetry and technicolor would be the two main solutions proposed to the hierarchy problem. Then there are other approaches, like "little Higgs", an idea using "asymptotic safety", and I'm sure
there are others. | {"url":"http://www.physicsforums.com/showthread.php?p=3774933","timestamp":"2014-04-17T12:36:16Z","content_type":null,"content_length":"95489","record_id":"<urn:uuid:9bfdf0b5-5d25-4463-b6c9-777474a69638>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00024-ip-10-147-4-33.ec2.internal.warc.gz"} |
MathGroup Archive: January 2009 [00615]
[Date Index] [Thread Index] [Author Index]
Re: Re: 0^0 = 1?
• To: mathgroup at smc.vnet.net
• Subject: [mg95857] Re: [mg95796] Re: 0^0 = 1?
• From: Andrzej Kozlowski <akoz at mimuw.edu.pl>
• Date: Wed, 28 Jan 2009 06:44:04 -0500 (EST)
• References: <gl7211$c8r$1@smc.vnet.net> <gl9mua$ajr$1@smc.vnet.net> <200901271201.HAA23416@smc.vnet.net>
On 27 Jan 2009, at 13:01, Dave Seaman wrote:
> There have been many discussions of this point in sci.math over the
> years. The emerging consensus among mathematicians (not necessarily
> my
> own opinion) seems to be that it makes sense to observe that very
> distinction,
> namely, that 0^0 = 1 when the exponent is an integer, but that the
> expression should be left undefined when the exponent is real or
> complex.
> There is also considerable precedent in various programming
> languages for
> doing this. In fact, the value of 0^0 (or 0**0 or pow(0,0), or
> whatever
> the notation may be) is often defined to be 1 of the appropriate
> base type,
> provided the exponent is an integer, and undefined otherwise.
Since you must know perfectly well that Mathematica does not
distinguish between the integer 0 and the complex number 0 your entire
argument is spurious in the context of the Mathematica language. You
can't seriously expect that Mathematica should be completely rewritten
to accommodate you on this.
>> One place I think it would it would cause problems is in
>> Mathematica Limit
>> and Series code. Things that do not evaluate to Indeterminate are
>> often
>> taken as "correct" limiting values. This is one reason that having
>> values
>> at discontinuities e.g. on branch cuts requires careful treatment,
>> special
>> code, is a source of bugs, etc. While I have not tried the
>> experiment, I
>> would venture to guess that making 0^0 evaluate to 1 would bring
>> substantial new troubles to that part of the code base. As I alluded
>> above, this sort of change incurs a development cost that can be
>> rather
>> steep, and for no gain I can discern.
> Mathematically, the definition of a limit makes no mention of
> whether the
> expression is defined at the point in question. It's unfortunate that
> Mathematica code conflates two concepts that have nothing to do with
> each
> other.
But of course they do have something to do with each other and that
something is the concept of continuity of a function. If Mathematica
follows the conventions that functions are defined only where they are
continuous than it is reasonable, when computing a limit, to fist
check if the function has a value at the point where the limit is
being computer. If so, that value can be returned. This would give a
substantial benefit in terms of performance, which is of primary
importance for computer programs.
> There are several benefits that I can see for defining 0^0 = 1.
> 1) The binomial theorem says
> (a+b)^n = Sum[Binomial[n,k] a^k b^(n-k),{k,0,n}]
> but in order for this to work for the case a=0 or b=0, we need 0^0
> = 1.
> 2) The derivative of x^n is D[x^n,x] = n x^(n-1), but in order for
> this to work for the case n=1, we need 0^0 = 1.
> 3) The MacLaurin series for f[x] is given by
> Sum[D[f,{x,k}][0]/k! x^k,{k,0,Infinity}]
> but for this to work for x = 0 we need 0^0 = 1. The Exp[0]
> argument that I mentioned previously is a special case.
> In each of these cases, the exponent is the integer 0. The base may
> be
> real or complex.
All these are benefits only in respect of mathematical elegance. I
cant see any benefit in any of the above in respect of computing
performance - which is what counts in programs such as Mathematica.
Andrzej Kozlowski
• References: | {"url":"http://forums.wolfram.com/mathgroup/archive/2009/Jan/msg00615.html","timestamp":"2014-04-18T10:54:49Z","content_type":null,"content_length":"29099","record_id":"<urn:uuid:f3bf8230-9206-4e0a-ba88-0fd5c20ff39b>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00656-ip-10-147-4-33.ec2.internal.warc.gz"} |