Decode :) Re: Decode :) I decoded that one. The name was given to me by pappym, the leader of our clan. Also, he was considered the greatest m that ever lived. In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
{"url":"http://mathisfunforum.com/viewtopic.php?id=18847","timestamp":"2014-04-17T06:42:23Z","content_type":null,"content_length":"19804","record_id":"<urn:uuid:f7e89124-d331-414d-aac3-a1370822349e>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00412-ip-10-147-4-33.ec2.internal.warc.gz"}
Lucas Sequences in Cryptography

This is a short note on the practical usefulness of Lucas sequences in applied cryptography.

A Lucas sequence is a sequence of integers characterized by two parameters, P and Q: V_0 = 2, V_1 = P, and V_k = P*V_{k-1} - Q*V_{k-2}. In practice Q is always 1 and the sequence is taken modulo a large integer. Calculating an element of a Lucas sequence is very similar to exponentiation; it may be helpful to think of P as the base and the index as the exponent.

The following algorithm calculates V_e(p, 1) mod n, i.e., the e-th element mod n of the Lucas sequence characterized by P=p and Q=1. It uses m modular multiplications and m modular squarings, where m is the bit length of e. Therefore, it's about twice as slow as a modular exponentiation to the power e mod n.

    Integer Lucas(const Integer &e, const Integer &p, const Integer &n)
    {
        unsigned i = e.BitCount();
        if (i==0)
            return 2;

        Integer v=p, v1=(p*p-2)%n;

        i--;            // the top bit of e is already accounted for by v and v1
        while (i--)
        {
            if (e[i])   // if i-th bit of e is 1
            {
                v = (v*v1 - p) % n;
                v1 = (v1*v1 - 2) % n;
            }
            else
            {
                v1 = (v*v1 - p) % n;
                v = (v*v - 2) % n;
            }
        }
        return v;
    }

One application for Lucas sequences is primality testing. A theorem similar to Fermat's Little Theorem states that if n is prime and Jacobi(P^2-4, n) == -1, then V_{n+1}(P, 1) mod n == 2. The following algorithm uses this theorem as a probable primality test. A combination of this test and the strong probable prime test to the base 2 is extremely fast and reliable. In fact no composite number is known to pass both tests, and the total amount of time for the combined test is no more than that of 3 modular exponentiations.
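To make the ladder concrete, here is a minimal sketch in plain C++ with uint64_t standing in for Crypto++'s Integer (so it only works for 2 < n < 2^32, where every product below fits in 64 bits). The names lucasV and lucasNaive are ours, not Crypto++ API; lucasV mirrors the algorithm above, and lucasNaive checks it against the defining recurrence.

```cpp
#include <cassert>
#include <cstdint>

// Same bit-scanning ladder as above: maintain (V_k, V_{k+1}) and apply
// V_{2k} = V_k^2 - 2, V_{2k+1} = V_k*V_{k+1} - P (all mod n, with Q = 1).
uint64_t lucasV(uint64_t e, uint64_t p, uint64_t n) {
    p %= n;
    if (e == 0) return 2 % n;
    uint64_t v = p;                          // V_1
    uint64_t v1 = (p * p + n - 2) % n;       // V_2 = P^2 - 2
    int i = 63;
    while (!((e >> i) & 1)) --i;             // locate the top bit of e
    while (i-- > 0) {                        // top bit already consumed by (v, v1)
        if ((e >> i) & 1) {
            v  = (v * v1 % n + n - p) % n;   // V_{2k+1} = V_k*V_{k+1} - P
            v1 = (v1 * v1 % n + n - 2) % n;  // V_{2k+2} = V_{k+1}^2 - 2
        } else {
            v1 = (v * v1 % n + n - p) % n;   // V_{2k+1} = V_k*V_{k+1} - P
            v  = (v * v % n + n - 2) % n;    // V_{2k}   = V_k^2 - 2
        }
    }
    return v;
}

// Reference check: V_0 = 2, V_1 = P, V_k = P*V_{k-1} - V_{k-2} (mod n).
uint64_t lucasNaive(uint64_t e, uint64_t p, uint64_t n) {
    p %= n;
    uint64_t a = 2 % n, b = p;               // V_0, V_1
    for (uint64_t k = 0; k < e; ++k) {
        uint64_t next = (p * b % n + n - a) % n;
        a = b;
        b = next;
    }
    return a;
}
```

With P=3 the sequence starts 2, 3, 7, 18, 47, 123, so lucasV(5, 3, 1000000) returns 123; and since Jacobi(3^2-4, 7) = Jacobi(5, 7) = -1 with 7 prime, the theorem above predicts lucasV(8, 3, 7) == 2.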
    bool IsStrongLucasProbablePrime(const Integer &n)
    {
        if (n[0]==0)    // n is even
            return n==2;

        Integer b=1, d;
        unsigned int i=0;
        int j;

        do
        {
            if (++i==64 && n.IsSquare())    // avoid infinite loop if n is a square
                return false;
            ++b; ++b;
            d = (b.Square()-4)%n;
        } while ((j=Jacobi(d,n)) == 1);

        if (j==0)
            return false;

        Integer n1 = n-j;
        unsigned int a;

        // calculate a = largest power of 2 that divides n1
        for (a=0; ; a++)
            if (n1[a])
                break;
        Integer m = n1>>a;

        Integer z = Lucas(m, b, n);
        if (z==2 || z==n-2)
            return true;
        for (i=1; i<a; i++)
        {
            z = (z.Square()-2)%n;
            if (z==n-2)
                return true;
            if (z==2)
                return false;
        }
        return false;
    }

Lucas sequences can also be used for public key encryption and signature systems in a manner similar to RSA, but using Lucas sequences modulo a composite number instead of exponentiation. This has roughly the same security as RSA for the same size key, but is about twice as slow. Lucas sequence analogues of Diffie-Hellman and ElGamal can also be constructed. Compared to DH and ElGamal at the same level of security, they only require a modulus half the size, because their security is based on the discrete log problem in GF(p^2) rather than GF(p). Because of the smaller modulus used, and depending on your modular multiplication algorithm, they are also 50 to 100 percent faster. For more details, see the Crypto '95 paper "Some Remarks on Lucas-Based Cryptosystems" by Bleichenbacher, Bosma, and Lenstra.

In summary, Lucas sequences are very useful for fast and reliable primality testing. The Lucas sequence analogue of RSA is relatively less efficient, but the Lucas sequence analogues of Diffie-Hellman and ElGamal are relatively more efficient. However, Lucas sequence based cryptosystems have not received as much scrutiny as the more popular exponentiation based ones, so they should be used with caution.

P.S. C++ implementations of the above mentioned algorithms and cryptosystems can be found in Crypto++.
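As a concrete companion, here is a self-contained sketch of the plain (non-strong) version of the theorem's test for word-sized n, assuming n < 2^32 so products fit in 64 bits. The strong refinement and big-integer arithmetic are left to Crypto++; every name here is our own rather than Crypto++ API, and the V-ladder from earlier is repeated so the sketch stands alone.

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>
#include <utility>

// V_e(P,1) mod n, same ladder as in the note's Lucas() routine.
static uint64_t lucasV(uint64_t e, uint64_t p, uint64_t n) {
    p %= n;
    if (e == 0) return 2 % n;
    uint64_t v = p, v1 = (p * p + n - 2) % n;   // V_1, V_2
    int i = 63;
    while (!((e >> i) & 1)) --i;                // find the top bit of e
    while (i-- > 0) {
        if ((e >> i) & 1) {
            v  = (v * v1 % n + n - p) % n;      // V_{2k+1}
            v1 = (v1 * v1 % n + n - 2) % n;     // V_{2k+2}
        } else {
            v1 = (v * v1 % n + n - p) % n;      // V_{2k+1}
            v  = (v * v % n + n - 2) % n;       // V_{2k}
        }
    }
    return v;
}

// Jacobi symbol (a/n) for odd n, via the standard binary algorithm.
static int jacobi(uint64_t a, uint64_t n) {
    a %= n;
    int t = 1;
    while (a != 0) {
        while (a % 2 == 0) {
            a /= 2;
            if (n % 8 == 3 || n % 8 == 5) t = -t;   // factor (2/n)
        }
        std::swap(a, n);                            // quadratic reciprocity
        if (a % 4 == 3 && n % 4 == 3) t = -t;
        a %= n;
    }
    return n == 1 ? t : 0;
}

static bool isSquare(uint64_t n) {
    uint64_t r = (uint64_t)std::sqrt((double)n);
    for (uint64_t s = (r > 0 ? r - 1 : 0); s <= r + 1; ++s)
        if (s * s == n) return true;
    return false;
}

// Pick the smallest P = 3, 5, 7, ... with Jacobi(P^2-4, n) == -1, then
// declare n a probable prime iff V_{n+1}(P,1) == 2 (mod n).
bool isLucasProbablePrime(uint64_t n) {
    if (n < 9) return n == 2 || n == 3 || n == 5 || n == 7;
    if (n % 2 == 0) return false;
    uint64_t p = 3;
    int j;
    while ((j = jacobi(p * p - 4, n)) == 1) {
        if (isSquare(n)) return false;  // perfect squares never give j == -1
        p += 2;
    }
    if (j == 0) return false;  // n shares a factor with p*p-4 (fine for small odd n)
    return lucasV(n + 1, p, n) == 2;
}
```

Primes always pass by the theorem above; composites are only probably rejected, which is why the note pairs this with a base-2 strong probable prime test in practice.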
{"url":"http://www.weidai.com/lucas.html","timestamp":"2014-04-16T21:53:23Z","content_type":null,"content_length":"4841","record_id":"<urn:uuid:789b25a9-413b-48f1-b609-26d8a00eb4ac>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00336-ip-10-147-4-33.ec2.internal.warc.gz"}
Who's On First? - Ranking Performance Using Excel

No, this isn't a variation on the classic Abbott and Costello comedy routine. It's a discussion of how to use Excel formulas to rank members of a group according to some performance measure.

One way of finding out who the top performers in a list are is to sort the list in descending order by the value that reflects the performance. Of course, that would mean having to re-sort the list every time you want to see the new ranking. This article will demonstrate how to perform ranking without re-sorting the underlying list, using formulas. These formulas use three Excel functions, LARGE(), INDEX(), and MATCH(), to rank individuals according to their total sales. The companion workbook for this article, RANKER.xlsx, can be downloaded from here. All data in the workbook is fictitious.

The Challenge

We have a large list of sales representatives. We want to know, without sorting the list, the names of the top 10 sales people. The leaders list should always display the current top 10 sellers based on current data in the sales representatives list. At first glance, this problem seems to call for some sort of lookup functionality. Unfortunately, VLOOKUP() is not up to the challenge because that function can only use the first column of a table or list as an index to the list, and our list of sales representatives has the person's id or employee number in its first column. The more general LOOKUP() function is also unsuitable because the index list it refers to must be sorted in ascending order.

The Functions

For complete descriptions of the functions used in this example refer to the linked help pages, LARGE(), INDEX(), and MATCH().

LARGE

The LARGE() function searches a list of values and returns the value that is in the relative position in the list that you specify. For example, you can find the largest value in a list without having to sort the list. The function is very flexible because it has an argument that lets you specify any relative position in the list, from 1 (largest value) to the very last position (by specifying a number equal to the total number of items in the list). All this can be done without ever having to sort the list.

INDEX

The INDEX() function returns the value found at a specific position in a list.

MATCH

The MATCH() function searches a list and returns the number of the list row in which the value you have asked it to search for is found. The example used in this article specifically uses a one-column list. If the list contains duplicate values, MATCH() will find the first of the duplicates over and over, for as many times as the value is duplicated.

Solving the Duplicates Issue

While the probability of two representatives having exactly the same sales value may be low, the very possibility that a duplicate might exist necessitates a strategy to deal with duplicates should they occur. In this example, we have used a bit of formula trickery to create a unique way of identifying duplicate values. This formula, when copied down a column, will count the instances of a value in the specified range. Notice the use of mixed references in the first argument. For this technique to work, the list may not be formatted as an Excel 2007/2010 Table. If the data is formatted as a Table, convert it to a range before entering the formula. This expression is part of the more complex formula contained in cells F2 to F224 on the Demo_Notable worksheet. When copied down a column, the reference to the start of the range remains fixed on E2 while the end of the range and the criteria cell reference are incremented for each successive row. However, that's not the formula trickery I mentioned. In order to use the LARGE() function, we need numeric values. At the same time, in order to create a useful identifier that will not distort results, we need to work with numbers formatted as text.

The trick is to convert intermediate calculations to text and then reconvert the final result back to a number, not just the textual representation of the number. This is what the full formula looks like. When you copy this formula down the column, say to E3, Excel modifies the relative cell references, so in row 3 the formula changes accordingly. The idea is to convert the sales value to text with a fixed number of digits and then concatenate that text value with the text representation of the count of duplicates. Simply adding the two values together arithmetically would distort the value and not bring us any closer to having a unique value that distinguishes between duplicate instances. COUNTIF() counts how many times the value of interest occurs in the specified range, which grows row by row as you copy the formula down the column. Here's what the formula looks like when you copy it down several rows. Notice that the second argument of the COUNTIF() function changes the end of the range being counted for each row the formula is copied to. However, the beginning of the range is anchored to cell E2 because that part of the reference is absolute.

Caution: Using this technique with a very large list may be very slow. Lists of a few hundred items or fewer should not be affected.

The Final Solution

Six formulas drive the solution. Here are the formulas from the second row of the Try It worksheet. Once the individual formulas have been created they can be copied down their respective columns (using autofill). The formula in F2 converts the sales value in E2 to text and concatenates that value with a four-character representation of the result of counting the frequency with which the value in E2 appears. This gives us a unique index value that ensures we will be able to find each instance of a duplicate value. In I2 the formula finds the value in column E that is x positions from the largest value. H2 contains the value 1, so the formula in I2 returns the largest value from the list. This result is not quite adequate because it can't distinguish duplicate values. J2 finds the row number of the value in I2. When there are duplicate values in column E, the result in this column will always indicate the first row in which the value occurs. The formulas in K2 and L2 are very similar to those in I2 and J2, EXCEPT the formula in K2 uses the values in column F to determine rank. The formulas in column F ensure that there are no duplicates to be ranked. The method used has a significant side effect: it yields a higher rank for the first occurrence of a duplicate and progressively lower ranks for each subsequent occurrence. The video accompanying this article includes a demonstration of the effect of duplicates on ranking. The final formula in cell M2 uses the row number calculation in L2 to find the name of the rep having a particular rank.
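For readers who prefer code to worksheet formulas, the same technique can be sketched in C++ (the article itself uses only Excel formulas; the names and figures below are invented for illustration): build a text key from the zero-padded sales value plus a code derived from the running duplicate count, then take the k-th largest key and match it back to its row.

```cpp
#include <algorithm>
#include <cassert>
#include <cstdio>
#include <functional>
#include <string>
#include <vector>

struct Rep { std::string name; long sales; };  // sales assumed non-negative

std::vector<std::string> topN(const std::vector<Rep>& reps, std::size_t n) {
    // Step 1: build unique text keys (mirrors F2 copied down the column; the
    // inner loop over rows 0..i plays the role of COUNTIF($E$2:E2, E2)).
    std::vector<std::string> keys;
    for (std::size_t i = 0; i < reps.size(); ++i) {
        int seen = 0;
        for (std::size_t j = 0; j <= i; ++j)
            if (reps[j].sales == reps[i].sales) ++seen;
        char buf[32];
        // Appending 10000 - seen makes the FIRST occurrence of a duplicate
        // get the larger key, matching the side effect described above.
        std::snprintf(buf, sizeof buf, "%010ld%04d", reps[i].sales, 10000 - seen);
        keys.push_back(buf);
    }
    // Step 2: LARGE() over the keys -- visit them in descending order.
    std::vector<std::string> sorted = keys;
    std::sort(sorted.begin(), sorted.end(), std::greater<std::string>());
    // Step 3: for ranks 1..n, MATCH the k-th largest key back to its row,
    // then INDEX the name column at that row.
    std::vector<std::string> leaders;
    for (std::size_t k = 0; k < n && k < sorted.size(); ++k) {
        std::size_t row = std::find(keys.begin(), keys.end(), sorted[k]) - keys.begin();
        leaders.push_back(reps[row].name);
    }
    return leaders;
}
```

Like the worksheet version, nothing here re-orders the original list; only a copy of the key column is sorted, so the underlying rows keep their positions.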
{"url":"http://officetipsandmethods.com/?p=774","timestamp":"2014-04-19T07:13:38Z","content_type":null,"content_length":"44343","record_id":"<urn:uuid:f3804323-5e35-4f7c-87c4-9a44a6d996fb>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00228-ip-10-147-4-33.ec2.internal.warc.gz"}
Box-rectangular drawings of plane graphs - Proc. 7th International Symp. on Graph Drawing (GD '99), 1999

Papers citing this work:

- Cited by 13 (5 self).
In this paper we investigate the general position model for the drawing of arbitrary degree graphs in the D-dimensional (D >= 2) orthogonal grid. In this model no two vertices lie in the same grid hyperplane.

- Proc. 12th International Symp. on Graph Drawing (GD '04), 2004. Cited by 10 (3 self).
We study straight-line drawings of graphs with few segments and few slopes. Optimal results are obtained for all trees. Tight bounds are obtained for outerplanar graphs, 2-trees, and planar 3-trees. We prove that every 3-connected plane graph on n vertices has a plane drawing with at most 5n/2 segments and at most 2n slopes. We prove that every cubic 3-connected plane graph has a plane drawing with three slopes (and three bends on the outerface). Drawings of non-planar graphs with few slopes are also considered. For example, interval graphs, co-comparability graphs and AT-free graphs are shown to have drawings in which the number of slopes is bounded by the maximum degree. We prove that graphs of bounded degree and bounded treewidth have drawings with O(log n) slopes. Finally we prove that every graph has a drawing with one bend per edge, in which the number of slopes is at most one more than the ...

- Journal of Algorithms, 2000. Cited by 7 (0 self).
In this paper we introduce a new drawing style of a plane graph G, called proper box rectangular (PBR) drawing. It is defined to be a drawing of G such that every vertex is drawn as a rectangle, called a box, each edge is drawn as either a horizontal or a vertical line segment, and each face is drawn as a rectangle. We establish necessary and sufficient conditions for G to have a PBR drawing. We also give a simple linear time algorithm for finding such drawings. The PBR drawing is closely related to the box rectangular (BR) drawing defined by Rahman, Nakano and Nishizeki [17]. Our method can be adapted to provide a new simpler algorithm for solving the BR drawing problem. The problem of "nicely" drawing a graph G has received increasing attention [5]. Typically, we want to draw the edges and the vertices of G on the plane so that certain aesthetic quality conditions and/or optimization measures are met. Such drawings are very useful in visualizing planar graphs and ...

- SIAM J. Discrete Math., Vol. 18, No. 1, pp. 19-29, 2004. Cited by 6 (1 self).
Let G be an n-node planar graph. In a visibility representation of G, each node of G is represented by a horizontal line segment such that the line segments representing any two adjacent nodes of G are vertically visible to each other. In the present paper we give the best known compact visibility representation of G. Given a canonical ordering of the triangulated G, our algorithm draws the graph incrementally in a greedy manner. We show that one of three canonical orderings obtained from Schnyder's realizer for the triangulated G yields a visibility representation of G no wider than floor((22n-40)/15). Our easy-to-implement O(n)-time algorithm bypasses the complicated subroutines for four-connected components and four-block trees required by the best previously known algorithm of Kant. Our result provides a negative answer to Kant's open question about whether floor((3n-6)/2) is a worst-case lower bound on the required width. Also, if G has no degree-three (respectively, degree-five) internal node, then our visibility representation for G is no wider than ...

- 2003. Cited by 3 (2 self).
In an orthogonal drawing of a plane graph each vertex is drawn as a point and each edge is drawn as a sequence of vertical and horizontal line segments. A bend is a point at which the drawing of an edge changes its direction. Every plane graph of maximum degree at most four has an orthogonal drawing, but may need bends. A simple necessary and sufficient condition has not been known for a plane graph to have an orthogonal drawing without bends. In this paper we obtain a necessary and sufficient condition for a plane graph G of maximum degree three to have an orthogonal drawing without bends. We also give a linear-time algorithm to find such a drawing of G if it exists.

- Cited by 1 (1 self).
Data-centric storage is a very important concept for sensor networks, where data of the same type are aggregated and stored in the same set of nodes. It is essential for many sensornet applications because it supports efficient in-network query and processing. Multiple approaches have been proposed so far. Their main technique is the hashing technique, where a hashing function is used to map data with the same key value to the same geometric location, and sensors closest to the location are made to store the data. Such solutions are elegant and efficient for implementation. However, two difficulties still remain: load balancing and the support for range queries. When the data of some key values are more abundant than data of other key values, or when sensors are not uniformly placed in the geometric space, some sensors can store substantially more data than other sensors. Since hashing functions map data with similar key values to independent locations, to query a range of data, multiple query messages need to be sent, even if the data of some key value in the range do not exist. In addition to the above two difficulties, obtaining the locations of sensors is also a non-trivial task. In this paper, we propose a new data-centric storage method based on sorting. Our method is robust for different network models and works for unlocalized homogeneous sensor networks, i.e., it requires no location information and no super nodes that have significantly more resources than other nodes. The idea is to sort the data in the network based on their key values, so that queries - including range queries - can be easily answered. The sorting method balances the storage load very well, and we present a sorting algorithm that is both decentralized and very efficient. We present both rigorous theoretical analysis and extensive simulations for analyzing its performance. They show that the sorting-based method has excellent performance for both communication and storage.

- In an orthogonal drawing of a planar graph G, each vertex is drawn as a point, each edge is drawn as a sequence of alternate horizontal and vertical line segments, and any two edges do not cross except at their common end. A bend is a point where an edge changes its direction. A drawing of G is called an optimal orthogonal drawing if the number of bends is minimum among all orthogonal drawings of G. In this paper we give an algorithm to find an optimal orthogonal drawing of any given series-parallel graph of maximum degree at most three. Our algorithm takes linear time, while the previously known best algorithm takes cubic time. Furthermore, our algorithm is much simpler than the previous one. We also obtain a best possible upper bound on the number of bends in an optimal ...

- Canonical ordering is an important tool in planar graph drawing and other applications. Although a linear-time algorithm to determine canonical orderings has been known for a while, it is rather complicated to understand and implement, and the output is not uniquely determined. We present a new approach that is simpler and more intuitive, and that computes a newly defined leftist canonical ordering of a triconnected graph, which is a uniquely determined leftmost canonical ordering.

- Contact graphs of isothetic rectangles unify many concepts from applications including VLSI and architectural design, computational geometry, and GIS. Minimizing the area of their corresponding rectangular layouts is a key problem. We study the area-optimization problem and show that it is NP-hard to find a minimum-area rectangular layout of a given contact graph. We present O(n)-time algorithms that construct O(n^2)-area rectangular layouts for general contact graphs and O(n log n)-area rectangular layouts for trees. (For trees, this is an O(log n)-approximation algorithm.) We also present an infinite family of graphs (resp., trees) that require Omega(n^2) (resp., Omega(n log n)) area. We derive these results by presenting a new characterization of graphs that admit rectangular layouts using the related concept of rectangular duals. A corollary to our results relates the class of graphs that admit rectangular layouts to rectangle of influence drawings.
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=840217","timestamp":"2014-04-24T14:59:49Z","content_type":null,"content_length":"36880","record_id":"<urn:uuid:69bb3655-504d-4144-ba9c-85ca3c75d49f>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00195-ip-10-147-4-33.ec2.internal.warc.gz"}
Posts by Rezu Total # Posts: 138 What is 12.8 divided by .04? What is 1 1/2 divided by 1/2? 3? 1 1/2 converted to an improper fraction is 3/2 3/2 divided by 1/2 3/2 x 2/1 6/2 6/2 reduced = 3 If Sally places a rocket that is 3 feet 6 inches tall atop a launch pad that is 1 foot 8 inches tall, how tall will the entire unit, rocket and launch pad, be when she is done? A.5 feet 4 inches B.5 feet 2 inches C.1 foot 8 inches D.4 feet 2 inches B? 3 feet 6 inches converts ... y = -x+8 and x+y=7 OR y = -x +7 Lines are parallel, both with slopes = -1 y = -x+8 y = -x+7 There is no solution -2y=6-6 -2y=0 y=0 ans. Proof: 3*6-6*0=18 18+0=18 18=18 One solution: (6,0). algebra 1 If you mean infinite, not "indinetly", then the answer is C. 2x-y-1=0 subtract 2x from each side -y-1 = -2x + 0 add 1 to each side -y = -2x + 1 multiply everything by -1 y = 2x-1 The equations are equal, so the system has infinite solutions. boring history Just like America, it had its defenses and reasons to "wage" war. The Aztecs were such a powerful civilization that they conquered many lands because they were so warlike. Allie has an income which is five eighths that of Basil. Allie's expenses are one-half those of Basil and Allie saved 40% of his income. What is the percentage of his income that Basil saves? 25%? Yes, it's D. Science 7R - Hw Qs. Check (plzzzz read!!!!) The veins are the ones that get larger Science 7R - Hw Qs. Check (plzzzz read!!!!) No, the arteries get smaller and smaller. As the arteries divide further they become smaller and smaller Science 7R - Hw Qs. Check Q2 (plzzzz read!!!!) They get larger and larger to circulate more blood. Yeah, that is right. Yup. It's $900.00 to Anon 1 kg = 2.2 pounds 1 ton = 2,000 pounds Parts of speech True. Because the red fish is shorter. They are both measured in feet. Which one has a higher number? The cost of a lunch of 3 sandwiches, 7 cups of coffee and 1 donut is $3.15.
The cost of a lunch of 4 sandwiches, 10 cups of coffee and 1 donut was $4.20 at the same cafe. How much will 1 sandwich, 1 cup of coffee and 1 donut cost? s= sandwich cost c= coffee d= donut 3s+7c+d = ... Someone good in History!!!!!! It was Lincoln's way of fighting back at the South. It angered the South and probably a minority in the North. And it freed slaves. Lincoln signed the Emancipation Proclamation to free slaves. One example of a tissue is- A.heart B.nucleus C.muscle D.mitochondria C? What is the function of a cell's nucleus? cell reproduction? What was the name of the original colony established in Virginia in 1607? Colony of Virginia? When did the Pilgrims arrive at Plymouth Rock? x= number of jumps taken. 3x = 45 + 2x x=45 A dog chasing a rabbit, which has a start of 45m, jumps 3m every time the rabbit jumps 2m. In how many leaps does the dog overtake the rabbit? 45? Betty and Tracy planned a 5000km trip in an automobile with five tires, of which four are in use at any time. They plan to interchange them so that each tire is used the same number of kilometers. What is the number of kilometers each tire will be used? I don't get this questio... Three people share a car for a period of one year and the mean number of kilometers travelled by each person is 152 per month. How many kilometers will be travelled in one year? 3 x 152 km/month x 12 months/year = 5472 km/year. Is this correct? A car traveled 281 miles in 4 hours 41 minutes. What was the average speed of the car in miles per hour? 4 hours 41 minutes = 4 × 60 + 41 = 281 minutes Average speed S is given by distance/time. S = 281 miles/281 minutes = 1 mile/minute = 60 miles/hour In 1969 the price of 5 kilograms of flour was $0.75. In 1970 the price was increased 15 percent. In 1971, the 1970 price was decreased by 5 percent. What was the price of 5 kilograms of flour in 1971? If the price of 5 kg of flour in 1969 = $0.75, then 1970 = $0.75 x 1.15 = $0...
After a Math test, each of the twenty-five students in the class got a peek at the teacher's grade sheet. Each student noticed five A's. No student saw all the grades and no student saw his or her own grade. What is the minimum number of students who scored A on the te... What group first delivered the mail? Thank You. What 2 countries were Allies of the U.S. during both World Wars? Marvin's Taxi Service charges $0.30 for the first kilometre and $0.05 for each additional km. If the cab fare was $3.20, how far did the Taxi go? Cost of first kilometer = $0.30 Total cost of additional kilometers = (3.20 - 0.30) = 2.90 Total number of kilometers = 1 + ($2.9... It takes one man one day to dig a 2m x 2m x 2m hole. How long does it take 3 men working at the same rate to dig a 4m x 4m x 4m hole? x = rate of one man = 2^3 m^3/1 day = 8 m^3/day 3x = rate of three men = 3(8 m^3/day) = 24 m^3/day 64 m^3/(24 m^3/day) = 2 2/3 days Is this correct? A rectangular chalk board is 3 times as long as it is wide. If it were 3 metres shorter and 3 metres wider, it would be square. What are the dimensions of the chalk board? 3 meters wide 9 meters long Thank You! Three ducks and two ducklings weigh 32 kg. Four ducks and three ducklings weigh 44 kg. All ducks weigh the same and all ducklings weigh the same. What is the weight of two ducks and one duckling? A student at St. F. X. decided to become his own employer by using his car as a taxi for the summer. It costs the student $693.00 to insure his car for the 4 months of summer. He spends $452.00 per month on gas. If he lives at home and has no other expenses for the 4 months of... f(x) = x - 1 g(x) = x + 1 f(g(x)) = g(x) - 1 = x + 1 - 1 = x g(f(x)) = f(x) + 1 = x - 1 + 1 = x English (curious question) Blacksmiths make implements from metal, from farm tools like horseshoes to weapons like swords and axes. Mostly they used iron Social Studies Where were the original Olympic games held?
Which of the following organisms would most likely be found grazing in a meadow? deer owls cougars mosquitoes deer? Social Studies Where were the original Olympic games held? Roger made a mixture of salt and sand. Which of the following procedures explains the easiest way to separate the mixture? A.Pour the mixture into a glass of water and stir it until the salt dissolves. Pour the mixture through a filter. Evaporate the water. B.Pour the mixture on... Water pollution is harming an ecosystem. Many of the organisms that once lived in the ecosystem moved to a new area. The organisms have moved because if they stayed they would most likely- A.thrive B.become ill or perish C.reproduce more frequently D.create more pollution B? I know that transmitted means passed from one thing to another Absorbed means to take in Reflected means to bounce off (example: a mirror) Refracted means to bend Light that passes through a pair of eyeglasses is- a.transmitted b.absorbed c.reflected d.refracted D? 1.grabbed 2.slipped 3.dripped 4.sprayed 5.splashed 383,4000 900 Do you mean add the rounded numbers to the original numbers? Example: Your numbers are 7, 9, 5, 3, 15, 15. There are 6 numbers altogether. To find the mean you need to add all the numbers and divide by how many there are. 7 + 9 + 5 + 3 + 15 + 15 = 54 54/6 = 9 Physical Science A chemical change is a change in which something new is formed Physical change is a change in which the substance changes form but keeps its same chemical composition In ice, each of the molecules is connected to the molecules next to it, making it hard. Freezing is a change o... SS7R - 3 HW Question Check 4.He was concerned about whether the purchase was unconstitutional. I guess you could add that but your answer is good to me 5.to study the area's plants, animal life, and geography. 6. She had a sense of what the landscapes said about direction, where they were, and where ...
Making a Table is a problem-solving strategy that people can use to solve mathematical word problems by writing the information in a better and easier way. Name the 3 countries and their leaders of the Allied Powers during World War II. 60%=3/5 60%=.60 a positive number Social Studies 1,250 miles, or 2,000 kilometers Social Studies To see its unique nature, its beauty, and all the wildlife. Some people go to study the marine life. Social Studies The Great Barrier Reef was formed over years and years of growth. Coral reefs are made up of the skeletons of dead coral Social Studies Ok, Thanks! :) What event occurred in the U.S. between 1861-1865? Civil War? What is located at 0 latitude and 0 longitude? Gulf of Guinea Name the 3 famous Civil Rights leaders for African-Americans. Harriet Tubman, Frederick Douglass, and Dred Scott? Which of the following is not a variable in an investigation designed "how much light is needed for a spider plant to grow?" A.type of plant B.amount of water C.source of light D.amount of light B? Which of the following is the best investigation question? A.Do heavi...
What info in paragraph 6 helps the reader know that Eduardo is excited about volunteering? A. He does not want to be late for the special project B. He barely sleeps Friday night and wakes up early Saturday morning C. He tells his parents that he is helping the Volunteer Fire Dep...

In Paragraph 2, which words help the reader infer that driving a fire truck involves concentration? A. fire alarm sounded and rushing past B. I recognized Sarita Mendoza behind the wheel C. closed my eyes and pictured myself riding along D. serious expression on her face, and focuse...

1. My big adventure with our town's volunteer fire department started on the first day of summer vacation. The weather during the spring season had been unusually dry. It was Saturday and another hot, arid morning. It was as dry as a bone outdoors. A wildfire broke out on the ed...

In Paragraph 3, depriving means F. giving something G. taking something away H. teaching something J. paying for something. G?

Julie's letter shows that she believes war could have been avoided if A. Lincoln had not called for soldiers to enlist B. The North had compromised C. The ...

Dearest Miriam, 1. I am grieved to hear that you have left the lovely home where I have enjoyed such pleasant visits. My heart too is heavy, for I would wish none of this heartbreak on you. If only the South could have found a way to compromise. This Union is a precious thing and must...

Ok, thanks to both of you!

From these paragraphs it shows that A. the blockade will probably end soon B. the blockade will not succeed in hurting the South C. a blockade is an unfair strategy in the war D. a blockade is a quick way to end the conflict. C?

Each day life becomes a little more difficult. Northern forces have blockaded our ports. Now we are unable to ship our goods and make a living. Most people around here plant cotton and tobacco to sell. We depend on those sales. We are doing our best to get by, but it is getting ha...

Ok, thanks!
I showed my work, but I wanted to make sure I did it correctly :) So are my answers wrong?

Coming out of the grocery store, Ebree has eight coins, of which none is a half-dollar, that add up to $1.45. Unfortunately, on the way home she loses one of them. If the chances of losing a quarter, dime or nickel are equal, which coin is most probably lost? Quarter? Bart Sim...

What is one consequence of going into bankruptcy? A. Losing the ability to make a budget. B. Losing your job C. Having some of your possessions taken away D. Going to prison. C? People in debt are often said to have a budget deficit. What does that mean? A. They never set up a budget,...

Charter colonies were given by the king, and some group had the power in that colony. Royal colonies were ruled directly by the king. Proprietary colonies were usually given to a single individual, who could do as he pleased in terms of ruling that colony. I think it might be...

The Thirteen Colonies: Ok, Thanks
How to Convert Metric Units

The easiest way to start converting metric units to English units is to understand the basis of each metric unit. Metric units are based on ten (10): to get from one unit to the next, multiply the previous one by ten. If one (1) centimeter is equivalent to 10 mm, then 1 meter is equivalent to 100 cm (10 x 10 = 100) and 1 kilometer is 1000 m (100 x 10 = 1000). Assuming you already know the English
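The powers-of-ten idea above can be sketched as a small conversion table. This routes every conversion through millimeters so the arithmetic stays exact; the unit names and factors are the standard SI ones:

```python
# Metric length conversion via powers of ten, using millimeters as the base.
to_mm = {"mm": 1, "cm": 10, "m": 1000, "km": 1_000_000}

def convert(value, src, dst):
    """Convert a length from unit src to unit dst by routing through mm."""
    return value * to_mm[src] / to_mm[dst]

print(convert(1, "km", "m"))    # 1000.0
print(convert(100, "cm", "m"))  # 1.0
```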
Mathematics 327. Graph Theory A graph is a mathematical structure consisting of dots and lines. Graphs serve as mathematical models for many real-world applications: for example, scheduling committee meetings, routing of campus tours and assigning students to dorm rooms. In this course, we study both the theory and the utility of graphs. Offered at the discretion of the department.
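The committee-scheduling application mentioned in the blurb is a classic graph-coloring model: committees sharing a member (an edge) cannot meet in the same time slot (a color). Here is a minimal greedy sketch; the committee names and shared-member edges are invented for illustration:

```python
# Greedy coloring: assign each committee the smallest time slot
# not already used by a committee it shares a member with.
edges = {("A", "B"), ("B", "C"), ("A", "C"), ("C", "D")}  # made-up conflicts
nodes = sorted({v for e in edges for v in e})

colors = {}
for v in nodes:
    taken = {colors[u] for u in colors if (u, v) in edges or (v, u) in edges}
    colors[v] = min(c for c in range(len(nodes)) if c not in taken)

print(colors)  # {'A': 0, 'B': 1, 'C': 2, 'D': 0}
```

Greedy coloring is not optimal in general, but it shows how the abstract dots-and-lines structure encodes a real scheduling constraint.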
How do you show the steps to combining like terms? - WyzAnt Answers

How do you show the steps to combining like terms?

The distributive property allows you to pull the g out, and you add the numbers in front of each term, so 12g + 7g = (12 + 7)g = 19g. You start with a problem involving a variable, and by using the distributive property, you get a problem in arithmetic, something you already know. That's the step you are looking for.

For the problem you put up, you would simply add the numbers together and keep the g after it. I hope this helps.
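The distributive step above, 12g + 7g = (12 + 7)g = 19g, can be spot-checked numerically. This is only a sanity check over a few sampled values of g, not a symbolic proof:

```python
# Check 12g + 7g == (12 + 7)g == 19g for several values of g.
for g in (-3, 0, 2.5, 10):
    assert 12 * g + 7 * g == (12 + 7) * g == 19 * g
print("12g + 7g == 19g for all sampled g")
```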
Entire functions of one complex variable with prescribed value and order

In the complex plane, say $a_n \rightarrow \infty$ and $d_n$ and $A_n$ are arbitrary complex numbers. Can we find an entire function $f$ with $f(a_n)=A_n$ such that $f(z)-A_n$ has a zero of order $d_n$ at $a_n$?

Without the restriction on the orders, this is an exercise from Ahlfors, 3rd edition, p. 197, No. 1. Following his hint, an answer is $\sum_{n} g(z) \frac{A_n}{g^{\prime}(a_n)} \frac{e^{r_n(z-a_n)}}{(z-a_n)}$ for suitably chosen $r_n$, where $g(z)$ is an entire function with simple zeros at the $a_n$. I am not sure how to handle the requirement on the orders. It sounds like a standard result; I would greatly appreciate any idea or reference for this problem.

Answer (accepted): Yes, this is possible. See Theorem 15.13 in Rudin, Real and Complex Analysis:

If $\Omega \subseteq \mathbb C$ is open and $A \subseteq \Omega$ has no limit point in $\Omega$, and to each $a \in A$ there is an associated integer $m(a)$ and complex numbers $w_{n,a}\,(0 \le n \le m(a))$, then there exists a function $f$, holomorphic on $\Omega$, such that $$f^{(n)}(a) = n!\; w_{n,a}$$ for all $a \in A$, $0\le n \le m(a)$.

Answer: See the Weierstrass factorization theorem.
Comment: That only solves the case $A_n=0$. - i707107
Oriented Matroids

THE COMPLETE GRAPH WITH 12 VERTICES ON A SURFACE OF GENUS 6

Adjacent curves at a point, together with the connections of their endpoints, form triangles, all of which together form the 2-manifold. Photo: N. Haehn, Wiesbaden; design: J. Bokowski.

For more information see On the Generation of Oriented Matroids (Discrete Comp. Geometry 2000, special volume on the occasion of Branko Gruenbaum's 70th birthday).

Software from Darmstadt on oriented matroids

Recent Postscript Files

Last update: Oct. 4, 2000
|a|=|b|, which of the following must be true?

himanshuhpr (28 Oct 2012): |a|=|b|, which of the following must be true:
I. a=b
II. |a|=-b
III. -a=-b
A. I only B. II only C. III only D. I and III only E. None

Reply: Let's say |a| = 1 and |b| = 1. For |a| = 1, a can be 1 or -1; similarly, b can be 1 or -1. This reasoning is used to get the answer.

VeritasPrepKarishma (Veritas Prep GMAT Instructor): Responding to a pm: Neither method needs to be used here. Just think of the definition of mod we use to remove the mod sign: |x| = x if x >= 0 and |x| = -x if x < 0. We don't know whether a and b are positive or negative. |a|=|b| when the absolute values of both a and b are the same; the signs can be different or the same. There are 4 cases: a and b are positive; a is positive and b is negative; a is negative and b is positive; a and b are negative. For a must-be-true question, the relation should hold in every case.
I. a=b. Doesn't hold when a and b have opposite signs, e.g. a = 5, b = -5.
II. |a|=-b. Doesn't hold when b is positive, because -b becomes negative while the left-hand side is always non-negative, e.g. a = 5, b = 5: |5| ≠ -5.
III. -a=-b. Doesn't hold when a and b have opposite signs, e.g. a = 5, b = -5: -5 ≠ 5.
Answer (E).

himanshuhpr: By the highlighted statement above, do you mean that all four cases you listed should hold true for each of statements I, II, and III individually? If yes, then the only possible answer to the question would be |a|=|b| itself; please reconfirm. Thanks.

VeritasPrepKarishma: What I mean is that if we say any statement 'must be true', then it must hold for all 4 cases, i.e. both a and b positive, a positive and b negative, a negative and b positive, and both negative. For example, if statement I (a = b) must be true, then it should be true in all 4 cases.

himanshuhpr: Ok, thanks very much for the clarification. Your blogs and posts are very informative.

prep: Thanks for the explanation. I had a query on this one. Suppose numbers weren't chosen to evaluate this. Consider |a| = |b|. This can be evaluated as: a, b have the same signs, or a, b have opposite signs. Thus a = b (same signs) and (a = -b or -a = b) for opposite signs. |a| = -b would have two cases, a positive and a negative; thus a = -b or -a = -b, i.e. a = b. So we get (a = -b or -a = b) and a = b, which is what |a| = |b| boils down to. Please help me understand if I'm missing anything.

Bunuel (Math Expert): |a|=|b| basically means that the distance between a and zero on the number line is the same as the distance between b and zero on the number line. Thus either a = b (notice that it's the same as -a = -b) or a = -b (notice that it's the same as -a = b). Hope it helps.

mbaiseasy (05 Dec 2012): |a|=|b|. The equation doesn't tell us anything about the signs of a and b. All we know is that their absolute values are equal. Possibilities: |-5| = |5|, |5| = |5|, |5| = |-5|.
I. a=b: when a=5 and b=-5, this is false!
II. |a|=-b: when a=-5 and b=5, this is false!
III. -a=-b: when a=-5 and b=5, this is false!
Answer: NONE, i.e. E.

SravnaTestPrep (04 Jul 2013): Replace the mod with its equivalents. We have one of these 4 equivalents for |a|=|b|: -(a) = -(b); -(a) = b; a = -(b); a = b.
(I) is not the only possibility, because there are other possibilities, as seen above.
(II) is equivalent to -(a) = -b or a = -b. Again, these are not the only possibilities.
(III) again is not the only possibility, as there are other possibilities as seen above.
So the answer is E.
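The four-case check discussed in this thread can be brute-forced in a few lines. The values ±5 are the examples used in the thread; every pair below satisfies |a| = |b|:

```python
# Test each statement against all four sign cases of |a| == |b|.
pairs = [(a, b) for a in (-5, 5) for b in (-5, 5)]

stmt1 = all(a == b for a, b in pairs)          # I.   a = b
stmt2 = all(abs(a) == -b for a, b in pairs)    # II.  |a| = -b
stmt3 = all(-a == -b for a, b in pairs)        # III. -a = -b
print(stmt1, stmt2, stmt3)  # False False False -> answer E: none must be true
```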
Math Help: Degenerate (November 15th 2006)

What degenerate form or forms of the parabola CANNOT be obtained from the intersection of a plane and a double-napped cone? I have to use a diagram and give a complete description of how to obtain this (or these) form(s). How would I do this question?

Reply: A degenerate conic is the empty set (no intersection, which seems impossible here), a pair of intersecting lines, or a point. The way you get a pair of intersecting lines is by passing a plane perpendicular to the cone's base, but that is a type of hyperbola shape. So a point is the only thing you can get, by passing a plane parallel to the side of the cone through the vertex of the double-napped cone.
X-intercepts of a parabola (December 25th 2007)

The vertex of a parabola is (1,2). One x-intercept is 1 + √5. What is the other x-intercept? Explain. My first guess would be 1 - √5, since that's usually the outcome when you use the quadratic formula. However, I'm confused about where the vertex is incorporated into the answer.

Reply: The x-intercepts are ALWAYS equidistant from the axis of symmetry. The vertex is (1,2) and there are two x-intercepts. This means the axis of symmetry is x = 1. Thus, the intercepts are 1 + something and 1 - something. As an interesting exercise, find the average (arithmetic mean) of the two roots provided by the quadratic formula.
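The symmetry argument above can be verified numerically. Writing the parabola in vertex form y = a(x - 1)² + 2 is a hypothetical reconstruction (the post never states the equation); fitting a from the given intercept 1 + √5 and evaluating at the mirror point confirms 1 - √5 is the other intercept:

```python
import math

# Fit a from y(1 + sqrt(5)) = 0, then check the mirror point across x = 1.
r1 = 1 + math.sqrt(5)
a = -2 / (r1 - 1) ** 2           # a*(r1 - 1)^2 + 2 = 0  =>  a = -2/5
r2 = 1 - math.sqrt(5)            # reflection of r1 across the axis x = 1
y_at_r2 = a * (r2 - 1) ** 2 + 2
print(abs(y_at_r2) < 1e-9)       # True: r2 is also an x-intercept
```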
Points, Vectors, and Functions Now that we've managed to capture a few mythological beasts and have quashed them inside our laptop for processing, we want a picture of it. After all, all of that work is a tall tale until you convince your friends with the pictorial evidence. This will make it easier to remember how to capture them the next time. Much like scalar functions, when we draw vector functions, we get a much better idea of what they do and how they work. Like scalar functions, we begin the same way by plotting a table of values, graphing those values, and connecting the dots. See, math is like a game of connect-the-dots. When graphing vector functions, we should be sure to know what values of the input variable we want to consider. Then we calculate the output values given by the vector function at those points.
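The table-of-values procedure described above can be sketched in code. The particular vector function r(t) = (cos t, sin t) is an assumed example, not one from the text:

```python
import math

# Tabulate a vector function at chosen inputs, the "connect-the-dots" table.
ts = [0, math.pi / 2, math.pi]
points = [(round(math.cos(t), 3), round(math.sin(t), 3)) for t in ts]
print(points)  # [(1.0, 0.0), (0.0, 1.0), (-1.0, 0.0)]
```

Plotting these output points and connecting them in order of increasing t gives the sketch of the curve.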
Santos-Dumont: Flying Balloon Puzzle - Solution

The Puzzle: A balloon propelled by some mechanical device travels five miles in ten minutes with the wind, but requires one hour to go back again to the starting point against the wind. How long would it have taken to go the whole ten miles in a calm, without any wind?

Our Solution: The balloon travels five miles in ten minutes with the wind, but requires one hour to go back to the starting point against the wind, so in 10 minutes it travels 5/6 of a mile against the wind. The wind helps one way exactly as much as it hinders the other, so in 20 minutes of calm the balloon would cover what it covers in 10 minutes with the wind plus 10 minutes against it, i.e. 5 + 5/6 = 35/6 miles. At that rate, the whole ten miles would take (20 x 6 x 10)/35 = 1200/35 minutes, i.e. 34 minutes 17 and 1/7 seconds.

Puzzle Author: Sam Loyd
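The solution's arithmetic can be verified exactly with rational numbers. This sketch averages the with-wind and against-wind speeds, which is valid because the wind's contribution cancels:

```python
from fractions import Fraction

# Speeds in miles per minute, taken from the puzzle statement.
with_wind = Fraction(5, 10)            # 5 miles in 10 minutes
against_wind = Fraction(5, 60)         # 5 miles in 60 minutes
calm = (with_wind + against_wind) / 2  # wind cancels in the average
minutes = Fraction(10) / calm
print(minutes)  # 240/7, i.e. 34 minutes 17 1/7 seconds
```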
Introduction to Probability Models Results 1 - 10 of 315 , 1999 "... The rapid growth of the World-Wide Web poses unprecedented scaling challenges for general-purpose crawlers and search engines. In this paper we describe a new hypertext resource discovery system called a Focused Crawler. The goal of a focused crawler is to selectively seek out pages that are relevan ..." Cited by 482 (9 self) Add to MetaCart The rapid growth of the World-Wide Web poses unprecedented scaling challenges for general-purpose crawlers and search engines. In this paper we describe a new hypertext resource discovery system called a Focused Crawler. The goal of a focused crawler is to selectively seek out pages that are relevant to a pre-defined set of topics. The topics are specified not using keywords, but using exemplary documents. Rather than collecting and indexing all accessible Web documents to be able to answer all possible ad-hoc queries, a focused crawler analyzes its crawl boundary to find the links that are likely to be most relevant for the crawl, and avoids irrelevant regions of the Web. This leads to significant savings in hardware and network resources, and helps keep the crawl more up-to-date. To achieve such goal-directed crawling, we designed two hypertext mining programs that guide our crawler: a classifier that evaluates the relevance of a hypertext document with respect to the focus topics, ... - IN IEEE SNPA WORKSHOP , 2003 "... Abstract — This paper presents and analyzes an architecture that exploits the serendipitous movement of mobile agents in an environment to collect sensor data in sparse sensor networks. The mobile entities, called MULEs, pick up data from sensors when in close range, buffer it, and drop off the data ..." Cited by 324 (6 self) Add to MetaCart Abstract — This paper presents and analyzes an architecture that exploits the serendipitous movement of mobile agents in an environment to collect sensor data in sparse sensor networks. 
The mobile entities, called MULEs, pick up data from sensors when in close range, buffer it, and drop off the data to wired access points when in proximity. This leads to substantial power savings at the sensors as they only have to transmit over a short range. Detailed performance analysis is presented based on a simple model of the system incorporating key system variables such as number of MULEs, sensors and access points. The performance metrics observed are the data success rate (the fraction of generated data that reaches the access points) and the required buffer capacities on the sensors and the MULEs. The modeling along with simulation results can be used for further analysis and provide certain guidelines for deployment of such systems. I. - IEEE/ACM Transactions on Networking , 1994 "... Abstract — The paper considers a network with many apparently-independent periodic processes and discusses one method by which these processes can inadvertent Iy become synchronized. In particular, we study the synchronization of periodic routing messages, and offer guidelines on how to avoid inadve ..." Cited by 264 (10 self) Add to MetaCart Abstract — The paper considers a network with many apparently-independent periodic processes and discusses one method by which these processes can inadvertent Iy become synchronized. In particular, we study the synchronization of periodic routing messages, and offer guidelines on how to avoid inadvertent synchronization. Using simulations and analysis, we study the process of synchronization and show that the transition from unsynchronized to synchronized traffic is not one of gradual degradation but is instead a very abrupt ‘phase transition’: in general, the addition of a single router will convert a completely unsynchronized traffic stream into a completely synchronized one. We show that synchronization can be avoided by the addition of randomization to the tra~c sources and quantify how much randomization is necessary. 
In addition, we argue that the inadvertent synchronization of periodic processes is likely to become an increasing problem in computer networks. - IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS , 1996 "... Suppose that n balls are placed into n bins, each ball being placed into a bin chosen independently and uniformly at random. Then, with high probability, the maximum load in any bin is approximately log n log log n . Suppose instead that each ball is placed sequentially into the least full of d ..." Cited by 201 (23 self) Add to MetaCart Suppose that n balls are placed into n bins, each ball being placed into a bin chosen independently and uniformly at random. Then, with high probability, the maximum load in any bin is approximately log n log log n . Suppose instead that each ball is placed sequentially into the least full of d bins chosen independently and uniformly at random. It has recently been shown that the maximum load is then only log log n log d +O(1) with high probability. Thus giving each ball two choices instead of just one leads to an exponential improvement in the maximum load. This result demonstrates the power of two choices, and it has several applications to load balancing in distributed systems. In this thesis, we expand upon this result by examining related models and by developing techniques for "... View an n-vertex, m-edge undirected graph as an electrical network with unit resistors as edges. We extend known relations between random walks and electrical networks by showing that resistance in this network is intimately connected with the lengths of random walks on the graph. For example, the c ..." Cited by 143 (6 self) Add to MetaCart View an n-vertex, m-edge undirected graph as an electrical network with unit resistors as edges. We extend known relations between random walks and electrical networks by showing that resistance in this network is intimately connected with the lengths of random walks on the graph. 
For example, the commute time between two vertices s and t (the expected length of a random walk from s to t and back) is precisely characterized by the e ective resistance Rst between s and t: commute time = 2mRst. As a corollary, the cover time (the expected length of a random walk visiting all vertices) is characterized by the maximum resistance R in the graph to within a factor of log n: mR cover time O(mR log n). For many graphs, the bounds on cover time obtained in this manner are better than those obtained from previous techniques such as the eigenvalues of the adjacency matrix. In particular, we improve known bounds on cover times for high-degree graphs and expanders, and give new proofs of known results for multidimensional meshes. Moreover, resistance seems to provide an intuitively appealing and tractable approach to these problems. - ACM TRANSACTIONS ON DESIGN AUTOMATION OF ELECTRONIC SYSTEMS , 2000 "... ..." - IEEE Transactions on Information Theory , 2005 "... Abstract — We consider the throughput/delay tradeoffs for scheduling data transmissions in a mobile ad-hoc network. To reduce delays in the network, each user sends redundant packets along multiple paths to the destination. Assuming the network has a cell partitioned structure and users move accordi ..." Cited by 110 (9 self) Add to MetaCart Abstract — We consider the throughput/delay tradeoffs for scheduling data transmissions in a mobile ad-hoc network. To reduce delays in the network, each user sends redundant packets along multiple paths to the destination. Assuming the network has a cell partitioned structure and users move according to a simplified independent and identically distributed (i.i.d.) mobility model, we compute the exact network capacity and the exact endto-end queueing delay when no redundancy is used. The capacity achieving algorithm is a modified version of the Grossglauser-Tse 2-hop relay algorithm and provides O(N) delay (where N is the number of users). 
We then show that redundancy cannot increase capacity, but can significantly improve delay. The following necessary tradeoff is established: delay/rate ≥ O(N). Two protocols that use redundancy and operate near the boundary of this curve are developed, with delays of O ( √ N) and O(log(N)), respectively. Networks with non-i.i.d. mobility are also considered and shown through simulation to closely match the performance of i.i.d. systems in the O ( √ N) delay regime. Index Terms — fundamental limits, queueing analysis, stochastic systems, wireless networks I. - Proc. of SIGMETRICS’03 , 2003 "... It is common to classify scheduling policies based on their mean response times. Another important, but sometimes opposing, performance metric is a scheduling policy’s fairness. For example, a policy that biases towards short jobs so as to minimize mean response time, may end up being unfair to long ..." Cited by 87 (15 self) Add to MetaCart It is common to classify scheduling policies based on their mean response times. Another important, but sometimes opposing, performance metric is a scheduling policy’s fairness. For example, a policy that biases towards short jobs so as to minimize mean response time, may end up being unfair to long jobs. In this paper we define three types of unfairness and demonstrate large classes of scheduling policies that fall into each type. We end with a discussion on which jobs are the ones being treated unfairly. 1 - in Proceedings of the 7th International Conference on Architectural Support for Programming Languages and Operating Systems , 1996 "... Branch prediction is an important mechanism in modem microprocessor design. The focus of research in this area has been on designing new branch prediction schemes. In contrast, very few studies address the theoretical basis behind these prediction schemes. Knowing this theoretical basis helps us to ..." 
Cited by 83 (3 self) Branch prediction is an important mechanism in modern microprocessor design. The focus of research in this area has been on designing new branch prediction schemes. In contrast, very few studies address the theoretical basis behind these prediction schemes. Knowing this theoretical basis helps us to evaluate how good a prediction scheme is and how much we can expect to improve its accuracy. - In ACM MobiCom , 2005 "... When a sensor network is deployed to detect objects penetrating a protected region, it is not necessary to have every point in the deployment region covered by a sensor. It is enough if the penetrating objects are detected at some point in their trajectory. If a sensor network guarantees that every ..." Cited by 67 (8 self) When a sensor network is deployed to detect objects penetrating a protected region, it is not necessary to have every point in the deployment region covered by a sensor. It is enough if the penetrating objects are detected at some point in their trajectory. If a sensor network guarantees that every penetrating object will be detected by at least k distinct sensors before it crosses the barrier of wireless sensors, we say the network provides k-barrier coverage. In this paper, we develop theoretical foundations for k-barrier coverage. We propose efficient algorithms using which one can quickly determine, after deploying the sensors, whether the deployment region is k-barrier covered. Next, we establish the optimal deployment pattern to achieve k-barrier coverage when deploying sensors deterministically. Finally, we consider barrier coverage with high probability when sensors are deployed randomly. The major challenge, when dealing with probabilistic barrier coverage, is to derive critical conditions using which one can compute the minimum number of sensors needed to ensure barrier coverage with high probability.
Deriving critical conditions for k-barrier coverage is, however, still an open problem. We derive critical conditions for a weaker notion of barrier coverage, called weak k-barrier coverage.
Subtracting Decimals Decimals are fractional numbers. The decimal 0.3 is the same as 3/10. The number 0.78 is a decimal that represents 78/100. Subtracting Decimals is just like subtracting other numbers. Always line up the decimal points when subtracting decimals. Remember to put the decimal point in the proper place in your answer.
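Here is one worked example of the line-up-the-points rule (the numbers 4.52 and 1.3 are our own, not from the lesson); Python's decimal module is used only to check the arithmetic exactly:

```python
from decimal import Decimal

# 4.52 - 1.3: first write 1.3 as 1.30 so the decimal points line up:
#   4.52
# - 1.30
# ------
#   3.22
print(Decimal("4.52") - Decimal("1.30"))  # 3.22
```

Writing the shorter number with a trailing zero does not change its value; it only lines up the columns so each digit is subtracted from the digit in the same place.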
Research Interests Academic History I have a B.S. in Applied Mathematics and a Ph.D. from Cornell University in Operations Research and Information Engineering. My thesis research was supervised by Éva Tardos. Research Interests My research is in theoretical computer science, especially optimization, combinatorics and the design, analysis, and implementation of computer algorithms. Currently, I am working on the design, analysis, and efficient implementation of polynomial-time algorithms for network flow problems, including generalized flows and multicommodity flows. Generalized flows model the shipment of a single commodity through a network which "leaks." Some applications include shipping oil, optimal currency conversion, and scheduling. Multicommodity flows can model the shipment of several commodities through a common network. Some applications include: routing communication messages, VLSI design, and maintaining sparsity with Gaussian elimination. Selected Publications
MathGroup Archive: September 2007 [00467] [Date Index] [Thread Index] [Author Index] Re: Re: rationalize numerator of quotient • To: mathgroup at smc.vnet.net • Subject: [mg81225] Re: [mg81208] Re: rationalize numerator of quotient • From: Andrzej Kozlowski <akoz at mimuw.edu.pl> • Date: Sun, 16 Sep 2007 04:08:00 -0400 (EDT) • References: <29319569.1189724898261.JavaMail.root@eastrmwml14.mgt.cox.net> <fcdf9p$plc$1@smc.vnet.net> <200709150818.EAA28315@smc.vnet.net> On 15 Sep 2007, at 17:18, Peter Breitfeld wrote: > Murray Eisenberg schrieb: >> Thanks to all who replied, either here or privately. I didn't try >> the >> "obvious" method of using Simplify because, of course, I forgot >> Mathematica's conception of "simpler". >> And this is a good example where Mathematica's "simplify" is >> contrary to >> what is taught in school about when a fraction is simpler -- in high >> school it is often (unfortunately) taught that one should >> "rationalize" >> the fraction so that the square-root is in the numerator and never in >> the denominator. Of course in calculus, when taking limits of such >> quotients, that is precisely what you do NOT want to do, but instead >> want to do what Mathematica's sense of simplifying here accomplishes. > [ little bit off-topic ] > I think this is relict of the times roots had to be calculated by > hand. To calculate 1/Sqrt[2] you first had to calculate > Sqrt[2]=1.4142 and then divide 1/1.4142. This is more work to be > done than simply dividing Sqrt[2] by 2. > Gruss Peter > -- > ==-==-==-==-==-==-==-==-==-==-==-==-==-==-==-==-==-==-==-==-==-== > Peter Breitfeld, Bad Saulgau, Germany -- http://www.pBreitfeld.de I don't think it is just a "relic" of "by hand" computations. 
The fact that a fraction like: (2 + Sqrt[2])/(3 - 5*Sqrt[2]) can be uniquely expressed in the form RationalizeDenominator[(2 + Sqrt[2])/(3 - 5*Sqrt[2]), Sqrt[2]] -(16/41) - (13*Sqrt[2])/41 (where the function RationalizeDenominator is defined by RationalizeDenominator[f_, a_] := Block[{t}, PolynomialExtendedGCD[Denominator[f] /. {a -> t}, MinimalPolynomial[a, t]][[2, 1]] /. t -> a // Expand]) is both non-trivial and useful, not only for computational purposes (which in the computer age does not count for that much) but for mathematical ones. Its significance is exactly the same as that of the fact that (2 + I)/(3 - 5*I) can be uniquely expressed in the form ComplexExpand[(2 + I)/(3 - 5*I)] 1/34 + (13*I)/34 and I think it is pretty clear that the latter fact is not just a relic of "by hand" computations with complex numbers. In fact both of these facts are simply consequences (or illustrations) of the following basic lemma in field theory: Let E be a field, F a subfield and z an element of E which is algebraic over F. Then the ring F[z] of polynomials in z with coefficients in F is a field. This is very useful for all kinds of mathematical reasons and, I think, the habit of "rationalizing the denominator" derives more from the mathematical usefulness of this theorem than from computational convenience. And while I am at it: it may sound pedantic, but the "correct" way to deal with the original problem is: Cancel[(Sqrt[x] - 2)/(x - 4)] 1/(Sqrt[x] + 2) Of course Factor will also work, but in general it is wasteful as it will try to factor the entire expression when we actually only want to perform a cancellation.
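The rationalized form quoted above can also be sanity-checked numerically; a small illustration of ours (Python, not part of the original thread):

```python
import math

# (2 + Sqrt[2])/(3 - 5*Sqrt[2]) should equal -(16/41) - (13*Sqrt[2])/41,
# the output of RationalizeDenominator in the message above.
s = math.sqrt(2)
lhs = (2 + s) / (3 - 5 * s)
rhs = -16 / 41 - 13 * s / 41
print(abs(lhs - rhs) < 1e-12)  # True
```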
Simplify only works because it uses Cancel (and Factor); but in fact Simplify will not in general "rationalize" either the numerator or the denominator: FullSimplify[(Sqrt[x] - 1)/(x - 4)] (Sqrt[x] - 1)/(x - 4) FullSimplify[(x - 4)/(Sqrt[x] - 1)] (x - 4)/(Sqrt[x] - 1) The reason is not the "notion of simplicity" but simply the fact that FullSimplify lacks the necessary transformations to do that. (However, it is possible to make it do this in some cases involving algebraic numbers). The function RationalizeDenominator defined above only works for numerical radicals. It would be possible to write its analogue for algebraic function fields but, unlike in the case of algebraic numbers, I don't think it would be worth the effort. Andrzej Kozlowski • References:
Clyde Hill, WA Geometry Tutor Find a Clyde Hill, WA Geometry Tutor ...I have been published multiple times and feel very confident in my abilities in writing. I have always sought avenues that allowed me to work with students and peers in teaching settings. As an undergrad I was a peer mentor for an intro to engineering class, where I guided students across vario... 14 Subjects: including geometry, writing, biology, algebra 1 ...I was actively certified as a Computer Information Systems Security Professional (CISSP) for about six years (until October 2010). I prepped a 10-year old recently in ISEE math. He took the test and, coupled with his above average verbal skills, was able to get on the waiting list at Overlake. ... 43 Subjects: including geometry, chemistry, calculus, physics ...In college, I completed math through Calculus 3 and am proficient in Advanced Trigonometry, Calculus 1 and 2. My favorite types of math are Trigonometry, Geometry, Algebra 1 and 2. I truly like and understand math concepts and enjoy helping others understand the underlying principles. 26 Subjects: including geometry, chemistry, calculus, physics Hi everyone!! My name is Jesse and I am a very enthusiastic and positive tutor eager to help your children learn and master any topic from college sciences to basic mathematics. I believe education is the key to success. In my journey so far I have earned: - Associate of Arts and Sciences degree ... 25 Subjects: including geometry, chemistry, physics, statistics ...I tutored students in introductory and intermediate financial and managerial accounting classes one-on-one the entire time and learned how to convey the information so that it could be easily understood. Later, I started teaching classes that would review what students had been taught in class a... 
12 Subjects: including geometry, reading, accounting, ASVAB Related Clyde Hill, WA Tutors Clyde Hill, WA Accounting Tutors Clyde Hill, WA ACT Tutors Clyde Hill, WA Algebra Tutors Clyde Hill, WA Algebra 2 Tutors Clyde Hill, WA Calculus Tutors Clyde Hill, WA Geometry Tutors Clyde Hill, WA Math Tutors Clyde Hill, WA Prealgebra Tutors Clyde Hill, WA Precalculus Tutors Clyde Hill, WA SAT Tutors Clyde Hill, WA SAT Math Tutors Clyde Hill, WA Science Tutors Clyde Hill, WA Statistics Tutors Clyde Hill, WA Trigonometry Tutors Nearby Cities With geometry Tutor Beaux Arts Village, WA geometry Tutors Bellevue, WA geometry Tutors Duvall geometry Tutors Houghton, WA geometry Tutors Hunts Point, WA geometry Tutors Kirkland, WA geometry Tutors Medina, WA geometry Tutors Mercer Island geometry Tutors Monroe, WA geometry Tutors Redmond, WA geometry Tutors Sammamish geometry Tutors Seahurst geometry Tutors Snohomish geometry Tutors Woodway, WA geometry Tutors Yarrow Point, WA geometry Tutors
st: RE: Adding Matrices generated from a loop [Date Prev][Date Next][Thread Prev][Thread Next][Date index][Thread index] st: RE: Adding Matrices generated from a loop From "Nick Cox" <n.j.cox@durham.ac.uk> To <statalist@hsphsun2.harvard.edu> Subject st: RE: Adding Matrices generated from a loop Date Fri, 24 Aug 2007 13:53:45 +0100 There may be subtleties here I am missing, but I would initialise sum <- first_matrix and then loop over the other matrices sum <- sum + next_matrix This is pseudocode. Otherwise I can't tell what's troubling you about this. As the details will depend on whether you are using Stata or Mata, your naming structure, what else you are doing with these matrices, whether the number is predictable in advance, etc., none of which is indicated, I'll leave it there. Nadeem Shafique > I want to add several matrices (say K) of the same order, suppose the > matrices to be added are coming from a loop, how can I calculate the > following > Sum_{i=1}^{K} X_i > where X_i is the i-th matrix * For searches and help try: * http://www.stata.com/support/faqs/res/findit.html * http://www.stata.com/support/statalist/faq * http://www.ats.ucla.edu/stat/stata/
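The initialise-then-accumulate pseudocode translates directly into any matrix language; as a neutral illustration (plain Python lists with made-up data, not Stata/Mata syntax):

```python
# Accumulate a sum of K same-order matrices: initialise with the first
# matrix and loop over the rest, exactly as in the pseudocode above.
matrices = [
    [[1, 2], [3, 4]],
    [[5, 6], [7, 8]],
    [[0, 1], [1, 0]],
]

total = matrices[0]
for m in matrices[1:]:
    total = [[a + b for a, b in zip(row_t, row_m)]
             for row_t, row_m in zip(total, m)]

print(total)  # [[6, 9], [11, 12]]
```

In Stata itself the loop body would be something like `matrix S = S + X\`i'` inside a `forvalues` loop over i (assuming the matrices are named X1, X2, ...).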
Equation set based on the Fibonacci Sequence July 6th 2009, 03:49 PM #1 Jul 2009 Equation set based on the Fibonacci Sequence Okay, so it's been forever and a day since I did any real math that deserves to be called that. This isn't for any class or anything more than my own peace of mind. As such I'm not entirely sure this is where I should be posting this, so if it needs to be moved to a more relevant forum, please do. Okay so here's the problem. I'd like to come up with a graphable solution that satisfies these three equations: x+y=z, y/x=phi, z/y=phi. I've tried a number of things with no success. I'd be very appreciative if someone could point me in the right direction as to how to go about doing so. I've derived the equation ((x/phi)+x)/x=phi, as well as (x/phi)+x=y. But neither seems to get me a graphable solution or even a single real answer. Thanks in advance for any advice. Last edited by Thuleando; July 6th 2009 at 04:37 PM. July 6th 2009, 06:20 PM #2
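One way to see why the system is graphable as a line rather than a single point (our sketch, not from the thread): since phi satisfies phi^2 = phi + 1, setting y = phi*x and z = phi*y automatically makes x + y = z hold for every x. The solution set is therefore the whole line (x, phi*x, phi^2*x). A quick numeric check, where the value of x is an arbitrary choice:

```python
# phi^2 = phi + 1, so x + phi*x = phi^2 * x for every x:
phi = (1 + 5 ** 0.5) / 2
x = 3.7          # any nonzero value works
y = phi * x      # enforces y/x = phi
z = phi * y      # enforces z/y = phi
print(abs((x + y) - z) < 1e-9)  # True: x + y = z holds automatically
```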
Glenn Heights, TX Trigonometry Tutor Find a Glenn Heights, TX Trigonometry Tutor ...I hold a Master's Degree in Education with emphasis on instruction in math and science for grades 4th through 8th. I have taken courses in pre-algebra, algebra I and II, Matrix Algebra, Trigonometry, pre-calculus, Calculus I and II, Geometry and Analytical Geometry, Differential Equations. I was a tutor in college for students that needed help in math. 11 Subjects: including trigonometry, geometry, algebra 1, algebra 2 ...I am not extremely fluent at speaking French, but I'm very proficient at reading and writing at a high level in French, and at teaching French grammar. Geometry, like Algebra and Trigonometry, is one of the base subjects for the higher level math courses. As a math major, I had to be very grounded in these subjects. 41 Subjects: including trigonometry, chemistry, French, calculus ...As far as the tutoring space goes, I'll hold sessions in public libraries or any other place you know, including your home. I don't mind traveling to a place where you feel comfortable. I look forward to hearing from you and helping you to accomplish the goals with your studies. Currently, I am working on my master's in Chemistry at Texas Woman's University. 19 Subjects: including trigonometry, chemistry, physics, geometry ...I have an undergraduate degree in mathematics (40 hours of mathematical content) which included a linear algebra course. Also, I have a graduate degree in mathematics education which included a graduate level course in matrix theory. Concepts I am competent in (with regards to TAKS) include: fu... 17 Subjects: including trigonometry, calculus, statistics, geometry ...Conic sections is also a familiar area of expertise. As an additional resource I also possess an Algebra II "Teacher's Edition" text. Word problems are a particular challenge for a number of students for whom the following steps must first be modeled: 1) drawing an appropriate diagram, if requ...
17 Subjects: including trigonometry, chemistry, geometry, GRE Related Glenn Heights, TX Tutors Glenn Heights, TX Accounting Tutors Glenn Heights, TX ACT Tutors Glenn Heights, TX Algebra Tutors Glenn Heights, TX Algebra 2 Tutors Glenn Heights, TX Calculus Tutors Glenn Heights, TX Geometry Tutors Glenn Heights, TX Math Tutors Glenn Heights, TX Prealgebra Tutors Glenn Heights, TX Precalculus Tutors Glenn Heights, TX SAT Tutors Glenn Heights, TX SAT Math Tutors Glenn Heights, TX Science Tutors Glenn Heights, TX Statistics Tutors Glenn Heights, TX Trigonometry Tutors Nearby Cities With trigonometry Tutor Balch Springs, TX trigonometry Tutors Cedar Hill, TX trigonometry Tutors Dalworthington Gardens, TX trigonometry Tutors Desoto trigonometry Tutors Duncanville, TX trigonometry Tutors Hurst, TX trigonometry Tutors Lancaster, TX trigonometry Tutors Mansfield, TX trigonometry Tutors Midlothian, TX trigonometry Tutors Oak Leaf, TX trigonometry Tutors Ovilla, TX trigonometry Tutors Pantego, TX trigonometry Tutors Red Oak, TX trigonometry Tutors Watauga, TX trigonometry Tutors Waxahachie trigonometry Tutors
Jersey City Geometry Tutor Find a Jersey City Geometry Tutor ...The verbal section is focused on figuring out vocabulary with relation to context clues and developing strong reading comprehension skills. The writing section assesses whether you can concisely write two 30-minute organized essays. I am confident that I can help you significantly improve your scores in all three areas. 26 Subjects: including geometry, calculus, writing, GRE Hi! Thanks for taking a look at my profile. My name is Laura. 27 Subjects: including geometry, English, reading, writing ...I excel at opening, middle, and end-game study, and during my lessons I use a mix of students' games, chess puzzles, online videos and computer analysis to keep students engaged. I volunteer as a chess teacher at a school in Brooklyn. I have a large breadth of teaching experience, which I believe distinguishes me from other chess instructors. 13 Subjects: including geometry, Spanish, English, SAT math ...However, I am also available in the NYC/NJ/PA area. I'm very willing to negotiate on price and work with students with all types of needs. Please feel free to reach out to me if you have any questions and I'll respond as soon as possible. 9 Subjects: including geometry, algebra 1, algebra 2, SAT math ...I escorted the children to the nurse's, bathroom, etc. I also helped the children with their projects, for example the counting caterpillar. I also read out loud to the children and depending on the grade level helped them with their reading skills. 13 Subjects: including geometry, English, Spanish, algebra 2
The Great Debate Over Whether 1+2+3+4..+ ∞ = -1/12 So there you are living your life, content in your grasp on how the world works: up is up, down is down, the Sun rises in the east and sets in the west. Then, out of nowhere, a bunch of mathematicians try to tell you that the sum of all positive integers, that is, 1 + 2 + 3 + 4 + 5 + 6 +... and so on to infinity is equal to... -1/12. Well, that's clearly ridiculous, right? How can increasingly big numbers, when added together, make a small number? How can whole numbers make a fraction? How can positive numbers make a negative? So you watch the rest of the video (above), made by Numberphile, a bunch of math wonks with a popular (and generally trustworthy) YouTube channel about math. (Watch it. It's worth it. We'll wait.) Their proof seems rock solid. A bunch of really smart people, from astronomer Phil Plait to the folks at Physics Central, double down on the claim. "It turns out that the conclusions they draw in that video are literally correct. You can add an infinite series of positive numbers, and they’ll add up to a negative fraction,” said Plait. So that's that, right? Math doesn't make sense, and the world is weird. But that's not the end of the story. A second set of the mathematically inclined people, including Scientific American blogger Evelyn Lamb and physicist Greg Gbur, took to the web to show that while the sum of all positive numbers can kind of sort of equal -1/12 (a result that, they explain, is used all the time in accurately solving physics problems), this mind-bending answer really only works if you totally redefine some core concepts of mathematics. Phil Plait and the Physics Central crew eventually came around, and it was the follow-up from Physics Central that most helped us get our minds around this quandary. According to Physics Central, 1 + 2 + 3 + 4 + … only equals -1/12 because the mathematicians redefined the equal sign. 
In this style of mathematics, called analytical continuation, "=" stopped meaning “is equal to” and started meaning “is associated with.” Tricky mathematicians. This mathematical trick goes way back, says Physics Central. It's in the work of pioneering Indian mathematician Srinivasa Ramanujan, for instance: Ramanujan is to blame a bit too. After all, how are we supposed to understand what he was trying to say here? "I told him that the sum of an infinite number of terms of the series: 1 + 2 + 3 + 4 + · · · = −1/12 under my theory. If I tell you this you will at once point out to me the lunatic asylum as my -S. Ramanujan in a letter to G.H. Hardy Are we supposed to realize that "under my theory" means that "=" doesn't mean equal? I haven't found his original work, but several people have reproduced a calculation by Euler that uses an equal sign in the same way. If the two sides aren't equal then, as I recall from second grade math, you can't use an equal sign. So, does 1 + 2 + 3 + 4 + 5.... = -1/12? Yes, but only if, to you, an equal sign means something other than “is equal to.” Now, that's not to say that the Numberphile team were just straight up messing with our heads. The -1/12 value can be proven in a number of ways, and the result is certainly useful. But 1 + 2 + 3 + 4 + ... definitely does not "equal" -1/12 in any way that a person would normally think about it. That the Numberphile team took such a leap without explaining it to people, says physicist Greg Gbur, is sort of a shame: The video makes it seem so simple, and uncontroversial, almost obvious. But there are some big mathematical assumptions hidden in their argument that, in my opinion, make it very misleading. To put it another way: in a restricted, specialized mathematical sense, one can assign the value -1/12 to the increasing positive sum. But in the usual sense of addition that most human beings would intuitively use, the result is nonsensical. 
To me, this is an important distinction: a depressingly large portion of the population automatically assumes that mathematics is some nonintuitive, bizarre wizardry that only the super-intelligent can possibly fathom. Showing such a crazy result without qualification only reinforces that view, and in my opinion does a disservice to mathematics. We haven't even attempted to tackle the long proofs involved in sorting out this debate here, but if you want more, check out: Correction: Does 1+2+3+4+ . . . =-1/12? Absolutely Not! By Physics Central Infinite series: not quite as weird as some would say by Greg Gbur Follow-up: The Infinite Series and the Mind-Blowing Result by Phil Plait Does 1+2+3… Really Equal -1/12? by Evelyn Lamb
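One concrete place the −1/12 hides in the divergent sum, often mentioned alongside these posts (our illustration, not from the article), is the "smoothed sum" computation: damp each term by exp(−n·ε) and the constant term of the asymptotic expansion in ε is exactly −1/12.

```python
import math

# Sum over n >= 1 of n*exp(-n*eps) equals 1/eps**2 - 1/12 + O(eps**2),
# so after subtracting the divergent 1/eps**2 piece, -1/12 remains.
eps = 0.01
s = sum(n * math.exp(-n * eps) for n in range(1, 20000))
print(s - 1 / eps ** 2)  # approximately -1/12 = -0.08333...
```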
Well, you can figure g ought to be a polynomial function of degree 3, right? So put g(x+1)=A(x+1)^3+B(x+1)^2+C(x+1)+D. If you set that equal to x^3+3x+1, can you find A,B,C and D? You should get four equations in four unknowns if you equate the powers of x. Of course, you can also be clever and pick some special values of x that might make the job easier (like x=-1).
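Carrying out that matching (our worked numbers, not the original poster's): expanding A(x+1)^3 + B(x+1)^2 + C(x+1) + D and equating coefficients with x^3 + 3x + 1 gives A=1, B=-3, C=6, D=-3, i.e. g(x) = x^3 - 3x^2 + 6x - 3. A quick spot check:

```python
# g(x) = x**3 - 3*x**2 + 6*x - 3 should satisfy g(x+1) = x**3 + 3*x + 1
def g(x):
    return x**3 - 3 * x**2 + 6 * x - 3

print(all(g(x + 1) == x**3 + 3 * x + 1 for x in range(-5, 6)))  # True
```

Note the x = -1 shortcut mentioned above: plugging x = -1 into x^3 + 3x + 1 gives g(0) = -3 immediately, which is the constant term D... wait, g(0) = D here, consistent with D = -3.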
Approximation by polynomials Let $f:[a,b] \rightarrow \mathbb{R}$ be of class $C^n$. Let $x_0, ..., x_m$ be different numbers from $[a,b]$. Does there exist, for each $\varepsilon >0$, a polynomial $P$ such that $P^{(k)}(x_i)=f^{(k)}(x_i)$ for $i=0,...,m$, $k=0,...,n$ and $\sup_{x \in [a,b]} |f(x)-P(x)|< \varepsilon$? approximation-theory ca.analysis-and-odes fa.functional-analysis I have never found an adequate reference for what one would call a "differentiable Stone--Weierstrass theorem", but I would assume that such a thing exists. However, I do happen to know a theorem I found in Real Algebraic Manifolds by J. Nash that says: Let $f: U \rightarrow \mathbb{R}^n$ be analytic, let $A\subseteq U$ be compact and bounded by an analytic manifold. Then there exists a sequence of polynomials $\{f_n\}$ so that for any $\alpha$: $$D^\alpha f_n \rightarrow D^\alpha f$$ I don't know much about Fourier series, which appear in the proof. Maybe it works with $C^n$ in place of $C^\omega$. – Malte Mar 13 '12 at 21:27 @Malte: this is true. See mathoverflow.net/questions/85153/…. But I think that the OT wants something different: a uniform approximation of the function and exact value of the function and derivatives at the given points. It is not required that the derivative converges uniformly. – Valerio Capraro Mar 13 '12 at 21:38 2 Answers The problem may be split into two independent and classical ones: the Hermite interpolation, and the Weierstrass approximation. First, we want a polynomial $p\in \mathbb{R}[x]$ with given derivatives at some given nodes $x_0,\dots, x_m$. This is an instance of the Hermite interpolation problem; yours has exactly one solution $p$ with $\operatorname{deg}(p) < (m+1)(n+1)$ (the degree one would expect in terms of the number of linear conditions). So, given your $f\in C^n$, you can find a polynomial with $p^{(j)}(x_i)=f^{(j)}(x_i)$ for all $0 \le i \le m$ and $0 \le j \le n$.
Second, as a consequence, $$\frac{f(x)-p(x)}{\prod_{i=0}^m (x-x_i)^n}$$ is (extends to) a continuous function on $[a,b]$ that vanishes on the points $x_i$. By the Stone-Weierstrass approximation theorem there is a polynomial vanishing on the points $x_i$ as well, whose uniform distance from that function on the interval $[a,b]$ is less than, say, $\epsilon (b-a)^{-n(m+1)}$. In other words, there is a polynomial $q\in \mathbb{R}[x]$ such that $$\bigg\| \frac{f(x)-p(x)}{\prod_{i=0}^m (x-x_i)^n} - q(x) \prod_{i=0}^m (x-x_i) \bigg\|_{\infty, [a,b]} < \epsilon (b-a)^{-n(m+1)}\ , $$ therefore the polynomial $P(x):= p(x)+ q(x) \prod_{i=0}^m (x-x_i)^{n+1}$ fulfills the requirements, for $P^{(j)}(x_i)=p^{(j)}(x_i)=f^{(j)}(x_i)$ for all $0 \le i \le m$ and $0 \le j \le n$, and $$\|f-P\|_{\infty,[a,b]} < \epsilon\ .$$ Incidentally, some time ago I happened to notice that one can find the solution of the Hermite interpolation problem as an application of the Chinese Remainder Theorem in the ring of polynomials, and wrote here the details. edit. As to why The set $A$ of all polynomial functions on $[a,b]$ that vanish at given points $x_0,\dots, x_m$ is dense in all continuous functions on $[a,b]$ that vanish in $x_0,\dots, x_m$. One way, a bit abstract but quite immediate, is to see it as a corollary of the Stone-Weierstrass theorem (A separating closed algebra of real valued functions on a compact space $X$ is either $C(X)$ or a maximal ideal $M_x\subset C(X)$, the set of all functions vanishing at $x$). Consider $X=$ the topological quotient of $[a,b]$ obtained identifying all points $x_i$ to a point $\xi$. All functions in $A$ factor through the quotient map, and define a closed separating algebra of continuous functions on $X$ that vanish on the identified point $\xi$. Thus, this algebra contains all continuous functions on $X$ that vanish on $\xi$, which is the claim, read on the quotient.
Note that the same construction holds in general, and provides a characterization of all closed algebras $A$ of continuous functions on a compact space $X$: identifying all points that are not distinguished by the functions of $A$ (that is, under the equivalence relation $x R_A y$ iff $f(x)=f(y)$ for all $f\in A$) one gets a Hausdorff compact quotient space (whether or not $X$ is Hausdorff), and the quotient map $\pi: X\to X/R_A$ induces an isometric isomorphism of algebras $f\mapsto f\circ \pi$ of either $C(X/R_A)$ or a maximal ideal of it onto $A$; conversely, any Hausdorff quotient of $X$ produces a closed sub-algebra of $C(X)$ this way. Another way to see it is as a corollary of the classic Weierstrass theorem: consider $P$ as in your comment below, then add a perturbation $L$ that makes $P+L$ vanish on the points $\{x_i\}$; this has been clearly explained in Ilya Bogdanov's answer. Here you don't have derivatives and $L$ is just a Lagrange interpolation polynomial, which is small in the uniform norm because it is small on the points $\{x_i\}$. Many thanks. I have one question. You used the following fact: if a continuous function $g:[a,b]\rightarrow \mathbb{R}$ vanishes at points $x_0,...,x_m$ then for every $\delta >0$ there exists a polynomial $Q$ which vanishes at $x_0,...,x_m$ such that $\|g(x)-Q(x)\|_{\sup} <\delta$. It is easy for $m=0$, because by the Weierstrass theorem we find a polynomial $P$ such that $\|f-P\| <\frac{\delta}{2}$ and the polynomial $Q:=P-P(x_0)$ will be good. How to prove it for $m\geq 1$? – arc Mar 14 '12 at 13:59 I've edited and added a few lines on this point. – Pietro Majer Mar 14 '12 at 17:02
In fact, it is enough to approximate $f^{(n)}$ with an adequate accuracy: if $||f'-P'||_C<\varepsilon$ and $f(0)=P(0),$ then $||f-P||_C<\varepsilon.$ up vote 2. Now take the polynomials $Q_{ik}(x)$ such that $Q_{ik}^{(d)}(x_j)=0$ for all $d=0,\dots,n$ and $j=0,\dots,m$ except that $Q_{ik}^{(k)}(x_i)=1$. Such polynomials are easy to construct: 2 down for instance, one may take $$ Q_{ik}(x)=c_{ik}(x-x_i)^k\prod_{j\neq i}\left((x-x_i)^{n+1}-(x_j-x_i)^{n+1}\right)^{n+1}\;\; $$ for a suitable constant $c_{ik}.$ Let $M=\max_{i,k}||Q_{ik}| vote |_{C^n}.$ Then, let the approximation in the previous paragraph be $\delta$-accurate with $\delta=\varepsilon/(2M(m+1)(n+1)).$ To correct the values of the polynomial and its derivatives at $x_i,$ it is enough to add the polynomials $Q_{ik}$ multiplied by the coefficients with absolute values $\leq\delta,$ hence the total error will be not more that $\delta+(m+1)(n+1)M\ nice ! – Pietro Majer Mar 14 '12 at 5:49 add comment Not the answer you're looking for? Browse other questions tagged approximation-theory ca.analysis-and-odes fa.functional-analysis or ask your own question.
How do I answer this question? PR is the diameter of circle O. If the measure of arc QR = 50 degrees, then the measure of arc QS = ?

It is given that arc QR = 50.
`S = R theta`
S = arc length
R = radius of arc
`theta` = angle subtended by the arc at the center
`arc QR = OR xx angleQOR`
Consider the triangles QOT and SOT. According to the figure, both are right triangles. The hypotenuse of each of the triangles QOT and SOT is the radius of the circle, and the leg OT is common to both triangles. Therefore, triangles QOT and SOT are congruent (hypotenuse–leg). So we can say:
`angleSOT = angleQOT`
`angleSOR = angleQOR`
Length of arc RS `= OR xx angleSOR = OR xx angleQOR = 50`
So arc length QS `= QR + RS = 50 + 50 = 100`
So the measure of arc QS is 100 degrees.
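A quick numeric check of the arithmetic above (a sketch; the labels follow the figure described in the answer, where S is the reflection of Q across the diameter PR):

```python
# Congruent right triangles QOT and SOT give equal central angles,
# so arc RS equals arc QR, and arc QS = arc QR + arc RS.
arc_QR = 50.0        # given, in degrees
arc_RS = arc_QR      # from the congruence: angleSOR = angleQOR
arc_QS = arc_QR + arc_RS
print(arc_QS)        # 100.0
```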
WyzAnt Resources
When it comes to a legitimate online resource for help with tutoring mathematics or answering mathematical questions, I use Wolfram.com. The website is very versatile and allows the user to input any mathematical equation or formula. In subject areas such as calculus, Wolfram.com has proved extremely beneficial, especially when working with difficult integrals and derivatives. With the Pro version of the website, which is well worth its price, you are given step-by-step instructions on how to solve the particular problem you have entered. Check out this website and explore the countless benefits it has to offer.
Keith
An efficient clustering algorithm for partitioning Y-short tandem repeats data

PMID: 23039132

BACKGROUND: Y-Short Tandem Repeats (Y-STR) data consist of many similar and almost similar objects. This characteristic of Y-STR data causes two problems with partitioning: non-unique centroids and local minima problems. As a result, the existing partitioning algorithms produce poor clustering results.

RESULTS: Our new algorithm, called k-Approximate Modal Haplotypes (k-AMH), obtains the highest clustering accuracy scores for five out of six datasets, and produces an equal performance for the remaining dataset. Furthermore, clustering accuracy scores of 100% are achieved for two of the datasets. The k-AMH algorithm records the highest mean accuracy score of 0.93 overall, compared to that of other algorithms: k-Population (0.91), k-Modes-RVF (0.81), New Fuzzy k-Modes (0.80), k-Modes (0.76), k-Modes-Hybrid 1 (0.76), k-Modes-Hybrid 2 (0.75), Fuzzy k-Modes (0.74), and k-Modes-UAVM (0.70).

CONCLUSIONS: The partitioning performance of the k-AMH algorithm for Y-STR data is superior to that of other algorithms, owing to its ability to solve the non-unique centroids and local minima problems. Our algorithm is also efficient in terms of time complexity, which is recorded as O(km(n-k)) and considered to be linear.

Authors: Ali Seman; Zainab Abu Bakar; Mohamed Nizam Isa
Publication Type: Journal Article; Research Support, Non-U.S.
Gov't.
Journal: BMC Research Notes 2012, 5:557. doi:10.1186/1756-0500-5-557
Received: 1 March 2012; Accepted: 22 September 2012; Published: 6 October 2012
Copyright © 2012 Seman et al.; licensee BioMed Central Ltd. (open access)
Affiliations: 1 Center for Computer Sciences, Faculty of Computer and Mathematical Sciences, Universiti Teknologi MARA (UiTM), 40450 Shah Alam, Selangor, Malaysia; 2 Medical Faculty, Masterskill University College of Health Sciences, No.
6, Jalan Lembah, Bandar Seri Alam, 81750, Johor Bahru, Johor, Malaysia

Background
Y-Short Tandem Repeats (Y-STR) data represent the number of times an STR motif repeats on the Y-chromosome. This count is often called the allele value of a marker. For example, if the allele value for the DYS391 marker is eight, the STR would look like the following fragment: [TCTA] [TCTA] [TCTA] [TCTA] [TCTA] [TCTA] [TCTA] [TCTA]. The number of tandem repeats has effectively been used to characterize and differentiate between two people. In modern kinship analyses, the Y-STR is very useful for distinguishing lineages and providing information about lineage relationships [1]. Many areas of study, including genetic genealogy, forensic genetics, anthropological genetics, and medical genetics, have taken advantage of the Y-STR method. For example, it has been used in Y-surname projects to support traditional genealogical studies, e.g., [2-4]. Further, in forensic genetics, the Y-STR is one of the primary tools in human identification for sexual assault cases [5], paternity testing [6], missing persons [7], human migration patterns [8], and the reexamination of ancient cases [9].

From a clustering perspective, the goal of partitioning Y-STR data is to group a set of Y-STR objects into clusters that represent similar genetic distances. The genetic distance of two Y-STR objects is based on the mismatch results from comparing the Y-STR objects and their modal haplotypes. For Y-surname applications, if two people share 0, 1, 2, or 3 allele value mismatches across the markers, they are considered to be the most closely related. For Y-haplogroup applications, the number of mismatches varies and is greater than that typically found in Y-surname applications. This is because the haplogroup application is based on larger family groups branched out from the same ancestor, covering certain geographical areas and ethnicities throughout the world.
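The allele value described above is simply a consecutive repeat count of a motif, which can be sketched in a few lines of Python (the helper name and the example fragments are made up for illustration):

```python
def allele_value(fragment: str, motif: str) -> int:
    """Count how many times `motif` repeats consecutively from the start of
    `fragment` -- a simplified view of how a Y-STR allele value is read."""
    count = 0
    while fragment.startswith(motif):
        count += 1
        fragment = fragment[len(motif):]
    return count

# Eight consecutive [TCTA] repeats give an allele value of 8 for the marker
print(allele_value("TCTA" * 8, "TCTA"))      # 8
print(allele_value("TCTATCTAGGTC", "TCTA"))  # 2
```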
The established Y-DNA haplogroups, named with the letters A to T and further subdivided using numbers and lower-case letters, are now available for reference (see [10] and [11] for details). Efforts to group Y-STR data based on genetic distances have recently been reported. For example, Schlecht et al. [12] used machine learning techniques to classify Y-STR fragments into related groups, and Seman et al. [13-19] used partitional clustering techniques to group Y-STR data by the number of repeats, a method used in genetic genealogy applications. In this study, we continue the efforts to partition Y-STR data based on the partitional clustering approaches carried out in [13-19]. Recently, we also evaluated eight partitional clustering algorithms over six Y-STR datasets [19]. As a result, we found that there is scope to propose a new partitioning algorithm to improve the overall clustering results for the same datasets.

A new partitioning algorithm is required to handle the characteristics of Y-STR data and thus produce better clustering results. Y-STR data differ somewhat from the common categorical data used in [20-25]: they contain a higher degree of similarity of objects both within and between classes. (Note that the degree of similarity is based on the mismatch results when comparing the objects and their modal haplotypes.) For example, many Y-STR surname objects are found to be similar (zero mismatches) or almost similar (1, 2, or 3 mismatches) within their classes, and in some cases the mismatch values of objects from different classes are not obviously far apart. Y-STR haplogroup data contain similar, almost similar, and also quite distant objects, and may occasionally include sub-classes that are sparse within their classes.

Partitional clustering algorithms
Classically, clustering has been divided into hierarchical and partitional methods.
The main difference between the two is that the hierarchical method breaks the data up into hierarchical clusters, whereas the partitional method divides the data into mutually disjoint partitions. The pillar of the partitional algorithms is the k-Means algorithm [26], introduced almost four decades ago. As a consequence, the k-Means paradigm has been extended to various versions, including the k-Modes algorithm [25] for categorical data. The k-Modes algorithm owes its existence to the ineffectiveness of the k-Means algorithm in handling categorical data. Ralambondrainy [27] attempted to rectify this using a hybrid numeric–symbolic method based on the binary characters 0 and 1; however, this approach suffered from an unacceptable computational cost, particularly when the categorical attributes had many categories. Since then, a variety of k-Modes-type algorithms have been introduced, such as k-Modes with new dissimilarity measures [21,22], k-Population [23], and a New Fuzzy k-Modes [20].

Partitional algorithms use an objective function in their optimization process, and the determination of this function was described as the P problem by Bobrowski and Bezdek [28] and Salim and Ismail [29]. When he proposed the k-Modes clustering algorithm, Huang [25] split P into P1 and P2. P1 denotes the minimization problem of obtaining values for the partition matrix w_li of 0 or 1 (for the hard clustering approach) or 0 to 1 (for the fuzzy clustering approach); see Eq. (1b) as an example. P2 denotes the minimization problem of obtaining the value that occurs most often (the mode of a categorical data set) to represent the center of a cluster (often called the centroid). The minimization of P2, by obtaining the appropriate mode, essentially drives the minimization of the overall problem P, and vice versa. As an example of the optimization process for problem P in the Fuzzy k-Modes algorithm, we wish to solve Eq. (1) subject to Eqs.
(1a), (1b), and (1c):

P(W, Z) = Σ_{l=1}^{k} Σ_{i=1}^{n} w_li^α d(X_i, Z_l),   (1)

subject to:
w_li ∈ [0, 1], 1 ≤ l ≤ k, 1 ≤ i ≤ n,   (1a)
Σ_{l=1}^{k} w_li = 1, 1 ≤ i ≤ n,   (1b)
0 < Σ_{i=1}^{n} w_li < n, 1 ≤ l ≤ k,   (1c)

where:
• w_li is a (k × n) partition matrix that denotes the degree of membership of object i in the lth cluster, with values from 0 to 1;
• k (≤ n) is a known number of clusters;
• Z is the set of centroids such that [Z_1, Z_2, …, Z_k] ∈ R^mk;
• α ∈ [1, ∞) is a weighting exponent;
• d(X_i, Z_l) is the distance measure between the object X_i and the centroid Z_l, as described in Eqs. (2) and (2a).

Huang and Ng [24] described the optimization process of P1 and P2 as follows:
• Problem P1: Fix Z = Ẑ and solve the reduced problem P(W, Ẑ) as in Eq. (3). This process obtains the minimizing values, between 0 and 1, of the partition matrix w_li.
• Problem P2: Fix W = Ŵ and solve the reduced problem P(Ŵ, Z) as in Eq. (4), subject to Eq. (4a). This process obtains the most frequent attribute values, or the modes, which give the centroids.

Problem of partitioning Y-STR data
Due to the characteristics of Y-STR data, there are two optimization problems for the existing partitional algorithms: non-unique centroids and local minima. Both problems are caused by the drawback of the modes mechanism for determining the centroids: non-unique centroids result in empty clusters, whereas local minima lead to poorer clustering results. In both cases the obtained centroids are not sufficient to represent their classes. Problems therefore occur in the following two cases:
i) The total number of objects in a dataset is small while the number of classes is large. To illustrate this case, consider the following example.
Example I: Figure 1 shows an artificial example of a dataset consisting of nine objects in three classes: Class A = {A1, A2, A3}, Class B = {B1, B2, B3}, and Class C = {C1, C2, C3}.
Each object is composed of three attributes, represented in lower case; e.g., for object A1, the attributes are a1, a2, and a3. The dataset has a high degree of similarity between objects within classes, while the number of objects is small and the number of classes is large. The appropriate modes for representing the classes are: Class A – [a1, a2, a3], Class B – [a1, b2, c3], and Class C – [b1, c2, d4]. However, attribute a1 in DOMAIN(A1), a2 in DOMAIN(A2), and c3 in DOMAIN(A3) are too dominant, and would therefore dominate the process of updating P2. Figure 2 shows the possibility that each cluster is formed by the dominant attributes. As a result, the mode consisting of [a1, a2, c3] would be obtained twice; P2 would then not be minimized, due to this non-unique centroid. Another possibility is that the two modes are different, but not distinctive enough to represent their clusters, such as modes [a1, a2, a3] or [a1, a2, b3] for Cluster 2. As a consequence, this case would fall into a local minimum.
ii) An extreme distribution of objects in a class. To illustrate this case, consider the following example.
Example II: Figure 3 shows a dataset consisting of eight objects in two classes: Class A = {A1, A2, A3, A4, A5, A6} and Class B = {B1, B2}. Each object again consists of three attributes, represented in lower case. The appropriate modes to represent the classes are: Class A – [a1, a2, b3] and Class B – [a1, b2, c3] or [a1, b2, d3]. The distribution of objects in Class A is considerably larger than in Class B, covering approximately 75% of the total set of objects. This characteristic of the data is problematic for P2, particularly for the fuzzy approach; the problem is actually caused by the initial centroid selection. Figure 4 shows that the objects in Class A would be equally distributed into clusters 1 and 2.
As a result, object A becomes dominant in both clusters, and so the obtained modes might be represented solely by objects in Class A, e.g., [a1, a2, a3] and [a1, a2, b3]. The above situations cause P not to be fully optimized, thus producing poor clustering results. Therefore, a new algorithm with a new concept of P2 is proposed in order to overcome these problems and improve the clustering accuracy for Y-STR data.

The center of a cluster
The mode mechanism for the center of a cluster (problem P2) is not appropriate for handling the characteristics of Y-STR data, and therefore cannot be used to represent the centroid. Instead, the centers for Y-STR data should be the modal haplotypes, which are required to calculate the distance of Y-STR objects. The distance between a Y-STR object and its modal haplotype can be formalized as in Eq. (5), subject to Eq. (5a):

d_ystr(X_i, H_l) = Σ_{j=1}^{m} δ(x_ij, h_lj),   (5)

subject to:
δ(x_ij, h_lj) = 0 if x_ij = h_lj, and 1 otherwise,   (5a)

where m is the number of markers. The modal haplotype is controlled by groups of objects that are similar or almost similar in the Y-STR data. The similar and almost similar objects have a lower distance, or a higher degree of membership in a fuzzy sense. Thus, these two groups are the most dominant objects required to find the Approximate Modal Haplotype. Consider four objects x1, x2, x3, and x4 and two clusters c1 and c2. The membership value for each object and cluster is as shown in Table 1, whereby objects x1 and x3 have a 100% chance of being the most dominant object in cluster c1, but only a 50% chance of being the dominant object in cluster c2, and so on. A dominant weighting value of 1.0 is given to any dominant object and a weight of 0.5 is given to the remaining objects.

The k-AMH algorithm
Let X = {X_1, X_2, …, X_n} be a set of n Y-STR objects and A = {A_1, A_2, …, A_m} be a set of markers (attributes) of a Y-STR object.
Let H = {H_1, H_2, …, H_k} ∈ X be the set of Approximate Modal Haplotypes for k clusters. Suppose k is known a priori. Let H_l be an Approximate Modal Haplotype, represented as [h_l1, h_l2, …, h_lm]; therefore, h_lj = x_ij for 1 ≤ j ≤ m and some 1 ≤ i ≤ n. The objective of the algorithm is to partition the categorical objects X into k clusters. Thus, H_l can be replaced by any X_i, 1 ≤ i ≤ n, provided the replacement satisfies the condition described in Eq. (6):

H_l ← X_i if the replacement maximizes P(Á).   (6)

Here, P(Á) is the cost function described in Eq. (7), subject to Eqs. (7a), (8), (8a), (8b), (9), (9a), (9b), and (9c):

P(Á) = Σ_{l=1}^{k} Σ_{i=1}^{n} Á_li,   (7)   where Á_li = w_li^α D_li,   (7a)

subject to:
• w_li^α is a (k × n) partition matrix that denotes the degree of membership of Y-STR object i in the lth cluster, with values from 0 to 1, as described in Eq. (8), subject to Eqs. (8a) and (8b):

w_li = 1 if X_i = H_l; w_li = 0 if X_i = H_z, z ≠ l; otherwise
w_li = 1 / Σ_{z=1}^{k} [d_ystr(X_i, H_l) / d_ystr(X_i, H_z)]^{1/(α−1)}.   (8)

• k (≤ n) is a known number of clusters.
• H is the set of Approximate Modal Haplotypes (centroids) such that [H_1, H_2, …, H_k] ∈ X.
• α ∈ [1, ∞) is a weighting exponent used to increase the precision of the membership degrees; typical values range from 1.1 to 2.0, as introduced by Huang and Ng [24].
• d_ystr(X_i, H_l) is the distance measure between the Y-STR object X_i and the Approximate Modal Haplotype H_l, as described in Eq. (5) subject to Eq. (5a).
• D_li is another (k × n) partition matrix, which contains a dominant weighting value of 1.0 or 0.5, as explained above (see Table 1). The dominant weighting values are based on the values of w_li^α: as described in Eq. (9), subject to Eqs. (9a), (9b), and (9c), D_li = 1.0 if w_li is maximal for object i over all k clusters, and D_li = 0.5 otherwise.

The basic idea of the k-AMH algorithm is to find k clusters in n objects by first randomly selecting an object to be the Approximate Modal Haplotype h for each cluster. The next step is to iteratively test replacing the objects x one-by-one as the Approximate Modal Haplotype h. The replacement is based on Eq. (6), i.e., it is made if the cost function described in Eq.
(7), subject to Eqs. (7a), (8), (8a), (8b), (9), (9a), (9b), and (9c), is maximized. Thus, the differences between the k-AMH algorithm and the k-Modes-type algorithms are as follows.
i. The objects (the data themselves) are used as the centroids instead of modes. Since the distance of Y-STR objects is measured by comparing the objects with their modal haplotypes, we need to find the objects that approximately represent the modal haplotypes. In finding the final Approximate Modal Haplotype for a particular group (cluster), each object is tested one-by-one, and a replacement is kept if it maximizes the cost function.
ii. A maximization process of the cost function is required, instead of the minimization used in the k-Modes-type algorithms.
A detailed description of the k-AMH algorithm is given below.
Step 1 – Select k initial objects randomly as the Approximate Modal Haplotypes (centroids); e.g., if k = 4, randomly choose 4 objects as the initial Approximate Modal Haplotypes.
Step 2 – Calculate the distance d_ystr(X_i, H_l) according to Eq. (5), subject to Eq. (5a).
Step 3 – Calculate the partition matrix w_li^α according to Eq. (8), subject to Eqs. (8a) and (8b). Note that w_li^α is based on the distance calculated in Step 2.
Step 4 – Assign a dominant weight of 1.0 or 0.5 to the partition matrix D_li according to Eqs. (9), (9a), (9b), and (9c).
Step 5 – Calculate the cost function P(Á), based on w_li^α D_li, according to Eqs. (7) and (7a).
Step 6 – Test each initial modal haplotype against the other objects one-by-one. If the current cost function is greater than the previous cost function, as in Eq. (6), then replace it.
Step 7 – Repeat Steps 2 to 6 for each x and h.
Step 8 – Once the final Approximate Modal Haplotypes are obtained for all clusters, assign the objects to their corresponding crisp clusters C_li according to Eq. (10): object X_i is assigned to the cluster l for which w_li is maximal, i.e., C_li = 1 if w_li = max_{1≤z≤k} w_zi, and C_li = 0 otherwise.   (10)

Furthermore, the steps above are formalized in the form of pseudo-code as follows.
INPUT: dataset X, the number of clusters k, the number of dimensions d, and the fuzziness index α
OUTPUT: a set of k clusters
01: Select H_l randomly from X such that 1 ≤ l ≤ k
02: for each Approximate Modal Haplotype H_l do
03:   for each X_i do
04:     Calculate P(Á) = Σ_{l=1}^{k} Σ_{i=1}^{n} Á_li
05:     if P(Á) = Σ_{l=1}^{k} Σ_{i=1}^{n} Á_li is maximized, then
06:       Replace H_l by X_i
07:     end if
08:   end for
09: end for
10: Assign X_i to C_l for all l, 1 ≤ l ≤ k; 1 ≤ i ≤ n, as in Eq. (10)
11: Output results

Optimization of the problem P
In optimizing the problem P, the k-AMH algorithm uses a maximization process instead of the minimization process imposed by the k-Modes-type algorithms. This process is formalized in the k-AMH algorithm as follows.
Step 1 – Choose an Approximate Modal Haplotype, H^(t) ∈ X. Calculate P(Á); set t = 1.
Step 2 – Choose X^(t+1) such that P(Á)^(t+1) is maximized; replace H^(t) by X^(t+1).
Step 3 – Set t = t + 1; stop when t = n; otherwise, go to Step 2. (Note: n is the number of objects.)
The convergence of the algorithm is proven as P1 and P2 are maximized accordingly. The function P(Á) incorporates the P(W, H) function imposed by the Fuzzy k-Modes algorithm, where W is a partition matrix and H is the Approximate Modal Haplotype that defines the center of a cluster. Thus, P1 and P2 are solved by Theorems 1 and 2, respectively.
Theorem 1 – Let Ĥ be fixed. P(W, Ĥ) is maximized if and only if W is computed as in Eq. (8), subject to Eqs. (8a) and (8b).
Proof – Let X = {X_1, X_2, …, X_n} be a set of n Y-STR categorical objects and H = {H_1, H_2, …, H_k} be a set of centroids (Approximate Modal Haplotypes) for k clusters. Suppose that P = {P_1, P_2, …, P_k} is a set of dissimilarity measures based on d_ystr(X_i, H_l), as described in Eq. (5) subject to (5a), for all i and l, 1 ≤ i ≤ n, 1 ≤ l ≤ k.
Definition 1 – For X_i = H_l or X_i = H_z, where z ≠ l, the membership value for all i is w_li = 1 if X_i = H_l, and w_li = 0 if X_i = H_z, z ≠ l. For any P that is obtained from d_ystr(X_i, H_l) where X_i = H_l, the maximum value of w_li^α is 1, and for X_i = H_z, z ≠ l, the value of w_li^α is 0.
Therefore, because Ĥ is fixed, w_li^α is maximized.
Definition 2 – For the case X_i ≠ H_l and X_i ≠ H_z for all z, 1 ≤ z ≤ k, the membership value for all i is given by the quotient formula of Eq. (8). Suppose that p_li ∈ P is the minimum value; then
Σ_{z=1}^{k} (p_li / p_zi)^{1/(α−1)} < Σ_{z=1}^{k} (p_ti / p_zi)^{1/(α−1)}, where t ≠ l, for all z and t, 1 ≤ z ≤ k, 1 ≤ t ≤ k.
It follows that w_li > w_ti, where t ≠ l. Therefore, based on Definitions 1 and 2, w_li^α is maximal. Because Ĥ is fixed, P(W, Ĥ) is maximized.
Theorem 2 – Let h_l ∈ X be the initial center of a cluster for 1 ≤ l ≤ k. h_l is replaced by x_i as the Approximate Modal Haplotype if and only if the replacement maximizes P(Á), as in Eq. (6).
Proof – Let D = {D_1, D_2, …, D_k} be a set of dominant weighting values. For any maximum value of w_li^α, as proved by Theorem 1, we assign an optimum value of 1.0 as the dominant weighting value, and 0.5 otherwise, as described in Eq. (9) subject to Eqs. (9a), (9b), and (9c). We write Á_li = w_li^α D_li. Because w_li^α and D_li are non-negative, the product w_li^α D_li must be maximal. It follows that the sum of all quantities, Σ_{l=1}^{k} Σ_{i=1}^{n} Á_li, is also maximal. Hence the result follows.

Y-STR Datasets
The Y-STR data were mostly obtained from a database called worldfamilies.net [30]. The first, second, and third datasets represent Y-STR data for haplogroup applications, whereas the fourth, fifth, and sixth datasets represent Y-STR data for Y-surname applications. All datasets were filtered for standardization on 25 common attributes (25 markers). The chosen markers are DYS393, DYS390, DYS19 (394), DYS391, DYS385a, DYS385b, DYS426, DYS388, DYS439, DYS389I, DYS392, DYS389II, DYS458, DYS459a, DYS459b, DYS455, DYS454, DYS447, DYS437, DYS448, DYS449, DYS464a, DYS464b, DYS464c, and DYS464d. These markers are more than sufficient for determining a genetic connection between two people: according to Fitzpatrick [31], 12 markers (the Y-DNA12 test) are already sufficient to determine who does or does not have a relationship to the core group of a family.
All datasets were retrieved from the respective websites in April 2010, and can be described as follows:
1) The first dataset consists of 751 objects of the Y-STR haplogroup belonging to the Ireland yDNA project [32]. The data contain only 5 haplogroups, namely E (24), G (20), L (200), J (32), and R (475). Thus, k = 5.
2) The second dataset consists of 267 objects of the Y-STR haplogroup obtained from the Finland DNA Project [33]. The data are composed of only 4 haplogroups: L (92), J (6), N (141), and R (28). Thus, k = 4.
3) The third dataset consists of 263 objects obtained from the Y-haplogroup project [34]. The data contain Groups G (37), N (68), and T (158). Thus, k = 3.
4) The fourth dataset consists of 236 objects combining four surnames: Donald [35], Flannery [36], Mumma [37], and William [38]. Thus, k = 4.
5) The fifth dataset consists of 112 objects belonging to the Philips DNA Project [39]. The data consist of eight family groups: Group 2 (30), Group 4 (8), Group 5 (10), Group 8 (18), Group 10 (17), Group 16 (10), Group 17 (12), and Group 29 (7). Thus, k = 8.
6) The sixth dataset consists of 112 objects belonging to the Brown Surname Project [40]. The data consist of 14 family groups: Group 2 (9), Group 10 (17), Group 15 (6), Group 18 (6), Group 20 (7), Group 23 (8), Group 26 (8), Group 28 (8), Group 34 (7), Group 44 (6), Group 35 (7), Group 46 (7), Group 49 (10), and Group 91 (6). Thus, k = 14.
The values in parentheses indicate the number of objects belonging to that particular group. Datasets 1–3 represent Y-STR haplogroups and datasets 4–6 represent Y-STR surnames.
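Before turning to the results, the whole procedure (Eqs. 5–10) can be condensed into a short runnable sketch. This is our own illustration, not the authors' implementation: the four-marker toy haplotypes, the choice α = 1.5, and the tie-handling conventions for equal distances and equal memberships are assumptions made for the example.

```python
import random

def d_ystr(x, h):
    """Eq. (5): number of mismatching allele values between object x and haplotype h."""
    return sum(a != b for a, b in zip(x, h))

def cost(X, H, alpha=1.5):
    """Eq. (7): P = sum over objects/clusters of w^alpha * D, where w is the fuzzy
    membership (Eq. 8) and D the dominant weight of 1.0 or 0.5 (Eq. 9)."""
    total = 0.0
    for x in X:
        d = [d_ystr(x, h) for h in H]
        if 0 in d:  # object coincides with a centroid; split membership over ties
            zeros = d.count(0)
            w = [1.0 / zeros if dl == 0 else 0.0 for dl in d]
        else:
            w = [1.0 / sum((d[l] / dz) ** (1.0 / (alpha - 1)) for dz in d)
                 for l in range(len(H))]
        wmax = max(w)
        total += sum((wl ** alpha) * (1.0 if wl == wmax else 0.5) for wl in w)
    return total

def k_amh(X, k, alpha=1.5, seed=0):
    """Pick k random objects as initial modal haplotypes, then test every object
    as a replacement for every centroid, keeping replacements that increase the
    cost (Eq. 6); finally make the crisp assignment of Eq. (10)."""
    rng = random.Random(seed)
    H = rng.sample(X, k)
    best = cost(X, H, alpha)
    for l in range(k):
        for x in X:
            trial = H[:l] + [x] + H[l + 1:]
            c = cost(X, trial, alpha)
            if c > best:
                H, best = trial, c
    labels = [min(range(k), key=lambda l: d_ystr(x, H[l])) for x in X]
    return H, labels

# Toy data: two clearly separated groups of 4-marker haplotypes (values made up)
X = [(13, 24, 14, 11), (13, 24, 14, 10), (13, 24, 15, 11),
     (17, 21, 10, 8), (17, 21, 10, 9), (17, 22, 10, 8)]
H, labels = k_amh(X, k=2)
print(labels)  # the first three objects share one label, the last three the other
```

On the Y-STR datasets themselves, each object would be a tuple of 25 allele values and k would be the number of haplogroups or family groups; the greedy replacement pass mirrors lines 01–10 of the pseudo-code above.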
Results and discussion
The following results compare the performance of the k-AMH algorithm with that of eight other partitional algorithms: the k-Modes algorithm [25], k-Modes with RVF [21,22,41], k-Modes with UAVM [21], k-Modes with Hybrid 1 [21], k-Modes with Hybrid 2 [21], the Fuzzy k-Modes algorithm [24], the k-Population algorithm [23], and the New Fuzzy k-Modes algorithm [20]. Our analysis was based on the average accuracy scores obtained from 100 runs for each algorithm and dataset. During the experiments, the objects in the datasets were randomly reordered from the preceding run. The misclassification matrix proposed by Huang [25] was used to obtain the clustering accuracy scores for evaluating the performance of each algorithm. The clustering accuracy r defined by Huang [25] is given by Eq. (11):

r = (Σ_{i=1}^{k} a_i) / n,   (11)

where k is the number of clusters, a_i is the number of instances occurring in both cluster i and its corresponding haplogroup or surname, and n is the number of instances in the dataset.

Clustering performance
Table 2 shows the clustering accuracy scores for all datasets (boldface indicates the highest clustering accuracy). Based on these results, the performance of the k-AMH algorithm was very promising. Out of six datasets, our algorithm obtained the highest clustering accuracy scores for datasets 1, 2, 4, 5, and 6. In fact, the algorithm also achieved the optimal clustering accuracy for two datasets (4 and 5). However, for dataset 3, the results show that the accuracy of the k-AMH algorithm was 0.01 lower than that of the k-Population algorithm. A statistical t-test was carried out for further verification. This indicated t(101.39) = 0.65 and p = 0.51; thus, there was no significant difference at the 5% level between the accuracy scores of our k-AMH algorithm and the k-Population algorithm, meaning that both algorithms displayed an equal performance for this dataset. During the experiments, the k-AMH algorithm did not encounter any difficulties.
However, the Fuzzy k-Modes and the New Fuzzy k-Modes algorithms faced problems with datasets 1, 5, and 6. For dataset 1, the problem was caused by the extreme number of objects in Class R (475), which covered about 63% of the total objects. Further, for datasets 5 and 6, the problem was caused by many similar objects spread over a larger number of classes. In particular, both algorithms faced problem P2, caused by the initial centroid selections. Note also that the results for both algorithms were based on the diverse method, an initial centroid selection proposed by Huang [25].

For an overall comparison, Table 3 shows the results over all Y-STR datasets. It clearly indicates that the k-AMH algorithm obtained the highest accuracy score of 0.93; the closest score, 0.91, belongs to the k-Population algorithm. Furthermore, the k-AMH algorithm also recorded the best results in terms of standard deviation (0.07), the lower bound (0.93), the upper bound (0.94), and the minimum accuracy score (0.79). For further verification, a one-way ANOVA test was carried out. This indicated that the assumption of homogeneity of variance was violated; therefore, the Welch F-ratio is reported. There was a significant variance in the clustering accuracy scores among the nine algorithms, with F(8, 2230) = 378, p < 0.001, and ω² = 0.25. Thus, the Games–Howell procedure was used for a multiple comparison among the nine algorithms. Table 4 shows the result of this comparison for the k-AMH algorithm against the other eight algorithms. At the 5% level of significance, the k-AMH algorithm (M = 0.93, 95% CI [0.93, 0.94]) clearly differed from the other eight algorithms (all P values < 0.001). Thus, the k-AMH algorithm performed significantly better than the other algorithms. We now consider the time efficiency of the k-AMH algorithm.
The computational cost of the algorithm depends on the nested loop over k(n−k), where k is the number of clusters and n is the number of data objects required to obtain the cost function P(Á). The function P(Á) involves the number of attributes m in calculating the distances and the membership values for its partition matrix w_li. Thus, the overall time complexity is O(km(n−k)). However, the time efficiency of the k-AMH algorithm will not reach O(n²), because the value of k in the outer loop does not become equivalent to the value of n−k in the inner loop. (See the pseudo-code for a detailed implementation of these loops.)

A scalability test was also carried out for the k-AMH algorithm. These experiments were based on a dataset called Connect [42], which consists of 65,000 records, 42 attributes, and three classes. Two scalability tests were conducted: (a) scalability against the number of objects, with the number of clusters fixed at three, and (b) scalability against the number of clusters, with the number of objects fixed at 65,000. The tests were performed on a personal computer with an Intel® Core™ 2 Duo processor at 2.93 GHz and 2.00 GB of memory. Figures 5(a) and (b) illustrate the results of the tests. In conclusion, the runtime of the k-AMH algorithm increased linearly with the number of clusters and the number of data objects.

Our experimental results indicate that the performance of the proposed k-AMH algorithm for partitioning Y-STR data was significantly better than that of the other algorithms. Our algorithm handled all of the problems described previously and was not too sensitive to the initial centroid selection, even though the datasets contained many similar objects. Moreover, the concept of P2 that uses an object (the data itself) as the approximate center of a cluster significantly improved the overall performance of the algorithm. In fact, our algorithm is the most consistent of those tested, because the difference between its minimum and maximum scores is smallest.
The k-AMH algorithm always produced the highest minimum score for each dataset. In conclusion, the k-AMH algorithm is an efficient method for partitioning Y-STR categorical data.

Competing interests

The authors declare that they have no competing interests.

Authors' contributions

AS carried out the algorithm development and experiments. ZAB verified the algorithm and the results. MNI verified the Y-STR data and also the results. All authors read and approved the final manuscript.

Acknowledgements

This research is supported by the Fundamental Research Grant Scheme, Ministry of Higher Education Malaysia. We would like to thank RMI, UiTM for their support of this research. We extend our gratitude to the many contributors toward the completion of this paper, including Prof. Dr. Daud Mohamed, En. Azizian Mohd Sapawi, Puan Nuru'l-'Izzah Othman, Puan Ida Rosmini, and our research assistants: Syahrul, Azhari, Kamal, Hasmarina, Nurin, Soleha, Mastura, Fadzila, Suhaida, and Shukriah.

References

1. Kayser M, Kittler R, Erler A, Hedman M, Lee AC, Mohyuddin A, Mehdi SQ, Rosser Z, Stoneking M, Jobling MA, Sajantila A, Tyler-Smith C. A comprehensive survey of human Y-chromosomal microsatellites. Am J Hum Genet 2004;74(6):1183-1197. doi:10.1086/421531. PMID 15195656.
2. Perego UA, Turner A, Ekins JE, Woodward SR. The science of molecular genealogy. National Genealogical Society Quarterly 2005;93(4):245-259.
3. Perego UA. The power of DNA: Discovering lost and hidden relationships. World Library and Information Congress: 71st IFLA General Conference and Council, Oslo; 2005.
4. Hutchison LAD, Myres NM, Woodward S. Growing the family tree: The power of DNA in reconstructing family relationships. Proceedings of the First Symposium on Bioinformatics and Biotechnology (BIOT-04); 2004:42-49.
5. Dekairelle AF, Hoste B. Application of a Y-STR-pentaplex PCR (DYS19, DYS389I and II, DYS390 and DYS393) to sexual assault cases. Forensic Sci Int 2001;118:122-125. doi:10.1016/S0379-0738(00)00481-3. PMID 11311823.
6. Rolf B, Keil W, Brinkmann B, Roewer L, Fimmers R. Paternity testing using Y-STR haplotypes: Assigning a probability for paternity in cases of mutations. Int J Legal Med 2001;115:12-15.
7. Dettlaff-Kakol A, Pawlowski R. First Polish DNA "manhunt" - an application of Y-chromosome STRs. Int J Legal Med 2002;116:289-291. PMID 12376840.
8. Stix G. Traces of the distant past. Sci Am 2008;299:56-63.
9. Gerstenberger J, Hummel S, Schultes T, Häck B, Herrmann B. Reconstruction of a historical genealogy by means of STR analysis and Y-haplotyping of ancient DNA. Eur J Hum Genet 1999;7:469-477.
10. International Society of Genetic Genealogy. http://www.isogg.org.
11. The Y Chromosome Consortium. http://ycc.biosci.arizona.edu.
12. Schlecht J, Kaplan ME, Barnard K, Karafet T, Hammer MF, Merchant NC. Machine-learning approaches for classifying haplogroup from Y chromosome STR data. PLoS Comput Biol 2008;4(6):e1000093.
13. Seman A, Abu Bakar Z, Mohd Sapawi A. Centre-based clustering for Y-Short Tandem Repeats (Y-STR) as numerical and categorical data. Proc. 2010 Int. Conf. on Information Retrieval and Knowledge Management (CAMP'10); 2010:28-33. Shah Alam, Malaysia.
14. Seman A, Abu Bakar Z, Mohd Sapawi A. Centre-based hard and soft clustering approaches for Y-STR data. Journal of Genetic Genealogy 2010;6(1). Available online: http://www.jogg.info.
15. Seman A, Abu Bakar Z, Mohd Sapawi A. Attribute value weighting in k-Modes clustering for Y-Short Tandem Repeats (Y-STR) surname. Proc. of Int. Symposium on Information Technology 2010 (ITsim'10); 2010;3:1531-1536. Kuala Lumpur, Malaysia.
16. Seman A, Abu Bakar Z, Mohd Sapawi A. Hard and soft updating centroids for clustering Y-Short Tandem Repeats (Y-STR) data. Proc. 2010 IEEE Conference on Open Systems (ICOS 2010); 2010:6-11. Kuala Lumpur, Malaysia.
17. Seman A, Abu Bakar Z, Mohd Sapawi A. Modeling centre-based hard and soft clustering for Y chromosome Short Tandem Repeats (Y-STR) data. Proc. 2010 International Conference on Science and Social Research (CSSR 2010); 2010:73-78. Kuala Lumpur, Malaysia.
18. Seman A, Abu Bakar Z, Mohd Sapawi A. Centre-based hard clustering algorithm for Y-STR data. Malaysia Journal of Computing 2010;1:62-73.
19. Seman A, Abu Bakar Z, Isa MN. Evaluation of k-Mode-type algorithms for clustering Y-Short Tandem Repeats. Journal of Trends in Bioinformatics 2012;5(2):47-52. doi:10.3923/tb.2012.47.52.
20. Ng M, Jing L. A new fuzzy k-modes clustering algorithm for categorical data. International Journal of Granular Computing, Rough Sets and Intelligent Systems 2009;1(1):105-119. doi:10.1504/IJGCRSIS.2009.026727.
21. He Z, Xu X, Deng S. Attribute value weighting in k-Modes clustering. Ithaca, NY, USA: Cornell University Library, Cornell University; 2007:1-15. Available online: http://arxiv.org/abs/cs/0701013v1.
22. Ng MK, Junjie M, Joshua L, Huang Z, He Z. On the impact of dissimilarity measure in k-modes clustering algorithm. IEEE Trans Pattern Anal Mach Intell 2007;29(3):503-507. PMID 17224620.
23. Kim DW, Lee YK, Lee D, Lee KH. k-Populations algorithm for clustering categorical data. Pattern Recogn 2005;38:1131-1134. doi:10.1016/j.patcog.2004.11.017.
24. Huang Z, Ng M. A Fuzzy k-Modes algorithm for clustering categorical data. IEEE Trans Fuzzy Syst 1999;7(4):446-452. doi:10.1109/91.784206.
25. Huang Z. Extensions to the k-Means algorithm for clustering large datasets with categorical values. Data Min Knowl Discov 1998;2:283-304. doi:10.1023/A:1009769707641.
26. MacQueen JB. Some methods for classification and analysis of multivariate observations. The 5th Berkeley Symposium on Mathematical Statistics and Probability; 1967;1:281-297.
27. Ralambondrainy H. A conceptual version of the k-Means algorithm. Pattern Recogn Lett 1995;16:1147-1157. doi:10.1016/0167-8655(95)00075-R.
28. Bobrowski L, Bezdek JC. c-Means clustering with the l1 and l∞ norms. IEEE Trans Syst Man Cybern 1989;21(3):545-554.
29. Salim SZ, Ismail MA. k-Means-type algorithms: A generalized convergence theorem and characterization of local optimality. IEEE Trans Pattern Anal Mach Intell 1984;6:81-87.
30. WorldFamilies.net. http://www.worldfamilies.net.
31. Fitzpatrick C. Forensic genealogy. Fountain Valley, Cal.: Rice Book Press; 2005.
32. Ireland yDNA project. http://www.familytreedna.com/public/IrelandHeritage/.
33. Finland DNA Project. http://www.familytreedna.com/public/Finland/.
34. Y-Haplogroup project. http://www.worldfamilies.net/yhapprojects/.
35. Clan Donald Genealogy Project. http://dna-project.clan-donald-usa.org.
36. Flannery Clan. http://www.flanneryclan.ie.
37. Doug and Joan Mumma's Home Page. http://www.mumma.org.
38. Williams Genealogy. http://williams.genealogy.fm.
39. Phillips DNA Project. http://www.phillipsdnaproject.com.
40. Brown Genealogy Society. http://brownsociety.org.
41. San OM, Huynh V, Nakamori Y. An alternative extension of the k-Means algorithm for clustering categorical data. IJAMCS 2004;14(2):241-247.
42. Blake CL, Merz CJ. UCI repository of machine learning databases; 1989.

Figure 1. Artificial Example 1. An example of a higher degree of similarity between objects.

Figure 2. The dominant attributes form centroid 1 (a1, a2, c3), centroid 2 (a1, a2, c3), and centroid 3 (b1, c2, d3). In this case, there are possibilities that each cluster is formed by the dominant attributes, e.g. attributes a1, a2, and c3. This scenario of non-unique centroids would result in empty clusters; otherwise, the centroids would lead to a local minima problem and produce poorer clustering results.

Figure 3. Artificial Example 2. An example of the extreme distribution of objects in a class.

Figure 4. The extreme distribution of objects in Class A forms centroid 1 (a1, a2, a3) and centroid 2 (a1, a2, b3). In this case, the objects in Class A are equally distributed into clusters 1 and 2. Therefore, the obtained centroids are not sufficient to represent their classes.

Figure 5. Scalability testing. (a) Execution time to cluster 65,000 data into different numbers of clusters. (b) Execution time to cluster a different number of data into three clusters.
Table 1. Example of dominant objects

                Membership values     Probability of being the dominant object in the cluster
  Object        c1       c2           c1             c2
  x1            0.7      0.3          100% (1.0)     50% (0.5)
  x2            0.4      0.6          50% (0.5)      100% (1.0)
  x3            0.6      0.4          100% (1.0)     50% (0.5)
  x4            0.3      0.7          50% (0.5)      100% (1.0)

Table 2. Clustering accuracy scores for all datasets

  Algorithm            Dataset 1   Dataset 2   Dataset 3   Dataset 4   Dataset 5   Dataset 6
  k-Modes              0.70        0.79        0.84        0.84        0.74        0.62
  k-Modes-RVF          0.79        0.83        0.87        0.78        0.87        0.72
  k-Modes-UAVM         0.65        0.75        0.83        0.87        0.56        0.54
  k-Modes-Hybrid 1     0.67        0.81        0.85        0.77        0.80        0.64
  k-Modes-Hybrid 2     0.56        0.82        0.83        0.79        0.81        0.70
  Fuzzy k-Modes        0.56        0.74        0.74        0.97        0.76        0.66
  k-Population         0.80        0.90        0.97        1.00        0.97        0.84
  New Fuzzy k-Modes    0.71        0.84        0.77        1.00        0.77        0.69
  k-AMH                0.83        0.93        0.96        1.00        1.00        0.87

Table 3. Clustering accuracy scores for all Y-STR datasets

  Algorithm          N     Mean   Std. Dev.   95% CI Lower   95% CI Upper   Min    Max
  k-Mode             600   0.76   0.13        0.75           0.77           0.45   1.00
  k-Mode-RVF         600   0.81   0.11        0.80           0.82           0.56   1.00
  k-Mode-UAVM        600   0.70   0.17        0.69           0.71           0.38   1.00
  k-Mode-Hybrid 1    600   0.76   0.13        0.75           0.77           0.38   1.00
  k-Mode-Hybrid 2    600   0.75   0.14        0.74           0.76           0.45   1.00
  Fuzzy k-Mode       600   0.74   0.16        0.73           0.75           0.32   1.00
  k-Population       600   0.91   0.09        0.91           0.92           0.59   1.00
  New Fuzzy k-Mode   600   0.80   0.13        0.79           0.81           0.44   1.00
  k-AMH              600   0.93   0.07        0.93           0.94           0.79   1.00

Table 4. Multiple comparisons for the k-AMH algorithm (accuracy, Games-Howell)

  (I) Algorithm   (J) Algorithm       Mean Diff. (I-J)   Std. Error   p-value     95% CI Lower   95% CI Upper
  k-AMH           k-Mode              0.17*              0.01         < 0.00001   0.16           0.19
  k-AMH           k-Mode-RVF          0.12*              0.01         < 0.00001   0.11           0.14
  k-AMH           k-Mode-UAVM         0.23*              0.01         < 0.00001   0.21           0.25
  k-AMH           k-Mode-Hybrid 1     0.17*              0.01         < 0.00001   0.16           0.19
  k-AMH           k-Mode-Hybrid 2     0.18*              0.01         < 0.00001   0.16           0.20
  k-AMH           Fuzzy k-Mode        0.19*              0.01         < 0.00001   0.17           0.21
  k-AMH           k-Population        0.02*              0.00         0.00271     0.01           0.03
  k-AMH           New Fuzzy k-Modes   0.13*              0.01         < 0.00001   0.12           0.15

Article Categories: Research Article
Keywords: Algorithms, Bioinformatics, Clustering, Optimization, Data mining.
Hopewell, NJ Geometry Tutor

Find a Hopewell, NJ Geometry Tutor

...The LSAT is a well-designed test of logic, critical reasoning, reading in detail, and mental speed and endurance. What are your areas of weakness? We'll analyze where you're giving up points on the test and plug those leaks.
16 Subjects: including geometry, calculus, algebra 1, algebra 2

...I look forward to helping students achieve their goals! I hold the rank of shodan (1st degree black belt) in Shito-ryu karate. I am currently practicing and have been studying since 2007. I also have experience in Tae Kwon Do from when I was younger.
16 Subjects: including geometry, calculus, physics, ASVAB

I am a highly motivated, passionate math teacher who has taught in high-performing schools in four states and two countries. I have previously taught all grades from 5th to 10th and am extremely comfortable teaching all types of math to all levels of learners. I am a results-driven educator who motivates and educates in a fun, focused atmosphere.
7 Subjects: including geometry, accounting, algebra 1, algebra 2

I completed my master's in education in 2012, and having this degree has greatly impacted the way I teach. Before this degree, I earned my bachelor's in engineering but switched to teaching because this is what I do with passion. I started teaching in August 2000, and my unique educational background...
12 Subjects: including geometry, calculus, physics, algebra 2

...Together we can design a plan to help you achieve success in any areas with which you are currently having difficulty. My BA is in elementary education, with a specialization in Reading/Language Arts, but my courses included higher-level math and science. For many years, I have worked at a nati...
9 Subjects: including geometry, reading, SAT math, ACT Math
Hobart, IN Precalculus Tutor

Find a Hobart, IN Precalculus Tutor

...I have several years of experience as a teacher, which required effective public speaking for many hundreds of hours. As the manager of a business for several years, both marketing and employee management required effective public speaking. Several years of group tutoring and many years of group training have given me additional experience as a speaker.
49 Subjects: including precalculus, reading, writing, English

I earned High Honors in Molecular Biology and Biochemistry as well as an Ancient History (Classics) degree from Dartmouth College. I then went on to earn a Ph.D. in Biochemistry and Structural Biology from Cornell University's Medical College. As an undergraduate, I spent a semester studying Archeology and History in Greece.
41 Subjects: including precalculus, chemistry, physics, English

...Critical Reasoning questions vary widely, but recognizing each type and how to pre-think to eliminate answers is a critical technique I can impart on others. I scored a 34 on the ACT Reading section. The crux of my tutoring focuses on what I call the Dual Flow Approach, that is, maintaining harmony between the two key tasks that encompass the section - Reading and Thinking.
17 Subjects: including precalculus, geometry, algebra 2, SAT math

I have spent in excess of 30 years in the chemical and environmental industry as an industrial trainer, research engineer, supervisor, and manager. I have authored technical articles and made numerous presentations to both technical and public audiences. I believe that in order for anyone to unders...
13 Subjects: including precalculus, chemistry, calculus, physics

I am a petroleum engineer, and I arrived in the U.S. four years ago. I have an MBA (Valparaiso University), and I am pursuing a master's degree in Mathematics (Purdue University Calumet). I love teaching math and science courses (physics and chemistry). I worked professionally as a petroleum engineer...
8 Subjects: including precalculus, Spanish, geometry, trigonometry
Optimal investment and hedging under partial and inside information

Monoyios, Michael (2009) Optimal investment and hedging under partial and inside information. In: Radon Series on Computational and Applied Mathematics. De Gruyter, Berlin. (In Press)

This article concerns optimal investment and hedging for agents who must use trading strategies which are adapted to the filtration generated by asset prices, possibly augmented with some inside information related to the future evolution of an asset price. The price evolution and observations are taken to be continuous, so the partial (and, when applicable, inside) information scenario is characterised by asset price processes with an unknown drift parameter, which is to be filtered from price observations. We first give an exposition of filtering theory, leading to the Kalman-Bucy filter. We outline the dual approach to portfolio optimisation, which is then applied to the Merton optimal investment problem when the agent does not know the drift parameter of the underlying stock. This is taken to be a random variable with a Gaussian prior distribution, which is updated via the Kalman filter. This results in a model with a stochastic drift process adapted to the observation filtration, which can be treated as a full information problem, and an explicit solution to the optimal investment problem is possible. We also consider the same problem when the agent has, in addition, some noisy inside knowledge of the future evolution of the asset price. By combining enlargement of filtration, to accommodate the insider's additional knowledge, with filtering of the asset price drift, we are again able to obtain an explicit solution. Finally, we treat an incomplete market hedging problem. A claim on a non-traded asset is hedged using a correlated traded asset. We summarise the full information case, then treat the partial information scenario in which the hedger is uncertain of the true values of the asset price drifts.
After filtering, the resulting problem with random drifts is solved in the case that each asset's prior distribution has the same variance, resulting in analytic approximations for the optimal hedging strategy.
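The filtering step described in the abstract, a Gaussian prior on an unknown drift updated from price observations, can be sketched in discrete time with a simple conjugate Gaussian update (the discrete analogue of the Kalman-Bucy filter). All numerical values here are illustrative and are not taken from the paper:

```python
import math
import random

random.seed(7)

# Illustrative parameters: true (unknown to the agent) drift mu, volatility sigma
mu_true, sigma, dt, n_steps = 0.08, 0.2, 1 / 252, 252 * 20

# Gaussian prior on the unknown drift
m, v = 0.0, 1.0  # prior mean and variance

for _ in range(n_steps):
    # Observed log-return: r = mu*dt + sigma*sqrt(dt)*noise
    r = mu_true * dt + sigma * math.sqrt(dt) * random.gauss(0, 1)
    # Conjugate Gaussian update: precisions add, the mean is a precision-weighted average
    v_post = 1.0 / (1.0 / v + dt / sigma**2)
    m = v_post * (m / v + r / sigma**2)
    v = v_post

print(f"posterior mean {m:.3f}, posterior sd {math.sqrt(v):.3f}")
```

After twenty years of daily observations, the posterior variance has shrunk deterministically to 1/(1 + T/sigma^2) with T = 20, while the posterior mean concentrates around the true drift, illustrating why drift uncertainty resolves only slowly relative to volatility.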
Harris-Stowe State University
(314) 340-3366

Name: Dr. Lateef Adelani
Position: Professor of Mathematics
Department: Arts and Sciences
E-mail: AdelaniL@hssu.edu
Phone: (314) 340-3349
Office: 317 HGAB

Degrees Held:
Doctor of Philosophy (Applied Mathematics), Washington University in St. Louis
Master of Science (Operations Research), Stanford University
Master of Science, Stanford University, California
Master of Science (Systems Science and Mathematics), Washington University, St. Louis
Bachelor of Science, University of Ibadan, Ibadan, Nigeria

Courses: Probability & Statistics, Linear Algebra, Functions of Complex Variables, Topology, Modern Algebra, Differential Equations, Number Theory, Mathematical Modeling, Continuous Probability Theory, Structures of Mathematical Systems, Calculus and Analytic Geometry, Plane Geometry, College Algebra

Publications:
Adelani, L.A.; and Rodin, E.Y.; "Optimal Age of Harvest", International Journal of Mathematical and Computer Modeling, Vol. 17, No. 3, 1993.
Adelani, L.A.; and Rodin, E.Y.; "Optimal Harvesting of Renewable Economic Resources in a Model with Bertalanffy Growth Law I", Appl. Math Letters, Vol. 1, No. 6, 1989.
Adelani, L.A.; and Rodin, E.Y.; "Optimal Harvesting of Renewable Economic Resources in a Model with Bertalanffy Law II", Appl. Math Letters, Vol. 2, No. 2, 1989.
Adelani, L.A.; and Rodin, E.Y.; "Optimal Management of Renewable Economic Resources in a Model with Bertalanffy Growth Law", International Journal of Mathematical Modeling, Vol. 12, No. 7, 1989.
Adelani, L.A.; and Rodin, E.Y.; "A Simple Nonlinear Model for the Exploitation of Renewable Economic Resources", International Journal of Computers and Mathematics with Applications, Vol. 18, No. 5, 1989.
Adelani, L.A.; and Behle, J.H.; "A Shortcut for Finding the Product of Radicals Extended", Mathematics Teacher, Vol. 82, No. 8, 1989.
Adelani, L.A.; "Sums of Roots", The College Mathematics Journal, Vol. 21, No. 4, 1990.
Adelani, L.A.; and Flynn, L.E.; "Average Number of Children in a Family", The College Mathematics Journal, Vol. 21, No. 4, 1990.
Adelani, L.A.; "An Increasing Sequence", The College Mathematics Journal, Vol. 21, No. 4, 1990.

Presentations:
Adelani, L.A.; and Behle, J.H.; "On Classes of Odd Abundant Numbers", presented at the Spring Meeting of the Missouri Section of the MAA, April 9, 1999.
Adelani, L.A.; "Determining the Awareness of the Simultaneous Agenda Among Pre-Service Teachers", an inquiry project, Institute for Educational Renewal, Seattle, WA, 1996.

Awards: Mr. Harris-Stowe, 1986
Welcome to the Webpage of Jens Hainmueller

Kernel Regularized Least Squares (KRLS), as described in Hainmueller and Hazlett (2013), is a machine learning method that can flexibly fit solution surfaces of the form y = f(X) that arise in regression or classification problems, without relying on linearity or other assumptions that use the columns of the predictor matrix X directly as basis functions (such as additivity). KRLS finds the best-fitting function by minimizing a Tikhonov regularization problem with a square loss, using Gaussian kernels as radial basis functions. KRLS is currently available for R and Stata. Feedback from users is appreciated.

KRLS for R

You can obtain the KRLS package for R from CRAN by typing:

install.packages("KRLS")

Source: http://cran.r-project.org/web/packages/KRLS/

KRLS for Stata

You can obtain the krls package for Stata from SSC by typing:

ssc install krls, all replace

Ferwerda, Hainmueller, and Hazlett (2013) describes the Stata package in detail and provides empirical illustrations.
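As a rough sketch of the problem KRLS solves (Tikhonov-regularized square loss with Gaussian kernel basis functions, which admits the closed-form solution c = (K + λI)^(-1) y), here is a minimal from-scratch NumPy version. This is not the KRLS package's actual implementation; the default bandwidth and the toy data are assumptions made here for illustration:

```python
import numpy as np

def gaussian_kernel(X, Z, bandwidth):
    # Pairwise Gaussian kernel matrix: K_ij = exp(-||x_i - z_j||^2 / bandwidth)
    sq = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-sq / bandwidth)

def krls_fit(X, y, lam=0.1, bandwidth=None):
    """Closed-form Tikhonov solution of the kernel least-squares problem."""
    if bandwidth is None:
        bandwidth = X.shape[1]  # illustrative default: number of predictors
    K = gaussian_kernel(X, X, bandwidth)
    c = np.linalg.solve(K + lam * np.eye(len(y)), y)  # c = (K + lam I)^{-1} y
    return c, bandwidth

def krls_predict(X_train, c, bandwidth, X_new):
    # f(x) = sum_i c_i * k(x, x_i)
    return gaussian_kernel(X_new, X_train, bandwidth) @ c

# Toy nonlinear regression: y = sin(x) + noise
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)

c, bw = krls_fit(X, y, lam=0.1)
grid = np.linspace(-3, 3, 7).reshape(-1, 1)
pred = krls_predict(X, c, bw, grid)
print(np.round(pred, 2))
```

The fitted surface tracks sin(x) closely without any linearity assumption, which is the point of the method.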
vectors and differentiation

January 8th 2010, 05:29 AM #1

The vector a depends on a parameter t, i.e. $a=a(t)=a_x(t)i +a_y(t)j +a_z(t)k$. It satisfies the equation $da/dt= j \times a$ (vector product). Show that $d^2a_x/dt^2 =-a_x$, $da_y/dt=0$ and $d^2a_z/dt^2 =-a_z$. For the vector a, find its value for $t=\pi$ if at $t=0$, $a(0)=i+j$ and $da/dt(0)=0k$.

I have absolutely no idea how to start...

Last edited by Emma L; January 9th 2010 at 02:14 PM.

Can you use latex to restate your problem?

Hope that's a bit clearer - yes sorry, it's the cross product.

So the problem is that you are given $\vec{a}$ such that $\frac{d\vec{a}}{dt}= \vec{j}\times \vec{a}$ and you are asked to show that, in that case, $\frac{d^2a_x}{dt^2}= -a_x$, $\frac{da_y}{dt}= 0$, and $\frac{d^2a_z}{dt^2}= -a_z$. Okay, go ahead and calculate $\vec{j}\times \vec{a}$. That's easy: it is just $a_z\vec{i}- a_x\vec{k}$. Setting $\frac{d\vec{a}}{dt}= \frac{da_x}{dt}\vec{i}+ \frac{da_y}{dt}\vec{j}+ \frac{da_z}{dt}\vec{k}$ equal to that gives you three equations: $\frac{da_x}{dt}= a_z$, $\frac{da_y}{dt}= 0$, and $\frac{da_z}{dt}= -a_x$. Differentiating the first of those with respect to t gives $\frac{d^2 a_x}{dt^2}= \frac{da_z}{dt}= -a_x$. Get the point? I'll leave the others to you now. For the last part you need to solve those equations. $\frac{da_y}{dt}= 0$ is easy: $a_y$ is a constant, and the initial condition tells us that that constant is 1. To solve the other two, use the fact, which you have now shown, that $\frac{d^2a_x}{dt^2}= -a_x$ and $\frac{d^2a_z}{dt^2}= -a_z$.
Solve those differential equations using the initial values $a_x(0)= 1$, $a_x'(0)= 0$, $a_z(0)= 0$, and $a_z'(0)= -1$ (the last follows from $\frac{da_z}{dt}= -a_x$ evaluated at $t=0$).
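For completeness, carrying out that last step (this is the computation the reply leaves to the reader, written out here as a sketch):

```latex
a_y(t) \equiv 1, \qquad a_x(t) = \cos t, \qquad a_z(t) = \frac{da_x}{dt} = -\sin t
\qquad\Longrightarrow\qquad
\vec{a}(\pi) = \cos\pi\,\vec{i} + \vec{j} - \sin\pi\,\vec{k} = -\vec{i} + \vec{j}.
```

One can check directly that $\vec{a}(t) = (\cos t,\, 1,\, -\sin t)$ satisfies $\frac{d\vec{a}}{dt} = (-\sin t,\, 0,\, -\cos t) = \vec{j}\times\vec{a}$.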
Percent Greater Than vs. Increased Date: 11/22/2002 at 15:20:11 From: Melissa Holmes Subject: Percents What is the difference between the following statements: My profits are 200% bigger than they were last year. My profits from last year have increased 200%. This is one of the questions we have to answer in my Middle school methods course and I have looked everywhere for the answer. I hope you can help. Thank you. Date: 11/23/2002 at 21:05:23 From: Doctor Peterson Subject: Re: Percents Hi, Melissa. As far as I can see, they mean the same thing; in fact, both are similarly ambiguous. Taken literally, "200% bigger" (or, more formally, larger or greater) and "increased 200%" (or, more completely, increased _by_ 200%) both mean that the increase from one year to the next is 200% of the first year's value, so that the second year's profit is 3 times the first. But both statements are more likely to have been made with the intention of saying that this year's profit is twice last years. English is not very clear in cases like this. Here is a related discussion in our archive, which deals mostly with "two times greater" but mentions your case: Larger Than and As Large As Since writing that, I found a good reference on "two times greater," although it doesn't mention your "200% greater." It is in Merriam Webster's _Dictionary of English Usage_, which under "times" writes The argument in this case is that _times more_ (or _times larger_, _times stronger_, _times brighter_, etc.) is ambiguous, so that "He has five times more money than you" can be misunderstood as meaning "He has six times as much money as you." It is, in fact, possible to misunderstand _times more_ in this way, but it takes a good deal of effort. If you have $100, five times that is $500, which means that "five times more than $100" can mean (the commentators claim) "$500 more than $100," which equals "$600," which equals "six times as much as $100." 
The commentators regard this as a serious ambiguity, and they advise you to avoid it by always saying "times as much" instead of "times more." Here again, it seems that they are paying homage to mathematics at the expense of language. The fact is that "five times more" and "five times as much" are idiomatic phrases which have - and are understood to have - exactly the same meaning. The "ambiguity" of _times more_ is imaginary: in the world of actual speech and writing, the meaning of _times more_ is clear and unequivocal. It is an idiom that has existed in our language for more than four centuries, and there is no real reason to avoid its use. I think the same applies to "X percent bigger" and "increased [by] X%." There is just enough ambiguity in a technical context that I would want to ask what was intended before assuming anything, but there is no reason to say that they definitely mean different things, or mean something different than "X percent of" or "increased to X percent." I myself would avoid saying these things, just because there are enough people who have heard that they are ambiguous, and would therefore take them the wrong way (whichever that is!). If you have any further questions, feel free to write back. I'd be interested to hear what the "correct" answer to this question is supposed to be. - Doctor Peterson, The Math Forum
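In numbers, the two readings of "my profits are 200% bigger" diverge like this (a small arithmetic sketch of the ambiguity discussed above):

```python
base = 100.0  # last year's profit

# Literal reading: "200% bigger" / "increased by 200%" means base plus 200% of base
literal = base + 2.00 * base      # 300.0, i.e. three times last year

# Colloquial reading often intended: this year is "200% of" last year
colloquial = 2.00 * base          # 200.0, i.e. twice last year

print(literal, colloquial)  # 300.0 200.0
```

The gap between the two readings is exactly one multiple of the base, which is why "X times as much" versus "X times more" trips people up.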
Re: st: Questions for random data generation and value label

From: "Joseph Coveney" <stajc2@gmail.com>
To: <statalist@hsphsun2.harvard.edu>
Subject: Re: st: Questions for random data generation and value label
Date: Tue, 12 Mar 2013 10:28:27 +0900

Mark Yu Xue wrote:

Let me use an example to describe my question more clearly. There is an actual dataset that has three variables: Var1, Var2, Var3. Each of them has continuous numeric values. I get the max, min, SD, and mean for each of them, save them in several macros, and then clear the memory. Then, I want to generate a synthetic dataset, which also includes three variables: SynVar1, SynVar2, SynVar3. They keep the same max, min, SD, and mean as Var1, Var2, Var3, respectively, in the actual data.
If you have the actual data available, then you can try fitting a Johnson distribution to each variable (with one of the user-written commands -jnsn- or -jnsw-), and then generate the artificial dataset from the parameters of the Johnson distribution (using the user-written command -ajv-). All three user-written commands are in the same package, "JNSN", which you can download from SSC. Type -findit jnsn- to see more.

These commands will not get you the exact-same mean, SD, minimum and maximum of the original variable each time, but Johnson distributions have been considered useful in creating artificial data following the same arbitrary (unknown) distribution of actual data of interest, for example, in order to characterize the behavior of candidate estimators or tests.

The commands' help files might be a little busy-looking your first time through them, but the commands' use together is rather simple, with just two required lines of code: first either -jnsn- or -jnsw-, and then -ajv- using the returned scalars and macros of the first command. I've illustrated their use in a simple example below.

Joseph Coveney

. sysuse auto
(1978 Automobile Data)

. jnsn mpg

Johnson's system of transformations

Mean and moments for mpg
    Mean     = 21.297
    Variance = 33.472
    Skewness =  0.949
    Kurtosis =  3.975

Johnson distribution type: SB
    gamma  =  2.248
    delta  =  1.541
    xi     =  9.616
    lambda = 56.418

Note: Program terminated normally

. return list

    r(lambda) = 56.41802121562024
    r(xi)     = 9.615504048256971
    r(delta)  = 1.54090335776377
    r(gamma)  = 2.247612125156365
    r(fault)        : "Program terminated normally"
    r(johnson_type) : "SB"

. ajv , distribution(`r(johnson_type)') generate(fake_mpg) lambda(`r(lambda)') xi(`r(xi)') gamma(`r(gamma)') delta(`r(delta)') seed(12345) n(100)

. summarize mpg fake_mpg

    Variable |  Obs        Mean   Std. Dev.        Min        Max
         mpg |   74     21.2973   5.785503          12         41
    fake_mpg |  100    20.84794   5.561717   12.62255   37.59033
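For readers outside Stata, the generate-from-fitted-parameters step can be sketched in plain Python by applying the Johnson SB transform directly: if z is standard normal, then x = xi + lambda * u with u = 1/(1 + exp(-(z - gamma)/delta)) follows the SB distribution. The parameter values below are the ones printed by -jnsn- above; the mapping of those parameters onto this transform is my assumption about the SB convention in use:

```python
import math
import random

# Johnson SB parameters as fitted by -jnsn- in the example above
gamma, delta, xi, lam = 2.248, 1.541, 9.616, 56.418

def johnson_sb_sample(rng):
    """Draw one value from the fitted Johnson SB distribution."""
    z = rng.gauss(0, 1)
    u = 1.0 / (1.0 + math.exp(-(z - gamma) / delta))
    return xi + lam * u  # support is the interval (xi, xi + lam)

rng = random.Random(12345)
fake_mpg = [johnson_sb_sample(rng) for _ in range(100_000)]

mean = sum(fake_mpg) / len(fake_mpg)
var = sum((x - mean) ** 2 for x in fake_mpg) / (len(fake_mpg) - 1)
print(round(mean, 2), round(math.sqrt(var), 2))  # close to mpg's mean (~21) and SD (~5.5-5.8)
```

As in the Stata run, the synthetic draws land near the original moments without matching them exactly, since only the fitted distribution, not the sample statistics, is being reproduced.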
On cyclically overlap-free words in binary alphabets, The Book of L - HANDBOOK OF FORMAL LANGUAGES, 1997

Invited Lecture at the 4th Conference on Formal Power Series and Algebraic Combinatorics, 1992. Cited by 22 (3 self).
"The purpose of this survey is to present, in contemporary terminology, the fundamental contributions of Axel Thue to the study of combinatorial properties of sequences of symbols, insofar as repetitions are concerned. The present state of the art is also sketched."

1995. Cited by 4 (0 self).
"It is shown that 2+-repetition, i.e. a word of the form uvuvu where u is a letter and v is a word, is the smallest repetition which can be avoided in infinite words over a binary alphabet. Such binary words avoiding the pattern uvuvu, finite or infinite, are called 2+-free words, and those words are the main topic of this work. It is shown here that 2+-free words over a binary alphabet can be presented as words built from a special kind of blocks, called Morse-blocks, with some rules. In particular, the given presentation by these blocks is unique for 2+-free words long enough. Moreover, it is also shown that the language generated by this presentation can be described by some automaton. In fact, the corresponding presentation in blocks for finite 2"

J. Combin. Theory Ser. A. Cited by 1 (1 self).
"The border correlation function β: A* → A*, for A = {a, b}, specifies which conjugates (cyclic shifts) of a given word w of length n are bordered; in other words, β(w) = c0c1...cn−1, where ci = a or b according to whether the i-th cyclic shift σ^i(w) of w is unbordered or bordered. Except for some special cases, no binary word w has two consecutive unbordered conjugates (σ^i(w) and σ^(i+1)(w)). We show that this is optimal: in every cyclically overlap-free word every other conjugate is unbordered. We also study the relationship between unbordered conjugates and critical points, as well as the dynamic system given by iterating the function β. We prove that, for each word w of length n, the sequence w, β(w), β^2(w), ... terminates either in b^n or in the cycle of conjugates of the word ab^k ab^(k+1) for n = 2k + 3."
Mastering Physics Help Site

Wavelengths of Matter Ranking Task

The following objects move at the speed v. Rank these objects on the basis of their wavelength. Rank from largest to smallest. To rank items as equivalent, overlap them.

1: Red light - v = c
2: Photon - v = 0.01c
3: Electron - v = 0.01c
4: Baseball - v = 41 m/s
5: Car - v = 27 m/s
6: Person - v = 4.5 m/s

Problem 34.31

A proton and an electron have the same de Broglie wavelength. How do their speeds compare, assuming both are much less than that of light?

Problem 34.54

Part A: Find the energy of the highest-energy photon that can be emitted as the electron jumps between two adjacent allowed energy levels in the Bohr hydrogen atom.
Part B: Which energy levels are involved? Enter your answers numerically separated by a comma.

Problem 34.36

An electron is moving in the +x direction with speed measured at 2.5×10^7 m/s, to an accuracy of +/- 10%. What is the minimum uncertainty in its position? Express your answer using two significant figures.

Problem 34.32

Part A: Find the de Broglie wavelength of electrons with kinetic energies of 10 . Express your answer using two significant figures.
Part B: Find the de Broglie wavelength of electrons with kinetic energies of 3.0 . Express your answer using two significant figures.
Part C: Find the de Broglie wavelength of electrons with kinetic energies of 40 . Express your answer using two significant figures.

Problem 34.46

The stopping potential in a photoelectric experiment is 1.9 V when the illuminating radiation has wavelength 365 nm.
Part A: What is the work function of the emitting surface? Express your answer using two significant figures.
Part B: What would be the stopping potential for 280-nm radiation? Express your answer using two significant figures.

Problem 34.58

A hydrogen atom is in its ground state when its electron absorbs 44 eV in an interaction with a photon. What is the energy of the resulting free electron?

Problem 35.34

An electron is trapped in an infinite square well 30 nm wide. Find the wavelengths of the photons emitted in these transitions:
Part A: n = 2 to n = 1. Express your answer using two significant figures.
Part B: n = 20 to n = 19. Express your answer using two significant figures.
Part C: n = 100 to n = 1. Express your answer using two significant figures.
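Several of these problems turn on the de Broglie relation λ = h/p = h/(mv). As a rough check on the ranking task, the Python sketch below (my own illustration, not part of Mastering Physics; the baseball mass of 0.145 kg is an assumed value) computes λ for two of the listed objects:

```python
# de Broglie wavelength: lambda = h / (m * v)
H = 6.626e-34    # Planck's constant, J*s
M_E = 9.109e-31  # electron mass, kg
C = 3.0e8        # speed of light, m/s

def de_broglie(mass_kg, speed_m_s):
    """Wavelength in metres of a massive particle moving at speed v."""
    return H / (mass_kg * speed_m_s)

# Electron at v = 0.01c (item 3 in the ranking task)
lam_electron = de_broglie(M_E, 0.01 * C)

# Baseball (assumed mass 0.145 kg) at 41 m/s (item 4)
lam_baseball = de_broglie(0.145, 41.0)

print(f"electron: {lam_electron:.2e} m")  # ~2.4e-10 m, about an atomic diameter
print(f"baseball: {lam_baseball:.2e} m")  # ~1.1e-34 m, far too small to observe
```

The same relation answers Problem 34.31 directly: equal wavelengths mean m_p v_p = m_e v_e, so the electron moves faster than the proton by the mass ratio m_p/m_e ≈ 1836.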
Section 4.5, NOAA Polar Orbiter Data User's Guide

4.5 Calibration of TOVS Data

TOVS thermal data values (HIRS/2 Channels 1-19, SSU Channels 1-3, and MSU Channels 1-4) may be converted to brightness temperatures, and TOVS visible data values (HIRS/2 Channel 20) may be converted to percent albedo, by the following calibration procedures. (Note: see Appendix M for an improved method of calibrating the HIRS data.) The format and order of the calibration coefficients is described in Sections 4.1.2.1, 4.2.2.1, and 4.3.2.1 for HIRS/2, SSU, and MSU data, respectively. Once the calibration coefficients have been extracted, they must be scaled and normalized, in that order. The scale factors for the coefficients, from lowest to highest order, are 2^22, 2^30, 2^44, and 2^56. (The 0th order term is a constant, in this case the intercept value, and has a scale factor of 2^22. Similarly, the 3rd order term has a scale factor of 2^56.) To scale the raw calibration values, they must be divided by their respective scale factor. HIRS/2 users should refer to the adjustments necessary to obtain correct intercept values in Section 4.1.2.1. Once the coefficients have been scaled, the raw data (in counts) should be normalized, or corrected for non-linearity, by using the normalization coefficients which are supplied with the calibration coefficients. The equation for the normalized count value C[i]' is as follows:

    C[i]' = L[i,0] + L[i,1] C[i] + L[i,2] C[i]^2 + L[i,3] C[i]^3    (4.5-1)

where L is the normalization coefficient, C is the raw data in counts, subscript i indicates the channel, and the subscripts 0, 1, 2, and 3 indicate the order of the normalization coefficient. This is a generalized equation, since the HIRS/2 calibration coefficients do not contain a 3rd order normalization coefficient (i.e., drop the L[i,3] C^3 term in Equation 4.5-1). When the condition of L[i,0] = 0, L[i,1] = 1, L[i,2] = 0, and L[i,3] = 0 is met, then C[i]' = C[i].
This means that channel i is linear and no non-linearity correction is necessary. At this time, the normalization coefficients for HIRS/2 and SSU data have this condition. The scaled calibration coefficients and normalized data may now be used as described below.

4.5.1 Thermal Channel Calibration

The scaled thermal channel zero order coefficients (intercept) are in units of mW/(m^2-sr-cm^-1), the 1st order coefficients (slope) are in units of mW/(m^2-sr-cm^-1) per count, etc. The radiance measured by the sensor (Channel i) is computed as a function of the input data value as follows:

    E[i] = A[i,0] + A[i,1] C[i]' + A[i,2] C[i]'^2    (4.5.1-1)

where E[i] is the radiance value, in mW/(m^2-sr-cm^-1), C[i]' is the normalized count value (computed from Equation 4.5-1), A is the calibration coefficient (auto or manual), subscript i indicates the channel, and subscripts 0, 1, and 2 indicate the order of the calibration coefficients. The A[i,2] C[i]'^2 term in Equation 4.5.1-1 should be dropped for SSU and MSU data. For the SSU and MSU data, the conversion to "brightness" temperature from energy is performed using the inverse of Planck's radiation equation (which is Equation 3.3.1-2 in Section 3.3.1). The same values should be used for the constants C[1] and C[2], and the central wave number values can be found in Section 1.4 (see the corresponding subsection for the desired satellite). For the conversion to "brightness" temperatures for the HIRS/2 data, the same procedure is followed as with the MSU and SSU data, except that a band correction algorithm must be applied to the results of the inverse of Planck's equation. The inverse of Planck's equation actually produces an apparent brightness temperature, T^*, which is corrected using the following equation:

    T = b + c T^*

where T is the corrected brightness temperature, and b and c are the band correction coefficients, which are supplied in Section 1.4 (see the corresponding subsection for the desired satellite).
4.5.2 Visible Channel Calibration

The scaled visible channel calibration values are in units of percent albedo for the zero order term (intercept), percent albedo/count for the 1st order term (slope), etc. The only visible channel for the TOVS is the HIRS/2 Channel 20, so the equation to compute the percent albedo, B, is as follows:

    B = A[20,0] + A[20,1] C[20]' + A[20,2] C[20]'^2    (4.5.2-1)

where A is the calibration coefficient (auto or manual), C[20]' is the normalized count value for Channel 20, subscript 20 indicates Channel 20, and subscripts 0, 1, and 2 indicate the order of the calibration coefficients. At this time, the second order term (A[20,2] C[20]'^2) in Equation 4.5.2-1 can be dropped, since A[20,2] is usually 0.
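As a sketch of the whole thermal pipeline, the following Python code applies the scaling, normalization, and radiance steps described above, then inverts Planck's equation to obtain a brightness temperature. The radiation constants and the coefficient values here are illustrative assumptions, not values taken from this guide's appendices; consult Sections 1.4 and 3.3.1 for the official numbers:

```python
import math

# Illustrative radiation constants (see Section 3.3.1 of the guide
# for the official values and units).
C1 = 1.1910659e-5  # mW/(m^2 sr cm^-4), assumed value
C2 = 1.438833      # cm K, assumed value

SCALE = [2**22, 2**30, 2**44, 2**56]  # scale factors, 0th..3rd order

def scale_coeffs(raw):
    """Divide raw calibration values by their order's scale factor."""
    return [r / s for r, s in zip(raw, SCALE)]

def normalize(count, L):
    """Equation 4.5-1: C' = L0 + L1*C + L2*C^2 + L3*C^3."""
    return L[0] + L[1]*count + L[2]*count**2 + L[3]*count**3

def radiance(c_norm, A):
    """Equation 4.5.1-1: E = A0 + A1*C' + A2*C'^2."""
    return A[0] + A[1]*c_norm + A[2]*c_norm**2

def planck(T, nu):
    """Forward Planck radiance at wavenumber nu (cm^-1); used as a self-check."""
    return C1 * nu**3 / math.expm1(C2 * nu / T)

def brightness_temp(E, nu):
    """Inverse of the Planck equation above."""
    return C2 * nu / math.log(1.0 + C1 * nu**3 / E)

# Hypothetical linear channel: L = (0, 1, 0, 0), so C' = C (see text).
L = (0.0, 1.0, 0.0, 0.0)
A = (160.0, -0.2, 0.0)  # made-up intercept/slope, for illustration only
count = 400
E = radiance(normalize(count, L), A)  # 160 - 0.2*400 = 80
print(brightness_temp(E, 700.0))      # apparent brightness temp at 700 cm^-1
```

For an HIRS/2 channel one would finish by applying the band correction with that channel's b and c coefficients; for SSU and MSU the inverse Planck value is already the brightness temperature.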
estimates and those around requirement estimates—into a single value that reflects the uncertainty around the prevalence estimate.

Variability Associated with the Collection of Intake Data

Other characteristics of dietary studies complicate the matter even further. Dietary intake data suffer from inaccuracies due to underreporting of food, incorrect specification of portion sizes, incomplete or imprecise food composition tables, etc. These factors may have a compound effect on prevalence estimates. In addition, systematic errors in measurement (such as energy underreporting) may increase the bias of the prevalence estimate. All of these factors have an effect on how precisely (or imprecisely) the prevalence of nutrient adequacy in a group can be estimated, and it is difficult to quantify their effect with confidence. The software developed at Iowa State University (called SIDE) (Dodd, 1996) to estimate usual intake distributions also produces prevalence estimates using the cut-point method and provides an estimate of the standard deviation associated with the prevalence estimate. However, it is important to remember that the standard deviations produced by the program are almost certainly an underestimate of the true standard deviations because they do not consider variability associated with the EAR or with the collection of intake data. Why should standard deviations be a concern? Standard deviations of prevalence estimates are needed to determine, for example, whether a prevalence estimate differs from zero or any other target value, or to compare two prevalence estimates. The evaluation of differences in intakes requires the estimation of standard deviations of quantities such as prevalence of nutrient inadequacy or excess (e.g., Application 3 in Chapter 7). As another example, suppose that prevalence of inadequate intake of a nutrient in a group was measured at one point in time as 45 percent.
An intervention is applied to the group and then a new estimate of the prevalence of inadequate intake of the nutrient is found to be 38 percent, a decrease of 7 percent. However, to accurately assess the effectiveness of the intervention, the standard deviations around the 45 and 38 percent prevalence estimates are also needed. If the standard deviations are small (e.g., 1 percent), then one could con-
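The arithmetic behind this comparison is easy to sketch. Treating the two prevalence estimates as independent, a simple z-statistic for the change is the difference divided by the standard error of the difference. This two-estimate comparison is my own illustration of the point being made here, not a feature of the SIDE software:

```python
import math

def prevalence_change_z(p1, se1, p2, se2):
    """z-statistic for the difference between two independent prevalence
    estimates (in percentage points) with standard errors se1 and se2."""
    se_diff = math.sqrt(se1**2 + se2**2)
    return (p1 - p2) / se_diff

# 45% before the intervention, 38% after, both with SE = 1 percentage point.
z = prevalence_change_z(45.0, 1.0, 38.0, 1.0)
print(round(z, 2))  # 4.95: the 7-point drop is far larger than its uncertainty
```

With small standard deviations of 1 point each, the 7-point drop gives z ≈ 4.95, clear evidence of a real change; with standard deviations of, say, 6 points each, the same drop gives z ≈ 0.83, well within sampling noise — exactly why the standard deviations of the prevalence estimates matter.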
Epsilon Calculi

Epsilon calculi are extended forms of the predicate calculus that incorporate epsilon terms. Epsilon terms are individual terms of the form 'εxFx', being defined for all predicates in the language. The epsilon term 'εxFx' denotes a chosen F, if there are any F's, and has an arbitrary reference otherwise. Epsilon calculi were originally developed to study certain forms of arithmetic, and set theory; also to prove some important meta-theorems about the predicate calculus. Later formal developments have included a variety of intensional epsilon calculi, of use in the study of necessity, and more general intensional notions, like belief. An epsilon term such as 'εxFx' was originally read as 'the first F', and in arithmetical contexts as 'the least F'. More generally it can be read as the demonstrative description 'that F', when arising either deictically, that is, in a pragmatic context where some F is being pointed at, or in linguistic cross-reference situations, as with, for example, 'There is a red-haired man in the room. That red-haired man is Caucasian'. The application of epsilon terms to natural language shares some features with the use of iota terms within the theory of descriptions given by Bertrand Russell, but differs in formalising aspects of a slightly different theory of reference, first given by Keith Donnellan. More recently, epsilon terms have been used by a number of writers to formalise cross-sentential anaphora, which would arise if 'that red-haired man' in the linguistic case above was replaced with a pronoun such as 'he'. There is then also the similar application in intensional cases, like 'There is a red-haired man in the room. Celia believed he was a woman.'
1. Introduction

Epsilon terms were introduced by the German mathematician David Hilbert (Hilbert 1923, 1925) to provide explicit definitions of the existential and universal quantifiers, and to resolve some problems in infinitistic mathematics. But it is not just the related formal results and structures which are of interest. In Hilbert's major book Grundlagen der Mathematik, which he wrote with his collaborator Paul Bernays, epsilon terms were presented as formalising certain natural language constructions, like definite descriptions. And they in fact have a considerably larger range of such applications, for instance in the symbolisation of certain cross-sentential anaphora. Hilbert and Bernays also used their epsilon calculus to prove two important meta-theorems about the predicate calculus. One theorem subsequently led, for instance, to the development of semantic tableaux: it is called the First Epsilon Theorem, and its content and proof will be given later, in section 6 below. A second theorem that Hilbert and Bernays proved, which we shall also look at then, establishes that epsilon calculi are conservative extensions of the predicate calculus, that is, that no more theorems expressible just in the quantificational language of the predicate calculus can be proved in epsilon calculi than can be proved in the predicate calculus itself. But while epsilon calculi do have these further important formal functions, we will not only be concerned to explore them, for we shall also first discuss the natural language structures upon which epsilon calculi have a considerable bearing. The growing awareness of the larger meaning and significance of epsilon calculi has only come in stages.
Hilbert and Bernays introduced epsilon terms for several meta-mathematical purposes, as above, but the extended presentation of an epsilon calculus, as a formal logic of interest in its own right, in fact first appeared only in Bourbaki's Éléments de Mathématique (although see also Ackermann 1937-8). Bourbaki's epsilon calculus with identity (Bourbaki, 1954, Book 1) is axiomatic, with Modus Ponens as the only primitive inference or derivation rule. Thus, in effect, we get:

(X ∨ X) → X,
X → (X ∨ Y),
(X ∨ Y) → (Y ∨ X),
(X ∨ Y) → ((Z ∨ X) → (Z ∨ Y)),
Fy → FεxFx,
x = y → (Fx ↔ Fy),
(x)(Fx ↔ Gx) → εxFx = εxGx.

This adds to a basis for the propositional calculus an epsilon axiom schema, then Leibniz' Law, and a second epsilon axiom schema, which is a further law of identity. Bourbaki, though, used the Greek letter tau rather than epsilon to form what are now called 'epsilon terms'; nevertheless, he defined the quantifiers in terms of his tau symbol in the manner of Hilbert and Bernays, namely:

(∃x)Fx ↔ FεxFx,
(x)Fx ↔ Fεx¬Fx;

and note that, in his system, the other usual law of identity, 'x = x', is derivable. The principal purpose Bourbaki found for his system of logic was in his theory of sets, although through that, in the modern manner, it thereby came to be the foundation for the rest of mathematics. Bourbaki's theory of sets discriminates amongst predicates those which determine sets: thus some, but only some, predicates determine sets, i.e. are 'collectivisantes'. All the main axioms of classical set theory are incorporated in his theory, but he does not have an Axiom of Choice as a separate axiom, since its functions are taken over by his tau symbol. The same point holds in Bernays' epsilon version of his set theory (Bernays 1958, Ch VIII). Epsilon calculi, during this period, were developed without any semantics, but a semantic interpretation was produced by Gunter Asser in 1957, and subsequently published in a book by A.C. Leisenring, in 1969.
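Over a finite domain, the behaviour of the epsilon operator and the quantifier definitions above can be checked directly. The helper below is my own illustration, not a standard library: it returns some element satisfying the predicate if one exists, and an arbitrary fixed element otherwise, which is exactly the choice-function reading of εxFx.

```python
def eps(pred, domain):
    """A choice-function reading of εxFx over a finite domain: some element
    satisfying pred if any exists, otherwise an arbitrary (here: first) element."""
    for x in domain:
        if pred(x):
            return x
    return domain[0]  # arbitrary referent when nothing is F

domain = [1, 2, 3, 4, 5]
even = lambda x: x % 2 == 0
positive = lambda x: x > 0

# (∃x)Fx ↔ F(εxFx): something is F iff the chosen F is F.
assert any(even(x) for x in domain) == even(eps(even, domain))

# (x)Fx ↔ F(εx¬Fx): everything is F iff even the chosen non-F is F.
assert all(positive(x) for x in domain) == positive(eps(lambda x: not positive(x), domain))

# With an unsatisfiable predicate, εxFx still denotes, just arbitrarily.
print(eps(lambda x: x > 10, domain))  # 1
```

The second assertion mirrors Bourbaki's definition of the universal quantifier: εx¬Fx picks a counterexample to 'everything is F' if one exists, so its being F settles the universal claim.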
Even then, readings of epsilon terms in ordinary language were still uncommon. A natural language reading of epsilon terms, however, was present in Hilbert and Bernays' work. In fact the last chapter of book 1 of the Grundlagen is a presentation of a theory of definite descriptions, and epsilon terms relate closely to this. In the more well known theory of descriptions given by Bertrand Russell (Russell 1905) there are three clauses: with 'The king of France is bald' we get, on Russell's theory, first, there is a king of France; second, there is only one king of France; and third, anyone who is king of France is bald. Russell uses the Greek letter iota to formalise the definite description, writing the whole as 'BιxKx', but he recognises the iota term is not a proper individual symbol. He calls it an 'incomplete symbol', since, because of the three parts, the whole proposition is taken to have the quantificational form (∃x)(Kx & (y)(Ky → y = x) & (y)(Ky → By)), which is equivalent to (∃x)(Kx & (y)(Ky → y = x) & Bx). And that means that it does not have the form 'Bx'. Russell believed that, in addition to his iota terms, there was another class of individual terms, which he called 'logically proper names'. These would simply fit into the 'x' place in 'Bx'. He believed that 'this' and 'that' were in this class, but gave no symbolic characterisation of them. Hilbert and Bernays, by contrast, produced what is called a 'pre-suppositional theory' of definite descriptions. The first two clauses of Russell's definition were not taken to be part of the meaning of 'The King of France is bald': they were merely conditions under which they took it to be permitted to introduce a complete individual term for 'the King of France', which then satisfies Kx & (y)(Ky → y = x).
Hilbert and Bernays continued to use the Greek letter iota in their individual term, although it has a quite different grammar from Russell's iota term, since, when Hilbert and Bernays' term can be introduced, it is provably equivalent to the corresponding epsilon term (Kneebone 1963, p102). In fact it was later suggested by many that epsilon terms are not only complete symbols, but can be seen as playing the same role as the 'logically proper names' Russell discussed. It is at the start of book 2 of the Grundlagen that we find the definition of epsilon terms. There, Hilbert and Bernays first construct a theory of indefinite descriptions in a similar manner to their theory of definite descriptions. They allow, now, an eta term to be introduced as long as just the first of Russell's conditions is met. That is to say, given (∃x)Fx, one can introduce the term 'ηxFx', and say FηxFx. But the condition for the introduction of the eta term can be established logically, for certain predicates, since (∃x)((∃y)Fy → Fx) is a predicate calculus theorem (Copi 1973, p110). It is the eta term this theorem allows us to introduce which is otherwise called an epsilon term, and its logical basis enables entirely formal theories to be constructed, since such individual terms are invariably defined. Thus we may invariably introduce 'ηx((∃y)Fy → Fx)', and this is commonly written 'εxFx', about which we can therefore say (∃y)Fy → FεxFx. Since it is that F which exists if anything is F, Hilbert read the epsilon term in this case 'the first F'. For instance, in arithmetic, 'the first' may be taken to be the least number operator. However, while if there are F's then the first F is clearly some chosen one of them, if there are no F's then 'the first F' must be a misnomer. And that form of speech only came to be fully understood in the theories of reference which appeared much later, when reference and denotation came to be more clearly separated from description and attribution.
Donnellan (Donnellan 1966) used the example 'the man with martini in his glass', and pointed out that, in certain uses, this can refer to someone without martini in his glass. In the terminology Donnellan made popular, 'the first F', in the second case above, works similarly: it cannot be attributive, and so, while it refers to something, it must refer arbitrarily, from a semantic point of view. With reference in this way separated from attribution it becomes possible to symbolise the anaphoric cross-reference between, for instance, 'There is one and only one king of France' and 'He is bald'. For, independently of whether the former is true, the 'he' in the latter is a pronoun for the epsilon term in the former — by a simple extension of the epsilon definition of the existential quantifier. Thus the pair of remarks may be symbolised

(∃x)(Kx & (y)(Ky → y = x)) & Bεx(Kx & (y)(Ky → y = x)).

Furthermore such cross-reference may occur in connection with intensional constructions of a kind Russell also considered, such as 'George IV wondered whether the author of Waverley was Scott'. Thus we can say 'There is an author of Waverley, and George IV wondered whether he was Scott'. But the epsilon analysis of these cases puts intensional epsilon calculi at odds with Russellian views of such constructions, as we shall see later. The Russellian approach, by not having complete symbols for individuals, tends to confuse cases in which assertions are made about individuals and cases in which assertions are made about identifying properties. As we shall see, epsilon terms enable us to make the discrimination between, for instance, s = εx(y)(Ay ↔ y = x) (i.e. 'Scott is the author of Waverley'), and (y)(Ay ↔ y = s) (that is, 'there is one and only one author of Waverley and he is Scott'), and so it enables us to locate more exactly the object of George IV's thought.
2. Descriptions and Identity

When one starts to ask about the natural language meaning of epsilon terms, it is interesting that Leisenring just mentions the 'formal superiority' of the epsilon calculus (Leisenring 1969, p63; see also Routley 1969, Hazen 1987). Leisenring took the epsilon calculus to be a better logic than the predicate calculus, but merely because of the Second Epsilon Theorem. Its main virtue, to Leisenring, was that it could prove all that seemingly needed to be proved, but in a more elegant way. Epsilon terms were just neater at calculating which were the valid theorems of the predicate calculus. Remembering Hilbert and Bernays' discussion of definite and indefinite descriptions, clearly there is more to the epsilon calculus than this. And there are, in fact, two specific theorems provable within the epsilon calculus, though not the predicate calculus, which will start to indicate the epsilon calculus' more general range of application. They concern individuals, since the epsilon calculus is distinctive in providing an appropriate, and systematic, means of reference to them. The need to have complete symbols for individuals became evident some years after Russell's promotion of incomplete symbols for them. The first major book to allow for this was Rosser's Logic for Mathematicians, in 1953, although there were precursors. For the classical difficulty with providing complete terms for individuals concerns what to do with 'non-denoting' terms, and Quine, for instance, following Frege, often gave them an arbitrary, though specific, referent (Marciszewski 1981, p113). This idea is also present in Kalish and Montague (Kalish and Montague 1964, pp242-243), who gave the two rules:

(∃x)(y)(Fy ↔ y = x) ├ FιxFx,
¬(∃x)(y)(Fy ↔ y = x) ├ ιxFx = ιx¬(x = x),

where 'ιxFx' is what otherwise might be written 'εx(y)(Fy ↔ y = x)'.
Kalish and Montague believed, however, that the second rule 'has no intuitive counterpart, simply because ordinary language shuns improper definite descriptions' (Kalish and Montague 1964, p244). And, at that time, what Donnellan was to publish in Donnellan 1966, about improper definite descriptions, was certainly not well known. In fact ordinary speech does not shun improper definite descriptions, although their referents are not as fixed as the above second rule requires. Indeed the very fact that the descriptions are improper means that their referents are not determined semantically: instead they are just a practical, pragmatic choice. Stalnaker and Thomason recognised the need to be more liberal when they defined their referential terms, which also had to refer, in the contexts they were concerned with, in more than one possible world (Thomason and Stalnaker 1968, p363):

    In contrast with the Russellian analysis, definite descriptions are treated as genuine singular terms; but in general they will not be substance terms [rigid designators]. An expression like ιxPx is assigned a referent which may vary from world to world. If in a given world there is a unique existing individual which has the property corresponding to P, this individual is the referent of ιxPx; otherwise, ιxPx refers to an arbitrarily chosen individual which does not exist in that world.

Stalnaker and Thomason appreciated that 'A substance term is much like what Russell called a logically proper name', but they said that an individual constant might or might not be a substance term, depending on whether it was more like 'Socrates' or 'Miss America' (Thomason and Stalnaker 1968, p362). A more complete investigation of identity and descriptions, in modal and general intensional contexts, was provided in Routley, Meyer and Goddard 1974, and Routley 1977; see also Hughes and Cresswell 1968, Ch 11.
And with these writers we get the explicit rendering of definite descriptions in epsilon terms, as in Goddard and Routley 1973, p558, and Routley 1980, p277; c.f. Hughes and Cresswell 1968, p203. Certain specific theorems in the epsilon calculus, as was said before, support these kinds of identification. One theorem demonstrates directly the relation between Russell's attributive ideas, and some of Donnellan's referential ideas. For

(∃x)(Fx & (y)(Fy → y = x) & Gx)

is logically equivalent to

(∃x)(Fx & (y)(Fy → y = x)) & Ga,

where a = εx(Fx & (y)(Fy → y = x)). This arises because the latter is equivalent to Fa & (y)(Fy → y = a) & Ga, which entails the former. But the former is Fb & (y)(Fy → y = b) & Gb, with b = εx(Fx & (y)(Fy → y = x) & Gx), and so entails (∃x)(Fx & (y)(Fy → y = x)), and hence Fa & (y)(Fy → y = a). But that means, from the uniqueness clause (y)(Fy → y = b), that a = b, and so Gb yields Ga, meaning the former entails the latter; therefore the former is equivalent to the latter. The former, of course, gives Russell's Theory of Descriptions, in the case of 'The F is G'; it explicitly asserts the first two clauses, to do with the existence and uniqueness of an F. A presuppositional theory, such as we saw in Hilbert and Bernays, would not explicitly assert these two clauses: on such an account they are a precondition before the term 'the F' can be introduced. But neither of these theories accommodates improper definite descriptions. Since Donnellan it is more common to allow that we can always use 'the F': if the description is improper then the referent of this term is simply found in the term's practical use. One detail of Donnellan's historical account, however, must be treated with some care, at this point. Donnellan was himself concerned with definite descriptions which were improper in the sense that they did not uniquely describe what the speaker took to be their referent.
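The equivalence just proved can be spot-checked mechanically over a finite domain. The sketch below (my own illustration) implements the epsilon operator as a choice function and tests both sides of the biconditional on every interpretation of F and G over a three-element domain:

```python
from itertools import product

def eps(pred, domain):
    """εx pred(x): some satisfier if one exists, else an arbitrary element."""
    for x in domain:
        if pred(x):
            return x
    return domain[0]

def unique_F(F, domain):
    """Predicate for 'x is F and nothing else is': Fx & (y)(Fy -> y = x)."""
    return lambda x: F(x) and all((not F(y)) or y == x for y in domain)

domain = [0, 1, 2]
# Try every extension of F and G over the domain (8 x 8 = 64 interpretations).
for f_ext, g_ext in product(product([False, True], repeat=3), repeat=2):
    F = lambda x, e=f_ext: e[x]
    G = lambda x, e=g_ext: e[x]
    uF = unique_F(F, domain)
    lhs = any(uF(x) and G(x) for x in domain)               # (∃x)(Fx & unique & Gx)
    rhs = any(uF(x) for x in domain) and G(eps(uF, domain))  # (∃x)(Fx & unique) & Ga
    assert lhs == rhs
print("equivalence holds on all 64 interpretations")
```

Note that the check exercises the improper case too: when F has no unique satisfier, eps returns an arbitrary element, but the existence-and-uniqueness conjunct is then false, so both sides agree, which is just why uniqueness is essential to the equivalence.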
So the description might still be 'proper' in the above sense — if there still was something to which it uniquely applied, on account of its semantic content. Thus Donnellan allowed 'the man with martini in his glass' to identify someone without martini in his glass irrespective of whether there was some sole man with martini in his glass. But if one talks about 'the man with martini in his glass' one can be correctly taken to be talking about whoever this describes, if it does in fact correctly describe someone — as Devitt and Bertolet pointed out in criticism of Donnellan (Devitt 1974, Bertolet 1980). It is this aspect of our language which the epsilon account matches, for an epsilon account allows definite descriptions to refer without attribution of their semantic character, but only if nothing uniquely has that semantic character. Thus it is not the whole of the first statement above, but only the third part of the second statement, which makes the remark 'The F is G'. The difficulty with Russell's account becomes more plain if we read the two equivalent statements using relative and personal pronouns. They then become:

There is one and only one F, which is G;
There is one and only one F; it is G.

But using just the logic derived from Frege, Russell could formalise the 'which', but could not separate out the last clause, 'it is G'. In that clause 'it' is an anaphor for 'the (one and only) F', and it still has this linguistic meaning if there is no such thing, since that is just a matter of grammar. But the uniqueness clause is needed for the two statements to be equivalent — without uniqueness there is no equivalence, as we shall see — so 'which' is not itself equivalent to 'it'. Russell, however, because he could not separate out the 'it', had to take the whole of the first expression as the analysis of 'The F is G' — he could not formulate the needed 'logically proper name'. But how can something be the one and only F 'if there is no such thing'?
That is where another important theorem provable in the epsilon calculus is illuminating, namely: (Fa & (y)(Fy → y = a)) → a = εx(Fx & (y)(Fy → y = x)). The important thing is that there is a difference between the left hand side and the right hand side, i.e. between something being alone F, and that thing being the one and only F. For the left-right implication cannot be reversed. We get from the left to the right when we see that the left as a whole entails (∃x)(Fx & (y)(Fy → y = x)), and so also its epsilon equivalent Fεx(Fx & (y)(Fy → y = x)) & (z)(Fz → z = εx(Fx & (y)(Fy → y = x))). Given Fa, then from the second clause here we get the right hand side of our original implication. But if we substitute ‘εx(Fx & (y)(Fy → y = x))’ for ‘a’ in that implication then on the right we have something which is necessarily true. But the left hand side is then the same as (∃x)(Fx & (y)(Fy → y = x)), and that is in general contingent. Hence the implication cannot generally be reversed. Having the property of being alone F is here contingent, but possessing the identity of the one and only F is necessary. The distinction is not made in Russell’s logic, since possession of the relevant property is the only thing which can be formally expressed there. In Russell’s theory of descriptions, a’s possession of the property of being alone a king of France is expressed as a quasi-identity a = ιxKx, and that has the consequence that such identities are contingent. Indeed, in counterpart theories of objects in other possible worlds the idea is pervasive that an entity may be defined in terms of its contingent properties in a given world.
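The non-reversibility argument above can likewise be displayed schematically, again only restating the text, in LaTeX notation:

```latex
% The one-way implication, and why its converse fails (amsmath).
\begin{align*}
&\bigl(Fa \wedge (y)(Fy \to y = a)\bigr) \to a = \epsilon x\bigl(Fx \wedge (y)(Fy \to y = x)\bigr).\\
\intertext{Substituting $\epsilon x(Fx \wedge (y)(Fy \to y = x))$ for $a$ makes the consequent the necessary truth}
&\epsilon x\bigl(Fx \wedge (y)(Fy \to y = x)\bigr) = \epsilon x\bigl(Fx \wedge (y)(Fy \to y = x)\bigr),\\
\intertext{while the antecedent becomes the in general contingent}
&(\exists x)\bigl(Fx \wedge (y)(Fy \to y = x)\bigr);
\end{align*}
```

so the converse implication, instantiated at the epsilon term itself, would derive a contingency from a necessary truth, and must fail.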
Hughes and Cresswell, however, differentiated between contingent identities and necessary identities in the following way (Hughes and Cresswell 1968): Now it is contingent that the man who is in fact the man who lives next door is the man who lives next door, for he might have lived somewhere else; that is, living next door is a property which belongs contingently, not necessarily, to the man to whom it does belong. And similarly, it is contingent that the man who is in fact the mayor is the mayor; for someone else might have been elected instead. But if we understand [The man who lives next door is the mayor] to mean that the object which (as a matter of contingent fact) possesses the property of being the man who lives next door is identical with the object which (as a matter of contingent fact) possesses the property of being the mayor, then we are understanding it to assert that a certain object (variously described) is identical with itself, and this we need have no qualms about regarding as a necessary truth. This would give us a way of construing identity statements which makes [(x = y) → L(x = y)] perfectly acceptable: for whenever x = y is true we can take it as expressing the necessary truth that a certain object is identical with itself. There are more consequences of this matter, however, than Hughes and Cresswell drew out. For now that we have proper referring terms for individuals to go into such expressions as ‘x = y’, we first see better where the contingency of the properties of such individuals comes from — simply the linguistic facility of using improper definite descriptions. But we also see, because identities between such terms are necessary, that proper referring terms must be rigid, i.e. have the same reference in all possible worlds. This is not how Stalnaker and Thomason saw the matter.
Stalnaker and Thomason, it will be remembered, said that there were two kinds of individual constants: ones like ‘Socrates’ which can take the place of individual variables, and others like ‘Miss America’ which cannot. The latter, as a result, they took to be non-rigid. But it is strictly ‘Miss America in year t’ which is meant in the second case, and that is not a constant expression, even though such functions can take the place of individual variables. It was Routley, Meyer and Goddard who most seriously considered the resultant possibility that all properly individual terms are rigid. At least, they worked out many of the implications of this position, even though Routley was not entirely content with it. Routley described several rigid intensional semantics (Routley 1977, pp185-186). One of these, for instance, just took the first epsilon axiom to hold in any interpretation, and made the value of an epsilon term that term itself. On such a basis Routley, Meyer and Goddard derived what may be called ‘Routley’s Formula’, i.e. L(∃x)Fx → (∃x)LFx. In fact, on their understanding, this formula holds for any operator and any predicate, but they had in mind principally the case of necessity illustrated here, with ‘Fx’ taken as ‘x numbers the planets’, making ‘εxFx’ ‘the number of the planets’. The formula is derived quite simply, in the following way: from L(∃x)Fx we can get LFεxFx, by the epsilon definition of the existential quantifier, and so (∃x)LFx, by existential generalisation over the rigid term ‘εxFx’ (Routley, Meyer and Goddard 1974, p308, see also Hughes and Cresswell 1968, pp197, 204). Routley, however, was still inclined to think that a rigid semantics was philosophically objectionable (Routley 1977, p186): Rigid semantics tend to clutter up the semantics for enriched systems with ad hoc modelling conditions. More important, rigid semantics, whether substitutional or objectual, are philosophically objectionable.
For one thing, they make Vulcan and Hephaestus everywhere indistinguishable though there are intensional claims that hold of one but not of the other. The standard escape from this sort of problem, that of taking proper names like ‘Vulcan’ as disguised descriptions we have already found wanting… Flexible semantics, which satisfactorily avoid these objections, impose a more objectual interpretation, since, even if [the domain] is construed as the domain of terms, [the value of a term in a world] has to be permitted, in some cases at least, to vary from world to world. As a result, while Routley, Meyer and Goddard were still prepared to defend the formula, and say, for instance, that there was a number which necessarily numbers the planets, namely the number of the planets (np), they thought that this was only in fact the same as 9, so that one still could not argue correctly that as L(np numbers the planets), so L(9 numbers the planets). ‘For extensional identity does not warrant intersubstitutivity in intensional frames’ (Routley, Meyer and Goddard 1974, p309). They held, in other words, that the number of the planets was only contingently 9. This means that they denied ‘(x = y) → L(x = y)’, but, as we shall see in more detail later, there are ways to hold onto this principle, i.e. maintain the invariable necessity of identity.

3. Rigid Epsilon Terms

There is some further work which has helped us to understand how reference in modal and general intensional contexts must be rigid. But it involves some different ideas in semantics, and starts, even, outside our main area of interest, namely predicate logic, in the semantics of propositional logic. When one thinks of ‘semantics’ one perhaps thinks of the valuation of formulas. Since the 1920s a meta-study of this kind has certainly been added to the previous logical interest in proof theory.
Traditional proof theory is commonly associated with axiomatic procedures, but, from a modern perspective, its distinction is that it is to do with ‘object languages’. Tarski’s theory of truth relies crucially on the distinction between object languages and meta-languages, and so semantics generally seems to be necessarily a meta-discipline. In fact Tarski believed that such an elevation of our interest was forced upon us by the threat of semantic paradoxes like The Liar. If there was, by contrast, ‘semantic closure’, i.e. if truth and other semantic notions were definable at the object level, then there would be contradictions galore (c.f. Priest 1984). In this way truth may seem to be necessarily a predicate of (object-level) sentences. But there is another way of looking at the matter which is explicitly non-Tarskian, and which others have followed (see Prior 1971, Ch 7, Sayward 1987). This involves seeing ‘it is true that’ as not a predicate, but an object-level operator, with the truth tabulations in Truth Tables, for instance, being just another form of proof procedure. Operators indeed include ‘it is provable that’, and this is distinct from Gödel’s provability predicate, as Gödel himself pointed out (Gödel 1969). Operators are intensional expressions, as in the often discussed ‘it is necessary that’ and ‘it is believed that’, and trying to see such forms of indirect discourse as metalinguistic predicates was very common in the middle of the last century. It was pervasive, for instance, in Quine’s many discussions of modality and intensionality. Wouldn’t someone be believing that the Morning Star is in the sky, but the Evening Star is not, if, respectively, they assented to the sentence ‘the Morning Star is in the sky’, and dissented from ‘the Evening Star is in the sky’? Anyone saying ‘yes’ is still following the Quinean tradition, but after Montague’s and Thomason’s work on operators (e.g. 
Montague 1963, Thomason 1977, 1980) many logicians are more persuaded that indirect discourse is not quotational. It is open to doubt, that is to say, whether we should see the mind in terms of the direct words which the subject would use. The alternative involves seeing the words ‘the Morning Star is in the sky’ in such an indirect speech locution as ‘Quine believes that the Morning Star is in the sky’ as words merely used by the reporter, which need not directly reflect what the subject actually says. That is indeed central to reported speech — putting something into the reporter’s own words rather than just parroting them from another source. Thus a reporter may say Celia believed that the man in the room was a woman, but clearly that does not mean that Celia would use ‘the man in the room’ for who she was thinking about. So referential terms in the subordinate proposition are only certainly in the mouth of the reporter, and as a result only certainly refer to what the reporter means by them. It is a short step from this thought to seeing There was a man in the room, but Celia believed that he was a woman, as involving a transparent intensional locution, with the same object, as one might say, ‘inside’ the belief as ‘outside’ in the room. So it is here where rigid constant epsilon terms are needed, to symbolise the cross-sentential anaphor ‘he’, as in: (∃x)(Mx & Rx) & BcWεx(Mx & Rx). To understand the matter fully, however, we must make the shift from meta- to object language we saw at the propositional level above with truth. Routley, Meyer and Goddard realised that a rigid semantics required treating such expressions as ‘BcWx’ as simple predicates, and we must now see what this implies. They derived, as we saw before, ‘Routley’s Formula’ L(∃x)Fx → (∃x)LFx, but we can now start to spell out how this is to be understood, if we hold to the necessity of identities, i.e. if we use ‘=’ so that x = y → L(x = y). 
Again a clear illustration of the validity of Routley’s Formula is provided by the number of the planets, but now we may respect the fact that some things may lack a number, and also the fact that referential and attributive senses of terms may be distinguished. Thus if we write ‘(nx)Px’ for ‘there are n P’s’, then εn(ny)Py will be the number of P’s, and it is what numbers them (i.e. ([εn(ny)Py]x)Px) if they have a number (i.e. if (∃n)(nx)Px) — by the epsilon definition of the existential quantifier. Then, with ‘Fx’ as the proper (necessary) identity ‘x = εn(ny)Py’, Routley’s Formula holds because the number in question exists eternally, making both sides of the formula true. But if ‘Fn’ is simply the attributive ‘(ny)Py’ then this is not necessary, since it is contingent even, in the first place, that there is a number of P’s, instead of just some P, making both sides of the formula false. Hughes and Cresswell argue against the principle saying (Hughes and Cresswell 1968, p144): …let [Fx] be ‘x is the number of the planets’. Then the antecedent is true, for there must be some number which is the number of the planets (even if there were no planets at all there would still be such a number, namely 0): but the consequent is false, for since it is a contingent matter how many planets there are, there is no number which must be the number of the planets. But this forgets continuous quantities, where there are no discrete items before the nomination of a unit. The number associated with some planetary material, for instance, numbers only arbitrary units of that material, and not the material itself. So the antecedent of Routley’s Formula is not necessarily true. Quine also used the number of the planets in his central argument against quantification into modal contexts.
He said (Quine 1960, pp195-197): If for the sake of argument we accept the term ‘analytic’ as predicable of sentences (hence as attachable predicatively to quotations or other singular terms designating sentences), then ‘necessarily’ amounts to ‘is analytic’ plus an antecedent pair of quotation marks. For example, the sentence: (1) Necessarily 9 > 4 is explained thus: (2) ‘9 > 4’ is analytic… So suppose (1) explained as in (2). Why, one may ask, should we preserve the operatorial form as of (1), and therewith modal logic, instead of just leaving matters as in (2)? An apparent advantage is the possibility of quantifying into modal positions; for we know we cannot quantify into quotation, and (2) uses quotation… But is it more legitimate to quantify into modal positions than into quotation? For consider (1) even without regard to (2); surely, on any plausible interpretation, (1) is true and this is false: (3) Necessarily the number of major planets > 4. Since 9 = the number of major planets, we can conclude that the position of ‘9’ in (1) is not purely referential and hence that the necessity operator is opaque. But here Quine does not separate out the referential ‘the number of the major planets is greater than 4’, i.e. ‘εn(ny)Py > 4’, from the attributive ‘There are more than 4 major planets’, i.e. ‘(∃n)((ny)Py & n > 4)’. If 9 = εn(ny)Py, then it follows that εn(ny)Py > 4, but it does not follow that (∃n)((ny)Py & n > 4). Substitution of identicals in (1), therefore, does yield (3), even though it is not necessary that there are more than 4 major planets. We can now go into some details of how one gets the ‘x’ in such a form as ‘LFx’ to be open for quantification. For, what one finds in traditional modal semantics (see Hughes and Cresswell 1968, passim) are formulas in the meta-linguistic style, like V(Fx, i) = 1, which say that the valuation put on ‘Fx’ is 1, in world i.
There should be quotation marks around the ‘Fx’ in such a formula, to make it meta-linguistic, but by convention they are generally omitted. To effect the change to the non-meta-linguistic point of view, we must simply read this formula as it literally is, so that the ‘Fx’ is in indirect speech rather than direct speech, and the whole becomes the operator form ‘it would be true in world i that Fx’. In this way, the term ‘x’ gets into the language of the reporter, and the meta/object distinction is not relevant. Any variable inside the subordinate proposition can now be quantified over, just like a variable outside it, which means there is ‘quantifying in’, and indeed all the normal predicate logic operations apply, since all individual terms are rigid. An example illustrating this rigidity involves the actual top card in a pack, and the cards which might have been top card in other circumstances (see Slater 1988a). If the actual top card is the Ace of Spades, and it is supposed that the top card is the Queen of Hearts, then clearly what would have to be true for those circumstances to obtain would be for the Ace of Spades to be the Queen of Hearts. The Ace of Spades is not in fact the Queen of Hearts, but that does not mean they cannot be identical in other worlds (c.f. Hughes and Cresswell, 1968, p190). Certainly if there were several cards people variously thought were on top, those cards in the various supposed circumstances would not provide a constant c such that Fc is true in all worlds. But that is because those cards are functions of the imagined worlds — the card a believes is top (εxBaFx) need not be the card b believes is top (εxBbFx), etc. It still remains that there is a constant, c, such that Fc is true in all worlds. Moreover, that c is not an ‘intensional object’, for the given Ace of Spades is a plain and solid extensional object, the actual top card (εxFx).
Routley, Meyer and Goddard did not accept the latter point, wanting a rigid semantics in terms of ‘intensional objects’ (Goddard and Routley, 1973, p561, Routley, Meyer and Goddard, 1974, p309, see also Hughes and Cresswell 1968, p197). Stalnaker and Thomason accepted that certain referential terms could be functional, when discriminating ‘Socrates’ from ‘Miss America’ — although the functionality of ‘Miss America in year t’ is significantly different from that of ‘the top card in y’s belief’. For if this year’s Miss America is last year’s Miss America, still it is only one thing which is identical with itself, unlike with the two cards. Also, there is nothing which can force this year’s Miss America to be last year’s different Miss America, in the way that the counterfactuality of the situation with the playing cards forces two non-identical things in the actual world to be the same thing in the other possible world. Other possible worlds are thus significantly different from other times, and so, arguably, other possible worlds should not be seen from the Realist perspective appropriate for other times — or other spaces.

4. The Epsilon Calculus’ Problematic

It might be said that Realism has delayed a proper logical understanding of many of these things. If you look ‘realistically’ at picturesque remarks like that made before, namely ‘the same object is ‘inside’ the belief as ‘outside’ in the room’, then it is easy for inappropriate views about the mind to start to interfere, and make it seem that the same object cannot be in these two places at once. But if the mind were something like another space or time, then counterfactuality could get no proper purchase — no one could be ‘wrong’, since they would only be talking about elements in their ‘world’, not any objective, common world.
But really, all that is going on when one says, for instance, There was a man in the room, but Celia believed he was a woman, is that the same term — or one term and a pronominal surrogate for it — appears at two linguistic places in some discourse, with the same reference. Hence there is no grammatical difference between the cross reference in such an intensional case and the cross reference in a non-intensional case, such as There was a man in the room. He was hungry. (∃x)Mx & HεxMx. What has been difficult has merely been getting a symbolisation of the cross-reference in this more elementary kind of case. But it just involves extending the epsilon definition of existential statements, using a reiteration of the substituted epsilon term, as we can see. It is now widely recognised how the epsilon calculus allows us to do this (Purdy 1994, Egli and von Heusinger 1995, Meyer Viol 1995, Ch 6), the theoretical starting point being the theorem about the Russellian theory of definite descriptions proved before, which breaks up what otherwise would be a single sentence into a sequential piece of discourse, enabling the existence and uniqueness clauses to be put in one sentence while the characterising remark is in another. The relationship starts to matter when, in fact, there is no obvious way to formulate a combination of anaphoric remarks in the predicate calculus, as in, for instance, There is a king of France. He is bald, where there is no uniqueness clause. This difficulty became a major problem when logicians started to consider anaphoric reference in the 1960s. Geach, for instance, in Geach 1962, even believed there could not be a syllogism of the following kind (Geach 1962, p126): A man has just drunk a pint of sulphuric acid. Nobody who drinks a pint of sulphuric acid lives through the day. So, he won’t live through the day. He said, one could only draw the conclusion: Some man who has just drunk a pint of sulphuric acid won’t live through the day. 
Certainly one can only derive (∃x)(Mx & Dx & ¬Lx) from (∃x)(Mx & Dx), (x)(Dx → ¬Lx), within predicate logic. But one can still derive ¬Lεx(Mx & Dx), within the epsilon calculus. Geach likewise was foxed later when he produced his famous case (numbered 3 in Geach 1967): Hob thinks a witch has blighted Bob’s mare, and Nob wonders whether she (the same witch) killed Cob’s sow, which is, in epsilon terms, Th(∃x)(Wx & Bxb) & OnKεx(Wx & Bxb)c. For Geach saw that this could not be (4) (∃x)(Wx & ThBxb & OnKxc), or (5) (∃x)(Th(Wx & Bxb) & OnKxc). But Geach also realised that a reading of the second clause as (c.f. 18) Nob wonders whether the witch who blighted Bob’s mare killed Cob’s sow, in which ‘the witch who blighted Bob’s mare killed Cob’s sow’ is analysed in the Russellian manner, i.e. as (20) just one witch blighted Bob’s mare and she killed Cob’s sow, does not catch the specific cross-reference — amongst other things because of the uniqueness condition which is then introduced. This difficulty with the uniqueness clause in Russellian analyses has been widely commented on, although a recent theorist, Neale, has said that Russell’s theory only needs to be modestly modified: Neale’s main idea is that, in general, definite descriptions should just be localised to the context. His resolution of Geach’s troubling cases thus involves suggesting that ‘she’, in the above, might simply be ‘the witch we have been hearing about’ (Neale 1990, p221). Neale might here have said ‘that witch who blighted Bob’s mare’, showing that an Hilbertian account of demonstrative descriptions would have a parallel effect. A good deal of the ground breaking work on these matters, however, was done by someone again much influenced by Russell: Evans. But Evans significantly broke with Russell over uniqueness (Evans 1977): One does not want to be committed, by this way of telling the story, to the existence of a day on which just one man and boy walked along a road.
It was with this possibility in mind that I stated the requirement for the appropriate use of an E-type pronoun in terms of having answered, or being prepared to answer upon demand, the question ‘He? Who?’ or ‘It? Which?’ In order to effect this liberalisation we should allow the reference of the E-type pronoun to be fixed not only by predicative material explicitly in the antecedent clause, but also by material which the speaker supplies upon demand. This ruling has the effect of making the truth conditions of such remarks somewhat indeterminate; a determinate proposition will have been put forward only when the demand has been made and the material supplied. It was Evans who gave us the title ‘E-type pronoun’ for the ‘he’ in such expressions as A Cambridge philosopher smoked a pipe, and he drank a lot of whisky, i.e., in epsilon terms, (∃x)(Cx & Px) & Dεx(Cx & Px). He also insisted (Evans 1977, p516) that what was unique about such pronouns was that this conjunction of statements was not equivalent to A Cambridge philosopher, who smoked a pipe, drank a lot of whisky, i.e. (∃x)(Cx & Px & Dx). Clearly the epsilon account is entirely in line with this, since it illustrates the point made before about cases without a uniqueness clause. Only the second expression, which contains a relative pronoun, is formalisable in the predicate calculus. To formalise the first expression, which contains a personal pronoun, one at least needs something with the expressive capabilities of the epsilon calculus.

5. The Formal Semantics of Epsilon Terms

The semantics of epsilon terms is nowadays more general, but the first interpretations of epsilon terms were restricted to arithmetical cases, and specifically took epsilon to be the least number operator. Hilbert and Bernays developed Arithmetic using the epsilon calculus, using the further epsilon axiom schema (Hilbert and Bernays 1970, Book 2, p85f, c.f.
Leisenring 1969, p92): (εxAx = st) → ¬At, where ‘s’ is intended to be the successor function, and ‘t’ is any numeral. This constrains the interpretation of the epsilon symbol, but the least number interpretation is not strictly forced, since the axiom only ensures that no number having the property A immediately precedes εxAx. The new axiom, however, is sufficient to prove mathematical induction, in the form: (A0 & (x)(Ax → Asx)) → (x)Ax. For assume the reverse, namely A0 & (x)(Ax → Asx) & ¬(x)Ax, and consider what happens when the term ‘εx¬Ax’ is substituted in t = 0 ∨ t = sn, which is derivable from the other axioms of number theory which Hilbert and Bernays are using. If we had εx¬Ax = 0 then, since it is given that A0, we would have Aεx¬Ax. But since, by the definition of the universal quantifier, Aεx¬Ax ↔ (x)Ax, we know, because ¬(x)Ax is also given, that ¬Aεx¬Ax, which means we cannot have εx¬Ax = 0. Hence we must have the other alternative, i.e. εx¬Ax = sn, for some n. But from the new axiom (εx¬Ax = sn) → An, hence we must have An, although we must also have An → Asn, because (x)(Ax → Asx). All together that requires Aεx¬Ax again, which is impossible. Hence the further epsilon axiom is sufficient to establish the given principle of induction. The more general link between epsilon terms and choice functions was first set out by Asser, although Asser’s semantics for an elementary epsilon calculus without the second epsilon axiom makes epsilon terms denote rather complex choice functions. Wilfrid Meyer Viol, calling an epsilon calculus without the second axiom an ‘intensional’ epsilon calculus, makes the epsilon terms in such a calculus instead name Skolem functions. Skolem functions are also called Herbrand functions, although they arise in a different way, namely in Skolem’s Theorem.
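Before moving on to Skolem’s Theorem, the reductio behind the induction proof above can be tabulated; this is a restatement only. Assuming A0 & (x)(Ax → Asx) & ¬(x)Ax, the equivalence Aεx¬Ax ↔ (x)Ax gives ¬Aεx¬Ax, and substituting εx¬Ax in t = 0 ∨ t = sn yields:

```latex
% Case analysis for the induction argument (amsmath).
\begin{align*}
\epsilon x\neg Ax = 0 \;&\Rightarrow\; A\epsilon x\neg Ax
   &&\text{(from } A0\text{), contradicting } \neg A\epsilon x\neg Ax;\\
\epsilon x\neg Ax = sn \;&\Rightarrow\; An
   &&\text{(by the further epsilon axiom)}\\
   \;&\Rightarrow\; Asn, \text{ i.e. } A\epsilon x\neg Ax
   &&\text{(from } (x)(Ax \to Asx)\text{), again a contradiction.}
\end{align*}
```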
Skolem’s Theorem states that, if a formula in prenex normal form is provable in the predicate calculus, then a certain corresponding formula, with the existential quantifiers removed, is provable in a predicate calculus enriched with function symbols. The functions symbolised are called Skolem functions, although, in another context, they would be Herbrand functions. Skolem’s Theorem is a meta-logical theorem, about the relation between two logical calculi, but a non-metalogical version is in fact provable in the epsilon calculus from which Skolem’s actual theorem follows, since, for example, by the epsilon definition, now of the existential quantifier, we can get (x)(∃y)Fxy ↔ (x)FxεyFxy. As a result, if the left hand side of such an equivalence is provable in an epsilon calculus the right hand side is provable there. But the left hand side is provable in an epsilon calculus if it is provable in the predicate calculus, by the Second Epsilon Theorem; and if the right hand side is provable in an epsilon calculus it is provable in a predicate calculus enriched with certain function symbols — epsilon terms, like ‘εyFxy’. So, by generalisation, we get Skolem’s original result. When we add to an intensional epsilon calculus the second epsilon axiom (x)(Fx ↔ Gx) → εxFx = εxGx, the interpretation of epsilon terms is commonly extensional, i.e. in terms of sets, since two predicates ‘F’ and ‘G’ satisfying the antecedent of this second axiom will determine the same set — if they determine sets at all, that is. For that requires the predicates to be collectivisantes, in Bourbaki’s terms, as with explicit set membership statements, like ‘x ∈ y’. In such a case the epsilon term ‘εx(x ∈ y)’ designates a choice function, i.e. a function which selects one element from a given set (c.f. Leisenring 1969, p19, Meyer Viol 1995, p42). In the case where there are no members of the set the selection is arbitrary, although for all empty sets it is invariably the same.
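The two semantic ideas just mentioned, an epsilon term with a free variable behaving as a Skolem function, and the extensional choice-function reading with its single fixed value on empty extensions, can be illustrated with a small finite model. The following is merely a sketch: the domain, the sample predicates and relation, and the ‘least element’ choice policy are all assumptions made up for the illustration, not part of any of the calculi discussed.

```python
# A toy finite-model sketch of epsilon terms. Everything here (the
# domain, the relation F, the least-element choice policy and the
# default object) is an illustrative assumption, not part of the
# calculi discussed in the text.

DOMAIN = [0, 1, 2, 3]
DEFAULT = object()  # one fixed value, playing the role of εx¬(x = x)

def epsilon(pred):
    """Model an epsilon term εx(pred x): pick the least element of the
    extension of pred if it is non-empty, else the one fixed default."""
    extension = [d for d in DOMAIN if pred(d)]
    return min(extension) if extension else DEFAULT

# (1) The epsilon definition of the existential quantifier:
#     (∃x)Fx ↔ FεxFx.
is_even = lambda n: n % 2 == 0
assert any(is_even(d) for d in DOMAIN) == is_even(epsilon(is_even))

# (2) An epsilon term with a free variable, 'εyFxy', behaves as a
#     Skolem function of x, witnessing (x)(∃y)Fxy ↔ (x)FxεyFxy.
F = lambda x, y: y == (x + 1) % 4   # a sample binary relation
skolem = lambda x: epsilon(lambda y: F(x, y))
lhs = all(any(F(x, y) for y in DOMAIN) for x in DOMAIN)
rhs = all(F(x, skolem(x)) for x in DOMAIN)
assert lhs == rhs

# (3) Because the choice depends only on the extension, co-extensive
#     predicates get the same value (the second epsilon axiom), and
#     all empty predicates get the single fixed default.
also_even = lambda n: n in {0, 2}   # co-extensive with is_even on DOMAIN
assert epsilon(is_even) == epsilon(also_even)
assert epsilon(lambda n: False) is epsilon(lambda n: n > 10) is DEFAULT
```

Because the value here depends only on the extension of the predicate, the second epsilon axiom holds automatically in this model; the ‘intensional’ calculi mentioned above would instead let the value depend on the predicate itself, and not just on its extension.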
Thus the second axiom validates, for example, Kalish and Montague’s rule for this case, which they put in the form εxFx = εx¬(x = x). Kalish and Montague in fact prove a version of the second epsilon axiom in their system (Kalish and Montague 1964, see T407, p256). The second axiom also holds in Hermes’ system (Hermes 1965), although there one in addition finds a third epsilon axiom, εx¬(x = x) = εx(x = x), for which there would seem to be no real justification. But the second epsilon axiom itself is curious. One questionable thing about it is that neither Leisenring nor Meyer Viol states that the predicates in question must determine sets before their choice function semantics can apply. That the predicates are collectivisantes is merely presumed in their theories, since ‘εxBx’ is invariably modelled by means of a choice from the presumed set of things which in the model are B. Certainly there is a special clause dealing with the empty set; but there is no consideration of the case where some things are B although those things are not discrete, as with the things which are red, for instance. If the predicate in question is not a count noun then there is no set of things involved, since with mass terms, and continuous quantities there are no given elements to be counted (c.f. Bunt 1985, pp262-263 in particular). Of course numbers can still be associated with them, but only given an arbitrary unit. With the cows in a field, for instance, we can associate a determinate number, but with the beef there we cannot, unless we consider, say, the number of pounds of it. The point, as we saw before, has a formalisation in epsilon terms. Thus if we write ‘(nx)Fx’ for ‘there are n F’s’, then εn(ny)Fy will be the number of F’s, and it is what numbers them if they have a number. But in the reverse case the previously mentioned arbitrariness of the epsilon term comes in.
For if ¬(∃n)(nx)Fx, then ¬([εn(ny)Fy]x)Fx, and so, although an arbitrary number exists, it does not number the F’s. In that case, in other words, we do not have a number of F’s, merely some F. In fact, even when there is a set of things, the second epsilon axiom, as stated above, does not apply in general, since there are intensional differences between properties to consider, as in, for instance, ‘There is a red-haired man, and a Caucasian in the room, and they are different’. Here, if there were only red-haired Caucasians in the room, then with the above second axiom, we could not find epsilon substitutions to differentiate the two individuals involved. This may remind us that it is necessary co-extensionality, and not just contingent co-extensionality, which is the normal criterion for the identity of properties (c.f. Hughes and Cresswell 1968, pp209-210). So it leads us to see the appropriateness of a modalised second axiom, which uses just an intensional version of the antecedent of the previous second epsilon axiom, in which ‘L’ means ‘it is necessary that’, namely: L(x)(Fx ↔ Gx) → εxFx = εxGx. For with this axiom only the co-extensionalities which are necessary will produce identities between the associated epsilon terms. We can only get, for instance, εxPx = εx(Px ∨ Px), εxFx = εyFy, and all other identities derivable in a similar way. However, the original second epsilon axiom is then provable, in the special case where the predicates express set membership. For if necessarily (x)(x ∈ y ↔ x ∈ z) ↔ y = z, while, necessarily, y = z ↔ L(y = z) (see Hughes and Cresswell 1968, p190), then L(x)(x ∈ y ↔ x ∈ z) ↔ (x)(x ∈ y ↔ x ∈ z), and so, from the modalised second axiom we can get (x)(x ∈ y ↔ x ∈ z) → εx(x ∈ y) = εx(x ∈ z). Note, however, that if one only has contingently (x)(Fx ↔ x ∈ z), then one cannot get, on this basis, εxFx = εx(x ∈ z). But this is something which is desirable, as well.
For we have seen that it is contingent that the number of the planets does number the planets — because it is not necessary that ([εn(ny)Py]x)Px. This makes '(9x)Px' contingent, even though the identity '9 = εn(nx)Px' remains necessary. But also it is contingent that there is the set of planets, p, which there is, since while, say, (x)(x ∈ p ↔ Px), εn(nx)(x ∈ p) = εn(nx)Px = 9, it is still possible that, in some other possible world, (x)(x ∈ p' ↔ Px), with p' the set of planets there, and ¬(εn(nx)(x ∈ p') = 9). We could not have this further contingency, however, if the original second epsilon axiom held universally. It is on this fuller basis that we can continue to hold 'x = y → L(x = y)', i.e. the invariable necessity of identity — one merely distinguishes '(9x)Px' from '9 = εn(nx)Px', and from '9 = εn(nx)(x ∈ p)', as above. Adding the original second epsilon axiom to an intensional epsilon calculus is therefore acceptable only if all the predicates are about set membership. This is not an uncommon assumption, indeed it is pervasive in the usually given semantics for predicate logic, for instance. But if, by contrast, we want to allow for the fact that not all predicates are collectivisantes then we should take just the first epsilon axiom with merely a modalised version of the second epsilon axiom. The interpretation of epsilon terms is then always in terms of Skolem functions, although if we are dealing with the membership of sets, those Skolem functions naturally are choice functions.

6. Some Metatheory

To finish we shall briefly look, as promised, at some meta-theory. The epsilon calculi that were first described were not very convenient to use, and Hilbert and Bernays' proofs of the First and Second Epsilon Theorems were very complex. This was because the presentation was axiomatic, however, and with the development of other means of presenting the same logics we get more readily available meta-logical results.
I will indicate some of the early difficulties before showing how these theorems can be proved, nowadays, much more simply. The problem with proving the Second Epsilon Theorem, on an axiomatic basis, is that complex, and non-constant epsilon terms may enter a proof in the epsilon calculus by means of substitutions into the axioms. What has to be proved is that an epsilon calculus proof of an epsilon-free theorem (i.e. one which can be expressed just in predicate calculus language) can be replaced by a predicate calculus proof. So some analysis of complex epsilon terms is required, to show that they can be eliminated in the relevant cases, leaving only constant epsilon terms, which are sufficiently similar to the individual symbols in standard predicate logic. Hilbert and Bernays (Hilbert and Bernays 1970, Book 2, p23f) say that one epsilon term 'εxFx' is subordinate to another 'εyGy' if and only if 'G' contains 'εxFx', and a free occurrence of the variable 'y' lies within 'εxFx'. For instance 'εxRyx' is a complex, and non-constant epsilon term, which is subordinate to 'εySyεxRyx'. Hilbert and Bernays then define the rank of an epsilon term to be 1 if there are no epsilon terms subordinate to it, and otherwise to be one greater than the maximal rank of the epsilon terms which are subordinate to it. Using the same general ideas, Leisenring proves two theorems (Leisenring 1969, p72f). First he proves a rank reduction theorem, which shows that epsilon proofs of epsilon-free formulas in which the second epsilon axiom is not used, but in which every term is of rank less than or equal to r, may be replaced by epsilon proofs in which every term is of rank less than or equal to r – 1. Then he proves the eliminability of the second epsilon axiom in proofs of epsilon-free formulas.
Together, these two theorems show that if there is an epsilon proof of an epsilon-free formula, then there is such a proof not using the second epsilon axiom, and in which all epsilon terms have rank just 1. Even though such epsilon terms might still contain free variables, if one replaces those that do with a fixed symbol 'a' (starting with those of maximal length) that reduces the proof to one in what is called the 'epsilon star' system, in which there are only constant epsilon terms (Leisenring 1969, p66f). Leisenring shows that proofs in the epsilon star system can be turned into proofs in the predicate calculus, by replacing the epsilon terms by individual symbols. But, as was said before, there is now available a much shorter proof of the Second Epsilon Theorem. In fact there are several, but I shall just indicate one, which arises simply by modifying the predicate calculus truth trees, as found in, for instance, Jeffrey (see Jeffrey 1967). Jeffrey uses the standard propositional truth tree rules, together with the rules of quantifier interchange, which remain unaffected, and which are not material to the present purpose. He also has, however, a rule of existential quantifier elimination, (∃x)Fx ├ Fa, in which 'a' must be new, and a rule of universal quantifier elimination (x)Fx ├ Fb, in which 'b' must be old — unless no other individual terms are available. By reducing closed formulas of the form 'P & ¬C' to absurdity Jeffrey can then prove 'P → C', and validate 'P ├ C' in his calculus. But clearly, upon adding epsilon terms to the language, the first of these rules must be changed to (∃x)Fx ├ FεxFx, while also the second rule can be replaced by the pair (x)Fx ├ Fεx¬Fx, Fεx¬Fx ├ Fa, (where 'a' is old) to produce an appropriate proof procedure.
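The content of these epsilon rules can be illustrated with a toy model over a finite, non-empty domain. This is a sketch only: the function names, and the restriction to finite domains, are ours, not part of the calculus.

```python
def epsilon(domain, pred):
    """A toy Hilbert epsilon over a finite, non-empty domain: return some
    element satisfying pred if one exists, otherwise an arbitrary element
    (here simply the first)."""
    for x in domain:
        if pred(x):
            return x
    return domain[0]

def exists(domain, pred):
    # (∃x)Fx is equivalent to F(εxFx)
    return pred(epsilon(domain, pred))

def forall(domain, pred):
    # (x)Fx is equivalent to F(εx¬Fx): F holds of everything just in case
    # it holds of the prime putative exception to it
    return pred(epsilon(domain, lambda x: not pred(x)))
```

Here `epsilon` behaves like any Skolem-style choice: when nothing satisfies the predicate it still returns something, but an arbitrary something, which is exactly why `exists` and `forall` come out with the right truth values.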
Steen reads ‘εx¬Fx’ as ‘the most un-F-like thing’ (Steen 1972, p162), which explains why Fεx¬Fx entails Fa, since if the most un-F-like thing is in fact F, then the most plausible counter-example to the generalisation is in fact not so, making the generalisation exceptionless. But there is a more important reason why the rule of universal quantifier elimination is best broken up into two parts. For Jeffrey’s rules only allow him ‘limited upward correctness’ (Jeffrey 1967, p167), since Jeffrey has to say, with respect to his universal quantifier elimination rule, that the range of the quantification there be limited merely to the universe of discourse of the path below. This is because, if an initial sentence is false in a valuation so also must be one of its conclusions. But the first epsilon rule which replaces Jeffrey’s rule ensures, instead, that there is ‘total upwards correctness’. For if it is false that everything is F then, without any special interpretation of the quantifier, one of the given consequences of the universal statement is false, namely the immediate one — since Fεx¬Fx is in fact equivalent to (x)Fx. A similar improvement also arises with the existential quantifier elimination rule. For Jeffrey can only get ‘limited downwards correctness’, with his existential quantifier elimination rule (Jeffrey 1967, p165), since it is not an entailment. In fact, in order to show that if an initial sentence is true in a valuation so is one of its conclusions, in this case, Jeffrey has to stretch his notion of ‘truth’ to being true either in the given valuation, or some nominal variant of it. The epsilon rule which replaces Jeffrey’s overcomes this difficulty by not employing names, only demonstrative descriptions, and by being, as a result, totally downward correct. For if there is an F then that F is F, whatever name is used to refer to it. 
The epsilon calculus terminology thus precedes any naming: it gets hold of the more primitive, demonstrative way we have of referring to objects, using phrases like 'that F'. Thus in explication of the predicate calculus rule we might well have said suppose there is an F, well, call that F 'a', then Fa, but that requires we understand 'that F' before we come to use 'a'. So how does the Second Epsilon Theorem follow? This theorem, as before, states that an epsilon calculus proof of an epsilon-free theorem may be replaced by a predicate calculus proof of the same formula. But the transformation required in the present setting is now evident: simply change to new names all epsilon terms introduced in the epsilon calculus quantifier elimination rules. This covers both the new names in Jeffrey's first rule, but also the odd case where there are no old names in Jeffrey's second rule. The epsilon calculus proofs invariably use constant epsilon terms, and are thus effectively in Leisenring's epsilon star system. Epsilon terms which are non-constant, however, crucially enter the proof of the First Epsilon Theorem. The First Epsilon Theorem states that if C is a provable predicate calculus formula, in prenex normal form, i.e. with all quantifiers at the front, then a finite disjunction of instances of C's matrix is provable in the epsilon calculus. The crucial fact is that the epsilon calculus gives us access to Herbrand functions, which arise when universal quantifiers are eliminated from formulas using their epsilon definition. Thus for instance, (∃y)(x)¬Fyx is equivalent to (∃y)¬Fyεx¬¬Fyx, and so to (∃y)¬FyεxFyx, and the resulting epsilon term 'εxFyx' is a Herbrand function. Using such reductions, all universal quantifiers can evidently be removed from formulas in prenex normal form, and the additional fact that, in a certain specific way, the remaining existential quantifiers are disjunctions makes all predicate calculus formulas equivalent to disjunctions.
Remember that a formula is provable if its negation is reducible to absurdity, which means that its truth tree must close. But, by König's Lemma, if there is no open path through a truth tree then there is some finite stage at which there is no open path, so, in the case above, for instance, if no valuation makes the last formula's negation true, then the tree of the instances of that negative statement must close in a finite length. But the negative statement is the universal formula by the rules of quantifier interchange, so a finite conjunction of instances of the matrix of this universal formula, namely Fyx, must reduce to absurdity. For the rules of universal quantifier elimination only produce consequences with the form of this matrix. By de Morgan's Laws, that makes necessary a finite disjunction of instances of ¬Fyx. By generalisation we thus get the First Epsilon Theorem. The epsilon calculus, however, can take us further than the First Epsilon Theorem. Indeed, one has to take care with the impression this theorem may give that existential statements are just equivalent to disjunctions. If that were the case, then existential statements would be unlike individual statements, saying not that one specified thing has a certain property, but merely that one of a certain group of things has a certain property. The group in question is normally called the 'domain' of the quantification, and this, it seems, has to be specified when setting out the semantics of quantifiers. But study of the epsilon calculus shows that there is no need for such 'domains', or indeed for such semantics. This is because the example above, for instance, is also equivalent to ¬FaεxFax, where a = εy¬FyεxFyx. So the previous disjunction of instances of ¬Fyx is in fact only true because this specific disjunct is true.
The First Epsilon Theorem, it must be remembered, does not prove that an existential statement is equivalent to a certain disjunction; it shows merely that an existential statement is provable if and only if a certain disjunction is provable. And what is also provable, in such a case, is a statement merely about one object. Indeed the existential statement is provably equivalent to it. It is this fact which supports the epsilon definition of the quantifiers; and it is what permits anaphoric reference to the same object by means of the same epsilon term. An existential statement is thus just another statement about an individual — merely a nameless one. The reverse point goes for the universal quantifier: a universal statement is not the conjunction of its instances, even though it implies them. A generalisation is simply equivalent to one of its instances — to the one involving the prime putative exception to it, as we have seen. Not being able to specify that prime putative exception leaves Jeffrey saying that if a generalisation is false then one of its instances is false without any way of ensuring that that instance has been drawn as a conclusion below it in the truth tree except by limiting the interpretation of the generalisation just to the universe of discourse of the path. It thus seems necessary, within the predicate calculus, that there be a 'model' for the quantifiers which restricts them to a certain 'domain', which means that they do not necessarily range over everything. But in the epsilon calculus the quantifiers do, invariably, range over everything, and so there is no need to specify their range.

7. References and Further Reading

• Ackermann, W. 1937-8, 'Mengentheoretische Begründung der Logik', Mathematische Annalen, 115, 1-22.
• Asser, G. 1957, 'Theorie der Logischen Auswahlfunktionen', Zeitschrift für Mathematische Logik und Grundlagen der Mathematik, 3, 30-68.
• Bernays, P. 1958, Axiomatic Set Theory, North Holland, Dordrecht.
• Bertolet, R. 1980, 'The Semantic Significance of Donnellan's Distinction', Philosophical Studies, 37, 281-288.
• Bourbaki, N. 1954, Éléments de Mathématique, Hermann, Paris.
• Bunt, H.C. 1985, Mass Terms and Model-Theoretic Semantics, C.U.P., Cambridge.
• Church, A. 1940, 'A Formulation of the Simple Theory of Types', Journal of Symbolic Logic, 5, 56-68.
• Copi, I. 1973, Symbolic Logic, 4th ed., Macmillan, New York.
• Devitt, M. 1974, 'Singular Terms', The Journal of Philosophy, 71, 183-205.
• Donnellan, K. 1966, 'Reference and Definite Descriptions', Philosophical Review, 75, 281-304.
• Egli, U. and von Heusinger, K. 1995, 'The Epsilon Operator and E-Type Pronouns' in U. Egli et al. (eds.), Lexical Knowledge in the Organisation of Language, Benjamins, Amsterdam.
• Evans, G. 1977, 'Pronouns, Quantifiers and Relative Clauses', Canadian Journal of Philosophy, 7, 467-536.
• Geach, P.T. 1962, Reference and Generality, Cornell University Press, Ithaca.
• Geach, P.T. 1967, 'Intentional Identity', The Journal of Philosophy, 64, 627-632.
• Goddard, L. and Routley, R. 1973, The Logic of Significance and Context, Scottish Academic Press, Aberdeen.
• Gödel, K. 1969, 'An Interpretation of the Intuitionistic Sentential Calculus', in J. Hintikka (ed.), The Philosophy of Mathematics, O.U.P., Oxford.
• Hazen, A. 1987, 'Natural Deduction and Hilbert's ε-operator', Journal of Philosophical Logic, 16, 411-421.
• Hermes, H. 1965, Eine Termlogik mit Auswahloperator, Springer Verlag, Berlin.
• Hilbert, D. 1923, 'Die Logischen Grundlagen der Mathematik', Mathematische Annalen, 88, 151-165.
• Hilbert, D. 1925, 'On the Infinite' in J. van Heijenhoort (ed.), From Frege to Gödel, Harvard University Press, Cambridge MA.
• Hilbert, D. and Bernays, P. 1970, Grundlagen der Mathematik, 2nd ed., Springer, Berlin.
• Hughes, G.E. and Cresswell, M.J. 1968, An Introduction to Modal Logic, Methuen, London.
• Jeffrey, R. 1967, Formal Logic: Its Scope and Limits, 1st ed., McGraw-Hill, New York.
• Kalish, D. and Montague, R. 1964, Logic: Techniques of Formal Reasoning, Harcourt, Brace and World, Inc., New York.
• Kneebone, G.T. 1963, Mathematical Logic and the Foundations of Mathematics, Van Nostrand, Dordrecht.
• Leisenring, A.C. 1969, Mathematical Logic and Hilbert's ε-symbol, Macdonald, London.
• Marciszewski, W. 1981, Dictionary of Logic, Martinus Nijhoff, The Hague.
• Meyer Viol, W.P.M. 1995, Instantial Logic, ILLC Dissertation Series 1995-11, Amsterdam.
• Montague, R. 1963, 'Syntactical Treatments of Modality, with Corollaries on Reflection Principles and Finite Axiomatisability', Acta Philosophica Fennica, 16, 155-167.
• Neale, S. 1990, Descriptions, MIT Press, Cambridge MA.
• Priest, G.G. 1984, 'Semantic Closure', Studia Logica, XLIII 1/2, 117-129.
• Prior, A.N. 1971, Objects of Thought, O.U.P., Oxford.
• Purdy, W.C. 1994, 'A Variable-Free Logic for Anaphora' in P. Humphreys (ed.), Patrick Suppes: Scientific Philosopher, Vol 3, Kluwer, Dordrecht, 41-70.
• Quine, W.V.O. 1960, Word and Object, Wiley, New York.
• Rasiowa, H. 1956, 'On the ε-theorems', Fundamenta Mathematicae, 43, 156-165.
• Rosser, J.B. 1953, Logic for Mathematicians, McGraw-Hill, New York.
• Routley, R. 1969, 'A Simple Natural Deduction System', Logique et Analyse, 12, 129-152.
• Routley, R. 1977, 'Choice and Descriptions in Enriched Intensional Languages II, and III', in E. Morscher, J. Czermak, and P. Weingartner (eds), Problems in Logic and Ontology, Akademische Druck und Velagsanstalt, Graz.
• Routley, R. 1980, Exploring Meinong's Jungle, Departmental Monograph #3, Philosophy Department, R.S.S.S., A.N.U., Canberra.
• Routley, R., Meyer, R. and Goddard, L. 1974, 'Choice and Descriptions in Enriched Intensional Languages I', Journal of Philosophical Logic, 3, 291-316.
• Russell, B. 1905, 'On Denoting', Mind, 14, 479-493.
• Sayward, C. 1987, 'Prior's Theory of Truth', Analysis, 47, 83-87.
• Slater, B.H. 1986(a), 'E-type Pronouns and ε-terms', Canadian Journal of Philosophy, 16, 27-38.
• Slater, B.H. 1986(b), 'Prior's Analytic', Analysis, 46, 76-81.
• Slater, B.H. 1988(a), 'Intensional Identities', Logique et Analyse, 121-2, 93-107.
• Slater, B.H. 1988(b), 'Hilbertian Reference', Noûs, 22, 283-97.
• Slater, B.H. 1989(a), 'Modal Semantics', Logique et Analyse, 127-8, 195-209.
• Slater, B.H. 1990, 'Using Hilbert's Calculus', Logique et Analyse, 129-130, 45-67.
• Slater, B.H. 1992(a), 'Routley's Formulation of Transparency', History and Philosophy of Logic, 13, 215-24.
• Slater, B.H. 1994(a), 'The Epsilon Calculus' Problematic', Philosophical Papers, XXIII, 217-42.
• Steen, S.W.P. 1972, Mathematical Logic, C.U.P., Cambridge.
• Thomason, R. 1977, 'Indirect Discourse is not Quotational', Monist, 60, 340-354.
• Thomason, R. 1980, 'A Note on Syntactical Treatments of Modality', Synthese, 44, 391-395.
• Thomason, R.H. and Stalnaker, R.C. 1968, 'Modality and Reference', Noûs, 2, 359-372.

Author Information

Barry Hartley Slater
Email: slaterbh@cyllene.uwa.edu.au
University of Western Australia
Questions: What is collinearity? Why is it a problem? How do I know if I've got it? What can I do about it?

When IVs are correlated, there are problems in estimating regression coefficients. Collinearity means that within the set of IVs, some of the IVs are (nearly) totally predicted by the other IVs. The variables thus affected have b and beta weights that are not well estimated (the problem of the "bouncing betas"). Minor fluctuations in the sample (measurement errors, sampling error) will have a major impact on the weights.

Variance Inflation Factor (VIF)

The standard error of the b weight with 2 IVs:

    se(b1) = sqrt( MSres / (SSX1 (1 - r^2[12])) )

This is the square root of the mean square residual over the sum of squares X[1] times 1 minus the squared correlation between IVs. The sampling variance of the b weight with 2 IVs:

    var(b1) = (MSres / SSX1) (1 / (1 - r^2[12]))

Notice the term on the far right. The standard error of the b weight with multiple IVs:

    se(b1) = sqrt( MSres / (SSX1 (1 - R^2[1])) )

The term on the bottom right is the squared correlation where IV 1 is considered as a DV and all the other IVs are treated as IVs. The R-square term tells us how predictable our IV is from the set of other IVs. It tells us about the linear dependence of one IV on all the other IVs. The VIF for variable b1:

    VIF[1] = 1 / (1 - r^2[12])

The VIF for variable i:

    VIF[i] = 1 / (1 - R^2[i])

Big values of VIF are trouble. Some say look for values of 10 or larger, but there is no certain number that spells death. The VIF is also equal to the diagonal element of R^-1, the inverse of the correlation matrix of IVs. Recall that β = R^-1 r, so we need to find R^-1 to find the beta weights. This is easiest to see with a 2x2 matrix:

    R = | 1      r[12] |
        | r[21]  1     |

The determinant of R is |R| = (1)(1) - (r[12])(r[21]) = 1 - r^2[12]. To find the inverse, we have to interchange main diagonal elements, reverse the sign of the off-diagonal elements, and divide each element by the determinant, like this:

    R^-1 = | 1/(1 - r^2[12])       -r[12]/(1 - r^2[12]) |
           | -r[21]/(1 - r^2[12])   1/(1 - r^2[12])     |

As you can see, when r^2[12] is large, VIF will be large.
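A quick numerical check of the 2x2 case (the r12 value here is just a made-up example): the diagonal of R^-1 reproduces 1 over (1 minus the squared correlation).

```python
def vif_two_ivs(r12):
    # VIF with two predictors: 1 / (1 - r12 squared)
    return 1.0 / (1.0 - r12 ** 2)

def inverse_2x2_correlation(r12):
    # Inverse of R = [[1, r12], [r12, 1]]: interchange the diagonal elements
    # (here both 1), negate the off-diagonal, divide by the determinant.
    det = 1.0 - r12 ** 2
    return [[1.0 / det, -r12 / det],
            [-r12 / det, 1.0 / det]]

r12 = 0.8                                    # hypothetical correlation between the IVs
print(vif_two_ivs(r12))                      # 2.777...
print(inverse_2x2_correlation(r12)[0][0])    # same number: the VIF sits on the diagonal
```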
When R is of order greater than 2 x 2, the main diagonal elements of R^-1 are 1/(1 - R^2[i]), so we have the multiple correlation of the X with the other IVs instead of the simple correlation.

Tolerance = 1 - R^2[i] = 1/VIF[i]

Small values of tolerance (close to zero) are trouble. Some computer programs will complain to you about tolerance. Do not interpret such complaints as computerized comments on silicon diversity; rather look to problems in collinearity.

Condition Indices

Most multivariate statistical approaches (factor analysis, MANOVA, canonical correlation, etc.) involve decomposing a correlation matrix into linear combinations of variables. The linear combinations are chosen so that the first combination has the largest possible variance (subject to some restrictions we won't discuss), the second combination has the next largest variance, subject to being uncorrelated with the first, the third has the largest possible variance, subject to being uncorrelated with the first and second, and so forth. The variance of each of these linear combinations is called an eigenvalue. You will learn about the kinds of decompositions and their uses in a course on multivariate statistics. We will only be using the eigenvalue for diagnosing collinearity in multiple regression. SAS will produce a table for you that looks kind of like this (if you have 3 IVs) (Pedhazur, p. 303):

    Number   Eigenval   Condition Index   Variance Proportions
                                          Constant   X1     X2     X3
    1        3.771      1.00              .004       .006   .006   .008
    2        .106       5.969             .003       .029   .268   .774
    3        .079       6.90              .000       .749   .397   .066
    4        .039       9.946             .993       .215   .329   .152

Number stands for linear combination of X variables. Eigenval(ue) stands for the variance of that combination. The condition index is a simple function of the eigenvalues, namely

    CI[i] = sqrt( l[max] / l[i] )

where l is the conventional symbol for an eigenvalue. To use the table, you first look at the variance proportions.
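The Condition Index column of the table can be recomputed from the eigenvalues, taking the condition index of each eigenvalue as the square root of (largest eigenvalue / that eigenvalue); small discrepancies reflect rounding in the reported eigenvalues.

```python
import math

eigenvalues = [3.771, 0.106, 0.079, 0.039]  # Eigenval column from the table above

def condition_indices(eigs):
    # sqrt(largest eigenvalue / this eigenvalue), for each eigenvalue
    largest = max(eigs)
    return [math.sqrt(largest / e) for e in eigs]

for e, ci in zip(eigenvalues, condition_indices(eigenvalues)):
    print(e, round(ci, 3))
```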
For X1, for example, most of the variance (about 75 percent) is associated with Number 3, which has an eigenvalue of .079 and a condition index of 6.90. Most of the rest of X1 is associated with Number 4. Variable X2 is associated with 3 different numbers (2, 3, & 4), and X3 is mostly associated with Number 2. Look for variance proportions about .50 and larger. Collinearity is spotted by finding 2 or more variables that have large proportions of variance (.50 or more) that correspond to large condition indices. A rule of thumb is to label as large those condition indices in the range of 30 or larger. There is no evident problem with collinearity in the above example. There is thought to be a problem indicated in the example below (Pedhazur, p. 303):

    Number   Eigenval   Condition Index   Variance Proportions
                                          Constant   X1     X2     X3
    1        3.819      1.00              .004       .006   .002   .002
    2        .117       5.707             .043       .384   .041   .087
    3        .047       9.025             .876       .608   .001   .042
    4        .017       15.128            .077       .002   .967   .868

The last condition index (15.128) is highly associated with X2 and X3. The b weights for X2 and X3 are probably not well estimated.

How to Deal with Collinearity

As you may have noticed, there are rules of thumb in deciding whether collinearity is a problem. People like to conclude that collinearity is not a problem. However, you should at least check to see if it seems to be a problem with your data. If it is, then you have some choices:

1. Lump it, but cautiously. Admit that there is ambiguity in the interpretation of the regression coefficients because they are not well estimated. Examine both the regression weights and zero order correlations together to see whether the results make sense. If the regression weights don't make sense, say so and refer to the correlation coefficients. Nonsignificant regression coefficients that correspond to "important" variables are very likely.

2. Select or combine variables.
If you have multiple indicators of the same variable (e.g., two omnibus cognitive ability tests, two tests of conscientiousness, etc.), add them together (for an alternative, see point 3). If you are in a prediction only context, you may wish to use one of the variable selection methods (e.g., all possible regressions) to choose a useful subset of variables for your equation.

3. Factor analyze your IVs to find sets of relatively homogeneous IVs that you can combine (add together).

4. Use another type of analysis (path analysis, SEM).

5. Use another type of regression (ridge regression).

6. Try unit weights, that is, standardize each IV and then add them without estimating regression weights. Of course, this is no longer regression.
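Remedy 6 is easy to make concrete. A minimal sketch of unit weighting (pure standard library; the function name is ours):

```python
import statistics

def unit_weight_composite(rows):
    # rows: one tuple of IV scores per case. Standardize each IV to
    # z-scores, then sum across IVs -- no regression weights estimated.
    columns = list(zip(*rows))
    z_columns = []
    for col in columns:
        mean = statistics.mean(col)
        sd = statistics.pstdev(col)
        z_columns.append([(x - mean) / sd for x in col])
    return [sum(zs) for zs in zip(*z_columns)]

scores = [(1, 10), (2, 20), (3, 30)]   # three cases, two IVs (toy data)
print(unit_weight_composite(scores))
```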
Proof that the rational numbers are dense (idea)

Density of Rational Numbers Theorem

Given any two real numbers α, β ∈ ℝ, α < β, there is a rational number r in ℚ such that α < r < β.

Proof: Since α < β, β - α > 0. 1 > 0 as well. We may use the Archimedean property to conclude that there is some m so 1 < m(β - α), or equivalently, mα + 1 < mβ. Let n be the largest integer such that n ≤ mα. Adding 1 to both sides gives n + 1 ≤ mα + 1 < mβ. But since n is the largest integer less than or equal to mα, we know that mα < n + 1 and therefore that mα < n + 1 < mβ, or α < (n+1)/m < β.

I like this because it's simplistic and low on ; I suppose it's more of a hoi polloi-ish proof than the professors would prefer we use. The Archimedean property is the most sophisticated tool you need to understand this, and there's a good write-up on that. This proof is fantastic for someone being introduced to the study of or a non-major "stuck" taking a single semester of the stuff.

Taken from a homework assignment from a class titled "Fundamental Properties of Spaces and Functions: Part I" at the University of Iowa.
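The proof is constructive, so it can be run. A sketch (the function name is ours), using exact arithmetic via `fractions`:

```python
from fractions import Fraction
import math

def rational_between(alpha, beta):
    # Follows the proof: pick m with m(beta - alpha) > 1 (the Archimedean
    # step), let n be the largest integer with n <= m*alpha, and return
    # the rational (n + 1)/m, which lies strictly between alpha and beta.
    assert alpha < beta
    m = math.floor(1 / (beta - alpha)) + 1   # so m > 1/(beta - alpha)
    n = math.floor(m * alpha)
    return Fraction(n + 1, m)

r = rational_between(Fraction(1, 3), Fraction(1, 2))
print(r)   # a rational strictly between 1/3 and 1/2
```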
Topic: Is logic part of mathematics - or is mathematics part of logic?
Replies: 32   Last Post: Jul 7, 2013 9:03 PM

Re: Is logic part of mathematics - or is mathematics part of logic?
Posted: Jul 5, 2013 1:02 PM

On Thu, 04 Jul 2013 20:12:09 -0600, Robert Hansen <bob@rsccore.com> wrote:
> On Jul 4, 2013, at 8:38 PM, GS Chandy <gs_chandy@yahoo.com> wrote:
>> Are we discussing effective/ineffective teaching methods and processes or how students do or should receive such teaching?
> I was curious as to how Lou handled the situation. I ran into the same situation myself a couple of times.
> Bob Hansen

Mostly I ignored her. As I already said, I knew better. And I didn't really give a big hairy rat's behind about my grade. This was, after all, back in my misspent youth.

--Louis A. Talman
Department of Mathematical and Computer Sciences
Metropolitan State University of Denver
Weekly Household Planner Wake up. Do some laundry. Feed the kids. Get the kids dressed. Do more laundry. Pay the bills. Clean the house - and then do more laundry. This is just a typical day (at least it seems like this!). And at times, if you look at everything you need to do, it gets so overwhelming. How can you get it all done? And how in the world can you find some time for yourself? Between work, housework, and family responsibilities, there is a lot we have to "get done." And, if you are planning on becoming more organized in 2013 (it's definitely on our list!) - we've got a great way to get you on the right track. A few months ago, Susan, the owner and creator of the Weekly Household Planner (who is also a PCI Certified Parent Coach® and Certified Family Manager® Coach)- contacted us and shared with us her secrets to keeping a more organized household. As she worked with other families (as well as her own) she saw the need for an organizer to help relieve the stress. 2013 Confident Mom Weekly Household Planner will keep busy moms from all walks of life motivated and on track with all their to-do items day after day, week after week, all year long. This weekly planner breaks down household tasks into manageable daily and weekly bite size pieces, utilizing smaller increments of time to keep the tasks from becoming too large and overwhelming. Most of the items can be easily completed in 3 to 30 minutes, and you can select appropriate items to delegate to family members (after all, even with the household planner, you still can’t be “super mom”). Just take a look at how it is set up: It is so easy to use, and it includes everything from ironing, cleaning the bathtub, vacuuming under furniture, and washing the car to taking your vitamins, exercising, meal planning, and pampering yourself! It will even help you make sure you’re drinking enough water each day. 
We were so excited to try the Weekly Household Planner (really, anything to relieve the stress is worth a try!), but we were pleasantly surprised by how it was. Not only did it help us focus on what we can get done and delegate tasks, but it helped us manage our time so that it could be spent where it is most valuable - with our families. The supplement kit was amazing, too. For us, it is so hard to keep track of all our accounts, birthdays, special events, blogs we want to visit, or recipes we want to try -but the supplements to this planner made it a lot easier to keep track of all of it! We are so excited to be able to offer 3 of our lucky readers the Weekly Household Planner plus the Supplemental Kit this week! Can't wait? Get it now! Use the code "six2" for $2 off the $14 Planner with Supplemental Pack (you can enter the code at checkout) Here's how to enter: Be sure to leave a separate comment for each entry. You can enter up to 3 times. Please leave your email address in the comment so that we can contact you if you win. The winner will be announced on January 16th, 2012. The winner of the Innotab 2s from Vtech Electronics is: rachpetite {at} gmail {dot} com Don't forget! Attending our Build Your Blog Conference 2013 is a great way to kick off your year! Join us for a fun-filled day including lunch, workshops, and panels. This will be a great opportunity to network with lots of other bloggers and businesses, plus you'll get lots of free prizes, swag, and loads of information to help you grow your blog! Register today! Confident Mom Weekly Household Planner Other interesting food sites: 1. this is great! I have liked the facebookpage, follow her on twitter (ineseda account) and ma favourite pin is the one with the glass frames. I follow her on pinterest (also ineseda account) and have repinned my 2. I have liked her on facebook. This would be so much fun!!! And so very helpful! moodiesmum at yahoo dot com 3. I have followed The Confident Mom on Facebook. 4. 
I have followed The Confident Mom on Twitter. 5. I have repinned the planner printables on Pinterest. 7. I liked her on facebook. – spuorro@yahoo.com 8. I followed the confident mom on twitter. – spuorro@yahoo.com 9. I repined her free printables. – spuorro@yahoo.com 10. I like on Facebook! batessrh {at} comcast {dot} net 11. I follow on Twitter! batessrh {at} comcast {dot} net 12. I follow on Pinterest! batessrh {at} comcast {dot} net 13. Nancylspangler@yahoo.com. I’m following confident mom on Twitter 14. I’m following Confident Mom (and Six Sisters) on Pinterest 15. I re-pinned a book worth reading from her; it’s called Desperate 16. I ‘liked’ confident mom on facebook. Hope I win, but thinking I’ll have to buy it if I don’t! 17. I follow her on twitter! @hinsy5 18. Liked “The Confident Mom” on Facebook. Following “The Confident Mom” on Twitter. My FAVORITE Pin from “The Confident Mom” on Pinterest is under “Parenting” and is referred to as “28 Things Your Family Needs to Hear You Say” – http://pinterest.com/pin/202310208232988664/. I would LOVE to win this new planner for 2013. It would really help me to get organized, which is my New Year’s 21. I pinned on pinterest, a way to keep papers in your kitchen organized!! 22. “Like” on Facebook! greerlloyd at gmail.com 23. Following on twitter~ greerlloyd at gmail.com 24. I liked on facebook! always looking for ways to be more organized so my household is more peaceful. 25. I liked on facebook! always looking for ways to be more organized so my household is more peaceful. 26. Repinned the IKEA entryway cabinet. greerlloyd at gmail.com 27. I liked her on Facebook! My email is anjop35@gmail.com 28. I already Liked on facebook 29. I also re-pinned her pin on natural room scents anjop35@gmail.com. Hope I win!! 30. I liked the Confident Mom on FB. 31. I have “liked” Confident Mom on facebook. 32. I have followed Confident Mom on Twitter (@franciscanmom) 33. I like Confident Mom on FB. 34. 
I repinned the eighteen25 printables from her board. 35. I like the Confident Mom on Facebook! 36. I like Confident Mom on Facebook! 37. I follow @ConfidentMom on Twitter! 38. I follow several Confident Mom boards on Pinterest. Let me know if you need my Pinterest user name. 39. I re-pinned this cool cleaning cupboard storage idea: http://pinterest.com/pin/215398794649736185/ 40. I liked The Confident Mom on Facebook! 41. I liked Confident Mom on facebook. 42. I repinned an awesome pin about do it yourself backsplash and saw so many awesome boards, so I had to follow them all. 43. I liked Confident Mom on Facebook. 44. I follow Confident Mom on Twitter. 45. I pinned a quote/sayings board from Confident Mom on Pinterest. “I love days where my only problem is ‘coffee or tea?’” LOL 47. This would be AWESOME! I need to get organized and have a cleaning schedule. This would help out a lot!!! 48. I like the confident mom on facebook. 49. I follow the confident mom on twitter 50. like on facebook mrspotts0817 at yahoo.com 51. i repinned ‘the blessings of adoption’. a great story and something to think about – you never know what life might bring 52. I am liked confident mom on facebook 53. i repined date night ideas from confident mom 54. Liked Confient Mom on Facebook! 55. Followed Confident Mom on Twitter! 56. Repinned a favorite pin from Confident Mom’s Pinterest! 58. I repinned on facebook! 59. I liked The Confident Mom on Facebook (Andi Bauer Trautman) 60. “Liked” Confident Mom on Facebook! 61. Repinned a pin from Confident Mom! 62. I liked the Confident Mom on Facebook. cpaulson0520{at}gmail{dot}com 63. Following on Twitter! (casjp) cpaulson0520{at}gmail{dot}com 64. Repinned a pin on Pinterest (cassipaulson) cpaulson0520{at}gmail{dot}com 65. I liked them on facebook (stephanie nemo) seftraditions@aoldotcom 66. So excited! Pinned/repinned in Pintrest. maandrus@msn.com 67. So excited! “Liked” in Facebook. maandrus@msn.com 69. 
Followed her on Twitter @kismetkreation1 71. I liked TCM on FB. catholichusker at gmail dot com 72. I like the Confident mom on facebook 73. I follow the Confident Mom on facebook mkshriver (@) gmail (.) com 74. I follow The Confident Mom on Twitter 75. Pinned on Pintrest….I repinned a link for building your own storage bins…love the idea 76. I like the Confident mom on facebook 77. I follow the Confident mom on twitter 78. I follow the Confident mom on pinterest!!! & this is my favorite pin…so far! Cuz I love all her boards! 79. I liked her on facebook. shasty at olatheks.org 80. I am now following her on Twitter *tweet tweet* shasty at olatheks.org 81. 1. I liked The Confident Mom on Facebook and leave a comment to let us know. yoursandmineareours@gmail.com , http://www.yoursandmineareours.blogspot.com 83. I followed The Confident Mom on Twitter and leave a comment to let us know. yoursandmineareours@gmail.com – http://www.yoursandmineareours.blogspot.com 84. I pinned this! Love the bathroom printables! Going up tonight! shasty at olatheks.org 85. bummer I don’t use Twitter…. But I use Facebook! 86. I have liked The Confident Mom on Facebook! 87. I have liked you on facebook, follow you on twitter and repinned on pinterest. 88. I am following The Confident Mom on Twitter! @KangaMomma 91. Liked the site on FB!! And I love this!! 92. I followed her on pinterest and pinned 30 day Mom challenge 93. I repinned from the Organization board … “keeping papers off the kitchen counter” Awesome! 95. I “liked” The Confident Mom on Facebook!! 96. I like her on facebook Robin Savage Ingram 97. I liked the confident mom on facebook! 99. I liked Confident Mom on Facebook. 100. I follow her on twitter @centsablerobin 101. I liked The Confident Mom on Facebook 103. I like her entire pinboard of free printables on pinterest. 104. I followed on Twitter. tcarver@bakeru.edu 105. I Pinned the Bathroom Signs from The confident Mom on Pinterest onto my House Cleaning and Organizing Board. 
106. I liked the Confident Mom on facebook. mrspastormcabee@gmail.com 107. I am following her on twitter 108. I follower on Pinterest and love her Kid stuff board! I could realy use this with out family of 8!!! tcarver@bakeru.edu 110. I liked Confident Mon on Facebook. Don’t have twitter 113. I pinned the lace bowls on Pinterest and can’t wait until I have time to go back and browse all her boards. mrspastormcabee@gmail.com 116. Liked Confident Mom on Facebook! 117. I repinned the Routine Cards download 119. Pinned a favorite item from her pinterest. 120. I liked the Confident Mom on FB!!!! 121. I follow on pinterest and repinned the free printable calendar to start with!!! 123. I liked Confident Mom on FB!!!!! 124. Pinned the Mom challenge & started following all her boards. Some great pins for me & the kids 125. Also liked & following on facebook 127. I liked confident mom on FB. 128. I follow confident mom on Twitter. 129. I follow confident mom on Twitter 130. i am following on facebook and also on pinterest; hope i can win this 131. I repinned from confident mom on pinterest devenish5 at Comcast dot net 132. I like Confident mom on FB!!!! 134. Pinned chore charts and organization printable s. 135. I liked confident mom on FB. 137. Shared & liked on Facebook! 140. I liked confident mom of FB 142. I liked on Facebook and follow on Twitter. 144. I have liked the Confident Mom on Facebook. 145. liked on fb 146. I liked the Confident Mom on FB 147. I am following the Confident Mom on Twitter! 148. I follow confident mom on FB 149. I pinned a date idea with my boys on pinterest 150. Liked confident mom on Facebook. 151. I liked Confident Mom on FB jennf83 at yahoo dot com 152. I followed Confident Mom on Twitter jennf83 at yahoo dot com 153. “Like” The Confident Mom on Facebook – aprildrew04@gmail.com 155. Follow The Confident Mom on Twitter – aprildrew04@gmail.com 156. I repined the blogging calendar from Confident Mom 157. Definitely like the facebook page! 159. 
Liked on FB! 160. Repinning a great one! Thank you! 161. Followed on Twitter! 162. Repin your favorite pin from The Confident Mom on Pinterest – aprildrew04@gmail.com I followed all her boards, looks like a lot of good ideas, and repinned Downloadable chore cards and routine cards. 163. Following on Pinterest too! hgarner2831@sbcglobal.net 164. I “like” the Confident Mom on FB 165. Following her on Twitter, THANKS! kmillette1019@gmail.com 166. Happily pinning away! Couldn’t choose just one! 167. Repinned her free bathroom printables..cute! jennf83 at yahoo dot com 168. I liked on FB naner1 at gmail dot com 169. Liked on Facebook, Thanks girls…great stuff! kmillette1019@gmail.com 170. I liked on twitter naner1 at gmail dot com 171. “Like” the Confident Mom on Facebook – czabel@gmail.com 172. I pinned some of the recipes. naner1 at gmail dot com 173. Pinned on Pinterest – czabel@gmail.com 174. Re-pinned this post it’s too great not to! kmillette1019@gmail.com 175. This comment has been removed by the author. 177. This comment has been removed by the author. 178. I just repinned her pin for the healing salve recipe. I can’t wait to try it. 180. Liked on facebook~ jessicaraeharkins at gmail dot com 181. Repinned the coconut lotion bar to my Pinterest board, Aromatherapy. 182. This comment has been removed by the author. 183. Followed on FB – kassandracolbert at gmail.com 185. Followed on twitter – kassandracolbert at gmail.com 186. Just liked The Confident Mom on Facebook! Thank You!! 187. repinned from the craft board the modpog clear plates with maps. I have bunches of disney maps and this would be a great way to use them – kassandracolbert at gmail.com 188. Repinned “25 ways to be a calm parent” 189. Repinned “30 day Mom challenge” jessicaraeharkins at gmail dot com 190. I liked CM’s Facebook page! 191. I liked confident mom on FB 192. I am now following CM on Twitter! 193. I liked the Confident Mom on facebook 194. 
I followed CM on Pinterest and went a little pin-nutty and repinned multiple of her pins…lots of great stuff! 195. I liked the CM on facebook. Awesome! 196. Followed on pinterest and repinned the Household Planner 197. Liked CM on fb! fun fun. 198. followed CM on twitter. katieafabbro@gmail.com 199. I liked confident mom on Facebook 201. I’m following confident mom on pinterest 202. I follow The Confident Mom on Pintrest & re-pinned one of her bedroom pins. 203. I’m following confident mom on twitter 204. I liked confident mom on Facebook 205. I like The Confident Mom on FB & left a comment as well. 206. I have followed the confident mom on pinterest! 207. I have like the confident mom on FB! 208. liked CM on pinterest and repinned. katieafabbro@gmail.com ty! 209. I liked her on Facebook! This organizer looks like it would be so helpful! 210. Liked CM on Facebook and repinned several things on Pinterest. olsenbethany at gmail.com 212. I liked on facebook ~ brookepool77@gmail.com 214. Liked Confident Mom on Facebook. tonietater@yahoo.com 216. I liked The Confident Mom on FB! jenn at mjray dot com 217. Following on Facebook 218. Following The Confident Mom on Pinterest, repinned her Christmas Stocking Printables! 219. Following on Twitter 220. Repinned many recipes. Love the crockpot recipe for Jimmy Dean Sausage breakfast casserole and the mixes, too. 221. I liked it on Facebook! 222. I followed CM on Facebook! 223. I followed CM on pinterest, and pinned some great ideas. 226. Followed on Pinterest and repined a free blog printable. Was hard to pick a “Favorite” as there are so many nice pins. http://pinterest.com/pin/266416134178661625/ 227. Like the Confident Mom on Facebook 228. Liked Confident Mom on facebook 229. Followed Confident Mom on Twitter 230. Repinned Meal Planning prinatables on Pinterest. rachelmurray329atgmaildotcom 231. I am following Confident Mom on Facebook. 232. I follow CM on Pinterest. 233. I “Liked” Confident Mom on Facebook! Elaine B. 234. 
I follow on Twitter, too! 235. I pinned several things from Confident Mom on Pinterest. Too many good things to pick just a favorite! I am now following her. Great stuff! Elaine B. P.S. I am on Twitter, but I can not figure it out… so no twitting for me 237. Liked on pinterest and repinned. 238. I liked on Facebook, and I pinned the really cool “Tips to Save Money Dining” It’s a really nice blog – glad you told us about it. 239. I liked the Confident Mom on facebook 240. I have followed Confident Mom on Twitter (@MyPiecesOfLife) 241. I liked the Confident Mom on facebook, but when I tried to sign up for her e-mails, PFFFFT. 243. I have followed on Facebook (ejohns71@hotmail.com) 244. I am following on Twitter! (ejohns71@hotmail.com) 246. I am following on Pinterest! (ejohns71@hotmail.com) 248. I repinned one of her parenting tips on pinterest 249. I liked the confident mom on facebook- rebecca.noll.85@gmail.com 250. I liked The Confident Mom on FB and I’m following her on Twitter. 251. I repinned a gluten free recipe- rebecca.noll.85@gmail.com 252. I liked The Confident Mom on FB and following her on Twitter. Would love to win this calendar!! 253. I liked The Confident Mom on FB and I’m following her on Twitter. 256. Liked Confident Mom on Facebook 257. I repinned several of your photography pins. You have great ones about using a DSLR. Thank you! 258. I liked the Confident Mom on Facebook. 259. I am following the Confident Mom on Twitter. 260. I am following the Confident Mom on Pinterest. 261. I Liked CM on Facebook! 262. I Liked CM facebook page. 263. I like the confident mom on facebook 264. I Follow the confident mom on twitter! 265. I repinned her gluten free recipes tonight…the strawberry rhubard crisp and blueberry crisp. And will pin a bunch more! 266. Followed on FB- thanks for suggesting, looking forward to her tips and advice on how I can be a more “confident mom”! 267. 
Liked on FB- thanks for suggesting, looking forward to her tips and advice on how I can be a more “confident mom!” 268. Liked the Confident Mom on FaceBook 269. I follow CM on FB! howellfarm @ gmail . Com 270. I follow CM on FB! howellfarm @ gmail . Com 271. I am following CM on Pinterest and repinned positive printables! howellfarm @ gmail . Com 272. repinned 31 Days to Mom Mojo Good Stuff! 275. Repinned her “20 mother/son date ideas” on Pinterest 276. I liked confident Mom on Facebook-forgot to leave my email the first time I posted. Mamap1101@gmail.com 277. I repinned 31 Days of Mom Mojo!! forgot to leave my email the first time I posted. Mamap1101@gmail.com 278. I followed the confident mom on facebook. 279. I liked the CM on FB. abcmpsizin@gmail.com 281. Repinned a recipe. Yum. abcmpsizin@yahoo.com 282. Repinned on Pinterest the cute Easter egg container idea 283. Liked Confident Mom on FB. caroljeanwiley at gmail dot com 284. I like Confident Mom on Facebool 285. Follow confident mom on twitter 286. i pinned on pinterest, their planners are adorable! 288. I followed the Confident Mom on Pinterest…. 289. Followed on Facebook Donna McBroom-Theriot 290. Followed via Twitter @MyBookofStories Forgot my email in the last entry! mylife (dot) onestoryatatime@yahoo.com 291. Pinned (and followed) the meal planning chart and blogging planner. Donna McBroom-Theriot mylife (dot) 292. I repinned the calm quote. Fits perfectly for me right now. “Sometimes God calms the storm…sometimes He lets the storm rage and calms His child.” caroljeanwiley at gmail dot com 294. Following Confident Mom on FB debbie.k.jordan@gmail.oom 295. Repinned a quote about Motherhood. debbie.k.jordan@gmail.com 296. I followed CM on Facebook. Need help! 298. I’m following on twitter. 299. I repined the free planner printables. 301. I follow on twitter. @dezroute 303. I like the Confident Mom on fb 304. I like the Confident Mom on twitter 305. I liked her on Facebook! ♥ Samantha 306. 
I follow her on Twitter! ♥ Samantha 308. Is this open to people outside the US? 309. I liked the Confident Mom on facebook…domaidl@yahoo.com 310. I follow the Confident Mom on Pinterest. My favorite that I pinned was the Great Giveaways..every little bit helps. domaidl@yahoo.com 311. Following Confident Mom on Twitter…domaidl@yahoo.com 312. I “Like” The Confident Mom on Facebook keshakeke at gmail dot com 313. I Follow The Confident Mom on Twitter keshakeke at gmail dot com 314. I followed Confident Mom on Pinterest and repinned http://pinterest.com/pin/118993615126164370/ keshakeke{at} gmail {dot} com 315. I liked the Confident Mom on FB. 316. I followed the Confident Mom on Twitter. 317. I repinned from bedroom a bunkbed room idea. 318. I follow The Confident Mom on Twitter 319. Repinned her pin of the chalk paint method from Imparting Grace – http://pinterest.com/pin/125537908334771179/ 320. I like the Confident Mom on Facebook! 321. I am liking her on facebook… 322. I “Like” The Confident Mom on Facebook. 323. I Follow The Confident Mom on Twitter. 324. I Repinned my favorite pins from The Confident Mom on Pinterest. 326. My favorite pin on pinterest that I repinned so far has to be the 30 day mom challenge. 328. I liked the confident mom! 329. I followed The Confident Mom on Twitter. 330. I repinned the The Confident Mom’s pin on staying calm as a parent. 332. I repinned my favorite pin on pinterest. 333. I like the Confident Mom on Facebook! 334. I pinned several things but the one I will make first is the coffee filter wreath! 335. I have liked the Confident Mom on Facebook! 336. I liked the Confident Mom on Twitter! 338. I followed Confident Mom on Pinterest and repinned her Weekly and Monthly Menu Plans! 341. I liked her on Facebook. Cjdray2@yahoo.com 342. I followed her on twitter. Cjdray 343. I re-pinned a bruschetta chicken recipe that looks yummy. Cjdray2@yahoo.com 344. I repinned the idea about date nights in a jar! Easy. 345. 
I liked the confident mom on Facebook! 346. I have liked the confident Mom on facebook. 347. I repinned her 31 prayers for children on Pinterest. jenlee818@gmail.com 348. I liked the Confident Mom on Facebook! 349. I repinned one of CM’s recipe pins called “Smoky Corn Chowder with Shrimp” – yum! 350. liked confident mom on facebook 351. Just liked Confident Mom on Facebook 352. I liked the confident mom on facebook 353. I also pinned her joblist for kids (wonderful!) 357. I have followed on twitter, facebook & pinterest – loved the natural room scents & using bounce on baseboards to clean. 358. The calendar is a great idea! I would love to have it to help me get organized. 359. Thanks for referring me to her Pinterest site. There are a lot of great ideas there. 361. I also followed CM on Twitter 362. And last but not least I pinned my fave pin from CM as well! 363. Followed The Confident Mom on Facebook as TxTerri Sweeps 364. Already following The Confident Mom on twitter as @TxTerriSweeps 365. Like’d the Confident Mom on Facebook! 366. Following Susan Heid@confidentmom on Twitter! 367. I re-pinned “free printables for bathroom and bedrooms” off of her Free Printables Board! She has soooo many awesome things on her Pinterest..totally following all!! 368. Following Confident Mom on Pinterest now and have already repinned loads of her ideas! 369. I liked Confident Mom on FB. 370. I repined Confident Mom on FB. 373. I repinned my favorite pin on pinterest. 374. Followed the CM on pinterest! 375. I “liked” CM on facebook! 376. I followed and repinned on Pinterest and liked on Facebook. Can’t wait to see all the new things! 377. Followed on Pinterest and repinned. And liked on Facebook. Looking forward to more cool things! 378. Wow, looks great! I am I liked Confident Mom on Facebook. squeakandus at msn dot com 379. I’m following on Twitter now. squeakandus at msn dot com 380. I repined on pintrest! squeakandus at msn dot com 381. This looks like just what I need! 
Thanks! I wanna win! I liked them on FB!!! 382. Following CM on Twitter! -Desiree Larson (uptowndez@gmail.com) 383. Following CM on Twitter (uptowndez) Desiree – uptowndez@gmail.com 384. I like Confident Mom on Facebook 385. i pinned!!! mspiggy381 at hotmail dot com 386. i like her facebook page!!! mspiggy381 at hotmail dot com 387. I liked the Confident Mom on FB! 390. I liked The Confident Mom on Facebook! 391. I follow confident mom on Pinterest! 392. I’m a twitter follower too! 394. This is great! I liked CM on facebook. 395. I’ve liked the Confident Mom on FB for quite some time! 396. I pinned tons of her gardening ideas… had to stop~ LOL 397. Ack.. Forgot to add my email address… so doing it again. I already liked Confident Mom on FB… lkatrin at aol dot com 398. Lkatrin at aol dot com… I pinned tons of gardening items but my favorite was the weed killer recipe. Can hardly wait! Maybe I’ll have flowers by the time my husband comes home from deployment. 399. I also pinned many things from her boards. 400. I pinned a build your own storage bins! Pretty neat!! 401. I followed The Confident Mom on Facebook and Twitter. I pinned the kiss the chorelist goodbye. My email is trishayoder@gmail.com Love your site and her’s by the way, I am a new follower of both 402. I pinned the Confident Mom’s quote on Pinterest, and liked her on Facebook! 403. I liked the confident mom on Facebook! 404. Please pick me I need help so busy need (HELP)! 405. Liked her on FB…not a twitter person so I will have to skip that one:) 406. AHHH! How have I not come across her stuff before! I love, love, LOVE all her charts and printables, especially the age-appropriate chore charts for kids. I am a desperatey unorganized mom who loves charts, printables and planners. Now if I could just stick with any one system for more than two weeks 408. I followed CM on Facebook 409. I followed CM on Twitter. 410. 
I’m following CM on Pinterest, and I pinned a quote about loving an imperfect person that was just perfect for me tonight! 411. I liked the confident mom on Facebook. 412. Liked Confident Mom on FB 413. I followed Confident Mom on Twitter 414. I repinned. Especially love the idea of glass frames for pictures instead of canvas. 415. I liked on fb and am following on twitter! I´m all the way in Spain!! I need the planner, por favor!!! 416. I have liked Confident Mom on facebook. 417. I have followed Confident Mom on twitter. 418. I pinned on Pinterest and am going back to pin some more! 419. I “liked” Confident Mom on Facebook. YAH! imooseu@gmail.com 420. I’m following Confident Mom on Facebook! Thanks! imooseu@gmail.com
{"url":"http://www.sixsistersstuff.com/2013/01/get-organized-in-2013-weekly-household.html?showComment=1357768498603","timestamp":"2014-04-20T09:12:14Z","content_type":null,"content_length":"602385","record_id":"<urn:uuid:738a65d9-ad6c-411b-b191-93c4eb573c86>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00523-ip-10-147-4-33.ec2.internal.warc.gz"}
Declare the Area of the Octagon

September 2nd 2005, 08:03 AM #1 Jul 2005

Declare the Area of the Octagon
If I cut the corners off a square, I get a regular octagon. The task is to express the octagon's area in terms of the square's side a. The answer is 2a^2*[sqrt(2)-1]
Last edited by Math Help; September 2nd 2005 at 10:43 PM. Reason: added image

I hope it is not too much
Hi (I don't know your math background, so just so you know before reading this, I am going to use: Pythagoras and simplification of radicals, I hope that is OK with you).
A good way to find the area of the octagon is to subtract the area of the four triangles you have in the corners from the area of the square.
1) Area of square: A(sq.) = side^2 = a^2.
2) Area of triangle: A(tr.) = 1/2 b*h where b is the base and h the height. We have neither of them. OK, let's take the triangle at the bottom right /| . We will call it ABC, with A the vertex with the right angle (since this angle is one of the angles of the square, its measure is 90 degrees), and let B be the vertex on the same horizontal line as A. So CA is the height of the right-angled triangle ABC and AB is the base.
If you understand that ABC is isosceles, skip this paragraph. If not, let's prove that the angles at B and C are 45 degrees. In any polygon (3 or more sides), the sum of the measures of the angles is given by (c-2)*180 degrees, where c is the number of sides. So for an octagon we have (8-2)*180 = 1080 degrees. Now, since it is regular, each angle measures 1/8 of 1080, which is 135 degrees. You can see in the figure that the angle at B is next to one of the angles of the octagon (let's call it D) with D+B = 180 degrees, so B = 180-D = 180-135 = 45 degrees. And by the same trick, or by the sum of the angles in a triangle, you can find that C measures 45 degrees also. So the triangle ABC is isosceles, which gives us that the measure of AB = measure of CA. Let's name the measure of a side of the octagon: s.
So the hypotenuse of the triangle ABC has measure s (it is a side of the octagon). So by Pythagoras: s^2 = CA^2 + AB^2 = 2*AB^2 (since CA = AB), so s^2 /2 = AB^2, and by taking square roots on each side we have s/sqrt(2) = AB. OK! So we are almost done! Now, the area of the triangle ABC is given by A(tr.) = AB*AC /2, and we know that AB = AC = s/sqrt(2), so A(tr.) = (s/sqrt(2) * s/sqrt(2)) /2 = s^2 /4. Now, we have 4 triangles like ABC, so A(4 tr.) = 4*A(tr.) = s^2. Yeah!
Now, we don't want new unknowns added to a (the side of the square), but we can see in the figure that a = AB + s + AB (I hope you see it: if you look at the bottom side of the square you see that it equals one side of the octagon plus 2 measures of AB). So a = 2AB + s, and then, knowing that AB = s/sqrt(2), we substitute and find a = 2s/sqrt(2) + s = sqrt(2) s + s = s(sqrt(2) + 1) by factorization. So we isolate s: s = a /(sqrt(2) + 1).
OK, so A(4 tr.) = (a/(sqrt(2) + 1))^2 = a^2 /(sqrt(2) + 1)^2.
A(octagon) = A(sq.) - A(4 tr.) = a^2 - a^2 /(sqrt(2) + 1)^2 = a^2 (1 - 1/(sqrt(2) + 1)^2) = a^2 {(sqrt(2) + 1)^2 - 1}/(sqrt(2) + 1)^2 = a^2 {(2 + 2sqrt(2) + 1) - 1}/(sqrt(2) + 1)^2 = a^2 (2sqrt(2) + 2)/(sqrt(2) + 1)^2.
So A(octagon) = 2a^2 (sqrt(2) + 1)/(sqrt(2) + 1)^2 = 2a^2 /(sqrt(2) + 1), which is the same as your answer, but for the beauty of math we multiply the numerator and the denominator by (sqrt(2) - 1), which gives after simplification 2a^2 (sqrt(2) - 1).
Last edited by hemza; September 3rd 2005 at 11:47 AM.

I looked at your solution and wondered about a = 2*AB + s. I reckoned that a = 3*AB. Strange. Generally I understand and appreciate your explanation.
Last edited by totalnewbie; September 4th 2005 at 05:48 AM.

It is rather strange that you find a = 3AB, because the figure shows that "a" contains at least "s" and 2AB, and "s" > AB since "s" is the hypotenuse of the triangle ABC, so a > 3AB. How do you find a = 3AB?
I think that the figure you have does not reflect reality, because "a" in your figure seems to be divided into 3 equal parts, which is definitely not the case (see the figure attached here). Because "s" is also the hypotenuse of the triangle ABC, it has to be greater than AB and AC: s > AB and s > AC, but we don't care about AC, so s > AB. It is very important that when you rely on a figure to do an exercise, it must be drawn precisely.
Last edited by hemza; September 6th 2005 at 06:54 AM.

I assumed that AB = s. But that made it clear: "Because "s" is also the hypotenuse of the triangle ABC it has to be greater than AB and AC : s>AB and s>AC but we don't care about AC so s>AB" I am not good at theory but I try to be.
Last edited by totalnewbie; September 6th 2005 at 08:20 AM.

Don't worry, you're doing fine, it just takes some practice...
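For readers who want a quick sanity check on the algebra above, here is a short Python snippet (not from the original thread) that reproduces the subtraction-of-corner-triangles argument and compares it with the closed form 2a^2 (sqrt(2) - 1):

```python
import math

a = 1.0  # side of the square; any positive value gives the same agreement

# From the thread: a = 2*AB + s with AB = s/sqrt(2), hence s = a/(sqrt(2) + 1)
s = a / (math.sqrt(2) + 1)
ab = s / math.sqrt(2)

# Octagon area = square area minus the four right isosceles corner triangles
area = a ** 2 - 4 * (0.5 * ab * ab)

closed_form = 2 * a ** 2 * (math.sqrt(2) - 1)
print(area, closed_form)  # the two values agree
```

For a = 1 both expressions give about 0.8284, confirming the derivation.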
{"url":"http://mathhelpforum.com/geometry/831-declare-area-octagon.html","timestamp":"2014-04-17T21:00:07Z","content_type":null,"content_length":"45113","record_id":"<urn:uuid:bca58106-8da9-402e-b9e7-d0c94addac7a>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00506-ip-10-147-4-33.ec2.internal.warc.gz"}
Wolfram Demonstrations Project The Black-Scholes European Call Option Formula Corrected Using the Gram-Charlier Expansion It has long been well known that the Black–Scholes model frequently misprices deep in-the-money and out-of-the-money options. A large part of the problem seems to lie in the normality assumptions of the Black–Scholes model. Empirical evidence shows that actual stock prices and stock returns have a distribution that is usually skewed and has a larger kurtosis than the log-normal distribution. There are a number of approaches that attempt to correct this problem. Here we illustrate an approach based on the Edgeworth (or Gram–Charlier) series, which allows one to expand a given probability density function in terms of the probability density function of the normal distribution and the cumulants of the given PDF. Using a finite truncation of this series instead of the original PDF, we obtain a formula for option prices with correction terms for nonzero values of skewness and excess kurtosis (kurtosis - 3). The plot shows the Black–Scholes and the corrected Black–Scholes values of the European call option on a stock with an initial price of 100 that pays no dividend, plotted against the "percentage moneyness" of the option, which is defined in terms of the initial price of the stock, the strike price, the time to expiry, and the interest rate (which in this Demonstration is taken to be 0).
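As a rough numerical companion to the corrected formula described above, here is a minimal Python sketch of a Gram–Charlier-adjusted call price in the Corrado–Su (1996) style. The function names are mine, and the exact correction terms have several variants in the literature (later papers fixed sign misprints in the original), so treat this as an illustrative sketch rather than the Demonstration's own code:

```python
import math

def _pdf(x):
    """Standard normal density."""
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def _cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    """Plain Black-Scholes European call on a non-dividend-paying stock."""
    srt = sigma * math.sqrt(T)
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / srt
    d2 = d1 - srt
    return S * _cdf(d1) - K * math.exp(-r * T) * _cdf(d2)

def gc_call(S, K, T, r, sigma, skew, exkurt):
    """Gram-Charlier-corrected call: Black-Scholes plus correction terms
    weighted by skewness and excess kurtosis of the return distribution."""
    srt = sigma * math.sqrt(T)
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / srt
    q3 = S * srt * ((2.0 * srt - d1) * _pdf(d1) + srt ** 2 * _cdf(d1)) / 6.0
    q4 = S * srt * ((d1 ** 2 - 1.0 - 3.0 * srt * (d1 - srt)) * _pdf(d1)
                    + srt ** 3 * _cdf(d1)) / 24.0
    return bs_call(S, K, T, r, sigma) + skew * q3 + exkurt * q4
```

With skew = 0 and exkurt = 0 the correction terms vanish and gc_call reduces exactly to bs_call, matching the plain Black–Scholes curve in the plot.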
The authors used the Edgeworth expansion (see [5]), which enabled them to obtain a formula for the value of a call option that accounts for the possibility of nonzero skewness and excess kurtosis in the distribution of stock returns. They showed that the sign of the skewness determines the overpricing-underpricing behavior of Black–Scholes for in-the-money and out-of-the-money options, while the kurtosis has the dominant effect for at-the-money options. This Demonstration is based on the formula for the call option obtained in [4]. The difference between this work and [3] is that in [4] the Gram–Charlier expansion is applied to the distribution of log-returns, while in [3] the Edgeworth expansion is applied to the distribution of stock prices. The conclusions of both studies are thus essentially equivalent. It should be noted that the "probability density function" obtained by truncating an infinite series is not a true PDF and can assume negative values. This can cause the formula for the option price to return negative values for deep out-of-the-money options.
[1] F. Black, "Fact and Fantasy in the Use of Options," Financial Analysts Journal, 1975, pp. 55–72.
[2] J. Macbeth and L. Merville, "An Empirical Examination of the Black–Scholes Call Option Pricing Model," Journal of Finance, 34, 1979, pp. 1173–86.
[3] R. Jarrow and A. Rudd, "Approximate Option Valuation for Arbitrary Stochastic Processes," Journal of Financial Economics, 10, 1982, pp. 347–69.
[4] C. J. Corrado and T. Su, "Skewness and Kurtosis in S&P500 Index Returns Implied by Option Prices," Journal of Financial Research, 19(2), 1996, pp. 175–192.
[5] M. Kendall and A. Stuart, The Advanced Theory of Statistics, Vol. 1: Distribution Theory, 4th ed., New York: Macmillan, 1977.
{"url":"http://demonstrations.wolfram.com/TheBlackScholesEuropeanCallOptionFormulaCorrectedUsingTheGra/","timestamp":"2014-04-16T10:28:55Z","content_type":null,"content_length":"46464","record_id":"<urn:uuid:2e1f4fab-296a-4d36-9e87-7a25b800051e>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00407-ip-10-147-4-33.ec2.internal.warc.gz"}
Schnucks Triple Coupons are BACK!
Well you were asking so here you go! TRIPLE COUPONS are BACK at all Schnucks stores! In Northern IL and Southern WI we can triple up to 20 coupons with a face value of $0.75 or less. But wait – this promotion is ONLY RUNNING THURSDAY, FRIDAY AND SATURDAY JAN 16 – 18, 2014! Here are the details:
• The first 20 coupons with a face value of up to 75¢ will be tripled. (This means you can triple up to 20 coupons that are $0.75 or less. You CAN use MORE than 20 coupons BUT coupon #21 and on will only be redeemed at face value. SO make sure to give the cashier the coupons you wish to triple first!)
• Additional coupons will be redeemed at face value.
• Limit 20 triple coupons per day, per household. (This means you cannot do more than one transaction per day.) While sometimes people want to be able to do more, this policy is in place to try to help ensure that people are not coming in and clearing shelves of the hot items, giving us all a chance to get the deals!
• Coupons 76¢ and over will be redeemed at face value.
• Limit one coupon per item.
• Limit three identical coupons. So for example you can only use 3 of the same coupon (so only 3 of the same $0.50/1 Cheerios coupon, or whatever).
• No rainchecks.
• Coupon value may not exceed price of item. (This means that if you buy an item, say, that is $1.50 and you have a $0.55 coupon for it, the coupon would not triple to $1.65. That is MORE than the price of the item. The coupon, however, will triple to the price of the item, making the item FREE.)
• Excludes FREE coupons, Schnucks coupons, tobacco coupons and items prohibited by law.
• We reserve the right to limit quantities purchased and coupons redeemed.
I will have the full ad match up shortly, but in the meantime, to help you plan, here are some good deals I see as of Tuesday night, Jan 15. Please note that prices found in store certainly could change Wednesday morning. Also note that prices listed below are based off of the E State Street store.
Prices do vary by store and region. While I strive to find as many deals as I can, I always miss things, so use this as a guide but look around as well for more deals!
Deals from the ad:
• $0.50/2 Progresso. 4/$5 in the ad so just $1 for 2 after coupon triples
• Betty Crocker Potatoes 10/$10. Use the $0.50/2 from 12/15 which will triple to $1.50 making 2 just $0.50. There was also the same coupon in 12/15
• Kelloggs Krave, Froot Loops, Scooby Doo, Cinnamon Jacks, Frosted Mini Wheats or Raisin Bran 2/$5. $0.50/1 Krave here under zip 77477 which will triple to $1.50 making that just $1.
• Old El Paso Taco Shells 3/$5. $0.60/3 here which will triple to $1.80 making 3 just $3.20. The better deal is that the Stand N Stuff are included and we have a $0.50/1 coupon here which will triple, making the Stand N Stuff just $0.16 each
• Many Quaker products are 2/$4, 2/$5 and 3/$6. Popped were 10/$10. We got a $0.75/2 Quaker Bars, Popped or Instant Oatmeal coupon in 1/12 and also a $0.75/2 Quaker Oatmeal or Cereal coupon as well in 1/12. The best deal will be if Popped stay $1: then 2 bags for free. For products that are $2 each (so 2/$4, 3/$6) you are paying $1.75 for 2 after coupon triples. Watch your coupons and items and make sure you don't mix products with coupons (i.e., don't get a bar and a cereal…neither coupon will work. It has to be bars OR cereal. Make sense?)
• Hormel Fully Cooked Entrees $5.99. $0.75/1 coupon here which will triple to $2.25 making this $3.74
• 2 FREE Schnucks Yogurt wyb 2 Belvita, which are currently $3.33 each. There was a $0.75/2 in 1/5 so $4.41 for 2 Schnucks yogurt and 2 Belvita
Other deals not in ad: Please note prices could change on Wed. Prices listed are as of Tuesday.
• $0.75/1 Land O Lakes Garlic and Herb Butter. This is $2.24. Coupon will triple to the price making this FREE
• $0.75/2 Betty Crocker Ultimate potatoes or Ultimate Hamburger Helper.
The Hamburger Helper Ultimate is currently 10/$10, so FREE for 2 after coupon triples IF the price stays 10/$10
• $0.75/1 Welch Light Beverage. These are currently 2/$4. IF they stay that price then FREE after coupon triples
• $0.75/1 Dole Fruit Smoothie Shaker. These are currently $1.99 so free after triple if they stay $1.99
• $0.75/2 Kids Yoplait including Gogurt. Gogurt and Trix are currently 2/$4 so $1.75 for 2 after triple. Check your store for Dora and Minnie Yogurt 4 ct also, which tends to run $1.99 or less regular price
• $0.50/2 Milky Way or 3 Musketeers. These should be $1 so just $0.50 for 2 after triple
• $0.50/2 Twix. These should also be $1 each so $0.50 after triple for 2
• Save 75¢ off one (1) Molly McButter product. This is currently on sale 2/$4 so FREE after triple if it stays 2/$4
• Save 50¢ off any 1 Vlasic or Farmers Garden product. Products range from $1.87 and up, so close to free or a great deal depending on what you purchase
• Save 75¢ on any ONE (1) Kikkoman Naturally Brewed Soy Sauce. This runs $1.59 ish so any price up to $2.25 is FREE
• ***SAVE $0.50 on ANY SUPERPRETZEL® Product. Price back up to $3.29 so $1.79 after triple.
• Welch Grape Natural Spread is $2.29 – $2.59. We got a $0.75/1 in 1/12 which will triple to $2.25 making this $0.04 to $0.34
• $0.60/3 Old El Paso. Seasonings are currently $0.75 so $0.45 for 3. Soft shells, beans are currently 4/$5 so $1.95 for 3 after triple
• SAVE $0.50 on any ONE (1) Snuggle® Product. Runs $3.99 so $2.49 after triple if the price stays $3.99. Also check 1/5 for the same coupon
• Blistex is currently $1. We got a $0.35/1 in 1/12 which will triple to the price making it free
• $0.70/1 Kelloggs To Go Breakfast Shake Mix. These are currently 2/$6 so $0.90 if they stay 2/$6
• $0.55/1 Jennie O Turkey Bacon. This is currently $2.99 so $1.34 after triple
• $0.75/1 Land O Lakes Saute Express $3.18. Coupon will triple to $2.25 making this $0.93
• ***$0.50/1 Progresso Recipe Starters. These were reg price at $2.49.
Did not go on sale. I would hold the coupon as they do go on sale every so often for $1.50 or less
• $0.40/6 Yoplait. These run $0.50 ish so $1.80 for 6 after triple
• $0.75/2 Betty Crocker Cookie Mixes, Frosting, Supreme Brownie Mix, dessert bar or Cake mixes. Cookies are currently on sale 2/$4. If they stay that price then $1.75 for 2 after triple coupon. Cake mixes are $1.99 so $1.73 for 2 after coupon triple. Frosting is $2.29 so $2.33 for 2 after triple coupon. Supreme Brownies are also 2/$4 so $1.75 for 2. Same coupon in 12/15
• $0.75/1 Fiber One Cereal. These are 2/$6 right now. I’m not sure they will stay that price. If they do then $0.75 cereal after coupon triples. I believe the regular price is around $4.19 so $1.94
• $0.75/2 Cascadian Farms Products. There are some cereals and granola bars on sale 2/$5 so $2.75 for 2 after triple.
• $0.75/2 Campbells Homestyle Soup. Chicken Noodle is $1.77. Coupon will triple to $2.25 making 2 just $1.29
• $0.75/1 Nudges product. Check for Nudges Vitamin Small Dog treats for $2.99. Coupon will triple to $2.25 making these $0.74
• $0.75 off of 5 Weight Watchers products. Final price will vary based on what you purchase
• $0.75/1 Dole Dippers. These are currently $3.69 so $1.44 after triple
• New York Garlic Knots. We got a $0.50/1 in 1/12 which will triple to $1.50. These are currently $2.96 (so $1.46). These do go on sale many times for $2 each, so if they do then great deal at $0.50
• New York Bread is currently $2.99 and we got a $0.40/1 in 1/12. The coupon will triple to $1.20. If these go on sale for $2 each then great deal
• There was a $0.55/1 Motts Snack and Go 4 or 12 pack Pouches, 6 cup single serve or 6 cup medleys in 1/12. Currently the 4 ct Snack and Go are 2/$5 so $0.85 after coupon triples. The 6 pack singles are $2.29 so $0.64 after triple coupon
• There was a $0.40/2 Brooks Chili Beans coupon in 1/12. State has smaller cans for $0.79 each so just $0.38 for 2 after coupon triples
• Maruchan Bowl and Yakisoba.
There was a $0.50/1 (one for each) in 1/12. Please note that the Yakisoba are always $1. On a regular day (not a triple day) that $0.50 coupon will double anyway, so free anyway. The bowls are $1.19 so again, any other day that coupon will double and those would be $0.19. These are two coupons that always give us free to almost free product, so I usually do NOT use them during triples. Wait until next week and Yakisoba will still be free. Yes, if you decide you want to use the bowl coupon it will be free during triples.
• Werther Sugar Free $0.75/1 coupon in 1/12. The Sugar Free are 2/$3 right now. I believe regular price is $2.19 ish so FREE (up to in-store price of $2.25 for the sugar free variety)
• $0.55/2 Hormel Chili (any) in 1/12. Right now some are on sale 2/$3 and others are $1.89, so $1.35 for 2 after triple if they stay 2/$3, or $2.13 if they are $1.89
• $0.55/1 Claussen Pickles in 1/12. These are $3.99 so $2.34 after triple
• $0.75/1 Scrubbing Bubbles All Purpose Cleaner with Fantastik. This is currently $2.49 so $0.24 after triple
• $0.75/2 Ziploc in 1/12. Some are currently 2/$5 so $2.75 for 2 after triple
• $0.75/1 Heluva Good dip in 12/15. The regular price is $2.28 so just $0.03 after triple
• $0.75/2 Jolly Time popcorn in 1/5. These are currently $1.29 so just $0.33 for 2 after coupon triples
• Musselman Big Cup and 6 ct cups are $2.49. We have a $0.40/1 Big Cup and a $0.75/2 6 pack in 1/5. Coupons make the Big Cup $1.29 and $2.73 for two 6 ct
• $0.75/1 Carmex Moisture Plus in 1/5. This is $2.85 so $0.60 after triple
• We have a $0.50/2 Knorr Pasta or Rice sides in 1/5. These are currently 4/$5 so just $1 for 2 after triple
• $0.50/2 Pillsbury Toaster Strudel or Toaster Scrambles or Pillsbury pancakes (excl Heat N Go Mini) in 1/5. These are currently $2 so $2.50 for 2 after triple
• $0.75/2 Ragu in 1/5. These run $2.29 reg price so $2.33 for 2 after triple
• $0.75/1 Desitin in 1/5.
This runs $3 so just $0.75 after triple
• Some Prego is currently 2/$3. If it stays that price, use the $0.40/2 from 1/5, so $1.80 for 2 after triple
• V8 Fusion is currently 2/$5. If it stays that price, use the $0.50/1 from 1/5 which will make this $1 after triple
• $0.50/1 Colgate 4 oz or larger in 1/5. Some stores carry basic Colgate paste for around $1 so free after triple (up to $1.50 price). Note: if they are $1, wait until triples are over, then use the coupon. The coupon will still double any other non-triple day, so still free
• Betty Crocker Fruit Snacks are currently 2/$4. We have a $0.50/2 printable here and also the same coupon from 1/5. Both will triple to $1.50 so $2.50 for 2 after triple
• Knorr Bouillon Cubes $0.50 off 8 ct in 1/5. These are regular price $1.19. If you want them free, use the coupon during triples; otherwise hold and wait for a non-triple day. The coupon will still double, making it $0.19
• Some stores sell 4 ct Angel Soft bath tissue for $1.49 (reg rolls). Use the $0.45/1 in 1/5 which will triple to $1.35 making this $0.14
• $0.45/3 Hunts Snack Pack from 1/5. These are on sale 10/$10 so $1.65 for 3 after triple (the coupon triples to $1.35 off $3)
• ***75¢ on any THREE (3) boxes (50 ct. or larger) OR any ONE (1) Bundle Pack® of Kleenex® Facial Tissues (not valid on trial size). Kleenex are back up to $1.88, so $3.39 for 3 after triple. I would skip this deal
• Go way back to Nov 17 for Green Giant Steamers coupon
• Go way back to Nov 3 for Del Monte canned vegetable coupons
This entry was posted in Schnucks, Uncategorized. Bookmark the permalink.
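For readers checking the math: every match-up above follows one rule — a qualifying coupon (face value 75¢ or less) is worth three times its face value, capped at the item price, while larger coupons redeem at face value. A small sketch of that rule (a hypothetical helper of my own, not anything Schnucks publishes):

```python
def tripled_price(item_price, coupon_value, triple_limit=0.75):
    """Price of one item after one coupon under the triple promotion.

    Coupons up to `triple_limit` triple in value; larger ones redeem at face value.
    A coupon's value never exceeds the item price (no overage).
    """
    value = coupon_value * 3 if coupon_value <= triple_limit else coupon_value
    return round(max(item_price - min(value, item_price), 0), 2)

# Land O Lakes Garlic and Herb Butter: $2.24 with a $0.75 coupon -> FREE
print(tripled_price(2.24, 0.75))   # 0.0
# Hormel Fully Cooked Entree: $5.99 with a $0.75 coupon -> $3.74
print(tripled_price(5.99, 0.75))   # 3.74
```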
{"url":"http://www.northernillinoiscouponing.com/2014/01/14/schnucks-triple-coupons-are-back/","timestamp":"2014-04-19T14:36:12Z","content_type":null,"content_length":"43490","record_id":"<urn:uuid:08e488fa-bf45-495e-a362-b4c9bc67ca9c>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00492-ip-10-147-4-33.ec2.internal.warc.gz"}
Hash table vs Balanced binary tree

What factors should I take into account when I need to choose between a hash table or a balanced binary tree in order to implement a set or an associative array?

This question cannot be answered, in general, I fear. The issue is that there are many types of hash tables and balanced binary trees, and their performances vary widely. So, the naive answer is: it depends on the functionality you need. Use a hash table if you do not need ordering and a balanced binary tree otherwise.

For a more elaborate answer, let's consider some alternatives.

Hash Table (see Wikipedia's entry for some basics)
• Not all hash tables use a linked list as a bucket. A popular alternative is to use a "better" bucket, for example a binary tree or another hash table (with another hash function).
• Some hash tables do not use buckets at all: see Open Addressing (they come with other issues, obviously).
• There is something called Linear re-hashing (a quality-of-implementation detail), which avoids the "stop-the-world-and-rehash" pitfall. Basically, during the migration phase you only insert into the "new" table, and also move one "old" entry into the "new" table. Of course, the migration phase means double look-ups, etc.

Binary Tree
• Re-balancing is costly; you may consider a Skip List (also better for multi-threaded access) or a Splay Tree.
• A good allocator can "pack" nodes together in memory (better caching behavior), even though this does not alleviate the pointer look-up issue.
• B-Trees and variants also offer "packing".

Let's not forget that O(1) is an asymptotic complexity. For few elements, the coefficient is usually more important (performance-wise). This is especially true if your hash function is slow...

Finally, for sets, you may also wish to consider probabilistic data structures, like Bloom Filters.
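To make the closing suggestion concrete, here is a minimal Bloom filter sketch of my own (the bit-array size and hash count below are arbitrary example choices): membership tests may return false positives, but never false negatives.

```python
import hashlib

class BloomFilter:
    """Tiny Bloom filter: may report false positives, never false negatives."""
    def __init__(self, num_bits=1024, num_hashes=3):
        self.num_bits = num_bits
        self.num_hashes = num_hashes
        self.bits = 0  # an int doubles as an arbitrarily wide bit array

    def _positions(self, item):
        # Derive k independent-ish bit positions by salting SHA-256
        for salt in range(self.num_hashes):
            digest = hashlib.sha256(f"{salt}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.num_bits

    def add(self, item):
        for pos in self._positions(item):
            self.bits |= 1 << pos

    def might_contain(self, item):
        return all(self.bits >> pos & 1 for pos in self._positions(item))

bf = BloomFilter()
for word in ("apple", "pear", "plum"):
    bf.add(word)
assert all(bf.might_contain(w) for w in ("apple", "pear", "plum"))
# Items never added are *probably* rejected, but a false positive is possible.
```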
@ProfVersaggi: Actually, that's not even true; some hash tables handle duplicates poorly, but some do well. I advise you to read Joaquín M López Muñoz's entries on the topic. He authored, and is maintaining, Boost MultiIndex. – Matthieu M. Jan 17 at 7:58

Hash tables are generally better if there isn't any need to keep the data in any sort of sequence. Binary trees are better if the data must be kept sorted.

Couldn't say it any better. – jerluc Jan 31 '11 at 0:05
It's a good answer, but not the whole story... – Mitch Wheat Jan 31 '11 at 0:07
While not maintaining sorting, hash tables that can maintain (insertion) order are somewhat trivial. – user166390 Jan 31 '11 at 0:09
That's not so easy. I'm afraid of a couple of things: 1. hash tables have bad performance (O(n)) in the worst case; 2. in order to resize the hash table I've got to rehash everything, which is pretty expensive. This question is to learn how I can avoid such pitfalls and be informed about the other issues I'm missing. – peoro Jan 31 '11 at 0:22
pst: Maintaining insert order is possible with almost any 'black-box' collection; to what extent can one maintain sort order with a hash table better than with a 'black-box'? – supercat Jan 31 '11 at 0:24

Hash tables are faster lookups:
• You need a key that generates an even distribution (otherwise you'll miss a lot and have to rely on something other than the hash, like a linear search).
• Hashes can use a lot of empty space. You may reserve 256 entries but only need 8 (so far).

Binary trees:
• Deterministic. O(log n) I think...
• Don't need extra space like hash tables can
• Must be kept sorted. Adding an element in the middle means moving the rest around.

What do you mean when you say that binary trees are deterministic? Hash tables are deterministic as well. Also, operations on binary trees are O(h) where h is the height. If it's a balanced binary tree, then h=O(log(n)).
– Daniel Egeberg Jan 31 '11 at 0:11
Not true! Hash tables can "miss". For instance, if you have an array of 10 and use a phone number to index into it (with a modulo, for instance), you could get a hash that points you to the first element of the array. However, if when the array was built 9 other numbers with the same hash were used first, you actually have to go all the way to the last element. In a binary search you are guaranteed to get O(log n) no matter what. !DISCLAIMER! It all depends on how you build up your hash sort/search. There are many ways... – whitey04 Jan 31 '11 at 0:14
Adding an element in the middle does not mean moving the rest around. It's a linked data structure, not an array (maybe you are confusing a Binary Search Tree with Binary Search, which are two very different things). All operations are O(log(n)); if adding/removing in the middle meant moving the rest, it would have been O(n). – MAK Jan 31 '11 at 4:27
It all depends on how you implement it... Using a linked tree is a good way to bypass the insertion problem of a binary search. However, the binary search (with a tree underneath it or not) will always return a result in O(log n). A hash can't unless the input key is 1:1 with the generated hash. – whitey04 Jan 31 '11 at 4:57

A worthy point on a modern architecture: a hash table will usually, if its load factor is low, have fewer memory reads than a binary tree will. Since memory access tends to be rather costly compared to burning CPU cycles, the hash table is often faster. In the following, the binary tree is assumed to be self-balancing, like a red-black tree, an AVL tree or a treap. On the other hand, if you need to rehash everything in the hash table when you decide to extend it, this may be a costly operation which occurs (amortized). Binary trees do not have this limitation. Binary trees are easier to implement in purely functional languages.
Binary trees have a natural sort order and a natural way to walk the tree for all elements. When the load factor in the hash table is low, you may be wasting a lot of memory space, but with two pointers per node, binary trees tend to take up more space. Hash tables are nearly O(1) (depending on how you handle the load factor) vs. binary trees' O(lg n). Trees tend to be the "average performer". There is nothing they do particularly well, but then nothing they do particularly badly.

A binary search tree requires a total order relationship among the keys. A hash table requires only an equivalence or identity relationship with a consistent hash function. If a total order relationship is available, then a sorted array has lookup performance comparable to binary trees, worst-case insert performance on the order of hash tables, and less complexity and memory use than both. The worst-case insertion complexity for a hash table can be left at O(1)/O(log K) (with K the number of elements with the same hash) if it's acceptable to increase the worst-case lookup complexity to O(K), or O(log K) if the elements can be sorted. Invariants for both trees and hash tables are expensive to restore if the keys change, but less than O(n log N) for sorted arrays. These are factors to take into account in deciding which implementation to use:
1. Availability of a total order relationship.
2. Availability of a good hashing function for the equivalence relationship.
3. A priori knowledge of the number of elements.
4. Knowledge about the rate of insertions, deletions, and lookups.
5. Relative complexity of the comparison and hashing functions.

"A binary search tree requires a total order relationship among the keys. A hash table requires only an equivalence or identity relationship with a consistent hash function." This is misleading. A binary search tree could always just use the same keys as the hash table: hash values.
It is not a restriction on cases when trees may be used, compared to hash tables. – rlibby Feb 10 '11 at 19:43
@rlibby Though most implementations of hash keys by default use types on which a total order is defined (integers or pointers), only equivalence is required if you provide your own hashes. So, in general, you cannot use a binary search tree upon hash keys, because you don't know what the hashes are, where they came from, or much less if they support a total order relationship. – Apalala Feb 18 '11 at 14:17
But if I'm understanding your suggestion correctly, then such a hash value also cannot be used in a hash table. Surely if it can be used in a hash table then it can also be used in a tree set. If it can be used in a table, then it must map to some index in the table. One could use the function that generates this index to generate keys for the tree set. – rlibby Feb 20 '11 at 23:02
@rlibby A hash table requires that elements that are equal have the same hash, but it doesn't require that elements that are different have different hashes. If different elements have the same hash, then there's no total-order relationship. – Apalala Nov 12 '12 at 4:08

To add to the other great answers above, I'd say: use a hash table if the amount of data will not change (e.g. storing constants); but if the amount of data will change, use a tree. This is due to the fact that, in a hash table, once the load factor has been reached, the hash table must resize. The resize operation can be very slow.

The worst-case time for adding an element to a hash table is O(n) because of the resize, but if a hash table doubles in size each time, the fraction of additions that require a rehash will drop as the table size increases. The average number of rehash operations per element will never exceed two, no matter how big the table gets.
– supercat Jan 31 '11 at 18:15
If the hash table size is doubling, then I'd be surprised if the number of collisions decreased, because hash tables work best (i.e. a low number of collisions) when the size of the table is prime. Also, if you're asking the system to give you twice as much memory each time you resize, you'll quickly run out of memory (or slow the system down if the system rearranges its memory to give you the amount of contiguous memory you're asking for). – Davidann Jan 31 '11 at 19:00
Doubling is a common strategy but it isn't required. What is required is exponential growth. You can pick a smaller exponent if you like; it will just mean that the average number of rehash operations will be higher. In any case the amortized cost of n inserts in a table with exponential growth is O(n), while self-balancing binary search trees cost O(n*log(n)). – rlibby Feb 10 '11 at 19:38

If you'll have many slightly-different instances of sets, you'll probably want them to share structure. This is easy with trees (if they're immutable or copy-on-write). I'm not sure how well you can do it with hashtables; it's at least less obvious.

If you only need to access single elements, hashtables are better. If you need a range of elements, you simply have no other option than binary trees.

In my experience, hashtables are always faster because trees suffer too much from cache effects. To see some real data, you can check the benchmark page of my TommyDS library http://tommyds.sourceforge.net/ — there you can see the performance of the most common hashtable, tree and trie libraries compared.

One point that I don't think has been addressed is that trees are much better for persistent data structures. That is, immutable structures. A standard hash table (i.e. one that uses a single array of linked lists) cannot be modified without modifying the whole table.
One situation in which this is relevant is if two concurrent functions both have a copy of a hash table, and one of them changes the table (if the table is mutable, that change will be visible to the other one as well). Another situation would be something like the following:

    def bar(table):
        # some intern stuck this line of code in
        table["hello"] = "world"
        return table["the answer"]

    def foo(x, y, table):
        z = bar(table)
        if "hello" in table:
            raise Exception("failed catastrophically!")
        return x + y + z

    important_result = foo(1, 2, {
        "the answer": 5,
        "this table": "doesn't contain hello",
        "so it should": "be ok"
    })  # catastrophic failure occurs

With a mutable table, we can't guarantee that the table a function call receives will remain that table throughout its execution, because other function calls might modify it. So, mutability is sometimes not a pleasant thing. Now, a way around this would be to keep the table immutable, and have updates return a new table without modifying the old one. But with a hash table this would often be a costly O(n) operation, since the entire underlying array would need to be copied. On the other hand, with a balanced tree, a new tree can be generated with only O(log n) nodes needing to be created (the rest of the tree being identical). This means that an efficient tree can be very convenient when immutable maps are desired.
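The O(log n) claim for immutable updates rests on path copying: inserting into a tree rebuilds only the nodes on the root-to-leaf path and shares everything else with the old version. A minimal sketch of my own (unbalanced for brevity; a self-balancing variant keeps the copied path at O(log n)):

```python
class Node:
    __slots__ = ("key", "value", "left", "right")
    def __init__(self, key, value, left=None, right=None):
        self.key, self.value, self.left, self.right = key, value, left, right

def insert(node, key, value):
    """Return a NEW tree containing (key, value); the old tree is untouched."""
    if node is None:
        return Node(key, value)
    if key < node.key:
        return Node(node.key, node.value, insert(node.left, key, value), node.right)
    if key > node.key:
        return Node(node.key, node.value, node.left, insert(node.right, key, value))
    return Node(key, value, node.left, node.right)  # replace value at this key

def lookup(node, key):
    while node is not None:
        if key == node.key:
            return node.value
        node = node.left if key < node.key else node.right
    return None

old = insert(insert(None, "the answer", 5), "so it should", "be ok")
new = insert(old, "hello", "world")   # 'bar' could not corrupt 'old' this way
assert lookup(old, "hello") is None   # the old version is unchanged
assert lookup(new, "hello") == "world"
```

Only the nodes along the insertion path are allocated anew; unmodified subtrees are shared by reference between `old` and `new`.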
{"url":"http://stackoverflow.com/questions/4846468/hash-table-vs-balanced-binary-tree/4846488","timestamp":"2014-04-18T06:59:49Z","content_type":null,"content_length":"117788","record_id":"<urn:uuid:20d2c5bd-8af0-4ee0-bcd3-a7ca7b025a70>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00202-ip-10-147-4-33.ec2.internal.warc.gz"}
Here's the question you clicked on: use the quadratic formula to solve the quadratic equation x^{2}-8x-4=0

Compare this equation with the general quadratic equation ax^{2}+bx+c=0 and find the values of a, b and c. As ax^{2}+bx+c=x^{2}-8x-4, we get a=1, b=-8, c=-4.

Then put the values of a, b, c into the quadratic formula to get the answer, which is: \[x=\frac{-b \pm \sqrt{b^{2}-4ac}}{2a}\]

U Got It Mrs.Sheppard?
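The worked steps can be checked numerically; a small sketch of my own applying the quadratic formula to x² − 8x − 4 = 0:

```python
from math import sqrt

def quadratic_roots(a, b, c):
    """Real roots of ax^2 + bx + c = 0 via the quadratic formula."""
    disc = b * b - 4 * a * c
    if disc < 0:
        return ()  # no real roots
    root = sqrt(disc)
    return ((-b + root) / (2 * a), (-b - root) / (2 * a))

# x^2 - 8x - 4 = 0  ->  a=1, b=-8, c=-4
x1, x2 = quadratic_roots(1, -8, -4)
print(round(x1, 4), round(x2, 4))  # 8.4721 -0.4721  (i.e. 4 ± 2*sqrt(5))
```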
{"url":"http://openstudy.com/updates/50787fd3e4b0ed1dac50f644","timestamp":"2014-04-17T07:14:01Z","content_type":null,"content_length":"32439","record_id":"<urn:uuid:f8c8194c-f7c7-473d-b105-b02a5689e1da>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00313-ip-10-147-4-33.ec2.internal.warc.gz"}
Pre-Algebra: Word Problems
Find study help on linear applications for pre-algebra. Use the links below to select the specific area of linear applications you're looking for help with. Each guide comes complete with an explanation, example problems, and practice problems with solutions to help you learn linear applications for pre-algebra.
{"url":"http://www.education.com/study-help/study-help-pre-algebra-linear-applications/page4/","timestamp":"2014-04-16T04:46:28Z","content_type":null,"content_length":"96799","record_id":"<urn:uuid:b4670bea-87ce-4701-af8f-99c84831f8fa>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00159-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Jokes Charles Dodgson, aka Lewis Carroll, was a professor of mathematics at Oxford University for most of his life. The Alice books provide ample evidence for his great love of logic puzzles and word games. And there are several moments in chapters 5, 6, and 7 of Alice which make most sense when thought of as a Mathematical Joke. Here are some examples: Chapter 5 1) "One side will make you grow taller, and the other side will make you grow shorter." "One side of what? The other side of what?" thought Alice to herself. "Of the mushroom," said the Caterpillar, just as if she had asked it aloud; and in another moment it was out of sight. Alice remained looking thoughtfully at the mushroom for a minute, trying to make out which were the two sides of it; and, as it was perfectly round, she found this a very difficult question... The joke: A circle is a simple shape of Euclidean geometry consisting of those points in a plane which are equidistant from a given point (Wikipedia). As such, a circle has no “sides,” or rather it has an infinite number of sides. Thus Alice might rightly be confused when asked to find the two sides of a perfectly round object. 2) "I've seen a good many little girls in my time, but never one with such a neck as that! No, no! You're a serpent, and there's no use denying it. I suppose you'll be telling me next that you never tasted an egg!" "I have tasted eggs, certainly," said Alice, who was a very truthful child, "but little girls eat eggs quite as much as serpents do, you know." "I don't believe it," said the Pigeon; "but if they do, why, then they're a kind of serpent; that's all I can say." The joke: In this case the pigeon uses the mathematical property of exclusivity to “prove” that little girls are a type of serpent. According to the pigeon, having a long neck and eating eggs are not merely properties of a serpent, they are exclusive properties of a serpent. 
This means that only things called “serpents” have both long necks and an affinity for eating eggs and that if a creature has those two properties it MUST be a serpent. Thus, because Alice has a long neck, has eaten eggs and claims to be a little girl, the pigeon “logically” concludes that little girls must be a kind of serpent. Chapter 6 1) "There's no sort of use in knocking," said the Footman, "and that for two reasons. First, because I'm on the same side of the door as you are…” "There might be some sense in your knocking," the Footman went on, without attending to her, "if we had the door between us. For instance, if you were inside, you might knock, and I could let you out, you know." The joke: In this case the Footman is explaining (somewhat unorthodoxly) a rule of geometry, namely that the line joining two points on the same side of a line will not intersect the line. 2) "How am I to get in?" asked Alice again, in a louder tone. "Are you to get in at all?" said the Footman. "That's the first question, you know." It was, no doubt: only Alice did not like to be told so. The joke: The Frog Footman points out that Alice’s assumption that she can get into the house is not necessarily true. This is actually a type of assumption that mathematicians must make all the time, that the problem they are trying to solve can be solved. And though it is important to recognize that this is an assumption, most mathematicians don’t like having it pointed out to them. 3) "And how do you know that you're mad?" "To begin with," said the Cat, "a dog's not mad. You grant that?" "I suppose so," said Alice. "Well, then," the Cat went on, "you see a dog growls when it's angry, and wags its tail when it's pleased. Now I growl when I'm pleased, and wag my tail when I'm angry. Therefore I'm mad." The joke: This is an example of Carroll poking fun at deductive reasoning, a useful mathematical tool, but one which can easily be misused.
The Cheshire Cat lays out his case this way:
■ We agree that Dogs are not mad
■ The properties of a Dog (and therefore of “not mad”-ness):
★ 1) Growls when angry
★ 2) Wags tail when pleased
■ Properties of a Cat:
★ 1) Wags tail when angry
★ 2) Growls when pleased
■ Therefore Cats are not Dogs, and therefore not “not mad”
■ Thus, Cats are mad.
Chapter 7 1) "Then you should say what you mean," the March Hare went on. "I do," Alice hastily replied; "at least--at least I mean what I say--that's the same thing, you know." "Not the same thing a bit!" said the Hatter. "Why, you might just as well say that 'I see what I eat' is the same thing as 'I eat what I see'!" "You might just as well say," added the March Hare, "that 'I like what I get' is the same thing as 'I get what I like'!" "You might just as well say," added the Dormouse, which seemed to be talking in its sleep, "that 'I breathe when I sleep' is the same thing as 'I sleep when I breathe'!" "It is the same thing with you," said the Hatter, and here the conversation dropped, and the party sat silent for a minute…" The joke: In this case Alice makes the mistake of applying a mathematical principle to language. The commutative property of addition and multiplication in algebra is defined as the property which allows numbers to be added or multiplied in any order and still give the same result. Thus 5 + 7 = 7 + 5 and 5 * 7 = 7 * 5. Alice tries to apply the commutative property to her language, asserting that “I mean what I say” = “I say what I mean,” which the Hatter, Hare, and Dormouse contradict with their counter-examples. 2) "What a funny watch!" she remarked. "It tells the day of the month, and doesn't tell what o'clock it is!" "Why should it?" muttered the Hatter. "Does your watch tell you what year it is?" "Of course not," Alice replied very readily: "but that's because it stays the same year for such a long time together." "Which is just the case with mine," said the Hatter. Alice felt dreadfully puzzled.
“It's always six o'clock now.”

The joke: Because the hour and minute at the Mad Tea Party never change, the Hatter's watch turns to show the day of the month. He doesn't need to reference the time "because it stays the same for such a long time together." Martin Gardner points out that "one is reminded also of an earlier piece by Carroll in which he proves that a stopped clock is more accurate than one that loses a minute a day. The first clock is exactly right twice every twenty-four hours, whereas the other clock is exactly right only once in two years" (96-97).

3) "Take some more tea," the March Hare said to Alice, very earnestly. "I've had nothing yet," Alice replied in an offended tone: "so I can't take more." "You mean you can't take less," said the Hatter: "it's very easy to take more than nothing."

The joke: In this instance, the Hatter both points out the ambiguity of the term "more" and draws our attention to the paradox of negative numbers, which describe a quantity "less than nothing." Helen Pycior argues that Carroll took "the concept literally, and forced his readers to consider less tea than that contained in an empty cup and fewer hours of study than none. In contrast to such mathematicians as De Morgan, who sought viable analogues of the negative numbers in such concrete objects as financial debts and lines drawn backwards from a zero point, Carroll presented physical situations in which 'quantity less than nothing' was nonsensical" (164).

4) A bright idea came into Alice's head. "Is that the reason so many tea-things are put out here?" she asked. "Yes, that's it," said the Hatter with a sigh: "it's always tea-time, and we've no time to wash the things between whiles." "Then you keep moving round, I suppose?" said Alice. "Exactly so," said the Hatter: "as the things get used up." "But what happens when you come to the beginning again?" Alice ventured to ask.
"Suppose we change the subject…" The joke: Here the joke is that Alice wants to know whether or not the movement around the table operates on modular arithmetic. Modular arithmetic “counts” by cycling through a set of numbers an infinite number of times. A clock, for instance, counts time by counting the hours 1 through 12 over and over and over and over again. Alice wants to know if this is how the mad tea party works. Will the Hatter, Hare and Dormouse continue around and around and around the table ad infinitem? Or is there a “stop” rule? Unfortunately (for Alice’s curiosity and ours) the March Hare interrupts at this point to change the subject. Read More Slide Presentation on Math Elements in Chapters 4-6 of Alice "The Hidden Math Behind Alice in Wonderland" - By Keith Devlin on MAA.org
Summary: THE RELAXATION-TIME WIGNER EQUATION

Anton Arnold

Abstract. The relaxation-time (RT) Wigner equation models the quantum-mechanical motion of electrons in an electrostatic field, including their interaction with phonons. We discuss the conditions on a Wigner distribution function for being `physical', and show that they will stay `physical' under temporal evolution. Particular attention is paid to the proper definition of the particle density for Wigner functions w ∉ L¹. For the 1D-periodic, self-consistent RT-Wigner-Poisson equation we give a local convergence result towards the steady state.

1. Introduction. This paper is concerned with the analysis of the relaxation-time Wigner equation and the physical properties of its solution. The Wigner formalism, which represents a phase-space description of quantum mechanics, has in recent years attracted considerable attention of solid state physicists for including quantum effects into the simulation of ultra-integrated semiconductor devices, like resonant tunneling diodes, e.g. ([7], [10], [5]). Also, the Wigner(-Poisson) equation has recently been the objective of a detailed mathematical analysis. For a physical derivation and the discussion of many of its analytical properties we refer the reader to [15], [12], [6] (and references therein). The real-valued Wigner (quasi) distribution function w = w(x, v, t) describes the state of an electron ensemble in the 2d-dimensional position-velocity (x, v)-phase space. In the absence of collision and scattering, and in the effective-mass approximation, its time evolution under the action of the (real-valued) electrostatic potential V(x, t) is governed by
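The excerpt breaks off before the governing equation itself. For orientation only, the relaxation-time Wigner equation is conventionally written in the form below; this is the standard shape found in the relaxation-time Wigner literature, not text recovered from this truncated source (Θ[V] denotes the pseudo-differential operator through which the potential V acts nonlocally on w, τ > 0 the relaxation time, and w_st the steady state toward which the collision term relaxes the distribution):

```latex
w_t + v \cdot \nabla_x w + \Theta[V]\,w = \frac{1}{\tau}\left(w_{st} - w\right),
\qquad x, v \in \mathbb{R}^d,\quad t > 0.
```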
Talk:Markus-Lyapunov Fractals
From Math Images

Response to Checklist 7/7/11 14:44

Overall, this page is in really great shape. I have a couple of minor comments in red. Great work!! AnnaP 7/10

References and footnotes

Click on any of the pictures to see author and original location. I include one reference at the bottom of the page, which is where I found the information on this page that isn't my own mathematics, "common sense" information, or information from discussion with Steve.

In the basic description, I relate it to population growth. In "why it's interesting," I relate it to fractals and the larger concept of fractal patterns in chaos. I also relate it to art. I wanted to explore the actual application for which Markus created this system, but was unable to find sufficient information.

Prose and Structure

I aimed for a logical progression of ideas, and tried to show this progression and remind readers of the connections between the ideas. Each of my sections starts with a sentence or two indicating the direction and point of the section. In the more mathematical section, I can't see any reasonable way to move the bulk of the math later than where it is.

Integration of Images

All of the images are either explicitly referenced in the text or are minor extensions or extra examples placed so that their correlation to the content is clear. Further, every image has a caption stating clearly how it relates to the content.

The content here is extensively related to content on Logistic Bifurcation, and this page links to that one in multiple key places. Such topics as chaos, summation notations, and modular arithmetic are also used and links to relevant pages are provided. In the section on fractal properties, the reader is directed to a large number of other examples of this property in related mathematical contexts.

The mathematical section provides a derivation for the Lyapunov exponent and shows how and why it's used as it is for the logistic map.
This section also provides a graph of Lyapunov exponents for logistic systems to show this application. The "forcing the rates of change" section includes an example of a fractal with a different period to show the impact of this mathematical idea.

I might actually include a couple of graphs to show how quickly the things converge when $\lambda <0$, and how quickly they diverge for $\lambda >0$. Just picking -1 and 1 and graphing the value of dxn/dxo as a function of n would be pretty easy.
I added this in below the illustration of dx[0] and dx[n].

Accuracy and Precision

Terms that may be unfamiliar are all either explained in the text, included with explanatory balloons, or included as links to relevant helper pages. Where necessary, equations are provided to define terms. I stick to consistent, clear terms as often as possible.

In your bullet points in your basic description, please avoid the word "it" at least for the first one. I had to pause and think for a second "the logistic map? That doesn't make any sense if that's what 'it' means! OHH she means the exponent..."
Fixed this, I believe.

I've played with window size to make sure it doesn't do anything too dramatic to the page. Paragraphs are as short as I feel comfortable making them. Bubbles and links are used for almost all terms to define them. Text breaks are used to make sure images don't interfere with unrelated sections.

As is, the basic description looks like a wall of text. Are you completely, totally 100% sure that you can't break those up to add some needed white space? For example, I could see a paragraph break in between these sentences "...with those rates of change will behave. Markus then created a color scheme to represent different Lyapunov exponents..."
I added the break you suggested as well as one in the last paragraph.

General Comments

• Kate 18:21, 6 July 2011 (UTC): I think this page is awesome, and probably just about ready for final review!
:)

[S:Here is the Blue Fern page that I talked about. You might want to find a way to link to it somewhere on the page.:S] Thanks! Done. AnnaP 6/16

Old comments:

Section-specific Comments

• [S:You should make that blurb a complete sentence. It sounds unfinished to me. Richard 6/30:S] Kate 18:21, 6 July 2011 (UTC): Seconded. Good point.
Done.

Basic Description

• Kate 18:21, 6 July 2011 (UTC): "This is a useful indicator because, for the logistic map," I think the first comma is unnecessary.
□ I see what you're saying. It's definitely a bit choppy. I'm leaving it in for now, because I want it to be very clear that these bullet points are not true for every dynamic system. I tried to figure out a more graceful way to maintain the emphasis created by that comma without having the slightly awkward punctuation, and I couldn't find one. I'm happy to talk about this more.

• [S:In the first bullet, it might be good to clarify what "it" refers to...the rate of change in population? Richard 6/30:S] "If it is zero, the population change is neutral; at some point in time, it reaches a fixed point and remains there." Done.

Old comments:

A More Mathematical Explanation

• [S:Kate 18:21, 6 July 2011 (UTC): I think it might be a good idea to set it up so that there's that little note saying that understanding of this section requires knowledge of the logistic map, and link to that page, because I'd probably be quite confused here if I hadn't read that page.:S] Done.

The Lyapunov Exponent

• [S:When you first introduce summation notation it might be cool to change it to this: "Generalizing this for all n, we consider every step of iteration using summation notation:" Richard 6/30:S] Kate 18:21, 6 July 2011 (UTC): Please do this! My poor helper page isn't linked to from anywhere and is currently useless.
Love it. Done.

• You still refer to "change" in the paragraphs at the end of this section, but are you referring to the rate of change or the population's change?
□ Had a conversation about this issue with Richard. We decided that it would be much clearer if I provided a visual representation of the "dx" notion and used the word "difference" instead of "change."

Why It's Interesting

• [S:Kate 18:21, 6 July 2011 (UTC): Your second image in this section interferes with the Teaching Materials heading when the window's large.:S] Taken care of.

• [S:Self-similarity would be a good word to bold in this section. Richard 6/30:S] Done.

• Is there a picture of it on a t-shirt or something? Richard 6/30
□ Um... No. I couldn't find one. Thing is, this fractal is used in graphics all the time; I've seen it. But in those cases, it's not usually called by its name, so it's sort of impossible to find online.

Old comments:
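The convergence and divergence behavior discussed in this review (differences shrinking when λ < 0 and growing when λ > 0) is easy to check numerically. The sketch below is illustrative only, not code from the wiki page: it estimates the Lyapunov exponent of the logistic map by averaging ln|f′(xₙ)| along an orbit; for r = 4 the exact value is ln 2.

```python
import math

def lyapunov_logistic(r, x0=0.2, n_transient=1_000, n_samples=100_000):
    """Estimate the Lyapunov exponent of the logistic map x -> r*x*(1 - x).

    The exponent is the long-run average of ln|f'(x_n)| = ln|r*(1 - 2*x_n)|
    along the orbit: lambda < 0 means nearby orbits converge, lambda > 0
    means they diverge, matching the dx_n ~ dx_0 * e^(lambda * n) picture.
    """
    x = x0
    for _ in range(n_transient):      # let the orbit settle first
        x = r * x * (1.0 - x)
    total = 0.0
    for _ in range(n_samples):
        x = r * x * (1.0 - x)
        total += math.log(abs(r * (1.0 - 2.0 * x)))
    return total / n_samples

print(lyapunov_logistic(4.0))  # positive, close to ln 2 ~ 0.693 (chaotic)
print(lyapunov_logistic(2.8))  # negative (orbit settles onto a fixed point)
```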
Possible to make a program that divides to an infinite decimal place?

10-17-2005 #1 Registered User Join Date Oct 2005

Is it possible to make a program that divides to an infinite decimal place? I'm trying to get one to work, but it only goes up to 16 decimal places, then after that, it just gives me 0's.

Well of course it's possible to write a program that would do that. The problem is that the hardware the program is running on is eventually going to run out of space. But if you want to get as close as possible, you're going to have to define your own data type (probably using a linked list to store an unknown-beforehand amount of data) and write your own code for dividing with said data type. Good luck, but it's probably not worth the effort.

in fact it is easier than you think. because the answer would be zero. if you were to have something like that it would be "infinitely close to zero" and in math infinitely close to X is in fact X. a great example: 1/3=.333333... 2/3=.66666... and 3/3=.9999... the number .99999.... is infinitely close to 1 and it is one since 9/9 is also equal to one. some one might argue that 1/3 is not equal to .999999.... but it makes no difference because .999999... is a rational number and 1 is a rational number. and there is a rule (not sure what it is called) that says between any two different rational numbers there is another different rational number. and you obviously can't fit anything in between 1 and .999999... also you can't actually have infinite decimal places since you cannot quantify infinity. (i hate it when people said this before i understood it but it is true that) infinity is a concept not a number.

Is it possible to make a program that divides to an infinite decimal place?

No. All computers work with a finite number of decimals or digits. No computer in the world can work with an infinite number of decimals because the computer would need an infinite amount of RAM and an infinite hard drive size.
I don't think anyone has been able to design those -- yet. Not to mention that if it can go to infinity the calculation would never actually finish.

some one might argue that 1/3 is not equal to .999999....

If I didn't know what you really meant to type because of context, I would. But either way - try applying your logic to PI.

If you're just talking about 'dividing,' then you are looking for a rational number datatype. Then you can have infinite decimal numbers represented, in fraction form (only if the decimal is repeating, of course). As long as you avoid square roots, limits of sequences, Functions of Interest, and mainly stick to basic arithmetic, you should be able to represent numbers exactly and do computations with them. Any number you want to compute is always possible to represent in a finite amount of space.

the number .99999.... is infinitely close to 1

.99999.... and 1 are not numbers. They are representations of numbers. But of course you are right in that they both represent the same number.
Last edited by Rashakil Fol; 10-17-2005 at 11:22 PM.

ok, forget about the infinity thing, i'm just trying to make a program divide any number and keep dividing it for as long as it has enough memory. And if it takes time for the number to be divided, how do i make the program update what's in the window of what it's already got for the answer?

oh. i see what you mean. you want to know in decimal what, for example, 23/27 is. right? well if both numbers are whole numbers eventually the decimal number will start to repeat. my computer says 23/27 is: 0.043478260869565217391304347826087. this would be written like this: 0.(0434782608695652173913), because after 913 it starts over with 043478... hope that helps

some one might argue that 1/3 is not equal to .999999.... If I didn't know what you really meant to type because of context, I would. But either way - try applying your logic to PI.

sorry. i meant 3/3 not 1/3.
the logic of pi would not fit here though because you cannot represent pi as a fraction.
Last edited by squeaker; 10-18-2005 at 02:30 PM. Reason: mistake

See this thread for doing math with unlimited numeric size (limited by computer RAM and hard drive)

Can't you represent pi as 22/7? Isn't that a fraction? Or you can represent it 3 and 1/7? That is what pi is...

and you obviously can't fit anything in between 1 and .999999...

Sure you can: 1 - .999... = .000...1
Don't quote me on that... ...seriously

Can't you represent pi as 22/7? Isn't that a fraction? Or you can represent it 3 and 1/7? That is what pi is...

No, those fractions yield a crude approximation of pi. pi is an irrational number.
pi = 3.141592653
22/7 = 3.142857142

I'm not sure who said that you can't divide to an infinite decimal place, but it is not true. You can write a program that will find any decimal place. If you can write an infinite loop then you can work out a program to an infinite decimal place. You can't store all the info at the same time because of hardware restraints, but you can't display it all anyway so why does that matter? You can at least make it so that the user couldn't tell the difference. I'll write a program to do it and prove you all wrong. I'll be posting again in another hour with the source code to work out the square root of 2 to infinite decimal places. (I might actually wait till tomorrow because I'm kinda tired)
Don't quote me on that... ...seriously
I'll be posting again in another hour with the source code to work out the square root of 2 to infinite decimal places.( I might actually wait till tomorrow because I'm kinda tired) good luck -- as someone else mentioned, it will take an infinite amount of time to do the calculations 10-17-2005 #2 Super Moderator Join Date Sep 2001 10-17-2005 #3 Registered User Join Date Oct 2005 10-17-2005 #4 Registered User Join Date Aug 2005 10-17-2005 #5 ^ Read Backwards^ Join Date Sep 2005 10-17-2005 #6 Super Moderator Join Date Sep 2001 10-17-2005 #7 Join Date Jul 2005 10-18-2005 #8 Registered User Join Date Oct 2005 10-18-2005 #9 Registered User Join Date Oct 2005 10-18-2005 #10 Registered User Join Date Aug 2005 10-18-2005 #11 Registered User Join Date May 2005 10-18-2005 #12 Captain - Lover of the C Join Date May 2005 10-19-2005 #13 10-19-2005 #14 Captain - Lover of the C Join Date May 2005 10-20-2005 #15 Registered User Join Date Aug 2005
Venice, CA Algebra 1 Tutor

Find a Venice, CA Algebra 1 Tutor

...I have helped over 100 students who have been behind in reading and writing, due to dyslexia, improve significantly. The method I use most often, which has been very effective, is Orton-Gillingham. I have been working with it for the past eight years, and it is the right method for 95% of my students.
16 Subjects: including algebra 1, reading, English, writing

...I have tutoring experience with individuals with ADHD, learning disorders, and Autism spectrum diagnoses. I have experience providing one on one tutoring services to both children and adults in ESL. As a doctoral student in Clinical Psychology, I have studied the etiology, symptoms, and treatment of all types of ADHD.
44 Subjects: including algebra 1, English, reading, writing

...This helps students internalize (take in and make it an integral part of one's self) the organizing role I serve as a tutor. Students will learn about polynomial functions, factoring, and graphing and solving quadratics (e.g. completing the square, using the quadratic formula), etc. This is one of my specialties.
17 Subjects: including algebra 1, chemistry, English, biology

...I would be glad to tutor you or your student for the GED. I went to the University of Chicago for graduate school for my Master's degree where I was expected to be excellent at every subject. I received an A average there.
109 Subjects: including algebra 1, reading, Spanish, chemistry

...I would also be able to help in the English and Science areas. Because of my 19 years of experience teaching middle school (3 years) and high school math, I would be able to help review your math content. I have taught Geometry for many years which has a logic element to it.
15 Subjects: including algebra 1, geometry, GRE, statistics
[SPLIT] solving y(y+1)(y+2)(y+3) = 7920

Please help me with this. I'm really confused about the calculation.

I'm trying to solve y(y+1)(y+2)(y+3) = 7920, which is a problem from my friend's kid. First I multiplied it all out:

(y^2 + y)(y + 2)(y + 3) = 7920
(y^3 + 3y^2 + 2y)(y + 3) = 7920
y^4 + 6y^3 + 11y^2 + 6y - 7920 = 0

I tried to solve it as a quadratic equation, but that didn't seem to work since this polynomial has 5 terms. I think it needs 3 terms to be a trinomial. Since this equation can't be grouped as a trinomial, I am not able to solve it by factoring. Am I on the right track? I haven't done any maths for more than 3 years, and I think I forgot almost everything I learned! Can you please give me a hint?

Re: [SPLIT] solving y(y+1)(y+2)(y+3) = 7920

mathlearner2011 wrote: I'm trying to solve y(y+1)(y+2)(y+3) = 7920....
(y^2 + y)(y + 2)(y + 3) = 7920
(y^3 + 3y^2 + 2y)(y + 3) = 7920
y^4 + 6y^3 + 11y^2 + 6y - 7920 = 0

This is correct.

mathlearner2011 wrote: I tried to solve it as a quadratic equation....

Quadratics are equations of degree 2, not degree 4.

mathlearner2011 wrote: Since this equation can't be grouped as a trinomial, I am not able to solve it by factoring....

First, apply the Rational Roots Test to obtain a listing of possible roots (zeroes, solutions). Then use synthetic division to determine which of the possible roots are actual roots. Once you have found two, you will be down to a quadratic, to which you can apply the Quadratic Formula. For further information on this method, please review the following: Solving Polynomials

Re: [SPLIT] solving y(y+1)(y+2)(y+3) = 7920

$7920 = 2^4\cdot 3^2\cdot 5\cdot 11=8\cdot9\cdot10\cdot11$ therefore $y=8$

Re: [SPLIT] solving y(y+1)(y+2)(y+3) = 7920

Martingale wrote: $7920 = 2^4\cdot 3^2\cdot 5\cdot 11=8\cdot9\cdot10\cdot11$ therefore $y=8$

so $y$ could also be -11
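The two answers above can be cross-checked by brute force. The sketch below (an illustration, not from the thread) evaluates the expanded quartic over a range of integers; since by the Rational Roots Test any integer root must divide 7920, a modest scan finds them all:

```python
def f(y):
    # Expanded form of y(y+1)(y+2)(y+3) - 7920 from the thread.
    return y**4 + 6*y**3 + 11*y**2 + 6*y - 7920

# 8*9*10*11 = 7920 and (-11)*(-10)*(-9)*(-8) = 7920, so both sign
# choices should show up as roots of the quartic.
roots = [y for y in range(-100, 101) if f(y) == 0]
print(roots)  # [-11, 8]
```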
converting road base - OnlineConversion Forums

Originally Posted by Unregistered
We are constructing trails and using road base by the ton. How many tons will it take to cover 8500 cubic yards?

That depends on the density of the material. Without knowing the density of the road base we won't be able to calculate the weight to volume conversion. Can you weigh a known amount of it?
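As the reply says, the conversion needs a density. Once one is chosen, the arithmetic is a single multiplication. In the sketch below, the 1.35 short tons per cubic yard is only an illustrative assumption (a typical ballpark for compacted crushed-stone road base); weigh the actual material, as suggested above, to get a real figure:

```python
def cubic_yards_to_tons(volume_yd3, density_tons_per_yd3):
    # weight = volume x density; the density must come from the
    # actual material being used.
    return volume_yd3 * density_tons_per_yd3

# Hypothetical density of 1.35 short tons per cubic yard:
print(cubic_yards_to_tons(8500, 1.35))  # 11475.0
```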
AT&T 2 Week Countdown - Android Forums

2 Weeks Today (or possibly yesterday depending on shipping speed) AT&T pre-orders should start rolling in. 14 Days!

Fourteen is a composite number, its divisors being 1, 2, 7 and 14. 14 is the 3rd discrete semiprime (2.7) and the 3rd member of the (2.q) discrete semiprime family. The number following 14—15—is itself a discrete semiprime and this is the first such pair of discrete semiprimes. The next example is the pair commencing 21.

The aliquot sum of 14 is 10, also a discrete semiprime, and this is again the first example of a discrete semiprime having an aliquot sum in the same form. 14 has an aliquot sequence of 6 members (14, 10, 8, 7, 1, 0). 14 is the third composite number in the 7-aliquot tree. Fourteen is itself the aliquot sum of two numbers: the discrete semiprime 22, and the square number 169.

Fourteen is the base of the tetradecimal notation. In base fifteen and higher bases (such as hexadecimal), fourteen is represented as E.

Fourteen is the sum of the first three squares, which makes it a square pyramidal number.

This number is the lowest even n for which the equation φ(x) = n has no solution, making it the first even nontotient (see Euler's totient function).

14 is a Catalan number, the only semiprime among all Catalan numbers.

Take a set of real numbers and apply the closure and complement operations to it in any possible sequence. At most 14 distinct sets can be generated in this way. This holds even if the reals are replaced by a more general topological space. See Kuratowski's closure-complement problem.

Fourteen is a Keith number in base 10: 1, 4, 5, 9, 14, 23, 37, 60, 97, 157...

Fourteen is an open meandric number.

Fourteen is a Companion Pell number.

The cuboctahedron, the truncated cube, and the truncated octahedron each have fourteen faces. The rhombic dodecahedron, which tessellates 3-dimensional space and is the dual of the cuboctahedron, has fourteen vertices.
The truncated octahedron, which also tessellates 3-dimensional space, is the permutohedron of order 4.

Number 14 is a Karmic number and these people need to learn independence, self-initiative, unity and justice. Their great need in life is to achieve balance, harmony, temperance and prudence. If they act cautiously they can be fortunate in money matters or changes in business. They have the motivation to make a success in anything they do. They are warm and creative and have a great deal of natural wisdom. This number is one of everlasting movement and brings trials and dangers from a great variety of experiences. These people sometimes experiment for the sake of experience. Such behaviour may lead to chaos, but their aim is to try for progressive change and the final joy of renewal and growth.

And yes, waiting is the hardest part!
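Two of the number-theoretic claims above are easy to verify mechanically. The sketch below (illustrative, not part of the forum post) checks the aliquot sequence (14, 10, 8, 7, 1, 0) and the Keith-number property:

```python
def aliquot_sequence(n):
    """Repeatedly replace n by the sum of its proper divisors until 0."""
    seq = [n]
    while n > 0:
        n = sum(d for d in range(1, n) if n % d == 0)
        seq.append(n)
    return seq

def is_keith(n):
    """A Keith number reappears in the sequence seeded by its own digits,
    where each new term is the sum of the previous len(digits) terms
    (for 14: 1, 4, 5, 9, 14)."""
    digits = [int(c) for c in str(n)]
    if len(digits) < 2:
        return False
    seq = digits[:]
    while seq[-1] < n:
        seq.append(sum(seq[-len(digits):]))
    return seq[-1] == n

print(aliquot_sequence(14))                        # [14, 10, 8, 7, 1, 0]
print([k for k in range(10, 100) if is_keith(k)])  # [14, 19, 28, 47, 61, 75]
```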
The Planted Tank Forum - View Single Post - KNO3

Originally Posted by Wasserpest
You can't really test for Potassium. Just take Chuck's calculator, plug in 1 ml water, your tank size, and then play around with the teaspoons and see how many you need to get to 20 ppm. After a gradual initial addition to bring it up to that level you will be dosing just for the water that you change.

I'm really confused now. On Chuck's site it says "this is the total level you should target for the tank. For nutrients like Potassium and magnesium, I add enough so that each week, I'm adding close to this amount." I was assuming that you start from scratch each week - so that in my 50 gal tank, for instance, I would add around a teaspoon of K2SO4 every week, which would add around 15 ppm of potassium. Which is correct? Do you add enough for the whole tank each week, or just enough for the water you're replacing? If you just add enough for the water you're replacing, that seems to suggest that your plants aren't using up what you put in the tank the week before.
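The arithmetic behind a target like "20 ppm" is just milligrams of the element per liter of water. The sketch below is a rough illustration with assumed values (the potassium mass fraction of K2SO4 follows from the atomic weights, and the ~6.3 g level teaspoon of K2SO4 is an assumption for illustration only); it reproduces the "about a teaspoon gives ~15 ppm in a 50 gal tank" estimate from the post:

```python
# Mass fraction of K in K2SO4: 2 * 39.10 / (2*39.10 + 32.06 + 4*16.00)
K_FRACTION = 2 * 39.10 / 174.26  # ~0.449

def potassium_ppm(grams_k2so4, tank_liters):
    """ppm of K added = mg of elemental K / liters of water."""
    mg_potassium = grams_k2so4 * 1000 * K_FRACTION
    return mg_potassium / tank_liters

# A 50-gallon tank is about 189 liters; one assumed ~6.3 g teaspoon:
print(round(potassium_ppm(6.3, 189), 1))  # ~15 ppm
```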
Math Placement Test

The Math Diagnostic Testing Project (MDTP) has four different levels. You will be asked to select the level at which you feel best prepared. Remember, no calculators are permitted during testing!

To help you determine which Math Test level is right for you, Santiago Canyon College has provided several resources that we encourage you to utilize prior to taking the MDTP:

MATH Decision Chart - information on basic concepts covered across each of the four Math Placement Test levels to help you decide which level is right for you. To review the MATH Decision Chart, click here.

Math Placement Test Sample Questions - Questions covering the basic knowledge across each of the four Math Placement Test levels are available in the Testing Center (E-303) and the Counseling Center (D-106) in paper form. Online versions of sample questions are available here:
Math Level 1: Algebra Readiness Sample Questions
Math Level 2: Elementary Algebra Sample Questions
Math Level 3: Intermediate Algebra Sample Questions
Math Level 4: Pre-Calculus Sample Questions

To schedule an appointment to take the Math Placement test, click here.

Preparing for your Math Placement Test

MyMathTest - a dynamic, interactive online testing program that assesses student strengths and gaps in mathematical knowledge. SCC provides access to this website as a wonderful preparation tool for students prior to taking the Math Placement Test. For more information on accessing MyMathTest, click here.

Tutorial Videos and Practice Worksheets - Need to brush up on your algebra math skills before taking the placement test? Click here to review helpful videos and complete sample worksheets that will aid you in preparing for the Math Placement Test.

Math Placement Test scores are valid for one year. If you took the math test more than one year prior to the date you register and you did not take a math class during that year, you will need to retake the test.
Proceedings of the American Mathematical Society
ISSN 1088-6826 (online) ISSN 0002-9939 (print)

Limit cycles for cubic systems with a symmetry of order 4 and without infinite critical points

Authors: M. J. Álvarez, A. Gasull and R. Prohens
Journal: Proc. Amer. Math. Soc. 136 (2008), 1035-1043
MSC (2000): Primary 34C07, 34C14; Secondary 34C23, 37C27
Published electronically: November 30, 2007
MathSciNet review: 2361879

Abstract: In this paper we study those cubic systems which are invariant under a rotation of radians. They are written as where is complex, the time is real, and , are complex parameters. When they have some critical points at infinity, i.e. , it is well-known that they can have at most one (hyperbolic) limit cycle which surrounds the origin. On the other hand when they have no critical points at infinity, i.e. there are examples exhibiting at least two limit cycles surrounding nine critical points. In this paper we give two criteria for proving in some cases uniqueness and hyperbolicity of the limit cycle that surrounds the origin. Our results apply to systems having a limit cycle that surrounds either 1, 5 or 9 critical points, the origin being one of these points. The key point of our approach is the use of Abel equations.
Additional Information

M. J. Álvarez — Departament de Matemàtiques i Informàtica, Universitat de les Illes Balears, 07122, Palma de Mallorca, Spain — chus.alvarez@uib.es
A. Gasull — Departament de Matemàtiques, Edifici C, Universitat Autònoma de Barcelona, 08193 Bellaterra, Barcelona, Spain — gasull@mat.uab.cat
R. Prohens — Departament de Matemàtiques i Informàtica, Universitat de les Illes Balears, 07122, Palma de Mallorca, Spain — rafel.prohens@uib.cat

DOI: http://dx.doi.org/10.1090/S0002-9939-07-09072-7
Keywords: Planar autonomous ordinary differential equations, symmetric cubic systems, limit cycles
Received by editor(s): March 24, 2006; in revised form: January 16, 2007
Published electronically: November 30, 2007
Additional Notes: The first two authors were partially supported by grants MTM2005-06098-C02-1 and 2005SGR-00550. The third author was supported by grant UIB-2006. This paper was also supported by the CRM Research Program: On Hilbert's 16th Problem.
Communicated by: Carmen C. Chicone
Article copyright: © Copyright 2007 American Mathematical Society. The copyright for this article reverts to public domain 28 years after publication.
Lobachevskii Journal of Mathematics
http://ljm.ksu.ru
Vol. 16, 2004, 17–56
© P. K. Jakobsen and V. V. Lychagin

Per K. Jakobsen and Valentin V. Lychagin

ABSTRACT. We outline an extension of probability theory based on positive operator valued measures. We generalize the main notions from probability theory, such as random variables, conditional expectations, densities and mappings. We introduce a product of extended probability spaces and mappings, and show that the resulting structure is a monoidal category, just as in the classical case.

1. Introduction

In this paper we present an extension of standard probability theory. An extended probability space is defined to be a normalized positive operator valued measure defined on a measurable space of events. This notion of extended probability space includes probability spaces and spectral measures as important special cases. The use of the word probability in this context is justified by showing that extended probability spaces enjoy properties analogous to all the basic properties of classical probability spaces. Random vectors are defined as a generalization of the usual Hilbert space of square integrable functions. This generalization is well known in the literature and was first described by Naimark. Expectation and conditional expectation are defined for extended probability spaces by orthogonal projections, in complete analogy with probability spaces. The introduction of probability densities presents special problems in the context of extended probability spaces. For probability spaces a probability density is any normalized positive integrable function, whereas for extended probability spaces it turns out that the right notion is not a density but a half density. These half densities are elements of length one in a Hilbert module. Special cases of such half densities are well known in quantum mechanics, where they are called wave functions.
We define a random operator to be a linear operator on the space of half densities. The expectations of random operators are operators acting on the Hilbert space underlying the extended probability space. For probability spaces the notions of random vectors and random operators coincide. We introduce mappings, or morphisms, of extended probability spaces through a generalization of the notion of absolute continuity in probability theory. Half densities play a pivotal role in this generalization. We show that the morphisms can be composed and that extended probability spaces and morphisms form a category, just as for probability spaces. The Naimark construction extends to morphisms and in fact defines a functor on the category of extended probability spaces. Extended probability spaces can be multiplied, and we furthermore show that this multiplication can be extended to morphisms in such a way that it defines a monoidal structure on the category of extended probability spaces. This is in complete analogy with the case of probability spaces and testifies strongly to the naturalness of our constructions. We do not in this paper attempt to give any interpretation of extended probabilities beyond the one implied by the strong structural analogies that we have shown to exist between the categories of probability spaces and extended probability spaces. It is well known that the interpretation of the classical Kolmogorov formalism for standard probability theory is not without controversy, as the old debate between frequentists and Bayesians, among others, clearly demonstrates. Our theory of extended probability spaces is evidently a generalization of the Kolmogorov framework, and it might be hoped that this enlarged framework will put some of the controversy in a different light. As a case in point, note that extended probabilities are in general only partially ordered.
The notion of partially ordered probabilities has been discussed and argued over for a very long time. In our theory of extended probability spaces, ordered and partially ordered probabilities live side by side and enjoy the same formal categorical properties.

2. Extended probability spaces

In this section we make some technical assumptions that will be assumed to hold throughout this paper. These assumptions are not necessarily the most general ones possible. A measurable space [5] is a pair X = 〈Ω_X, B_X〉 where Ω_X is a set and B_X is a σ-algebra on Ω_X. A measurable map f : X → Y is a map of sets Ω_X → Ω_Y such that f^{-1}(A) ∈ B_X for all A ∈ B_Y. Let Ω be a set and let τ be a topology on Ω. In this paper the term topology is taken to mean a second countable, locally compact Hausdorff topology [3]. Note that any such space is metrizable, Polish and σ-compact. The Borel structure corresponding to a topology τ is the smallest σ-algebra containing the topology τ and is denoted by B(τ). A Borel space is a measurable space whose σ-algebra is a Borel structure. Any continuous map f : 〈Ω_X, τ_X〉 → 〈Ω_Y, τ_Y〉 is measurable with respect to the Borel structures B(τ_X) and B(τ_Y). Borel sets are the observable events to which we must assign probabilities.

Let now 〈Ω_X, B(τ_X)〉 be a Borel space and let O(H_X) be the real C∗-algebra [4] of bounded operators on the real Hilbert space H_X. A positive operator valued measure (POV) [1] defined on 〈Ω_X, B(τ_X)〉 is a map F_X from B(τ_X) to O(H_X) such that F_X(∅) = 0 and F_X(Ω_X) = 1. The map F_X is assumed to be finitely additive on disjoint unions of sets and, for any increasing sequence of sets {V_i}, to satisfy the following continuity condition:

F_X(lim_{i→∞} V_i) = sup{F_X(V_i) | i = 1, 2, 3, …},

where the supremum is taken with respect to the usual partial ordering of self-adjoint operators. The supremum always exists since the sequence {F_X(V_i)} is increasing and bounded above by F_X(lim_{i→∞} V_i). The continuity condition implies that F_X is additive on countable disjoint unions.
F_X(∪_{i=1}^∞ V_i) = ∑_{i=1}^∞ F_X(V_i),

where the sum converges in the strong operator topology, that is, pointwise convergence in norm. A positive operator valued measure is a spectral measure if F_X(V) is a projector for all V ∈ B. A necessary and sufficient condition for a POV F_X to be a spectral measure is that it is multiplicative:

F_X(V_1 ∩ V_2) = F_X(V_1) F_X(V_2).

We are now ready to define our first main object.

Definition 1. An extended probability space X is a triple X = 〈Ω_X, B(τ_X), F_X〉 where F_X : B(τ_X) → O(H_X) is a positive operator valued measure.

Note that a probability space X = 〈Ω_X, B(τ_X), μ_X〉 can be identified with an extended probability space in many different ways. In fact, for any given Hilbert space H_X we can identify the probability space with the extended probability space X = 〈Ω_X, B(τ_X), F_X〉 where F_X(V) = μ_X(V) I_{H_X}.

3. Random vectors

In standard probability theory square integrable random variables and their expectations play an important role. We will now review the classical Naimark construction of the analog of such random variables for extended probability spaces. We will call such random variables random vectors. The space of random vectors forms a Hilbert space, and we use this structure to define expectation and conditional expectation by orthogonal projections, in complete analogy with the standard case.

3.1. The space of random vectors. Let 〈Ω, B, F〉 be an extended probability space and let S be the linear space of simple measurable functions v : Ω → H. The linear structure is defined through pointwise operations as usual. Elements in S can be written as finite sums of characteristic functions,

v = ∑_i ξ_i θ_{V_i},

where {V_i} is a B-measurable partition of the set Ω. We define a pseudo inner product on S by

〈v, w〉 = ∑_{i,j} 〈F(V_i ∩ W_j) ξ_i, η_j〉_H,

where v = ∑_i ξ_i θ_{V_i}, w = ∑_j η_j θ_{W_j} and 〈 , 〉_H is the inner product in the Hilbert space H. The product is not definite. In fact we have

〈v, v〉 = 0 ⇔ ∑_i 〈F(V_i) ξ_i, ξ_i〉_H = 0 ⇔ 〈F(V_i) ξ_i, ξ_i〉 = 0 for all i.
The last identity follows from the fact that each F(V_i) is a positive operator. So for any simple function v = ∑ ξ_i θ_{V_i} we have 〈v, v〉 = 0 if and only if F(V_i) ξ_i ⊥ ξ_i for all i. This is of course true if V_i is of F-measure zero, but it can also be true if F(V_i) ≠ 0 but ξ_i is in the kernel of F(V_i). Since 〈 , 〉 is a pseudo inner product, the set of elements of length zero, 〈v, v〉 = 0, forms a linear subspace, and we can divide S by this subspace and thereby get an, in general, incomplete inner product space. The completion of this space with respect to the associated norm is by definition the space of random vectors and is a Hilbert space. We will use the notation L²(B, F), or just L²(F), for this space, in analogy with the classical notation L²(μ). The set of equivalence classes of simple functions [v] evidently forms a dense set in L²(F). Denote this dense subspace by T(F). We have a well defined isometric embedding π of H into L²(F) defined by π(ξ) = [ξ θ_Ω]. We also have a spectral measure P : B → O(L²(F)). On the dense set T(F) the spectral measure is given by

P(α)[v] = [∑_i ξ_i θ_{V_i ∩ α}],

where v = ∑ ξ_i θ_{V_i}. In fact the existence of this spectral measure is the whole point of the Naimark construction. It shows that by extending the Hilbert space one can turn any POV into a spectral measure. This idea has been generalized by Sz.-Nagy and J. Arveson into a theory for generating representations of ∗-semigroups, but we will not need any of these generalizations in our work.

As our first example let μ be a measure on the measurable space 〈Ω, B〉 and let H be a Hilbert space. Define a positive operator valued measure on 〈Ω, B〉 acting on H by F(U) = μ(U) 1_H. For this case we have

〈[v], [w]〉 = ∑_{i,j} 〈μ(V_i ∩ W_j) ξ_i, η_j〉_H = ∑_{i,j} 〈ξ_i, η_j〉_H μ(V_i ∩ W_j) = ∫ 〈v, w〉_H dμ,

where for any H-valued functions f, g we define 〈f, g〉_H(x) = 〈f(x), g(x)〉_H. Thus for this case our space L²(F) will be the space of H-valued function elements f such that ∫ 〈f, f〉_H dμ < ∞.
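On a finite sample space the construction above can be checked directly with small matrices. The following sketch is not from the paper; all matrices are made up. It represents a POV on a three-point space by positive semidefinite 2×2 matrices summing to the identity, and evaluates the pseudo inner product 〈v, w〉 = ∑_i 〈F({i}) v(i), w(i)〉_H on simple H-valued functions.

```python
import numpy as np

# Minimal numerical sketch (not from the paper; all matrices are made up).
# On a three-point sample space a POV measure is a family of positive
# semidefinite matrices F[i], one per atom, with F[0] + F[1] + F[2] = 1.
F = [np.array([[0.5, 0.2], [0.2, 0.3]]),
     np.array([[0.3, -0.2], [-0.2, 0.5]]),
     np.array([[0.2, 0.0], [0.0, 0.2]])]

def is_psd(a, tol=1e-12):
    # positivity test via the eigenvalues of a symmetric matrix
    return bool(np.all(np.linalg.eigvalsh(a) >= -tol))

def ip(v, w):
    # pseudo inner product <v, w> = sum_i <F({i}) v(i), w(i)>_H
    # for simple H-valued functions (rows of v, w are the values xi_i, eta_i)
    return sum(float((F[i] @ v[i]) @ w[i]) for i in range(len(F)))

rng = np.random.default_rng(0)
v = rng.normal(size=(3, 2))   # a simple random vector v : Omega -> R^2
```

If some F({i}) had a nontrivial kernel, nonzero v with ip(v, v) = 0 would exist; this is exactly the degeneracy that the quotient construction of L²(F) removes.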
When H = ℂ the space L²(F) turns into the space of square integrable complex valued functions L²(μ). As our second example let H be two dimensional and let a basis {ξ_1, ξ_2} be given. With respect to this basis we have

F(U) = [ μ(U)  ω(U) ]
       [ ω(U)  ν(U) ],

where μ, ν and ω are signed measures. In order for F(U) to be positive for all U, it is easy to see that μ and ν must be positive measures and that the following inequality must hold:

ω(U)² ≤ μ(U) ν(U).

Any function f : Ω → H determines a pair of real valued functions {f_1, f_2} through f(x) = f_1(x) ξ_1 + f_2(x) ξ_2. The inner product in L²(F) is given in terms of the measures μ, ν and ω as

〈(f_1, f_2), (g_1, g_2)〉 = ∫ f_1 g_1 dμ + ∫ f_2 g_2 dν + ∫ (f_1 g_2 + f_2 g_1) dω.

Similar expressions for the inner product in L²(F) exist for any finite dimensional Hilbert space H.

3.2. The expectation of random vectors. Recall that we have an isometric embedding π : H → L²(F) defined by π(ξ) = [ξ θ_Ω]. Note that the image π(H) ⊂ L²(F) is a closed subspace and therefore the orthogonal projection onto π(H) exists. Let Q_H be this orthogonal projection.

Definition 2. The expectation of a random vector f ∈ L²(F) is the unique element E(f) ∈ H such that π(E(f)) = Q_H(f).

The following result is an immediate consequence of the definition.

Proposition 3. The expectation is a surjective continuous linear map E : L²(F) → H and is the adjoint of the embedding π:

〈f, π(ξ)〉 = 〈E(f), ξ〉 for all ξ ∈ H.

Note that the adjointness condition uniquely determines the expectation. In fact we could define the expectation to be the adjoint of the embedding π. Using this proposition it is easy to verify that the expectation of a simple function element [v], where v = ∑ ξ_i θ_{V_i}, is given by

E([v]) = ∑_i F(V_i)(ξ_i).

This example makes it natural to introduce an integral-inspired notation for the expectation,

E(f) =def ∫ dF f.

Note that it is natural to put the differential dF in front of f to emphasize the fact that F is an operator valued measure that acts on the values of f.
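The formula E([v]) = ∑_i F(V_i) ξ_i and the adjointness property 〈f, π(ξ)〉 = 〈E(f), ξ〉 can be verified numerically on a finite space. A sketch with made-up matrices (H = ℝ², three atoms; π(ξ) is the constant function with value ξ):

```python
import numpy as np

# Sketch (made-up matrices): expectation of a simple random vector,
#   E([v]) = sum_i F(V_i) xi_i,
# and the adjointness property <v, pi(xi)> = <E(v), xi>_H,
# where pi(xi) is the constant function with value xi.
F = [np.array([[0.5, 0.2], [0.2, 0.3]]),
     np.array([[0.3, -0.2], [-0.2, 0.5]]),
     np.array([[0.2, 0.0], [0.0, 0.2]])]

def expectation(v):
    # v[i] is the value of the simple function on atom i
    return sum(F[i] @ v[i] for i in range(len(F)))

def ip(v, w):
    # inner product in L^2(F) on a three-atom space
    return sum(float((F[i] @ v[i]) @ w[i]) for i in range(len(F)))

rng = np.random.default_rng(1)
v = rng.normal(size=(3, 2))
xi = rng.normal(size=2)
const = np.tile(xi, (3, 1))   # pi(xi): the constant simple function
```

Since ∑_i F({i}) = 1, the expectation of a constant function ξ θ_Ω is ξ itself, i.e. E ∘ π is the identity on H, consistent with π being an isometric embedding.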
Let {ξ_i} be an orthonormal basis for H. For general elements f the following formula holds:

E(f) = ∑_i 〈f, π(ξ_i)〉 ξ_i.

3.3. Conditional expectation. Let A ⊂ B be a σ-subalgebra. We can restrict the POV F to A and in this way get the Hilbert space L²(A, F) of A-measurable random vectors. We obviously have an isometric embedding of L²(A, F) into L²(B, F). Thus L²(A, F) can be identified with a closed subspace of L²(B, F), and therefore the orthogonal projection Q_A : L²(B, F) → L²(A, F) is defined. In complete analogy with the classical case we now define

Definition 4. The conditional expectation of an element f ∈ L²(B, F) is given by E_A(f) = Q_A(f) ∈ L²(A, F).

It is evident that L²(A, F) is isomorphic to H when A = {Ω, ∅} and that for this case we have E_A(f) = π(E(f)). Let us consider the next simplest case, when A is generated by a partition {A_1, …, A_n} where Ω = ∪ A_i and A_i ∩ A_j = ∅ when i ≠ j. We need the following result.

Proposition 5. If each range L_i = F(A_i)(H) is a closed subspace of H, then T(A, F) is complete, and hence L²(A, F) = T(A, F).

Proof. Let [v_n] be a Cauchy sequence in the inner product space T(A, F). This means that ||[v_n] − [v_m]||² → 0 when m and n go to infinity. But v_n = ∑_i ξ_i^n θ_{A_i}, and since the F(A_i) are positive operators we get

∑_i 〈F(A_i)(ξ_i^n − ξ_i^m), ξ_i^n − ξ_i^m〉 → 0 ⇒ 〈F(A_i)(ξ_i^n − ξ_i^m), ξ_i^n − ξ_i^m〉 → 0 for all i.

Let L_i = F(A_i)(H) be the range of F(A_i) and let L_i^⊥ be the orthogonal complement of L_i. We have L_i^⊥ = Ker(F(A_i)), and since L_i is by assumption a closed subspace we have the decomposition H = L_i ⊕ L_i^⊥. Write ξ_i^n = r_i^n + t_i^n with r_i^n ∈ L_i^⊥ and t_i^n ∈ L_i. We then have by orthogonality

〈F(A_i)(t_i^n − t_i^m), t_i^n − t_i^m〉 → 0.

Clearly F(A_i)|_{L_i} : L_i → L_i is a positive, bounded, injective and surjective map. Let T_i : L_i → L_i be the square root of this operator. It is also a positive, bounded, injective and surjective map and therefore has a bounded inverse. From the previous limit we can conclude that

〈T_i(t_i^n − t_i^m), T_i(t_i^n − t_i^m)〉 → 0.

Thus {T_i(t_i^n)} is a Cauchy sequence in L_i, and since L_i is closed there exists an element y_i ∈ L_i such that T_i(t_i^n) → y_i.
From the previous remarks the element ξ_i = T_i^{-1}(y_i) ∈ L_i exists and

lim_{n→∞} t_i^n = lim_{n→∞} T_i^{-1}(T_i(t_i^n)) = T_i^{-1}(lim_{n→∞} T_i(t_i^n)) = T_i^{-1}(y_i) = ξ_i.

If we let v = ∑ ξ_i θ_{A_i} we have

||[v_n] − [v]||² = ∑_i 〈F(A_i)(ξ_i^n − ξ_i), ξ_i^n − ξ_i〉 = ∑_i 〈T_i(t_i^n − ξ_i), T_i(t_i^n − ξ_i)〉 = ∑_i 〈T_i(t_i^n) − y_i, T_i(t_i^n) − y_i〉 = ∑_i ||T_i(t_i^n) − y_i||² → 0.

Therefore T(A, F) is complete. □

The assumption in the proposition holds for example if H is finite dimensional, or if H is infinite dimensional but all the F(A_i) are orthogonal projectors or isomorphisms. For the classical measure case H ≈ ℝ and the proposition is true. Let v = ∑ ξ_j θ_{V_j} be a simple function in L²(B, F). Then by the previous proposition the conditional expectation must be of the form Q_A(v) = ∑ η_i θ_{A_i}. It is uniquely determined by the conditions

〈v − Q_A(v), ξ θ_{A_j}〉 = 0 for all ξ ∈ H and j = 1, …, n.

These conditions give us the following system of equations for the unknown vectors η_i:

F(A_i) η_i = ∑_k F(V_k ∩ A_i) ξ_k for each i.

This system does not in general have a unique solution in H, but all solutions represent the same element in L²(A, F) = T(A, F). For the special case v = ξ_0 θ_C we get the simplified system

F(A_i) η_i = F(C ∩ A_i) ξ_0.

When dim H = 1 and F(A_i) = μ(A_i) we get the usual classical expression for the conditional expectation of C given A.

4. Densities and random operators

Densities are important for most applications of probability theory. For us they will make their appearance when we seek to generalize the relation of absolute continuity between measures to the context of positive operator valued measures. This generalization will play a pivotal role when we define maps between extended probability spaces. The generalization of the notion of density to the case of operator measures turns out to be surprisingly subtle.

4.1. The Hilbert module of half densities. Let ν be a measure. A density is a positive measurable function ρ such that ∫ ρ dν = 1. Using this density we can define a new measure μ(V) = ∫_V ρ dν.
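The conditional-expectation systems F(A_i) η_i = ∑_k F(V_k ∩ A_i) ξ_k above can be solved numerically; least squares handles the case where F(A_i) is not invertible, since any solution represents the same element of L²(A, F). A sketch with made-up 2×2 PSD matrices on a four-point space:

```python
import numpy as np

# Conditional expectation sketch (all matrices made up): Omega = {0,1,2,3},
# H = R^2, sub-sigma-algebra generated by the partition A1 = {0,1}, A2 = {2,3}.
F = [np.array([[0.30, 0.10], [0.10, 0.20]]),
     np.array([[0.20, -0.10], [-0.10, 0.30]]),
     np.array([[0.25, 0.05], [0.05, 0.15]]),
     np.array([[0.25, -0.05], [-0.05, 0.35]])]
parts = [[0, 1], [2, 3]]

rng = np.random.default_rng(2)
xi = rng.normal(size=(4, 2))            # simple random vector v, v(k) = xi[k]

# Solve F(A_i) eta_i = sum_{k in A_i} F({k}) xi_k for each block.
eta = []
for block in parts:
    FA = sum(F[k] for k in block)
    rhs = sum(F[k] @ xi[k] for k in block)
    eta.append(np.linalg.lstsq(FA, rhs, rcond=None)[0])

# E_A(v) is the simple function equal to eta_i on A_i.
cond = np.empty_like(xi)
for i, block in enumerate(parts):
    for k in block:
        cond[k] = eta[i]
```

The defining orthogonality condition, 〈v − Q_A(v), ξ θ_{A_j}〉 = 0 for every ξ, is equivalent to the block residuals ∑_{k ∈ A_j} F({k})(ξ_k − η_j) vanishing, which is what the test below checks.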
If we try to generalize this formula directly to the case of POV measures we run into problems. Let F be a POV defined on a measurable space 〈Ω, B(τ)〉 and let ρ be a function as above. Then we can certainly define a new POV measure by the following formula:

E(V) = ∫_V ρ dF.

There is nothing inconsistent in this definition; the only problem is that it is very limited. In fact, if Ω is a finite set then any POV measure on Ω is given by a finite set {F_i} of positive operators between zero and the identity with the single condition ∑ F_i = 1. If E is the new POV determined by the above formula, then we have E_i = ρ_i F_i for some set of numbers {ρ_i}. Thus each E_i is proportional to F_i. Now if the numbers ρ_i were changed into positive operators we could produce a much more general E starting from a given F. We would thus be considering a formula like

E(V) = ∫_V ρ dF,

where ρ is a positive operator valued function. However, even if we could make sense of the proposed integral we would have problems. This is because the product of positive operators is positive if and only if they commute. This would put a highly nontrivial constraint on the allowed densities, a constraint it would be difficult to verify and keep track of. There is however a natural way out of these problems. It is very simple to verify that if F is a POV measure acting on H and Q is an operator, then QFQ∗ is a new POV measure. This suggests that we consider a density to be an operator valued function ϕ such that

∫ ϕ dF ϕ∗ = 1.    (1)

We could then use this density to define a new POV measure by

E(V) = ∫_V ϕ dF ϕ∗.    (2)

On a formal level this now looks fine; the only remaining problem is to make sense of the proposed integrals. We will now proceed to do this. Let

V = {s = ∑_i s_i θ_{V_i} | s_i ∈ O(H), V_i ∈ B(τ)},

where {V_i} forms a measurable partition of Ω. These are simple measurable operator valued functions. The set V is a real linear space through pointwise operations as usual. We can define a left action of O(H) on V in the following way:

a s = ∑_i (a s_i) θ_{V_i}.

This action clearly makes V into a left module over the real C∗-algebra O(H).
This action clearly makes V into a left module over the real C∗- algebra O(H). Define an O(H) valued product on V through 〈s,t〉 = ∑ i,jsiF(V i ∩ Wj)tj∗, where s = ∑ siθV i and t = ∑ tjθWj. This product is clearly bilinear over the real numbers. Proposition 6. The following properties 〈s,s〉 ≥ 0, 〈as,t〉 = a〈s,t〉, 〈s,t〉 = 〈t,s〉∗, 〈s,at〉 = 〈s,t〉a∗ Thus the product is like a Hermitian product where the role of complex numbers are played by the elements of the real C∗ -algebra O(H). Such structures have been known and studied for a long time. They leads, as we will see, in a natural way to the idea that probability densities for operator measures are elements in a Hilbert module. Our main sources for the theory of Hilbert modules are the paper [10] and the book [2]. Chapters on Hilbert modules can also be found in the books [7] and [13]. Note that the product we have constructed is not positive definite. In fact, since the sum of positive operators in a real C∗ -algebras is zero only if each operator is zero, the identity 〈s, s〉 = 0 holds if and only if siF(V i)si∗ = 0for all i. These identities can easily be satisfied for nonzero operators si . In fact if F (V i) are projectors and si are projectors orthogonal to F(V i) then the equations are clearly satisfied. In order to make the product definite we will need to divide out by the set of simple functions whose square is zero 〈s, s〉 = 0. In order to do this we will need the analog of the Cauchy-Swartz inequality. For any element s ∈ V we know that 〈s,s〉≥ 0 and therefore there exists a positive operator h such that h2 = 〈s,s〉. Denote this operator by ∣s∣. Thus we have ∣s∣2 = 〈s,s〉. Also for any element s ∈ V define a real number ∣∣s∣∣ by ∣∣s∣∣2 = ∣∣〈s,s〉∣∣ where ∣∣〈s,s〉∣∣ is the operator norm of the positive operator 〈s, s〉. With these definitions at hand we can now state the following Cauchy Swartz inequalities for V . The proof of this proposition is an adaption of the proof in [13] to the case of real C∗ algebras. 
Proposition 7. The following forms of the Cauchy-Schwarz inequality hold:

〈s, t〉〈t, s〉 ≤ |s|² ||t||²,
||〈s, t〉|| ≤ ||s|| ||t||.

Proof. A positive linear functional ω on O(H) is a real valued linear functional such that ω(a) ≥ 0 whenever a ≥ 0. A state on O(H) is a positive linear functional such that ω(1) = 1 and ω(a) = ω(a∗). The main property that makes states useful in C∗-algebra theory is that if a ≠ 0 there exists a state such that ω(a) = ||a||. From this it follows immediately that if ω(a) = 0 for all states ω then a = 0, and this implies that if ω(a) ≤ ω(b) for all states then a ≤ b. In this way verification of inequalities in a C∗-algebra is reduced to the verification of numerical inequalities. Also recall that in any real C∗-algebra the following important inequality holds [4]:

ω(a∗ b∗ b a) ≤ ||b∗ b|| ω(a∗ a).

For any given state ω define (s, t)_ω = ω(〈s, t〉). It is evident that ( , )_ω is a pseudo inner product on V. It therefore satisfies the Cauchy-Schwarz inequality

(s, t)_ω² ≤ (s, s)_ω (t, t)_ω.

Define a = 〈s, t〉. We clearly have

ω(a a∗) = ω(a 〈t, s〉) = ω(〈a t, s〉) = (a t, s)_ω,

and therefore

ω(a a∗) ≤ [(a t, a t)_ω (s, s)_ω]^{1/2} = [ω(a 〈t, t〉 a∗)(s, s)_ω]^{1/2} = [ω(a |t|² a∗)(s, s)_ω]^{1/2} ≤ ||〈t, t〉||^{1/2} ω(a a∗)^{1/2} ω(〈s, s〉)^{1/2}.

Dividing by ω(a a∗)^{1/2} we find

ω(a a∗)^{1/2} ≤ ||t|| ω(〈s, s〉)^{1/2},

and squaring gives ω(a a∗) ≤ ω(||t||² 〈s, s〉). The first inequality now follows since this numerical inequality holds for all states ω.

As for the second inequality, recall that in any real C∗-algebra we have ||a a∗|| = ||a||², and for any pair of operators 0 ≤ a ≤ b we have ||a|| ≤ ||b||. Using this we have

||〈s, t〉||² = ||〈s, t〉〈s, t〉∗|| = ||〈s, t〉〈t, s〉|| ≤ || |s|² ||t||² || = ||s||² ||t||²,

and this proves the second inequality. □

From the second inequality we can in the usual way conclude that the triangle inequality holds for || ||. Let N be the subset of elements in V of pseudonorm zero, N = {s | ||s|| = 0}. For any operator a ∈ O(H) and a pair of elements s and t in N we now have

||a s||² = ||〈a s, a s〉|| = ||a 〈s, s〉 a∗|| ≤ ||a|| ||s||² ||a∗|| = 0,
||s + t|| ≤ ||s|| + ||t|| = 0.
Thus N is a submodule and we can therefore define a quotient module H̃ = V/N. Elements in H̃ are equivalence classes of simple operator valued functions, denoted by [s]. Note that for any elements [s], [t] ∈ H̃ with [s] = 0 we have ||〈s, t〉|| ≤ ||s|| ||t|| = 0, and as a consequence of this 〈s, t〉 = 0. We therefore have a well defined operator valued product on H̃ defined through

〈[s], [t]〉 = 〈s, t〉.

This product enjoys the same properties as the product on V and is in addition positive definite. Thus H̃ with this product is a pre-Hilbert module with a norm || || defined on the underlying real vector space. In general this vector space is not complete with respect to the norm. We can however complete the vector space with respect to the norm. The resulting structure is a Hilbert module over the real C∗-algebra O(H). We will call it the Hilbert module corresponding to the extended probability space 〈Ω, B(τ), F〉. With the analogy with Hilbert spaces in mind we will consider 〈ϕ, ϕ〉 to be the square length of ϕ. Note that for a general Hilbert module the length is a positive operator, not a positive number. Also note that in order to simplify the notation we use the same symbol || || for the norm on H and for the operator norm on O(H). This is the sense of the formula ||ϕ||² = ||〈ϕ, ϕ〉||.

We have now made sense of equation (1). It just states that ϕ should be an element of the Hilbert module H of length 1. We will next proceed to make sense of equation (2). Note that what we do is in fact to prove the analog of the easy part of the classical Radon-Nikodym theorem. For any U ∈ B(τ) define a map P_U : V → V by

P_U(s) = ∑_i s_i θ_{V_i ∩ U}.

This map is clearly an O(H)-module morphism.

Proposition 9. The following properties hold:

P_U ∘ P_U = P_U,
P_U(a s) = a P_U(s) for all a ∈ O(H),
P_{U ∩ V} = P_U ∘ P_V,
〈P_U(s), t〉 = 〈s, P_U(t)〉,
〈s, P_U(s)〉 ≥ 0,
P_V + P_W = P_{V ∪ W} if V ∩ W = ∅,
〈P_U(s), P_U(s)〉 ≤ 〈s, s〉,
||P_U(s)|| ≤ ||s||.

The last property shows that if ||s|| = 0 then ||P_U(s)|| = 0.
Therefore P_U induces a well defined map, also denoted by P_U, on H̃ through P_U([s]) = [P_U(s)]. The last property shows also that the map P_U is bounded on H̃. It therefore extends to a unique bounded linear map on H. This map clearly also enjoys the properties listed in the previous proposition. Let now ϕ be an element in the Hilbert module H of unit length, 〈ϕ, ϕ〉 = 1. For each set U ∈ B(τ) define an operator E_ϕ(U) on the Hilbert space H by

E_ϕ(U) = 〈ϕ, P_U(ϕ)〉.

Clearly E_ϕ(Ω) = 1 and E_ϕ(U) ≥ 0 for all U. It is also evident from the previous proposition that E_ϕ is finitely additive on disjoint sets. It is in fact also countably additive, as we now show.

Proof. Let first s = ∑_i s_i θ_{V_i} be an element in V with 〈s, s〉 = 1, and let {T_j} be an increasing sequence of sets with limit T = ∪_j T_j. The set of operators {E_s(T_j)} is an increasing sequence of positive operators. The supremum of this sequence exists [1]. Denote the supremum by sup{E_s(T_j)}. In order to show that E_s is a positive operator valued measure we only need to show that E_s(∪_j T_j) = sup{E_s(T_j)}. It is a fact [1] that the sequence E_s(T_j) converges strongly to the limit sup{E_s(T_j)}. Since the strong limit is unique when it exists, we must only show that E_s(T_j)(x) → E_s(∪_j T_j)(x) for all elements x ∈ H. We know that F is a positive operator valued measure, so F(T_j ∩ V_i) → F(T ∩ V_i) strongly. But then, since all s_i are bounded operators, we have

s_i F(T_j ∩ V_i) s_i∗(x) → s_i F(T ∩ V_i) s_i∗(x)
⇒ ∑_i s_i F(T_j ∩ V_i) s_i∗(x) → ∑_i s_i F(T ∩ V_i) s_i∗(x)
⇒ E_s(T_j)(x) → E_s(T)(x),

for all x ∈ H. This proves that E_s is a POV. Next, for any element [s] in H̃ we define E_{[s]}(U) = 〈[s], P_U([s])〉. It is trivial to verify that E_{[s]} = E_s, so the previous proof shows that E_{[s]} is a POV. Finally, let ϕ be an arbitrary element in H. Then there exists a sequence of elements [s_n] in H̃ such that [s_n] → ϕ. Since E_{[s_n]} is a POV we know that for all x ∈ H,

μ_x^n(U) = 〈E_{[s_n]}(U) x, x〉_H

is a measure. Let μ_x be the positive set function defined by

μ_x(U) = 〈E_ϕ(U) x, x〉_H.
By continuity we know that E_{[s_n]}(U) → E_ϕ(U) in the uniform norm, and thus strongly. But then, by continuity of the inner product on H, we can conclude that

lim_{n→∞} μ_x^n(U) = μ_x(U)

for all sets U ∈ B(τ). This implies, through the Vitali-Hahn-Saks theorem [5], that μ_x is a measure, and then it follows [1] that E_ϕ is a POV. □

We have now made sense of equation (2) and are ready to define the symbolic expressions occurring in equations (1) and (2). We define the integrals ∫ ϕ dF ψ∗ and ∫_V ϕ dF ϕ∗ as follows:

∫ ϕ dF ψ∗ =def 〈ϕ, ψ〉,
∫_V ϕ dF ϕ∗ =def 〈ϕ, P_V(ϕ)〉.

We have thus found that probability densities for operator valued measures are not functions but elements in a Hilbert module. They should in fact not be thought of as densities but as half densities; their square is a density in the above sense. This is a startling conclusion. Half densities are however not unfamiliar to anyone who has been exposed to quantum mechanics. Wave functions are half densities. In fact wave functions appear naturally in this scheme. If F is a positive operator valued measure acting on a real two dimensional Hilbert space, we are led to define densities as functions whose values are operators on the plane. The complex numbers are isomorphic to a special subalgebra of operators on the plane (the conformal operators). Thus a large class of densities can be identified with complex valued functions of length one. Since self-adjoint operators are now naturally identified with real numbers, the length can be considered to be a number. What we are describing are of course wave functions. Thus densities for positive operator valued measures acting on a two-dimensional plane are wave functions.

4.2. Random operators. Recall [2] that a map A : H → H is said to be adjointable if there exists a map, denoted by A∗ : H → H, such that

〈A∗ϕ, ψ〉 = 〈ϕ, A ψ〉

for all elements ϕ and ψ in H. A map is self-adjoint if A∗ = A.
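On a finite space the construction E_ϕ(U) = 〈ϕ, P_U(ϕ)〉 becomes E_s(U) = ∑_{i ∈ U} s_i F({i}) s_i∗ for an operator valued half density s (in the real case, s_i∗ is the transpose). The sketch below uses made-up data: it normalizes a random s to unit length, 〈s, s〉 = 1, and the test then checks that E_s is a normalized, additive POV.

```python
import numpy as np

# Sketch (made-up data): an operator valued half density s on a three-atom
# space induces a new POV
#   E_s(U) = sum_{i in U} s_i F({i}) s_i^T,
# and the normalization <s, s> = sum_i s_i F({i}) s_i^T = 1 makes E_s a
# normalized POV.
F = [np.array([[0.5, 0.2], [0.2, 0.3]]),
     np.array([[0.3, -0.2], [-0.2, 0.5]]),
     np.array([[0.2, 0.0], [0.0, 0.2]])]

rng = np.random.default_rng(3)
s = [rng.normal(size=(2, 2)) for _ in F]

# Normalize s to unit length: replace s_i by G^{-1/2} s_i where G = <s, s>.
G = sum(si @ Fi @ si.T for si, Fi in zip(s, F))
w, U = np.linalg.eigh(G)                  # G is symmetric positive definite
G_inv_sqrt = U @ np.diag(w ** -0.5) @ U.T
s = [G_inv_sqrt @ si for si in s]

def E(idx):
    # the induced POV evaluated on a subset idx of {0, 1, 2}
    return sum(s[i] @ F[i] @ s[i].T for i in idx)
```

This is the finite-dimensional shadow of the QFQ∗ observation above: each term s_i F({i}) s_i∗ is automatically positive, with no commutativity condition on the s_i.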
It follows directly from the algebraic properties of the inner product and the completeness of the underlying real vector space that any adjointable map is a bounded O(H)-module morphism. In fact the set of all adjointable maps forms an abstract real C∗-algebra that we denote by A. We will call the elements of A random operators. The expectation of a random operator A with respect to a density ϕ is by definition given by

〈A〉 = 〈ϕ, A ϕ〉.

The expectation of a random operator with respect to a density ϕ is thus an operator on H. We can also use the density to define a POV acting on H, as we have seen. Note that the expectation of a self-adjoint random operator is a self-adjoint operator in O(H). Returning to the two dimensional example discussed above, we see that in that case, for complex valued densities, the expectation of self-adjoint random operators can be identified with real numbers, and thus the expectation of random operators can be thought of as numbers. In higher dimensions, and for more general densities, no such identification with real numbers is possible. Furthermore, no such reduction should be expected. After all, the self-adjoint elements in a real C∗-algebra are the right analog of real numbers.

Let us assume that the real Hilbert space underlying the extended probability space X is one dimensional. If we choose a basis we can identify the Hilbert space with ℝ and the Hilbert module H_X with the real Hilbert space of square integrable functions on Ω_X. A positive operator valued measure is through this basis identified with a probability measure ν, and therefore, for a half density ϕ ∈ H_X, the formula E(V) = 〈ϕ, P_V ϕ〉 turns into

μ(V) = ∫_V ϕ² dν.

The half density ϕ is of course not uniquely determined by the probability measures μ and ν unless we by convention always take the positive square root. If all our observables are random vectors, then it does not matter which half density we choose; they will all produce the same expectation.
Thus by restricting to random vectors as our observables the difference between the various half densities ϕ are not observable. However there is really no rational reason to restrict to this class of observables. If we include random operators in our observables the difference between the half densities are readily observable. 5. The category of extended probability spaces In classical probability theory the notion of morphisms of probability spaces plays a role at least as important as the notion of a probability space. In fact from the Categorical point of view morphisms are the most important element in any theory construction. All other entities should be defined in terms of the morphisms. In this section we review the notion of a morphism in the context of probability spaces and then define the corresponding notion for extended probability spaces. The naturalness of our definition is verified by proving that extended probability spaces and morphisms forms a category. We also show that just as for the case of probability spaces we get a functor mapping the category of extended probability spaces into the category of Hilbert spaces. The existence of this functor is a verification of the naturalness of our constructions. Let X = 〈ΩX,B(τX),μX〉 and Y = 〈ΩY ,B(τY ),μY 〉 be probability spaces. A morphism f : X → Y is a measurable map f : ΩX → ΩY such that μY is absolutely continuous with respect to the push forward of the measure μX by f, μY ≤ f∗μX. By the Radon-Nikodym theorem this means that there exists a probability density ρ : ΩY → ℝ such that μY (V ) = ∫ f−1(V )ρdμX. There are several other possibilities for morphisms of probability spaces [11]. We could have required f∗μX ≤ μY or f∗μX ≈ μY . They can all be composed and lead to a category structure. However the only possibility that generalize well to extended probability spaces is the first one μY ≤ f∗μX. 5.1. Morphisms of extended probability spaces. 
In this section we will introduce the notion of mapping between extended probability spaces and will then use mappings to define morphisms. This distinction between mappings and morphisms does not exist for probability spaces. In order to define what a mapping is in the context of extended probability spaces, we must first generalize the notions of absolute continuity and push forward to positive operator valued measures. We will do this by combining them into a single entity. Definition 11. Let X = 〈ΩX,B(τX),FX〉 be an extended probability space, Y = 〈ΩY ,B(τY )〉 a measurable space and h the 3-tuple h = 〈fh,gh,ϕh〉 where fh : ΩX → ΩY is a measurable map, gh : HY → HX is an isometry and ϕh ∈HX is an element in the Hilbert module corresponding to X. Then the push forward of FX by h is the positive operator valued measure, h∗FX, defined on the measurable space Y by h∗FX(V ) = gh∗∘〈ϕh,Pfh−1(V )ϕh〉∘ gh, where gh∗ is the adjoint of gh. Note that we have gh∗ = gh−1 ∘ Qh where Qh is the orthogonal projection onto the closed subspace gh(HY ) ⊂ HX and therefore gh∗∘ gh = 1 and gh ∘ gh∗ = Qh. We can now define mappings between extended probability spaces using push forward in a very simple way. Definition 12. Let X = 〈ΩX,B(τX),FX〉 and Y = 〈ΩY ,B(τY ),FY 〉 be extended probability spaces. A mapping h : X → Y is a 3-tuple, h, as in the previous definition such that h∗FX = FY . Let us assume that the real Hilbert spaces underlying the extended probability spaces X and Y are one dimensional. If we choose bases for these two spaces we can identify the Hilbert spaces with ℝ, the positive operator valued measures with probability measures μ and ν and the half density ϕ with a real valued function on ΩX. We must have gh = 1 and the condition for h = 〈fh, 1,ϕh〉 to be a mapping is ν(V ) = ∫ fh−1(V )ϕh2dμ. This is of course the condition for fh to be a mapping between the probability spaces 〈ΩX,B(τX),μ〉 and 〈ΩY ,B(τY ),ν〉 if we identify the classical density with ϕh2.
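In the same discrete one-dimensional setting, the mapping condition ν(V) = ∫_{f⁻¹(V)} ϕ² dμ at the end of the paragraph becomes a finite sum. The following sketch (illustrative data only, not from the paper) computes the push forward of μ through f weighted by ϕ² and checks that the result is again a probability measure.

```python
from math import sqrt

omega_X = ["a", "b", "c", "d"]
omega_Y = [0, 1]
f = {"a": 0, "b": 0, "c": 1, "d": 1}            # measurable map Omega_X -> Omega_Y
mu = {"a": 0.1, "b": 0.4, "c": 0.2, "d": 0.3}   # probability measure on Omega_X
# half density, chosen so that the total weighted mass equals 1
phi = {"a": 2.0, "b": 1.0, "c": 0.5, "d": sqrt(0.5)}

def nu(V):
    """Push forward: nu(V) = sum over f^{-1}(V) of phi^2 d mu."""
    return sum(phi[x] ** 2 * mu[x] for x in omega_X if f[x] in V)

assert abs(nu(omega_Y) - 1.0) < 1e-12   # nu is again a probability measure
assert abs(nu([0]) - 0.8) < 1e-12       # mass of f^{-1}({0}) = {a, b}
```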
Our first goal is to show that the proposed mappings can be composed. In order to do this we must first define a certain pullback of half densities induced by a mapping. Let therefore mappings h : X → Y and k : Y → Z of extended probability spaces be given. Let us first define a measurable map fk∘h, a isometry gk∘h and a linear map h∗by fk∘h = fk ∘ fh : ΩX → ΩZ, gk∘h = gh ∘ gk : HZ → HX, h∗(a) = g h ∘ a ∘ gh∗ : O(H Y ) →O(HX). The map h∗ has the following easily verifiable properties Define a linear map h∗ : V Y →HX by h∗(s) = ∑ jh∗(s j)Pfh−1(V j)(ϕh), where s = ∑ sjθV j. The map h∗ has the following important properties Proposition 14. The map h∗is bounded and h∗(s + t) = h∗(s) + h∗(t), h∗(as) = h∗(a)h∗(s), 〈h∗(s),h∗(t)〉 = h∗(〈s,t〉), [s] = 0⇒[h∗(s)] = 0, h∗(P V (s)) = Pfh−1(V )(h∗(s)). Proof. Let s = ∑ siθV i and t = ∑ tjθWj. Then it is easy to verify that {V i ∩ Wj} form a partition of ΩY and that s + t = ∑ (si + tj)θV i∩Wj. But then we have h∗(s + t) = ∑ i,jh́∗(s i + tj)Pfh−1(V i∩Wj)(ϕh) = ∑ i,jh∗(s i)Pfh−1(V i)∩fh−1(Wj)(ϕh) + ∑ i,jh∗(t j)Pfh−1(V i)∩fh−1(Wj)(ϕh) = ∑ ih∗(s i)Pfh−1(V i)(ϕh) + ∑ jh∗(t j)Pfh−1(Wj)(ϕh) = h∗(s) + h∗(t). This proves the second statement. For the third statement we have h∗(as) = h∗(∑ iasiθV i) = ∑ ih∗(as i)Pfh−1(V i)(ϕh) = ∑ h∗(a)h∗(s i)Pfh−1(V i)(ϕh) = h∗(a)h∗(s), 〈h∗(s),h∗(t)〉 = ∑ i,j〈h∗(s i)Pfh−1(V i)(ϕh),h∗(t j)Pfh−1(Wj)(ϕh)〉 = ∑ i,jh∗(s i) ∘〈ϕh,Pfh−1(V i∩Wj)(ϕh)〉∘ h∗(t j)∗ = gh ∘∑ i,jsi ∘ gh∗∘〈ϕ h,Pfh−1(V i∩Wj)(ϕh)〉∘ gh ∘ tj∗∘ g h∗ = gh ∘∑ i,jsi ∘ h∗FX(V i ∩ Wj) ∘ tj∗∘ g h∗ = gh ∘∑ i,jsi ∘ FY (V i ∩ Wj) ∘ tj∗∘ g h∗ = gh ∘〈s,t〉∘ gh∗ = h∗(〈s,t〉) proves the fourth statement. The first and last statement in the proposition follows from the fourth. Finally h∗(P V (s)) = h∗(∑ isiθV ∩V i) = ∑ ih∗(s i)Pfh−1(V ∩V i)(ϕ) = ∑ ih∗(s i)Pfh−1(V )(Pfh−1(V i)(ϕ))) = Pfh−1(V )(h∗(s)). □ Using this proposition we can extend the map h∗ to a continuous linear map from HY to HX . This map is given on the dense set HY ˜ by h∗([s]) = h∗(s). 
All the properties in the proposition holds for the extension. We are now ready to prove that our mappings can be composed Theorem 15. Let h : X → Y and k : Y → Zbe mappings of extended probability spaces. Define ϕk∘h ∈HXby ϕk∘h = h∗(ϕ k). Then k ∘ h = 〈fk∘h,gk∘h,ϕk∘h〉 is a mapping of extended probability spaces k ∘ h : X → Zand we have (k ∘ h)∗ = h∗∘ k∗. Proof. In order to show that k ∘ h is a mapping we must prove that (k ∘ h)∗FX = FZ. But doing this is now a straight forward calculation if we use the previous proposition. (k ∘ h)∗FX(V ) = gk∘h∗∘〈ϕ k∘h,Pfk∘h−1(V )(ϕk∘h)〉∘ gk∘h = gk∗∘ g h∗∘〈h∗(ϕ k),Pfh−1(fk−1(V ))(h∗(ϕ k))〉∘ gh ∘ gk = gk∗∘ g h∗∘〈h∗(ϕ k),h∗(P fk−1(V )(ϕk))〉∘ gh ∘ gk = gk∗∘ g h∗∘ g h ∘〈ϕkPfk−1(V ) (ϕk)〉∘ gh∗∘ g h ∘ gk = gk∗∘〈ϕ kPfk−1(V )(ϕk)〉∘ gk = FZ(V ). The last statement in the theorem is also proved by direct calculation. Let s = ∑ sjθV j ∈ V Z. Then we have (k ∘ h)∗([s]) = ∑ j(k ∘ h)∗(s j)Pfk∘h−1(V j)(ϕk∘h) = ∑ jh∗(k∗(s j))Pfh−1(fk−1(V j))(h∗(ϕ k)) = ∑ jh∗(k∗(s j))h∗(P fk−1(V j)(ϕk)) = h∗(∑ jk∗(s j)Pfk−1(V j)(ϕk)) = h∗(k∗(s)). Since the identity holds on a dense subset is also holds for all elements in HZ and this proves the theorem. □ We now can use this Theorem to define composition of mappings Definition 16. Let h : X → Y and k : Y → Zbe mappings of extended probability spaces. Then k ∘ his the composition of kand h. It is now straight forward to prove that composition of mappings is associative. Theorem 17. Let h : X → Y , k : Y → Zand r : Z → Tbe mappings of extended probability spaces. Then we have r ∘ (k ∘ h) = (r ∘ k) ∘ h. Proof. Clearly we have fr∘(k∘h) = f(r∘k)∘h and gr∘(k∘h) = g(r∘k)∘h. And from the previous theorem we have ϕr∘(k∘h) = (k ∘ h)∗(ϕ r) = h∗(k∗(ϕ r)) ϕ(r∘k)∘h = h∗(ϕ r∘k) = h∗(k∗(ϕ r)) □ Extended probability spaces and mappings of extended probability spaces does unfortunately not form a category, we will in general not have unit morphisms. 
For a given extended probability space X = 〈ΩX,B(τX),FX〉 the only reasonable candidate for a unit morphism is 1X = 〈1ΩX, 1HX, 1HXθΩX〉. For this mapping it is easy to show the following. Proposition 18. For any mapping h : X → Y we have h ∘ 1X = h and 1Y ∘ h = 〈fh,gh,Qhϕh〉. Thus the mapping 1X is not a unit morphism in the categorical sense unless gh is an isomorphism. It is for this reason that we distinguish between mappings and the yet to be defined morphisms. Morphisms will be defined in terms of an equivalence relation on mappings. Recall that for any mapping h : X → Y , Qh : HX → gh(HY ) is the orthogonal projection on the closed subspace gh(HY ). Definition 19. Two mappings h,k : X → Y of extended probability spaces are equivalent if fh = fk, gh = gk, Qhϕh = Qkϕk. If h and k are equivalent we will write h ≈ k. The defined relation is an equivalence relation. In order to define morphisms we must show that composition of mappings extends to equivalence classes of mappings. For this we need the following two lemmas. Lemma 20. Let h : X → Y and k : Y → Z be mappings of extended probability spaces. Then Qk∘h = h∗(Qk). Proof. For any ξ ∈ HX , Qk∘h(ξ) is the unique vector in gh(gk(HZ)) such that ξ − Qk∘h(ξ) is orthogonal to gh(gk(HZ)). But for any η = gh(gk(α)) in gh(gk(HZ)) we have 〈ξ − h∗(Qk)(ξ),η〉 = 〈ξ − (gh ∘ Qk ∘ gh∗)(ξ),gh(gk(α))〉 = 〈gk∗(gh∗(ξ)) − (gk∗∘ gh∗∘ gh ∘ Qk ∘ gh∗)(ξ),α〉 = 〈gk∗(gh∗(ξ)) − gk∗(gh∗(ξ)),α〉 = 0. Therefore by uniqueness Qk∘h(ξ) = h∗(Qk)(ξ). □ Lemma 21. Let h,h′ : X → Y be equivalent mappings of extended probability spaces. Then h∗ = h′∗. Proof. We only need to verify the identity on the dense subset HY ˜ ⊂HY . But for any [s] ∈HY ˜ with s = ∑ siθV i we have h′∗([s]) = ∑ ih′∗(si)Pfh′−1(V i)(ϕh′) = ∑ i(gh′∘ si ∘ gh′−1 ∘ Qh′)Pfh′−1(V i)(ϕh′) = ∑ i(gh ∘ si ∘ gh−1)Qh′Pfh−1(V i)(ϕh′) = ∑ i(gh ∘ si ∘ gh−1)Pfh−1(V i)(Qh′ϕh′) = ∑ i(gh ∘ si ∘ gh−1)Pfh−1(V i)(Qhϕh) = h∗([s]). □ We can now prove that composition is well defined on classes. Proposition 22. Let h, h′ : X → Y be equivalent and k, k′ : Y → Z be equivalent. Then k ∘ h ≈ k′∘ h′. Proof. We only need to prove that Qk∘hϕk∘h = Qk′∘h′ϕk′∘h′.
But using the previous two lemmas we have Qk∘hϕk∘h = h∗(Qk)h∗(ϕk) = h∗(Qkϕk) = h′∗(Qk′ϕk′) = Qk′∘h′(ϕk′∘h′). □ Definition 23. A morphism between extended probability spaces X and Y is an equivalence class, [h], of mappings h : X → Y . In order to keep the notation simple we will always denote a morphism [h] by a representative mapping h. Thus when we speak of a morphism h we mean the class [h]. The meaning will always be clear; we just have to make sure that any operations involving morphisms do not depend on the choice of representative. We can now formulate the main result of this subsection. Theorem 24. Extended probability spaces and morphisms of extended probability spaces form a category. Proof. We know that composition is well defined and associative. For any object X, let the unit mapping be 1X = 〈1ΩX, 1HX, 1HXθΩX〉. From proposition 18 we have for any morphism h : X → Y h ∘ 1X ≈ h, 1Y ∘ h = 〈fh,gh,Qhϕh〉≈ h because Qh is a projection. □ We know that the category of probability spaces [11] has a terminal object, T , in the categorical sense: there is a unique morphism from any probability space X to T . Here T = 〈ΩT ,BT ,μT 〉 with ΩT = {∗}, BT = {∅,{∗}} and μT the only possible probability measure on BT . The existence of T makes it possible to define points in probability spaces categorically. We will now see that the category of extended probability spaces does not have a terminal object and thus extended probability spaces will not have points in the categorical sense, but only generalized points. The only possible candidate for a terminal object in the category of extended probability spaces is the object T = 〈ΩT ,BT ,FT 〉 where FT : BT →O(ℝ) ≈ ℝ is the only possible positive operator valued measure, FT (ΩT ) = 1ℝ. We will now show that T is in fact not a terminal object. Let h : X → T be any morphism of extended probability spaces. We have h = 〈fh,gh,ϕh〉 and clearly fh : ΩX → ΩT = {∗} is unique. The map gh : ℝ → HX is an isometry and is therefore determined by a vector ξh ∈ HX where 〈ξh,ξh〉 = 1 and gh(1) = ξh.
The vector ξh and element ϕh ∈HX must satisfy the single condition h∗FX(ΩT ) = FT (ΩT ) = 1ℝ. Using the definition of h∗ we find that the following identity must be satisfied 〈〈ϕh,ϕh〉(ξh),ξh〉 = 1, and clearly this identity will be satisfied by many choices of ϕh and ξh. Thus the morphism h is not uniquely determined and therefore T is not a terminal object. 5.2. The Naimark functor. In probability theory there is a certain functor that plays a major role in the theory. We will now review the construction of this functor and show that an analogous functor is defined on the category of extended probability spaces. The existence of this functor testifies to the naturalness of our constructions. The functor will be called the Naimark functor since the Naimark dilatation construction plays a major role in its construction. Let us start with a review of the functor for the case of probability spaces. For any probability space X = 〈ΩX,B(τX),μX〉 define a Hilbert space, denoted by L2(X), by L2(X) = L2(μX). Let X = 〈ΩX,B(τX),μX〉 and Y = 〈ΩY ,B(τY ),μY 〉 be two probability spaces and let f : ΩX → ΩY be a morphism of probability spaces in the sense that μY (V ) = ∫ f−1(V )ρdμX. Define a mapping L2(f) : L2(Y ) → L2(X) by L2(f)(ξ) = √ρ (ξ ∘ f). It is easy to verify, using the Radon–Nikodym theorem, that L2(f) is in fact an isometry and moreover that L2 is a functor from the category of probability spaces to the category of Hilbert spaces. We will now show that it is possible to define a functor, also denoted by L2, from the category of extended probability spaces to the category of Hilbert spaces that for probability spaces reduces to the functor discussed above. Let X and Y be extended probability spaces and let L2(X) and L2(Y ) be the corresponding Hilbert spaces of random vectors.
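For finite probability spaces the isometry property of L²(f) can be checked by hand. The sketch below (illustrative data only) uses the map ξ ↦ √ρ · (ξ ∘ f), where ρ : Ω_Y → ℝ is a density relating μ_Y to the push forward of μ_X; the square root is exactly what makes the map norm preserving.

```python
from math import sqrt

omega_X = ["a", "b", "c", "d"]
omega_Y = [0, 1]
f = {"a": 0, "b": 0, "c": 1, "d": 1}             # morphism of sample spaces
mu_X = {"a": 0.1, "b": 0.4, "c": 0.2, "d": 0.3}  # measure on Omega_X
rho = {0: 1.2, 1: 0.8}                           # density on Omega_Y
# mu_Y(V) = integral over f^{-1}(V) of rho o f d mu_X
mu_Y = {y: rho[y] * sum(mu_X[x] for x in omega_X if f[x] == y) for y in omega_Y}

def L2f(xi):
    """L2(f)(xi) = sqrt(rho o f) * (xi o f), a function on Omega_X."""
    return {x: sqrt(rho[f[x]]) * xi[f[x]] for x in omega_X}

def norm2_X(u):
    return sum(u[x] ** 2 * mu_X[x] for x in omega_X)

def norm2_Y(v):
    return sum(v[y] ** 2 * mu_Y[y] for y in omega_Y)

xi = {0: 3.0, 1: -1.5}
assert abs(sum(mu_Y.values()) - 1.0) < 1e-12        # mu_Y is a probability measure
assert abs(norm2_X(L2f(xi)) - norm2_Y(xi)) < 1e-9   # L2(f) preserves the norm
```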
Informally to any morphism h : X → Y of extended probability spaces we will define a isometry L2 (h) : L2(Y ) → L2(X) by the formula L2(h)(ξ)(x) = ϕh∗(x)(g h((ξ ∘ fh)(x))) It is easy to see that the mapping L2 (f) is a special case of this general formula. Of course we can not use this formula to actually define L2 (h) since elements in L2 (Y ) are not vector functions and elements in HX are not operator valued functions. The action of elements in HX on L2 (X) implied by the formula must also be made sense of and since morphisms are classes of mappings we need to prove independence of representative.. We will now prove that the map L2(h) exists and that it defines a functor. Recall that if SY denote the space of simple HY valued functions with inner product 〈v,w〉 = ∑ i,j〈FY (V i ∩ Tj)ξi,ηj〉HY then L2(Y ) is the closure of TY = {[v]∣v ∈ SY } where [v] = 0 iff 〈v, v〉 = 0. For any extended probability space, V X is the linear space of simple operator valued functions occurring in the construction of the Hilbert module HX. For a measurable map f : ΩX → ΩY ,a isometry g : HY → HX and a element v = ∑ iξiθV i ∈ SY define a linear map tvf,g : V X → L2(X) by tvf,g(s) = [∑ i,jsj∗(g(ξ i))θf−1(V i)∩Wj] where s = ∑ jsjθWj ∈ V X. Proof. Let v = ∑ iξiθV i and s = ∑ jsjθWj. Then we have 〈tvf,g(s),t vf,g(s)〉 = ∑ i,j〈FX(Wj ∩ f−1(V i))sj∗(g(ξ i)),sj∗(g(ξ i))〉HX = ∑ i,j〈(sjFX(Wj ∩ f−1(V i))sj∗)(g(ξ i)),g(ξi)〉HX = ∑ i〈〈s,Pf−1(V i)(s)〉(g(ξi)),g(ξi)〉HX ≤∑ i〈〈s,s〉(g(ξi)),g(ξi) 〉HX ≤ cv,g∣∣s∣∣2. In the last line we used the Cauchy-Swartz inequality and the definition of the norm in the Hilbert module. □ This lemma implies that if [s] = 0 then [tvf,g(s)] = 0 and therefore we can extend tvf,g to a bounded linear map tvf,g : H X → L2(X). It is defined on the dense subset HX ˜ by tvf,g([s]) = [t vf,g The following proposition sets the stage for proving the existence of the Naimark functor. Proposition 26. Let h : X → Y be a mapping of extended probability spaces. 
Then there exists a isometry L2 (h) : L2(Y ) → L2(X)that is defined on the dense subset T Y by L2(h)([v]) = tvfh,gh (ϕh), and that satisfy L2(k ∘ h) = L2(h) ∘ L2(k), L2(1X) = 1L2(X). Proof. We will start by showing that tvfh,gh only depends on the class of v. Let {sn} be a sequence of elements in HX converging to ϕh. For each n we can define a positive operator valued measure on 〈ΩY ,B(τY )〉 acting on the Hilbert space HY by FY n(V ) = g∗∘〈s n,Pf−1(V )(sn)〉∘ g. By continuity FY n(V ) → F Y (V ) strongly and thus weakly. But then we have 〈tvfh,gh (ϕh),tvfh,gh (ϕh)〉 = lim n→∞〈tvfh,gh (sn),tvfh,gh (sn)〉 = lim n→∞∑ i〈〈sn,Pf−1(V i)(sn)〉(g(ξi)),g(ξi)〉HX = lim n→∞∑ i〈(g∗∘〈s n,Pf−1(V i)(sn)〉∘ g)(ξi)),ξi〉HY = lim n→∞∑ i〈FY n(V i)(ξi),ξi〉HY = ∑ i〈FY (V i)ξi,ξi〉HY = 〈v,v〉. The assumption [v] = 0 means that 〈v,v〉 = 0, so tv fh,gh depends only on the class of v. Therefore L2 (h) is well defined on the dense subset TY and the argument just given show that it is a isometry. It therefore extends to a isometry from L2(Y ) to L2(X). For the last part of the Theorem let [sn ] and [tm ] be sequences in HX and HY converging to ϕh and ϕk. Here sn = ∑ lsnlθWnl and tm = ∑ jtmjθTmj. For [v] ∈ TZ ⊂ L2(Z) with v = ∑ iξiθV i we have by continuity of all maps involved that if we define [u] ∈ TY ⊂ L2(Y ) by um = ∑ i,jtmj∗(g k(ξi))θfk−1(V i)∩Tmj then we have L2(h) ∘ L2(k)([v]) = L2(h)(tvfk,gk (ϕk)) = L2(h)(tvfk,gk (lim m→∞[tm])) = lim m→∞L2(h)(tvfk,gk ([tm])) = lim m→∞L2(h)(∑ i,jtmj∗(g k(ξi))θfk−1(V i)∩Tmj) = lim m→∞L2(h)([um]) = lim m→∞tumfh,gh (ϕh) = lim m→∞ lim n→∞tumfh,gh ([sn]) = lim m→∞ lim n→∞∑ i,j,l(snl∗∘ g h ∘ tmj∗∘ g k)(ξi)θfh−1(fk−1(V i)∩Tmj)∩Wnl. Note that h∗([t m]) = ∑ jh∗(t mj)Pfh−1(Tmj)(ϕh) = ∑ jh∗(t mj)Pfh−1(Tmj)(lim n→∞[sn]) = lim n→∞∑ j,lh∗(t mj)snlθfh−1(Tmj)∩Wnl. 
We have L2(k ∘ h)([v]) = tvfk∘h,gk∘h (ϕk∘h) = tvfk∘h,gk∘h (h∗(ϕ k)) = tvfk∘h,gk∘h (h∗(lim m→∞[tm])) = lim m→∞tvfk∘h,gk∘h (h∗([t m])) = lim m→∞tvfk∘h,gk∘h (lim n→∞∑ j,lh∗(t mj)snlθfh−1(Tmj)∩Wnl) = lim m→∞ lim n→∞∑ i,j,l(h∗(t mj)snl)∗(g k∘h(ξi))θfk∘h−1(V i)∩fh−1(Tmj)∩Wnl = lim m→∞ lim n→∞∑ i,j,l(snl∗∘ g h ∘ tmj∗∘ g h∗∘ g h ∘ gk)(ξi)θfh−1(fk−1(V i)∩Tmj)∩Wnl = lim m→∞ lim n→∞∑ i,j,l(snl∗∘ g h ∘ tmj∗∘ g k) (ξi)θfh−1(fk−1(V i)∩Tmj)∩Wnl. The last statement of the theorem is verified by a trivial calculation. □ We are now finally ready to prove the existence of the Naimark functor. Theorem 27. L2 (h)is a well defined functor from the category of extended probability spaces to the category of Hilbert spaces. Proof. We only need to prove that L2 (h) is well defined for a given morphism h. The functorial properties follows from the previous proposition. Assume h ≈ h′. Let us first assume that the densities of h and h′ are [s] and [s′]. We can without loss of generality assume that s and s′ are of the form s = ∑ isiθWi, s′ = ∑ isi′θ Wi, since we can bring it to this form by the same construction as in lemma 29. The equivalence then amounts to Qh si = Qh′si′ for all i. Then on the dense subset TY ⊂ L2(Y ) we have for v = ∑ ξiθV i L2(h)([v]) = ∑ i,jsj∗(g h(ξi))θfh−1(V i)∩Wj = ∑ i,j(sj∗∘ Q h ∘ gh)(ξi)θfh−1(V i)∩Wj = ∑ i,j(sj′∗∘ Q h′∘ gh′)(ξi)θfh′−1(V i)∩Wj = L2(h′)([v]). The case for general densities follows by continuity. □ The Naimark functor L2 is not the only functor occurring in this theory. In fact if we recall the properties of the pullback operation h → h∗ defined earlier in this section we can define a second Theorem 28. For any extended probability space X, define a Hilbert module H(X) = HXand for any morphism h : X → Y of extended probability spaces define a morphism of Hilbert modules H(h) = h∗. Then H is a functor from the category of extended probability spaces to the category of Hilbert modules. 
For the case of probability spaces the Hilbert module H(X) and the space of random vectors L2(X) are both isomorphic to the Hilbert space of square integrable real valued function. This is why random variables and densities appear to be taken from the same space in probability theory. But this is a very special situation. If the underlying Hilbert space is not one dimensional but two dimensional the densities and random vectors start to reveal their different nature. As we have discussed previously for this case a important subclass of densities are the one whose values are contained in the conformal group of the plane. These densities form a sub-Hilbert module that is actually a isomorphic to the complex Hilbert space of complex valued functions. 6. Monoidal structure on the category of extended probability spaces In probability theory the notion of product measures and product densities play a major role. It is through these that dependence and independence for random variables are defined. From a categorical point of view the situation is summarized by saying that the category of probability spaces supports a monoidal structure. We will now show that the category of extended probability spaces also supports a monoidal structures and that as a consequence the notions of dependence and independence can be defined. Let us start by reviewing the notion of a monoidal structure for a category. A monoidal structure in a category is basically a product in the category that is associative up to natural isomorphism and has a unit object up to natural isomorphism. What this means is that if X,Y and Z are objects in the category and if the product is denoted by ⊗ then we require that there exists a isomorphism αXY Z : X ⊗ (Y ⊗ Z) →(X ⊗ Y ) ⊗ Z. Similarly if I is the unit object we require that there exists isomorphisms βX : I ⊗ X → X and γX : X ⊗ I → X. 
The isomorphisms cannot be chosen arbitrarily for different objects; they must form the components of a natural transformation. In addition they must satisfy a set of equations known as the Mac Lane coherence conditions. These equations ensure that associativity and unit isomorphisms can be extended consistently to products of finitely many objects. The conditions that must be satisfied by α,γ and β are the following. For all objects X,Y ,Z and T we must have αX⊗Y,Z,T ∘ αX,Y,Z⊗T = (αX,Y,Z ⊗ 1T ) ∘ αX,Y ⊗Z,T ∘ (1X ⊗ αY,Z,T ), (γX ⊗ 1Y ) ∘ αX,I,Y = 1X ⊗ βY , γI = βI. These are the Mac Lane coherence conditions. The naturality conditions are expressed as follows. For any arrows f : X → X′, g : Y → Y ′ and h : Z → Z′ we must have ((f ⊗ g) ⊗ h) ∘ αX,Y,Z = αX′,Y ′,Z′∘ (f ⊗ (g ⊗ h)), f ∘ βX = βX′∘ (1I ⊗ f), f ∘ γX = γX′∘ (f ⊗ 1I). In general such equations are difficult to solve; there is a very large number of variables and equations. However in some simple situations the naturality conditions can be used to reduce the system of equations to a much smaller set. The reader not familiar with categories, natural transformations and coherence conditions might want to consult the book [8] for an elementary introduction to the categorical view of mathematics; a more advanced introduction can be found in the book [9]. The notion of product measures in probability theory has of course been known for a long time. The corresponding monoidal structure in the category of probability spaces is described in detail in [11]. The main features are as follows. For two probability spaces X = 〈ΩX,B(τX),μX〉 and Y = 〈ΩY ,B(τY ),μY 〉 their product is the probability space X ⊗ Y = 〈ΩX × ΩY ,B(τX ⊗ τY ),μX ⊗ μY 〉, where μX ⊗ μY is the product measure. The product of two morphisms f : X → Y and g : X′ → Y ′ is a morphism f ⊗ g : X ⊗ X′ → Y ⊗ Y ′ where f ⊗ g = f × g is just the Cartesian product of the maps f and g.
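Before turning to the operator-valued case, the classical product construction summarized above is easy to exercise on finite spaces. The sketch below (illustrative data only) builds μ ⊗ ν from its values on boxes, (μ ⊗ ν)(C × D) = μ(C)ν(D), and checks that it is a probability measure whose marginals recover μ and ν.

```python
mu = {0: 0.3, 1: 0.7}                # measure on Omega_X = {0, 1}
nu = {"u": 0.5, "v": 0.2, "w": 0.3}  # measure on Omega_Y = {u, v, w}

# the product measure is determined by its values on one-point boxes
prod = {(x, y): mu[x] * nu[y] for x in mu for y in nu}

def measure(E):
    """Value of mu (x) nu on an arbitrary subset E of the product space."""
    return sum(prod[p] for p in E)

# box formula: (mu (x) nu)(C x D) = mu(C) * nu(D)
C, D = [1], ["u", "w"]
box = [(x, y) for x in C for y in D]
assert abs(measure(box) - sum(mu[x] for x in C) * sum(nu[y] for y in D)) < 1e-12
# total mass one, and the first marginal recovers mu
assert abs(sum(prod.values()) - 1.0) < 1e-12
assert abs(measure([(0, y) for y in nu]) - mu[0]) < 1e-12
```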
The associativity and unit isomorphisms are just the usual ones from the category of sets, αXY Z((x, (y,z))) = ((x,y),z), βX((∗,x)) = x, and γX((x,∗)) = x. For the category of probability spaces this choice of α, β and γ are the only possible ones, as we show in [11]. The unit object for the monoidal structure is the trivial, one-point probability space. 6.1. Product of extended probability spaces and morphisms. We will now define the product of extended probability spaces and morphisms and show that this product is a bifunctor on the category of extended probability spaces. Let X = 〈ΩX,B(τX),FX〉 and Y = 〈ΩY ,B(τY ),FY 〉 be two extended probability spaces. The product of the two positive operator valued measures FX and FY always exists and is uniquely determined [1] by its value on measurable boxes by (FX ⊗ FY )(C × D) = FX(C) ⊗ FY (D). The product measure acts on the Hilbert space HX ⊗ HY . The tensor product is the Hilbert tensor product. We now need to extend the product to morphisms and show that it is a bifunctor. Before we do this we must specify the relationship between the Hilbert modules HX ⊗HY and HX⊗Y . We will show that, as expected, we can map the first into the second using a continuous injective module morphism. We will start by constructing this morphism. Recall that for any extended probability space X, HX is the completion of the dense subspace HX ˜ = {[s]∣s ∈ V X} and V X = {s = ∑ isiθV i∣si ∈O(HX),{V i} is a B(τX) measurable partition of ΩX} is the real linear space of simple O(HX) valued measurable functions on ΩX. For a pair of extended probability spaces define a map γXY : V X × V Y → V X⊗Y by γXY (s,t) = ∑ i,j(si ⊗ tj)θV i×Wj, where s = ∑ siθV i and t = ∑ tjθWj. For this map we have the following lemma. Lemma 29. The map γXY is bilinear, satisfies 〈γXY (s,t),γXY (s,t)〉 = 〈s,s〉⊗〈t,t〉, and [s] = 0 or [t] = 0 implies [γXY (s,t)] = 0. Proof. We evidently have γXY (as,t) = γXY (s,at) for all real numbers a. Let s = ∑ i=1nsiθV i and r = ∑ k=1mrkθCk be two elements in V X.
Define a new sequence of sets {Al} where Al = V l for l = 1..n and Al = Cl−n for l = n + 1,....n + m and let L = {1, 2,...n + m}. Let S = {σ : L → ℤ2} be the set of all ℤ2 = {−1, +1} valued functions on the index set L. The set S is a index set for a new partition, {Tσ} σ∈S of the set ΩX defined by Tσ = ∩ l∈LAlσ(l), where for any set U we define U+1 = U and U−1 = Uc, the complement of U. We evidently have V i = ∪{σ∣σ(i)=1}Tσ, Ck = ∪{σ∣σ(n+k)=1}Tσ. s + r = ∑ σ ∑ {i∣σ(i)=1}si + ∑ {k∣σ(k+n)=1}rk θTσ. But then we have for any t = ∑ tjθWj ∈ V Y that γXY (s + r,t) = ∑ σ,j ∑ {i∣σ(i)=1}si + ∑ {k∣σ(k+n)=1}rk ⊗ tj θTσ×Wj = ∑ σ,j∑ {i∣σ(i)=1}(si ⊗ tj)θTσ×Wj + ∑ σ,j∑ {k∣σ(k+n)=1}(rk ⊗ tj)θTσ×Wj = ∑ i,j(si ⊗ tj)∑ {σ∣σ(i)=1}θTσ×Wj + ∑ k,j(rk ⊗ tj)∑ {σ∣σ (n+k)=1}θTσ×Wj = ∑ i,j(si ⊗ tj)θV i×Wj + ∑ k,j(rk ⊗ tj)θCk×Wj = γXY (s,t) + γXY (r,t). This show that γ is bilinear. For the second part of the statement in the lemma we have 〈γXY (s,t),γXY (s,t)〉 = ∑ i,j,k,l(si ⊗ tj)FX⊗Y ((V i × Wj) ∩ (V k × Wl))(sk ⊗ tl)∗ = ∑ i,j,k,l(si ⊗ tj)(FX(V i ∩ V k) ⊗ FY (Wj ∩ Wl))(sk∗⊗ t l∗) = ∑ i,j(si ⊗ tj)(FX(V i) ⊗ FY (Wj))(si∗⊗ t j∗) = ∑ isiFX(V i)si∗⊗∑ jtjFY (Wj)tj∗ = 〈s,s〉⊗〈t,t〉. But [s] = 0 implies that 〈s,s〉 = 0 and the identity just derived then implies that 〈γXY (s,t),γXY (s,t)〉 = 0 and therefore by definition [γXY (s,t)] = 0. □ Using the lemma we have a well linear map, also denoted by γXY , from HX ˜ ⊗HY ˜ to HX⊗Y ˜ γXY ([s] ⊗ [t]) = [γXY (s,t)]. The map γXY satisfy the following important identity Proof. Any v ∈HX ˜ ⊗HY ˜ is of the form v = ∑ isi ⊗ ti where si = ∑ jsijθV ij and ti = ∑ ktikθWik. But then we have 〈γXY (v),γXY (v)〉 = ∑ i,j,k,l,m,n(sij ⊗ tik)FX⊗Y ((V ij × Wik) ∩ (V lm × Wln))(slm ⊗ tln)∗ = ∑ i,j,k,l,m,n(sij ⊗ tik)(FX(V ij ∩ V lm) ⊗ FY (Wik ∩ Wln))(slm∗⊗ t ln∗) = ∑ i,l ∑ j,msijFX(V ij ∩ V lm) slm∗⊗∑ k,ntikFY (Wik ∩ Wln)tln∗ = ∑ i,l〈si,sl〉⊗〈ti,tl〉 = ∑ i,l〈si ⊗ ti,sl ⊗ tl〉 = 〈v,v〉. □ We can now state and prove the main property of γXY . 
First we will recall some facts about (external) tensor products of Hilbert modules. Let HX ⊗HHY denote the tensor product of HX and HY , as real vector spaces, with topology determined by the norm induced from the operator valued inner product 〈ϕ ⊗ ψ,ϕ′⊗ ψ′〉 = 〈ϕ,ϕ′〉⊗〈ψ,ψ′〉. The completion of HX ⊗HHY is the external tensor product [2] of the Hilbert modules HX and HY and will be denoted by HX ⊗HY . It is a module over the spatial tensor product O(HX) ⊗O(HY ) [12] of the represented C∗-algebras O(HX) and O(HY ). Proposition 31. There exists an injective morphism of Hilbert modules γXY : HX ⊗HY →HX⊗Y such that 〈γXY (v),γXY (v)〉 = 〈v,v〉. HX ˜ ⊗HHY ˜ is a dense subspace of HX ⊗HY and on this dense subspace γXY is given by γXY ([s] ⊗ [t]) = [γXY (s,t)]. Proof. Let HX ˜ ⊗πHY ˜ and HX ⊗πHY be the projective tensor products [6] of the underlying real vector spaces. Note that the tensor product spaces have not been completed with respect to the projective norm. The embedding HX ˜ ⊗πHY ˜↪HX ⊗πHY is known to exist and to be dense [6]. The norm on HX ˜ ⊗HHY ˜ and HX ⊗HHY induced by the operator valued inner product is evidently a cross norm and it is known that the projective norm is the largest possible cross norm. Therefore we can conclude that HX ˜ ⊗HHY ˜ is a dense subspace of HX ⊗HHY and thus, passing to completions, also of HX ⊗HY . By the previous lemma γXY is bounded and therefore extends uniquely to a bounded map γXY : HX ⊗HY →HX⊗Y . The first identity in the statement of the proposition follows from the previous lemma and the continuity of the operator valued inner product. □ In order to introduce the tensor product of morphisms between extended probability spaces we need the previous proposition and the following lemma. Lemma 32. For any measurable sets C ∈B(τX) and D ∈B(τY ) we have the identity γXY ∘ (PC ⊗ PD) = PC×D ∘ γXY . Proof.
For C ∈B(τX) and D ∈B(τY ) we have (γXY ∘ (PC ⊗ PD))([s] ⊗ [t]) = γXY ([PC(s)] ⊗ [PD(t)]) = ∑ i,j(si ⊗ tj)θ(V i∩C)×(Wj∩D) = ∑ i,j(si ⊗ tj)θ(V i×Wj)∩(C×D) = PC×D(γXY ([s] ⊗ [t]). By continuity and density we can conclude that the identity γXY ∘ (PC ⊗ PD) = PC×D ∘ γXY holds on HX ⊗HY . □ Let now h : X → Y and k : X′ → Y ′ be morphisms of extended probability spaces. We thus have h = 〈fh,gh,ϕh〉 and k = 〈fk,gk,ϕk〉 where ϕh ∈HX and ϕk ∈HX′. Define a 3-tuple h ⊗ k by h ⊗ k = 〈fh⊗h,gh⊗k,ϕh⊗k〉, where fh⊗k = fh × fk , gh⊗k = gh ⊗ gk and ϕh⊗k = γXX′(ϕh ⊗ ϕk). Then we have Proof. We need to prove that (h ⊗ k)∗FX⊗X′ = FY ⊗Y ′. But this is true because (h ⊗ k)∗FX⊗X′(C × D) = gh⊗k∗∘〈ϕ h⊗k,Pfh⊗k−1(C×D)(ϕh⊗k)〉∘ gh⊗k = (gh ⊗ gk)∗∘〈γ XX′(ϕh ⊗ ϕk), (Pfh⊗k−1(C×D) ∘ γXX′)(ϕh ⊗ ϕk))〉∘ (gh ⊗ gk) = (gh∗⊗ g k∗) ∘〈γ XX′(ϕh ⊗ ϕk), (γXX′∘ (Pfh−1(C) ⊗ Pfk−1 (D)))(ϕh ⊗ ϕk)〉∘ (gh ⊗ gk) = (gh∗⊗ g k∗) ∘〈ϕ h ⊗ ϕk,Pfh−1(C)(ϕh) ⊗ Pfk−1(D)(ϕk))〉∘ (gh ⊗ gk) = (gh∗∘〈ϕ h,Pfh−1(C)(ϕh)〉∘ gh) ⊗ (gk∗∘〈ϕ k,Pfk−1(D)(ϕk)〉∘ gk) = (h∗FX)(C) ⊗ (k∗FX′)(D) = FY (C) ⊗ FY ′(D) = FY ⊗Y ′(C × D), where we have used the previous lemma. This proves that h ⊗ k is a mapping of extended probability spaces. In order to show that it is also a morphism we must show that it is independent of choice of representatives. Thus assume that h ≈ h′ and k ≈ k′. We need to show that h ⊗ k ≈ h′⊗ k′ and this amounts to proving that Qh⊗kϕh⊗k = Qh′⊗k′ϕh′⊗k′. But from the identity (gh ⊗ gk)(HX ⊗ HX′) = gh(HX) ⊗ gk(HX′) we have Qh⊗k = Qh ⊗ Qk and the rest of the proof is a simple calculation. □ Having proved that h ⊗ k is a morphism our next goal is to prove that it behaves as a functor under composition. For this we need the following lemma. Proof. By continuity we only need to prove the identity on the dense subset HY ˜ ⊗HHY ′˜ ⊂HY ⊗HY ′. 
But on this subset we have

$((h \otimes k)^{*} \circ \gamma_{YY'})([s] \otimes [t]) = (h \otimes k)^{*}(\gamma_{YY'}([s] \otimes [t])) = \sum_{i,j}(h \otimes k)^{*}(s_i \otimes t_j)\,P_{(f_h \times f_k)^{-1}(V_i \times W_j)}(\varphi_{h \otimes k})$
$= \sum_{i,j}(h^{*}(s_i) \otimes k^{*}(t_j))\,(P_{(f_h \times f_k)^{-1}(V_i \times W_j)} \circ \gamma_{XX'})(\varphi_h \otimes \varphi_k)$
$= \sum_{i,j}(h^{*}(s_i) \otimes k^{*}(t_j))\,(\gamma_{XX'} \circ (P_{f_h^{-1}(V_i)} \otimes P_{f_k^{-1}(W_j)}))(\varphi_h \otimes \varphi_k)$
$= \gamma_{XX'}\Big(\sum_{i,j}(h^{*}(s_i)P_{f_h^{-1}(V_i)}(\varphi_h)) \otimes (k^{*}(t_j)P_{f_k^{-1}(W_j)}(\varphi_k))\Big)$
$= (\gamma_{XX'} \circ (h^{*} \otimes k^{*}))([s] \otimes [t]).$ □

We can now prove our first main result in this section.

Theorem 35. The operation $\otimes$ is a bifunctor on the category of extended probability spaces:

$(h' \otimes k') \circ (h \otimes k) = (h' \circ h) \otimes (k' \circ k), \qquad 1_X \otimes 1_Y = 1_{X \otimes Y}.$

Proof. The unit property is trivial to verify, and for the first identity we only need to prove that $\gamma_{XX'}(\varphi_{k \circ h} \otimes \varphi_{k' \circ h'}) = (h \otimes h')^{*}(\varphi_{k \otimes k'})$. But using the previous lemma we have

$\gamma_{XX'}(\varphi_{k \circ h} \otimes \varphi_{k' \circ h'}) = \gamma_{XX'}(h^{*}(\varphi_k) \otimes h'^{*}(\varphi_{k'})) = (\gamma_{XX'} \circ (h^{*} \otimes h'^{*}))(\varphi_k \otimes \varphi_{k'}) = ((h \otimes h')^{*} \circ \gamma_{YY'})(\varphi_k \otimes \varphi_{k'}) = (h \otimes h')^{*}(\varphi_{k \otimes k'}).$ □

6.2. The monoidal structure. Showing that $\otimes$ exists and is a bifunctor is the only hard part in proving that there is a monoidal structure on the category of extended probability spaces. The only reasonable candidate for a unit object is clearly the extended probability space $T$ discussed previously. For any objects $X$, $Y$ and $Z$ define

$\eta_X = \langle f_{\eta_X}, g_{\eta_X}, \varphi_{\eta_X} \rangle, \qquad \gamma_X = \langle f_{\gamma_X}, g_{\gamma_X}, \varphi_{\gamma_X} \rangle, \qquad \alpha_{XYZ} = \langle f_{\alpha_{XYZ}}, g_{\alpha_{XYZ}}, \varphi_{\alpha_{XYZ}} \rangle,$

where

$f_{\eta_X}(*, x) = f_{\gamma_X}(x, *) = x, \qquad f_{\alpha_{XYZ}}((x, (y,z))) = ((x,y), z),$
$g_{\eta_X}(\xi) = 1 \otimes \xi, \qquad g_{\gamma_X}(\xi) = \xi \otimes 1, \qquad g_{\alpha_{XYZ}}(\xi \otimes (\xi' \otimes \xi'')) = (\xi \otimes \xi') \otimes \xi'',$
$\varphi_{\eta_X} = 1_{H_{T \otimes X}}, \qquad \varphi_{\gamma_X} = 1_{H_{X \otimes T}}, \qquad \varphi_{\alpha_{XYZ}} = 1_{H_{X \otimes (Y \otimes Z)}}.$

These are obviously the simplest choices we can make, and it is a tedious but simple exercise to prove the following theorem. This is the second main result of this section.

Theorem 36. $\eta_X$, $\gamma_X$ and $\alpha_{XYZ}$ are morphisms of extended probability spaces

$\eta_X : T \otimes X \to X, \qquad \gamma_X : X \otimes T \to X, \qquad \alpha_{XYZ} : X \otimes (Y \otimes Z) \to (X \otimes Y) \otimes Z,$

and are the components of natural isomorphisms. Furthermore $\langle \otimes, T, \eta, \gamma, \alpha \rangle$ is a monoidal structure on the category of extended probability spaces.

UNIVERSITY OF TROMSØ, 9020 TROMSØ, NORWAY
E-mail address: perj@math.uit.no
E-mail address: lychagin@mat-stat.uit.no
Received October 1, 2004
September 2005

How to take a penalty: the mathematical curiosities of sport
Rob Eastaway and John Haigh

Have you ever wondered what shape a football is? No, it is not a sphere - it is far closer to something called a truncated icosahedron, also known as a "buckyball". It consists of 12 black pentagons and 20 white hexagons and is about the most effective way of creating something nearly spherical out of flat panels. Curious sporting-related mathematical facts like this can be found throughout Eastaway and Haigh's book "How to take a penalty, the hidden mathematics of sport".

As its name suggests, this book is about the mathematics hidden within the world of sport. It is not about any one sport in particular; actually many are discussed. The book is an engaging read and contains many interesting facts about sports, such as why the number of players in a football team is 11 (apparently this originates from the fact that a cricket team has this number, although the reason why this is so is more mysterious). It must be emphasised, though, that this book is not just a collection of facts. It shows how players of a game can benefit from a well thought-out strategy, the use of basic logic and probability theory. In fact, a lot of the mathematical analysis, which is relegated to an appendix, consists of basic probability. This is presented in a way that is accessible to a GCSE student.

Although the calculations involved are often basic, their results can be far from intuitive. For example, the best serving strategy in tennis is a risky fast first serve and a slow and safe second serve. In darts, if one wants to win two consecutive legs (a leg is won by the first to score 501 points) it is best to opt to throw second for the first leg, despite the fact that you are less likely to win the leg if you do so. The book also contains a little puzzle in each chapter, complete with solution.
As the authors point out, the initial plan was to divide the book up according to the various types of sports, subjecting each to a mathematical analysis. It soon transpired, however, that many issues, such as the winning of the toss of the coin before a game, or "subjective scoring", are common to many different types of sports. These, sometimes unexpected, similarities led the authors to adopt a more unified approach. The book has thus been divided into chapters which treat these common phenomena and skip between the different sports. Although it is hard to see how this could be avoided without duplicating material, the style, while generally very natural and colloquial, does sometimes suffer from these constant changes.

This book is packed with information, so much so that one would certainly benefit from a second read. At the same time, though, it is not too intense. In short, it is a collection of nice examples of how you can employ mathematical reasoning in everyday situations (such as sport) to improve your position and chances of winning.

Book details:
How to take a penalty: the mathematical curiosities of sport
Rob Eastaway and John Haigh
Hardback - 192 pages (2005)
Robson Books Ltd
ISBN: 1861058365

James Lucietti has recently submitted his PhD thesis in String Theory at the University of Cambridge.
The Case for Rigor: MLCS vs. Intermediate Algebra A frequent concern that arises when courses like MLCS are proposed is the issue of rigor. Often, it is assumed that an alternative to intermediate algebra is being developed because students can't pass intermediate algebra. And with that comes the debate of whether someone should get a college degree without being able to prove competency in high school algebra 2. The insinuation is that our current system is just too hard and folks looking for change are really looking to reduce standards. I do believe students are capable of passing intermediate algebra. Students in our school do very well in that course through our redesign. And through that success, they fare extremely well in college algebra, the true goal of intermediate algebra. I also believe intermediate algebra is a good course and one that we should not eliminate. The question is should they have to take that course? And are our reasons for requiring it outdated? Intermediate algebra is commonly used as a prerequisite for college level math because it weeds out students who are not college ready. In other words, it's a hoop. If our goal is evidence of college readiness and therefore rigor and high standards, we can get there in other ways than intermediate algebra. And in doing so, we can accomplish what I believe to be the real goal of developmental math: preparing students for the college level math courses they will take. We teach intermediate algebra because of history and tradition, which is not sufficient. For students headed to statistics or liberal arts math, there are no skills in intermediate algebra that will help them be successful. Most students see the course as an exercise in moving letters on a page and are often quite irritated in those two follow-up courses when they discover they didn't need any of the skills they worked so hard on. But intermediate algebra does have rigor and high standards. 
So it does accomplish one goal: putting stronger students in college level courses. Using it as one size fits all prerequisite is my issue. Read any article about the job market and what employers need and a theme is common: graduates lack skills necessary for the workforce. I believe the time we spend with students should be meaningful and of value to them. Not everything has to be immediately useful, but much should be. Or at least much more than we currently do should be useful. And the processes used to develop content should have meaning beyond the course. We should be preparing them for what's next in their program of study but also to be productive citizens and employees. In MLCS, there are some skills we work on that I'm very confident students will not use in real life. So why do we still include them? Because of the way they're developed and the additional skills and techniques students get along the way. For example, we do a problem about a school increasing tuition and the effects of loss of credit hour enrollment due to increases. We build a cost model, which is quadratic, and analyze it numerically and graphically. We then learn about the vertex, how to find it, what it signifies, and how to use it. In an intermediate algebra class, students are given quadratic functions and asked for the vertex. Then there are a handful of applications for students to practice using it. But the focus is on the symbolic manipulation. In MLCS, the focus is on problem solving, new functions that arise when problem solving, and ways to work with them. Students exercise skills they've already learned and extend their ability to analyze a situation. It's rigorous and difficult, but worth the class time spent on it. While I can't guarantee they'll use the skill developed in their daily life, I strongly believe they'll use the processes involved. When I'm teaching algebra to students who are not headed on the calculus track, I don't understand our goal anymore. 
That's why I've stopped teaching developmental algebra for students heading to statistics. I can't sell a course based on exercising one's brain. We could do Sudoku and chess for a semester and exercise our brains. That sounds absurd, but so does moving letters around for 4 months when students will never use that skill again. And do their brains really get exercised? We like to believe that happens because it justifies what we do. But I really question how much learning is taking place in developmental math classrooms. Students are mimicking and enduring but they're not retaining and applying. Learning is defined as: The acquisition of knowledge or skills through experience, practice, or study, or by being taught. Notice it's not the exposure to knowledge or skills; it's the acquisition of them. I don't believe our students are acquiring much from our developmental algebra classes. And with the amount of time and cost they spend there, that's not acceptable. So back to rigor: why does MLCS have it? And how is it possible to have a comparable level of rigor of intermediate algebra without the symbolic manipulation that intermediate algebra includes? Here's how: depth and expectations. Whatever we do in MLCS, we do it deeply and frequently. There is very little "one and done" of a topic. Every skill is developed because we need to use it. If we develop a Venn diagram, it's so that we can use it as a tool to make comparisons and gain further insight on a situation. For example, we use Venn diagrams to compare and contrast high school and college. We also use them to compare and contrast variables and constants, which is an important distinction. It's not, "graph y = 3x - 8." It's determining if a situation is linear, if a model will help solve further problems, and using that model's equation and graph to answer questions. Using a skill after you determined it should be used is much more difficult than performing a skill after being told when and how. 
But that's how life is. I don't get new projects with a detailed roadmap attached to them and "view an example." I get new projects and the instruction "make it happen." What, when, how and why is up to me to figure out. That's the way of the world and certainly the job environment. It's very beneficial for students to experience those types of challenges in the safe environment of the classroom. But every time you decide to go deeper with a skill, you lose time that would allow you to go further in breadth. That approach is one I've used for years in my statistics courses. I never get to ANOVA, but my students can collect real data and test hypotheses using it. They can obtain and analyze statistics. I sacrifice more topics for fewer topics done deeper, where real life activities are the norm. The end result is a hard course with great value. And I never once get asked "when am I going to use this?" A slight perk, but one that I cherish. The other component that ensures rigor are the expectations of the course. In our version of MLCS, we make students write, explain, and research problems including open ended problems. This approach makes them love MyMathLab problems because they're a very simple, boiled-down part of the course. But learning is about understanding (and showing that) as well as application. So half of their tests are applications. And not routine, previously seen, canned applications. Problems are truly problems and challenging. I've never given a developmental algebra test with more than 10% of the problems being word problems. 50% almost seems cruel. Yet that is far from the case in MLCS. And students can do them. The old adage of depth over breadth is truly exhibited in this challenging course. But students can rise to the challenges and in so doing, they reach the level of college ready. No, it's not intermediate algebra. It's just as hard, but it prepares the students for what's ahead of them. 
I absolutely believe intermediate algebra has value, just not for every student. The same could be said for my math for elementary teachers courses. They're wonderful, but I can't imagine a pre-med major getting much out of them.
Math Help

September 21st 2010, 05:45 PM #1, Sep 2010

My colleague's daughter wanted to be a stop sign for Halloween, and he needed to cut out such a sign from a piece of cardboard that was 3 feet by 4 feet. What cuts did he make so that the stop sign (a regular octagon) was as big as possible?

Intuitively, we want to make the height as large as possible, leaving a little left over material on the sides. (We could alternately think of it as making the sides as large as possible, leaving a little left over for the height; it doesn't matter.) Using this way of thinking, obviously, we're thinking of the cardboard as having the 3-foot side running vertically and the 4-foot side running horizontally. When we see the octagon as triangles and rectangles, we know that the interior rectangle has the same height as the sides of the octagon, which we'll call $a$. Looking at the triangles, then, we want to find out what is the leg length of a right triangle with hypotenuse $a$. Whatever that length is, call it $b$, we want to double it, add it to $a$, and set that equal to 3. When we solve for $a$, we'll know the value of $b$.

Thus he started a cut on the left side (could be on the right; again, not really important how you view this), $b$ units down from the top, at a 45-degree angle from the board (since the interior angle of an octagon is 135) until he cut that piece off. He did a symmetrical cut at the bottom. He also made a cut $a+2b$ units to the right of the upper-left corner (measured from the point before anything had been cut), straight down the cardboard. He then made cuts on the right that were symmetric to those on the left. I'll see if I can make and post a .pdf showing the picture that I used to think about this.

Adding some calculations to ragnar's description:
1. All sides of the octagon have the length $a$.
2. The width of the board is 3'. It is made up of the side $a$ plus the two corner legs, each of length $k$:
$2k + a = 3$
Since $a$ is the hypotenuse of an isosceles right triangle with side length $k$ you know:
$k^2+k^2 = a^2~\implies~k = \frac12 a \cdot \sqrt{2}$
Thus the 1st equation becomes:
$a\cdot \sqrt{2}+a=3~\implies~a=\dfrac{3}{1+\sqrt{2}}~\approx~1.24264'$

Elegant solution, earboth! Are there any simpler ways to do this?
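earboth's arithmetic is easy to check numerically. A quick sketch (assuming, as in the thread, that the 3-foot side of the board is the limiting dimension):

```python
import math

# Largest regular octagon from a 3' x 4' board: across the 3' width,
# the octagon side a plus two corner-triangle legs k must satisfy
#   2k + a = 3,  with  k = a*sqrt(2)/2,  so  a = 3/(1 + sqrt(2)).
a = 3 / (1 + math.sqrt(2))   # octagon side length, in feet
k = a * math.sqrt(2) / 2     # leg of each cut-off corner triangle, in feet

print(round(a, 5))           # 1.24264, matching earboth's value
print(round(2 * k + a, 6))   # 3.0 -- the full board width is used
```

The same two lines confirm that the cut positions in ragnar's description (b units from each corner, with b = k here) exactly use up the 3-foot dimension.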
Re: st: Re: Measuring Models/pseudo R-squared

From: Paul Millar <paul.millar@shaw.ca>
To: statalist@hsphsun2.harvard.edu
Subject: Re: st: Re: Measuring Models/pseudo R-squared
Date: Wed, 06 Apr 2005 13:11:39 -0600

This is an interesting article. The pre measure examined there uses the mean of the dependent variable as the standard for comparison. The ssc command -pre- uses the mode for categorical variables. It would be interesting to see how that measure performs and to see how it performs across model types.

- Paul

At 12:16 PM 06/04/2005, you wrote:
Alfred DeMaris wrote a paper on the performances of the various measures of pseudo-R-squared a few years back.

DeMaris, A. (2002) Explained Variance in Logistic Regression: A Monte Carlo Study of Proposed Measures. Sociological Methods & Research 31, 27-74.

----- Original Message -----
From: "Paul Millar" <paul.millar@shaw.ca>
To: <statalist@hsphsun2.harvard.edu>
Sent: Wednesday, April 06, 2005 8:04 PM
Subject: st: Measuring Models

Most of the various kinds of pseudo-R2s are attempts at providing the equivalent of the "variance explained" interpretation of the OLS R2. The other interpretation of R2 is the proportional reduction in errors when predicting the dependent variable, or PRE. This is a measure of the predictive capability of the model and can be calculated for other models as well - the ssc post-estimation command -pre- will calculate this for common model types (logit, ologit, mlogit, poisson and the like). Some don't like it because for some models it can actually be negative if the model is worse than predicting the mode (for example with logit or probit models that model a rare phenomenon). Nevertheless, I think it is useful to know how the model improves prediction capability - this might in fact be one of the more important measures of a model, yet it doesn't seem to be widely used.
I prefer the plain old Pseudo-R2 (the proportional improvement in the log-likelihood) for pseudo-R2s, since it is available for all models and is easily calculated and understood. It is somewhat analogous to the pre, in that it measures the improvement of the log-likelihood instead of the reduction of errors.

- Paul Millar
University of Calgary

* For searches and help try:
* http://www.stata.com/support/faqs/res/findit.html
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
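The two measures contrasted in the thread can each be written in a few lines. A sketch outside Stata (Python here, with made-up data for illustration — this is not output from -pre- or any Stata command):

```python
import math

# Toy binary outcome and fitted probabilities (made up for illustration).
y = [1, 1, 1, 0, 1, 0, 1, 1, 0, 1]
p = [0.9, 0.8, 0.7, 0.4, 0.4, 0.3, 0.8, 0.9, 0.2, 0.7]

# McFadden pseudo-R2: proportional improvement in the log-likelihood
# over the intercept-only (null) model, 1 - ll_model / ll_null.
ll_model = sum(math.log(pi if yi else 1 - pi) for yi, pi in zip(y, p))
pbar = sum(y) / len(y)
ll_null = sum(math.log(pbar if yi else 1 - pbar) for yi in y)
pseudo_r2 = 1 - ll_model / ll_null

# PRE: proportional reduction in errors versus always predicting the mode
# (the standard used by -pre- for categorical outcomes).
mode = max(set(y), key=y.count)
errors_null = sum(yi != mode for yi in y)
errors_model = sum(yi != (pi >= 0.5) for yi, pi in zip(y, p))
pre = 1 - errors_model / errors_null

print(round(pseudo_r2, 3), round(pre, 3))
```

With a rare outcome and a weak model, errors_model can exceed errors_null, which is exactly how PRE goes negative as described in the thread.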
Intersection of Circles

Date: 01/16/2002 at 06:36:21
From: Peter Knoben
Subject: Intersection of circles

I have two circles, one with radius R and centerpoint (a,b) and one with radius r and centerpoint (c,d). These two circles intersect. How can I find the coordinates of the intersection point(s)? I already tried the circle formula:

   (x-a)^2 + (y-b)^2 = R^2   and   (x-c)^2 + (y-d)^2 = r^2

but if I make an equation of these two formulas I get an equation with square roots and exponentials that I can not solve. If it's possible, I would prefer a solution without the use of sine, cosine, or tangent.

Best regards,

Date: 01/16/2002 at 12:43:18
From: Doctor Peterson
Subject: Re: Intersection of circles

Hi, Peter. You can solve the equations you gave, if you approach it the right way:

   (x-a)^2 + (y-b)^2 = R^2
   (x-c)^2 + (y-d)^2 = r^2

If you expand each of these and then subtract one from the other, you will eliminate the squares, and will be left with a linear equation that you can easily solve for y. Replace y in one of the original equations with that expression, and you have a quadratic you can solve for x. It will be extremely ugly, but is not really hard. When you are finished, compare your result to this, which I found by searching our archives for the words "intersection circles": Intersecting Circles.

I found a clearer solution than Dr. Ken's computer-generated solution linked from that page. Here it is:

   Let the centers be: (a,b), (c,d)
   Let the radii be: r, s

   e = c - a              [difference in x coordinates]
   f = d - b              [difference in y coordinates]
   p = sqrt(e^2 + f^2)    [distance between centers]
   k = (p^2 + r^2 - s^2)/(2p)
                          [distance from center 1 to the line
                           joining the points of intersection]

One intersection point:
   x = a + ek/p + (f/p)sqrt(r^2 - k^2)
   y = b + fk/p - (e/p)sqrt(r^2 - k^2)

The other:
   x = a + ek/p - (f/p)sqrt(r^2 - k^2)
   y = b + fk/p + (e/p)sqrt(r^2 - k^2)

I found this solution using translation and rotation to simplify the math. To do the rotation, I used the fact that

   sin(angle) = f/p
   cos(angle) = e/p

- Doctor Peterson, The Math Forum
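Dr. Peterson's formulas drop straight into code. A minimal sketch (the function name is my own, not from the original answer):

```python
import math

def circle_intersections(a, b, r, c, d, s):
    """Intersections of the circle centered (a,b) with radius r and the
    circle centered (c,d) with radius s, using Dr. Peterson's formulas."""
    e, f = c - a, d - b
    p = math.hypot(e, f)              # distance between centers
    if p == 0:
        return []                     # concentric circles: no unique points
    k = (p*p + r*r - s*s) / (2*p)     # distance from center 1 to the chord
    disc = r*r - k*k
    if disc < 0:
        return []                     # the circles do not meet
    h = math.sqrt(disc)
    return [(a + e*k/p + f*h/p, b + f*k/p - e*h/p),
            (a + e*k/p - f*h/p, b + f*k/p + e*h/p)]

# Circles of radius 5 centered at (0,0) and (6,0) meet at (3, -4) and (3, 4).
print(circle_intersections(0, 0, 5, 6, 0, 5))
```

Each returned point satisfies both original circle equations, which is an easy sanity check on the formulas.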
Recursive definition??

January 30th 2008, 01:04 AM #1
Junior Member, Jan 2008, Waipahu, HI

I'm really sorry about how messy this looks; I'm still trying to figure out how to use the math notation thing. Anyway, the problem is: Given $A_0=1$ and $A_{n+1}=3A_n+1$, find the definition for $A_n$. I just have no idea how to go about doing this. You don't have to give the answer, just any sort of hint as to procedure would be great. Thank you!

$A_1=3 A_0+1=3+1$
$A_2=3 A_1+1=3^2 + 3 +1$
$A_3=3 A_2+1=3^3+3^2+3+1$
and so on..

So I think I'm getting $A_n = \sum_{i=1}^{n} 3^i$. That makes sense, because you multiply each previous term by 3, so there must be an exponent in there... haha, i think it's coming together in my head. Thank you!

Close. But actually the sum will be from i = 0: $A_n = \sum_{i = 0}^n 3^i$.
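The recursion and the corrected closed form agree, which is easy to check. A quick sketch:

```python
def a_recursive(n):
    """A_0 = 1, A_{n+1} = 3*A_n + 1, computed term by term."""
    a = 1
    for _ in range(n):
        a = 3 * a + 1
    return a

def a_closed(n):
    """Closed form from the thread: A_n = sum of 3^i for i = 0..n,
    which the geometric-series formula collapses to (3^(n+1) - 1)/2."""
    return (3 ** (n + 1) - 1) // 2

print([a_recursive(n) for n in range(5)])                     # [1, 4, 13, 40, 121]
print(all(a_recursive(n) == a_closed(n) for n in range(20)))  # True
```

Note that starting the sum at i = 1, as in the first guess, misses the constant term and gives $A_0 = 0$ instead of 1 — which is exactly why the sum must start at i = 0.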
An Introduction To Signals Why Do You Need To Know About Signals? What Are Signals? Signal Representation - Sinusoidal Signals What If Things Are A Little Different? You are at: Basic Concepts - Signals - Introduction Return to Table of Contents Why Do You Need To Know About Signals? Electricity has been with us for a while now, and we use it in many ways that were never anticipated when we began to use electricity. • After electricity was discovered and understood, almost all of the first applications were ones that involved power. The application may have been an electric light or it may have been a motor. In either case, the application of electricity involved the use of large amounts of energy and power. • However, other early applications of electricity were used to transmit information. The telegraph, and later the telephone, had a very significant effect on the history of the United States. • As things progressed it became necessary to design an build systems to distribute large amounts of electrical power. • The city of Sunbury, PA, was the first to have an electrical distribution system for lighting on July 4, 1883. That system was a DC system. • Later, AC systems were developed. Edison, who installed that first DC system in Sunbury, and who had begun installing DC systems in many other cities bitterly fought the introduction of AC systems. Edison put out advertisements that showed an electric chair and asked the public if they wanted AC in their home since it was used in the electric chair for killing people. AC systems prevailed, despite Edison's protestations. AC systems were based on a time varying voltage. (AC stands for Alternating Current.) As people learned to deal with these time varying voltages they realized that time varying voltages could be used to transmit information. With the invention of radio it was realized that information could be transmitted by controlling the parameters of a sinusoidally varying voltage. 
Today, we have vast industries that use electricity to distritute information. Those industries are the TV networks (over the air, cable and satellite), phone systems, the internet and other radio communciation systems, for example. If you want to understand the basics of how the information distribution industry works you will need to know about the various forms of electrical signals. Goals For This Lesson Goals for this lesson are simple. Given a signal, Be able to describe the signal when possible. Be able to use time-varying currents and voltages in KCL and KVL. Electrical Signals There are many different kinds of electrical signals. If we look at how signals are generated, we find that there are many different kinds of electrical signal sources. Here are some. • Microphones • Fax machines • Thermocouples • Remote controls for television sets What could possibly all of these signals have in common? Well, they are all electrically generated signals, and electrical engineers have to deal with them. • If you are a sound engineer, you will have to deal with the small voltages produced by microphones and you will have to worry about processing the signal so that you can reproduce the sound signals accurately. • If you design fax machines you will need to ensure that your signals are transmitted accurately so that information is not corrupted when it is sent. • If you need to control temperature in an industrial process, you will need to worry about the small signal from a thermocouple - how to amplify it - how to process it. • TV remotes are designed to set the channel accurately. They set the channel, adjust volume, etc. by sending signals to the TV. What is common through all of this is the need to be able to manipulate signals in different ways. That might include doing one or more of the following. 
• Amplify a signal - make it larger • Remove noise from a signal • Change a signal to emphasize certain characteristics - for example adding bass boost to a sound signal. When you start to operate on signals in this way you are entering the realm of signal processing. Today, signal processing is often done after digitizing a signal - making a digital version of the signal - and the processing that is done there is referred to as digital signal processing, or simply DSP. If you will be dealing with signals, then you will need to have some sort of model for the signals you work with. Usually that will be some sort of mathematical representation. The simplest representation for a signal is to represent the signal as a function of time. For example, the voltage that appears across the terminals of a microphone will vary in time when the microphone "picks up" a sound. Then, we might say something like: Microphone voltage = V[mike](t) Representing a signal as a time function is so common that there are many instruments and data gathering devices that give you a picture of a voltage time function. The most common instrument that gives a picture of a voltage time function is the oscilloscope. At this point, we can consider some specific kinds of signals. We'll start with periodic signals, and in particular we'll start with sinusoidal signals. Periodic Signals Periodic signals are signals that repeat in time with a certain period. The most fundamental periodic signal is the sinusoidal signal. Any other periodic signal can be thought of as a combination of sinusoidal signals added together. That approach is based on the Fourier Series and you can go to that topic when you know enough about sinusoidal signals. In this section you will begin to learn about sinusoidal signals. Sinusoidal signals, based on sine and cosine functions, are the most important signals you will deal with. 
They are important because virtually every other signal can be thought of as being composed of many different sine and cosine signals. They form the basis for many other things you will do in signal processing and information transmission. Eventually you will deal with signals as different as voice signals, radar signals, measurement signals and entertainment signals like those found in television and radio. Sinusoidal signals are the starting point for almost all work in signal processing and information transmission. In this part of this lesson you will learn: • What Sinusoidal Signals are and How they are Represented • The Parameters of a Sinusoidal Signal • How to Measure the Parameters of a Sinusoidal Signal. Representation of Sinusoidal Signals If you put a voltage signal into an oscilloscope you can get a picture of how the signal varies in time. Sinusoidal signals are often voltages which vary sinusoidally in time. (Sinusoidal signals could be, however, other physical variables like current, pressure, or virtually any other physical variable.) Here's a simulator that will let you put various kinds of signals on a simulated oscilloscope. Click the Start button to see a typical sinusoidal signal. You can write a mathematical expression for the voltage signal as a function of time. Call that mathematical expression V(t). V(t) will have this general form. V(t) = V[max]sin(wt + f) This signal has three parameters, the maximum voltage, or amplitude, denoted by V[max], the angular frequency denoted by w and the phase angle denoted by f. We'll examine these separately. Amplitude of Sinusoidal Signals The amplitude of a sinusoidal signal is the largest value it takes (when the sine function has a value of +1 or -1). Amplitude has whatever units the physical quantity has, so if it is a voltage signal, like the one below, it might have an amplitude of 10 volts. 
On the other hand, if the shock absorbers on your car are bad, your car can run down the road and vibrate up and down with an amplitude of three inches. For the signal we saw above - repeated here - the amplitude is 100 volts. However, you can change the amplitude of the sinusoidal signal here by typing in a different value. Do that now.

P1. Here is a sinusoidal signal. Willy Nilly has measured this signal, acquired it in a computer file, and plotted it for you. Determine the amplitude of this signal.

Frequency of Sinusoidal Signals

Frequency is a parameter that determines how often the sinusoidal signal goes through a cycle. It is usually represented with the symbol f, and it has the units hertz (sec^-1). Here is the simulator you saw above. This signal repeats every 4 seconds. We say that this signal has a period of 4 seconds, and we usually represent the period of the signal as "T". Here we have T = 4 sec. The frequency of the signal is the reciprocal of the period. That's why the frequency is indicated as 0.25 in the simulator. When "f" is the frequency, we have:

f = 1/T

f is in hertz (Hz); T is in seconds (sec).

For the signal above, the frequency is 0.25 Hz. We get that from:

f = 1/T = 1/4 = 0.25 Hz

Now, you can also change the frequency in the simulator above. Change the frequency and observe what happens. Try a frequency of 0.5 Hz, 1 Hz, etc.

P2. Here is Willy Nilly's signal again. Determine the frequency of this signal. Hint: First determine the period of the signal.

A sinusoidal signal (sine or cosine) can be represented mathematically. If we attempt to use the information we have on the sinusoidal signal we've been looking at to write a mathematical expression for the signal, we would write (note the 2π factor in the expression!):

V(t) = A cos(2πft)

In this expression:

• A is the amplitude,
• f is the frequency,
• ω = 2πf is the angular frequency.

Frequency of a sinusoid is something that you can perceive.
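The relations in this section (a period of T = 4 s giving f = 1/T = 0.25 Hz, and the expression V(t) = A cos(2πft)) can be checked numerically. A minimal sketch using the example values from the text:

```python
import math

# Example values from the text: amplitude A = 100 V, period T = 4 s.
A = 100.0
T = 4.0
f = 1.0 / T          # frequency is the reciprocal of the period: 0.25 Hz

def v(t):
    """V(t) = A*cos(2*pi*f*t), the expression developed above."""
    return A * math.cos(2 * math.pi * f * t)

print(f)                                  # 0.25
print(v(0.0))                             # 100.0  (the amplitude)
print(abs(v(1.7) - v(1.7 + T)) < 1e-6)    # True: the signal repeats every T
```

Evaluating the function at t and t + T gives the same value for any t, which is exactly what "periodic with period T" means.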
Frequency of a sinusoid determines the pitch you hear in the sound it makes when a speaker is driven with a sinusoid. Here are three signals you can listen to by clicking the hotwords. They are chosen to be at frequencies in ratios of 2:1. Notice how these sounds are what a musician would call an octave apart.

Phase of Sinusoidal Signals

Sinusoidal signals don't need to start at zero at t = 0. There are other possibilities. Here are two sinusoidal signals. These two signals have the same amplitude and frequency, but they are not the same. The difference in the two signals is in their phase. Phase is another parameter of sinusoids. Consider how we might write mathematical expressions for the signals in the plot above.

• For the "red" signal, we can write:
□ v[red](t) = 150sin(2π60t) = 150cos(2π60t - π/2)
□ This signal has a phase angle of -π/2.
• For the "blue" signal, we can write:
□ v[blue](t) = 150sin(2π60t - π/2) = -150cos(2π60t)
□ This signal has a phase angle of -π radians.
• In general, any sinusoidal signal can be written as:
□ v[signal](t) = A sin(2πft + φ), where:
□ A = amplitude,
□ f = frequency,
□ φ = phase.

Note the following points about sinusoidal signals.

• Any time you use a sinusoidal signal you have to make an arbitrary decision about where the time origin (t = 0) is located.
□ If you have just one signal you can often choose the time origin to be the instant when the signal goes through zero. Then the signal has zero phase.
□ If you have more than one signal, you can often choose one of the signals as a reference - with zero phase - and measure phase from that reference.
• In the example above, we chose the red signal as the reference, and the blue signal has a phase of -π/2 radians.
• We have used the sine function here, but we could also have done everything with cosines.

Here is the simulator again. This time you can vary the phase. We have set things up so that you can input the phase in degrees rather than radians, and we have done the conversion internally.
That's the way EEs normally think of things anyway.

P3. Here are two signals. Both signals have the same amplitude. Determine the amplitude of the two signals.

P4. Next, determine the frequency of the two signals.

P5. Now, determine the phase of the "blue" signal assuming that the "red" signal is the reference. Give your answer in radians.

What If The Signal Isn't Sinusoidal?

Sinusoidal signals aren't always the most interesting kind of signal. They keep doing the same thing over and over. However, other signals which contain information can often be thought of as combinations of sinusoidal signals. That includes periodic signals - which repeat in time but not sinusoidally - and even non-periodic signals. Even totally random signals are often viewed as having frequency components, and that concept is borrowed from concepts that first arise when you consider sinusoidal signals. So, even if you don't ever see a sinusoidal signal again, you may well be trying to deal with sinusoidal components. As you get into the study of signals you'll deal with numerical algorithms - like the FFT - that decompose signals into sinusoidal components. You'll find many uses for whatever you learn about sinusoids.

What About Other Signals?

There are many other kinds of signals besides sinusoidal signals.

• Some signals are periodic, but not sinusoidal. Those signals will have a Fourier Series representation. That's just a way of representing periodic signals as sums of sinusoids. Fourier Series are at the root of numerical algorithms like the Fast Fourier Transform - the FFT - and are widely used in the analysis of signals.
• Other signals may not even be periodic. Those signals are also of interest, and we can look at how those signals look.

Here is an example of a periodic, non-sinusoidal signal. Now, difficult as it is to imagine, this signal can be represented as a sum of sinusoidal signals.
While that is a subject for another lesson, you should be motivated to learn everything you can about sinusoidal signals. The things you don't learn can prevent you from continuing when you get to concepts like representing this signal with a sum of sinusoidal signals.

Other Signals

Information-carrying signals will vary in time. A constant signal really doesn't convey any information. However, when you are dealing with time-varying signals, things that you know about constant signals may still hold true. In particular:

• KVL still holds for time-varying signals. In fact, KVL holds at every instant of time.
• KCL also is true at every instant of time.

We'll ask you to use those in some of the problems below.

Problems

Send your comments on these lessons.
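As a forward pointer to the Fourier ideas mentioned in this lesson, here is a minimal sketch of how the three parameters of a sinusoid (amplitude, frequency and phase) can be recovered from samples. This uses a plain O(N²) discrete Fourier transform rather than a fast FFT, and the signal values are arbitrary choices of mine:

```python
import cmath
import math

def dft(samples):
    """Plain discrete Fourier transform, O(N^2); fine for a small demo."""
    n = len(samples)
    return [sum(samples[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n))
            for k in range(n)]

# One second of A*sin(2*pi*f*t + phi), sampled n times.
A, f, phi, n = 100.0, 5.0, math.pi / 4, 64
x = [A * math.sin(2 * math.pi * f * t / n + phi) for t in range(n)]

X = dft(x)
k = max(range(1, n // 2), key=lambda k: abs(X[k]))  # dominant frequency bin
print(k)                               # 5 -> the 5 Hz component
print(2 * abs(X[k]) / n)               # ~100.0 -> the amplitude A
print(cmath.phase(X[k]) + math.pi / 2) # ~0.785 -> the phase phi (pi/4)
```

The dominant bin gives the frequency, its magnitude (scaled by 2/n) gives the amplitude, and its angle (shifted by π/2 because the signal is a sine rather than a cosine) gives the phase.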
On interpolation by analytic functions with special properties and some weak lower bounds on the size of circuits with symmetric gates

Results 1 - 10 of 11

- Computational Complexity, 1994 "... Define the MOD_m-degree of a boolean function F to be the smallest degree of any polynomial P, over the ring of integers modulo m, such that for all 0-1 assignments ~x, F(~x) = 0 iff P(~x) = 0. We obtain the unexpected result that the MOD_m-degree of the OR of N variables is O(N^{1/r}), where r is the number of distinct prime factors of m. This is optimal in the case of representation by symmetric polynomials. The MOD_n function is 0 if the number of input ones is a multiple of n and is one otherwise. We show that the MOD_m-degree of both the MOD_n and ¬MOD_n functions is N^{Ω(1)} exactly when there is a prime dividing n but not m. The MOD_m-degree of the MOD_m function is 1; we show that the MOD_m-degree of ¬MOD_m is N^{Ω(1)} if m is not a power of a prime, O(1) otherwise. A corollary is that there exists an oracle relative to which the MOD_mP classes (such as ⊕P) have this structure: MOD_mP is closed under complementation and union iff m is a prime power, - COMPUTATIONAL COMPLEXITY, 1995 "... In this paper we describe a new technique for obtaining lower bounds on restricted classes of nonmonotone arithmetic circuits. The heart of this technique is a complexity measure for multivariate polynomials, based on the linear span of their partial derivatives. We use the technique to obtain new lo ..."
Cited by 38 (6 self) Add to MetaCart In this paper we describe a new technique for obtaining lower bounds on restricted classes of nonmonotone arithmetic circuits. The heart of this technique is a complexity measure for multivariate polynomials, based on the linear span of their partial derivatives. We use the technique to obtain new lower bounds for computing symmetric polynomials and iterated matrix products. - In Proceedings of "Combinatorics, Paul Erdos is Eighty", 1994 "... We prove upper bounds on the randomized communication complexity of evaluating a threshold gate (with arbitrary weights). For linear threshold gates this is done in the usual 2-party communication model, and for degree-d threshold gates this is done in the multiparty model. We then use these upp ..."
Our current inability to extend these results to superficially similar situations may be related to properties of these polynomials which do not extend to polynomials over general finite rings or finite abelian groups. Here we pose a number of conjectures on the behavior of such polynomials over rings and groups, and present some partial results toward proving them.

1. Introduction
1.1. Polynomials and Circuit Complexity

The representation of Boolean functions as polynomials over the finite field Z_2 = {0, 1} dates back to early work in switching theory [?]. A formal language L can be identified with the family of functions f_i : Z_2^i → Z_2, where f_i(x_1, ..., x_i) = 1 iff x_1 ... x_i ∈ L. Each of these functions can be written as a polynomial in the variables x_1, ..., x_n. We can consider algebraic formulas or circuits "... We develop upper and lower bound arguments for counting acceptance modes of communication protocols. A number of separation results for counting communication complexity classes is established. This extends the investigation of the complexity of communication between two processors in terms of compl ..." Cited by 6 (2 self) Add to MetaCart We develop upper and lower bound arguments for counting acceptance modes of communication protocols. A number of separation results for counting communication complexity classes is established. This extends the investigation of the complexity of communication between two processors in terms of complexity classes initiated by Babai, Frankl, and Simon [Proc. 27th IEEE FOCS 1986, pp. 337--347] and continued in several papers (e.g., Halstenberg and Reischuk [Journ. of Comput. and Syst. Sci. 41 (1990), pp. 402--429], Karchmer et al. [Journ. of Comput. and Syst. Sci. 49 (1994), pp. 247--257]). More precisely, it will be shown that the communication complexity classes MOD_pP^cc and MOD_qP^cc are incomparable with regard to inclusion, for all pairs of distinct prime numbers p and q.
The same is true for PP^cc and MOD_mP^cc, for any number m ≥ 2. Moreover, nondeterminism and modularity are incomparable to a large extent. On the other hand, if m = p_1^{l_1} · ... · p_r^{l_r} ... "... We develop an analytic framework based on linear approximation and duality and point out how a number of apparently diverse complexity related questions -- on circuit and communication complexity lower bounds, as well as pseudorandomness, learnability, and general combinatorics of Boolean func ..." Cited by 3 (2 self) Add to MetaCart We develop an analytic framework based on linear approximation and duality and point out how a number of apparently diverse complexity related questions -- on circuit and communication complexity lower bounds, as well as pseudorandomness, learnability, and general combinatorics of Boolean functions -- fit neatly into this framework. This isolates the analytic content of these problems from their combinatorial content and clarifies the close relationship between the analytic structure of questions. (1) We give several results that convert a statement of nonapproximability from spaces of functions to statements of approximability. We point out that crucial portions of a significant number of the known complexity-related results can be unified and given shorter and cleaner proofs using these general theorems. (2) We give several new complexity-related applications, including circuit complexity lower bounds, and results concerning pseudorandomness, learning, and combinator... - In IEEE FOCS "... We study solution sets to systems of generalized linear equations of the form ℓ_i(x_1, x_2, ..., x_n) ∈ A_i (mod m) where ℓ_1, ..., ℓ_t are linear forms in n Boolean variables, each A_i is an arbitrary subset of Z_m, and m is a composite integer that is a product of two distinct primes, like 6. Our main ..."
Cited by 3 (1 self) Add to MetaCart We study solution sets to systems of generalized linear equations of the form ℓ_i(x_1, x_2, ..., x_n) ∈ A_i (mod m) where ℓ_1, ..., ℓ_t are linear forms in n Boolean variables, each A_i is an arbitrary subset of Z_m, and m is a composite integer that is a product of two distinct primes, like 6. Our main technical result is that such solution sets have exponentially small correlation, i.e. exp(−Ω(n)), with the boolean function MOD_q, when m and q are relatively prime. This bound is independent of the number t of equations. This yields progress on limiting the power of constant-depth circuits with modular gates. We derive the first exponential lower bound on the size of depth-three circuits of type MAJ ◦ AND ◦ MOD_m^A (i.e. having a MAJORITY gate at the top, AND/OR gates at the middle layer and generalized MOD_m gates at the base) computing the function MOD_q. This settles a decade-old open problem of Beigel and Maciel [5], for the case of such modulus m. Our technique makes use of the work of Bourgain [6] on estimating exponential sums involving a low-degree polynomial and ideas involving matrix rigidity from the work of Grigoriev and Razborov [15] on arithmetic circuits over finite fields. , 1996 "... We show that the result of Barrington, Straubing and Thérien [5] provides, as a direct corollary, an exponential lower bound for the size of depth-two MOD_6 circuits computing the AND function. This problem was solved, in a more general way, by Krause and Waack [8]. We point out that all known lower ..." Cited by 2 (0 self) Add to MetaCart We show that the result of Barrington, Straubing and Thérien [5] provides, as a direct corollary, an exponential lower bound for the size of depth-two MOD_6 circuits computing the AND function. This problem was solved, in a more general way, by Krause and Waack [8].
We point out that all known lower bounds rely on the special form of the MOD_6 gate occurring at the bottom of the circuits, so that in fact, proving a lower bound for "general" MOD_6 circuits of depth two is still an open question. , 2001 "... plus an arbitrary linear function of n input variables. Keywords: Circuit complexity, modular circuits, composite modulus 1 Introduction Boolean circuits are one of the most interesting models of computation. They are widely examined in VLSI design, in general computability theory and in complexit ..." Cited by 1 (1 self) Add to MetaCart plus an arbitrary linear function of n input variables. Keywords: Circuit complexity, modular circuits, composite modulus 1 Introduction Boolean circuits are one of the most interesting models of computation. They are widely examined in VLSI design, in general computability theory and in complexity theory context as well as in the theory of parallel computation. Almost all of the strongest and deepest lower bound results for the computational complexity of finite functions were proved using the Boolean circuit model of computation ([13], [22], [9], [14], [15], or see [20] for a survey). Even these famous and sophisticated lower bound results were proven for very restricted circuit classes. Bounded depth and polynomial size is one of the most natural restrictions. Ajtai [1], Furst, Saxe, and Sipser [5] proved that no polynomial-sized, constant-depth circuit can compute the PARITY function. Yao [22] and Hastad [9] generalized this result , 1997 "... This paper characterizes all the factorizations of a polynomial with coefficients in the ring Z_n where n is a composite number. We give algorithms to compute such factorizations along with algebraic classifications. Contents: 1 Introduction; 1.1 Circuit complexity theory ..." Add to MetaCart This paper characterizes all the factorizations of a polynomial with coefficients in the ring Z_n where n is a composite number.
We give algorithms to compute such factorizations along with algebraic classifications.

Contents: 1 Introduction; 1.1 Circuit complexity theory; 2 Some Important Tools in Z_n[x]; 2.1 The Z_n[x] phenomena; 2.2 The Chinese Remainder Theorem; 2.3 Irreducibility criteria in Z_{p^k}[x]; 2.4 Hensel's Lemma; 2.5 A naive approach to factoring; 3 The Case of Small Discriminants; 3.1 The p-adic numbers; 3.2 Resultants; 3.3 The correspondence to factoring over the p-adics ...
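As a concrete reading of the MOD_n definition used in the first abstract above ("0 if the number of input ones is a multiple of n, and one otherwise"), here is a minimal sketch; the function name is my own:

```python
def mod_n(bits, n):
    """MOD_n(x): 0 when the number of ones in x is a multiple of n, else 1."""
    return 0 if sum(bits) % n == 0 else 1

print(mod_n([0, 0, 0], 2))  # 0 (zero ones, a multiple of 2)
print(mod_n([1, 0, 1], 2))  # 0 (two ones)
print(mod_n([1, 1, 1], 2))  # 1 (three ones: MOD_2 is the parity function)
```

For n = 2 this is exactly parity, which is why oracles built from the MOD_mP classes above specialize to ⊕P when m = 2.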
Szemerédi's theorem

The case $k=3$ was first proved by Roth [4]. His method did not seem to extend to the case $k>3$. Using completely different ideas, Szemerédi proved the case $k=4$ [5], and the general case of an arbitrary $k$ [6]. The best known bounds for $N(k,\delta)$ are

$e^{c(\log\frac{1}{\delta})^{k-1}} \leq N(k,\delta) \leq 2^{2^{\delta^{-2^{2^{k+9}}}}},$

where the lower bound is due to Behrend [1] (for $k=3$) and Rankin [3], and the upper bound is due to Gowers [2]. For $k=3$ a better upper bound was obtained by Bourgain:

$N(3,\delta) \leq c\delta^{-2}e^{2^{56}\delta^{-2}}.$

Added: 2002-12-27
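For the $k=3$ case, the quantity being bounded can be explored by exhaustive search for very small $n$. A sketch (function names are my own; feasible only for tiny $n$) that finds a largest subset of $\{1,\dots,n\}$ containing no 3-term arithmetic progression:

```python
from itertools import combinations

def ap_free(subset):
    """True if the set contains no 3-term arithmetic progression."""
    s = sorted(subset)
    return not any(b - a == c - b for a, b, c in combinations(s, 3))

def largest_ap_free(n):
    """A largest 3-AP-free subset of {1, ..., n}, by exhaustive search."""
    for r in range(n, 0, -1):
        for cand in combinations(range(1, n + 1), r):
            if ap_free(cand):
                return list(cand)
    return []

print(len(largest_ap_free(8)))  # 4, e.g. {1, 2, 4, 5}
print(len(largest_ap_free(9)))  # 5, e.g. {1, 2, 4, 8, 9}
```

Behrend-type constructions show such sets can be surprisingly large, which is exactly what drives the lower bound on $N(k,\delta)$ quoted above.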
Math Forum Discussions - Re: A Point on Understanding Date: Dec 27, 2012 7:28 AM Author: Domenico Rosa Subject: Re: A Point on Understanding On 15 Dec 2012, Robert hansen wrote: > One of his observations was that students will accept > that 0.333... is 1/3 but have trouble accepting that > 0.999... is 1. I wonder if "accepting" this would be a problem if these students had been taught how to convert a repeating decimal into a fraction, as we were when I was in high school? Let x=0.999... then 10x=9.999... then 9x=9 and x=1
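The quoted conversion trick can be checked with exact rational arithmetic; a minimal sketch (Python's Fraction is my choice of tool):

```python
from fractions import Fraction

# Partial sums of 0.999... = 9/10 + 9/100 + ... approach 1 from below,
# and the exact geometric-series limit is 1, matching the 10x - x trick.
partial = sum(Fraction(9, 10**k) for k in range(1, 20))
limit = Fraction(9, 10) / (1 - Fraction(1, 10))

print(1 - partial)  # the gap after 19 digits is exactly 1/10^19
print(limit)        # 1
```

The gap between the partial sums and 1 shrinks by a factor of ten per digit, so the full repeating decimal equals 1 exactly.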
Math Help

May 28th 2010, 10:14 PM #1 Junior Member Jul 2009
Consider a circle and two points A and B in the exterior of the circle, located on the extension of a diameter of the circle. Determine the shortest path joining A and B that does not touch the interior of the circle.

Consider a circle and two points A and B in the exterior of the circle, located on the extension of a diameter of the circle. Determine the shortest path joining A and B that does not touch the interior of the circle.
does the same solution still holds? Actually, I didn't read what you wrote properly. For some reason, I was assuming that A and B are on the circle when you had said clearly that they were exterior to the circle. Captain Black's response is correct- Draw lines from A and B tangent to the circle. To do that, find the bisector of the line from A to the center of the circle, O, and construct a circle with that point as center and diameter |OA|. That circle will cut the original circle at two points. The line from A to either of those points is gives a tangent to the circle. Do the same with O and B. The straight line from A to the tangent point on the circle, the curve around the circle to the corresponding tangent point for B, then straight to B is the shortest route from A to B that does not go into the interior of the circle. May 29th 2010, 12:15 AM #2 Grand Panjandrum Nov 2005 May 29th 2010, 01:33 AM #3 MHF Contributor Apr 2005 May 29th 2010, 08:29 AM #4 Junior Member Jul 2009 May 29th 2010, 11:15 AM #5 MHF Contributor Apr 2005 May 29th 2010, 11:22 AM #6 Grand Panjandrum Nov 2005
{"url":"http://mathhelpforum.com/calculus/146821-distance.html","timestamp":"2014-04-17T18:53:56Z","content_type":null,"content_length":"49358","record_id":"<urn:uuid:a014af20-ca92-4642-8a33-cf78ee1719c8>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00427-ip-10-147-4-33.ec2.internal.warc.gz"}
Reply to T

07-21-2013 09:01 AM 64SS327
I worked part time at a GoodYear dealership about ten years ago. We used the Equal brand balancing compound and never had any complaints. I seem to remember something about not using it in the steer tires on semis but it's been so long I don't remember. I used Equal in both my 77 Bronco with 16/38.5-15 Super Swampers and 78 GMC with 31's with good results. I put it in when I mounted the tires so I can't say it made the ride any better. I just never had any issues with it in my tires.

07-20-2013 03:20 PM ForceFed86
Went 147 mph on my bead balanced tires this weekend...must be magic!

07-13-2013 08:20 PM Too Many
I recently retired from trucking after 28 years as an owner/operator and for the last 12 years have used balancing beads exclusively in my semi tires. They are the ONLY balancing method that eliminated cupping and flat spots. The Peterbilt I sold 3 weeks ago had 1.2 million miles on it, of which I drove it 800,000 miles with beads in the tires. Since I also do all my own tire work, I moved the beads from a worn tire to the new, saving the cost of bead replacement too. Very cost effective, accurate balancing.

07-06-2013 06:46 PM strobe
Quote: Originally Posted by
So let's get this straight. 1.) You didn't put the "compound" material in your tires. 2.) You don't know what "compound" was used or how much? Yet you can claim that all tire balance beads are total crap and can't work properly when like a million people have been using it for decades and claim it works. I personally have used them and they worked great in my application. It's very simple physics and works just fine in a "real world setting". Ie smooth as glass at 137mph in a big *** slick. Did you look at the video above? A caveman could understand it. The fact that you can't scares me a bit.
Really? Actually, cavemen were very clever; I hope you're not suggesting otherwise. In answer to your 1st point: it is an invalid entry, ie it is not relevant.
The compound was in the tire regardless of who put it there. I know because I saw the tire removed and emptied of the beads.
Point 2: half valid, in that I don't know the exact weight, but I do know that they are a commercially available brand.
Caveman point: I understand the plastic bottle physics perfectly. Rather, you jump the gun in assuming I don't, and if you are suggesting that it is in any way capable of being a control experiment for a tire at 137 then ...
Finally, the video is not real world. It's a constant-shape plastic bottle, as opposed to a rubber tire, which on every turn has a flat area at the contact with the road and in some conditions is subject to huge external accelerations, otherwise known as bumps.
Anyway, it's all academic because I had a truck that was shaking itself to bits, and now the beads are out it's silky smooth, so I need no further evidence. Perhaps the truth lies in the middle, ie it works in some applications but not in others; after all, we are in very different fields. But make no mistake, it doesn't work in mine.
Anyway, don't take it all too personally; it's only a tiny bit of a very big picture.

07-03-2013 09:17 AM o1marc
Sorry, didn't read all the posts here. I have had nothing but positive results from DynaBeads.

07-03-2013 09:07 AM ForceFed86
Quote: Originally Posted by
I'm surprised with your results at 137mph and a big *** slick. I assume you are drag racing? The Dyna Beads instructions highly recommend they not be used in racing situations, especially drag racing.
I'm sure Dyna Beads has to claim that for liability purposes. I didn't use Dyna Beads. I used 4.5oz of airsoft pellets in each tire. Without some sort of bead lock it's very typical for a tire to slip on the rim when drag racing. So the typical spin balance doesn't work well for us. Myself and many others at our track use the airsoft pellets and they work great. Honestly I've "spin balanced" a few slicks/drag radials and they didn't perform that badly.
A little wobble on the big end of the track is normal. As I had said back in my original post on this thread, I think one of the slicks was just beyond what a standard balance could compensate for. I was very surprised the beads not only "fixed" the issue, but the car acted better than ever before at speed. My next step was just to junk both slicks, so a $4 airsoft pellet experiment was worth it IMO.

07-03-2013 08:33 AM o1marc
Quote: Originally Posted by
So let's get this straight. 1.) You didn't put the "compound" material in your tires. 2.) You don't know what "compound" was used or how much? Yet you can claim that all tire balance beads are total crap and can't work properly when like a million people have been using it for decades and claim it works. I personally have used them and they worked great in my application. It's very simple physics and works just fine in a "real world setting". Ie smooth as glass at 137mph in a big *** slick. Did you look at the video above? A caveman could understand it. The fact that you can't scares me a bit.
I'm surprised with your results at 137mph and a big *** slick. I assume you are drag racing? The Dyna Beads instructions highly recommend they not be used in racing situations, especially drag racing.

07-03-2013 05:53 AM ForceFed86
Quote: Originally Posted by
My experience backs up what forestrytodd says, ie this concept is junk. It was put in the front tyres of my Scania 124 without me knowing, and immediately on long haul work (smooth road, constant speeds) at 56mph there was a wobble on the front axle that would come and go at regular intervals (as the compound aggregated itself into one place and then would spread out again, and the wobble would go away before repeating the cycle again . . . and again and again and again, until it drove me so mad wondering what the hell could be doing this). I finally went to a proper tyre shop who asked if it had compound in the tyres, at which point I said "has it got WHAT!"
So after taking the crap out they balanced the wheels the proper way with weights, and it's been silk ever since. DON'T GO NEAR THE STUFF. Anybody with high school physics will know it can't work in a real world setting (on paper maybe).

So let's get this straight. 1.) You didn't put the "compound" material in your tires. 2.) You don't know what "compound" was used or how much? Yet you can claim that all tire balance beads are total crap and can't work properly when like a million people have been using it for decades and claim it works. I personally have used them and they worked great in my application. It's very simple physics and works just fine in a "real world setting". Ie smooth as glass at 137mph in a big *** slick. Did you look at the video above? A caveman could understand it. The fact that you can't scares me a bit.

07-03-2013 01:58 AM strobe
balancing beads/compound
My experience backs up what forestrytodd says, ie this concept is junk. It was put in the front tyres of my Scania 124 without me knowing, and immediately on long haul work (smooth road, constant speeds) at 56mph there was a wobble on the front axle that would come and go at regular intervals (as the compound aggregated itself into one place and then would spread out again, and the wobble would go away before repeating the cycle again . . . and again and again and again, until it drove me so mad wondering what the hell could be doing this). I finally went to a proper tyre shop who asked if it had compound in the tyres, at which point I said "has it got WHAT!" So after taking the crap out they balanced the wheels the proper way with weights, and it's been silk ever since. DON'T GO NEAR THE STUFF. Anybody with high school physics will know it can't work in a real world setting (on paper maybe).

10-02-2012 12:18 PM ForceFed86
If old Bill down at the piggly wiggly says it don't work.... Common sense and first hand experience with the product tells me otherwise. Still using this stuff in my slicks, works great!
10-02-2012 11:40 AM o1marc
Quote: Originally Posted by
My father sold tire mounters and balancers a few years back. I asked his opinion on them and his response was "What". Simply put - don't even waste your time looking at it.
I can't imagine why a guy who sells expensive machines would laugh at a $30 solution that works now and has been used for decades. Dump truck guys would put 3-4 golf balls in their tires to do the exact same thing BITD.

05-17-2012 08:00 AM bentwings
I hang out in a large truck shop and every over the road truck that I see has massive balancers at each wheel. The shop guys claim they really work. 90-100k miles on big truck tires must mean something. They change 100 tires a day sometimes on this fleet.

05-17-2012 01:08 AM DesignoSLK
Quote: Originally Posted by xxllmm4
I have actually used these beads on a bunch of my cars and trucks and have always been happy with them. I have used the stainless steel ones at I have them in my WRX Tires and at 120 MPH have not had any shakes or shimmy. I have a Harbor Freight tire changer and do all my own tires but don't have a balancing machine, no need to have one with the beads. These are really common with 4x4 guys, motorcycles and
Sorry for the old bump... xxllmm4 - I assume your WRX tires are low profile? I have 225/40ZR18s on my MB SLK230 and am having a heck of a time getting balanced. Driving the Autobahn daily, I need rock solid balancing. Are you still happy with the results of these?

02-13-2012 01:50 PM JusttCruzn
Balance Beads
WOW...very nice Old Fool. I'm so glad there are folks like you out there that can understand that stuff. As for me?... I just know what I feel in the steering wheel and the seat of my pants ... a smoooooth ride. I do not have to know exactly how a wrist watch works. I just know that it does and I use it EVERY day.

02-13-2012 01:24 PM Old Fool
Take a baton, it has a heavy and light end. Place it on a finger.
When it is balanced the light end is farther from your finger and the heavy end is closer. That demonstrates the center of rotational balance, not the physical center of the baton. The beads will travel to the greatest distance from the rotational balance center, which will then move the center of rotational balance closer to the center of rotation. The beads, acting as a fluid, will continue to adjust as long as a rotational force greater than their individual mass is present. That is a simple explanation of how it works.

Someone said their memory of physics says it won't work. After some Googling I came across the physics that explains why it does work. Think about it like this:

x = \alpha \cos \frac{s}{\alpha} \ ; \quad y = \alpha \sin \frac{s}{\alpha} \ ,

so that

x^2 + y^2 = \alpha^2 \ ,

which can be recognized as a circular path around the origin with radius α. The position s = 0 corresponds to [α, 0], or 3 o'clock. To use the above formalism the derivatives are needed:

y'(s) = \cos \frac{s}{\alpha} \ ; \quad x'(s) = -\sin \frac{s}{\alpha} \ ,

y''(s) = -\frac{1}{\alpha} \sin \frac{s}{\alpha} \ ; \quad x''(s) = -\frac{1}{\alpha} \cos \frac{s}{\alpha} \ .

With these results one can verify that:

x'(s)^2 + y'(s)^2 = 1 \ ; \quad \frac{1}{\rho} = y''(s)x'(s) - y'(s)x''(s) = \frac{1}{\alpha} \ .

The unit vectors also can be found:

\mathbf{u}_t(s) = \left[ -\sin \frac{s}{\alpha},\ \cos \frac{s}{\alpha} \right] \ ; \quad \mathbf{u}_n(s) = \left[ \cos \frac{s}{\alpha},\ \sin \frac{s}{\alpha} \right] \ ,

which serve to show that s = 0 is located at position [ρ, 0] and s = ρπ/2 at [0, ρ], which agrees with the original expressions for x and y. In other words, s is measured counterclockwise around the circle from 3 o'clock.

Also, the derivatives of these vectors can be found:

\frac{d}{ds} \mathbf{u}_t(s) = -\frac{1}{\alpha} \left[ \cos \frac{s}{\alpha},\ \sin \frac{s}{\alpha} \right] = -\frac{1}{\alpha} \mathbf{u}_n(s) \ ; \quad \frac{d}{ds} \mathbf{u}_n(s) = \frac{1}{\alpha} \left[ -\sin \frac{s}{\alpha},\ \cos \frac{s}{\alpha} \right] = \frac{1}{\alpha} \mathbf{u}_t(s) \ .

To obtain velocity and acceleration, a time-dependence for s is necessary. For counterclockwise motion at variable speed v(t):

s(t) = \int_0^t v(t')\, dt' \ ,

where v(t) is the speed and t is time, and s(t = 0) = 0. Then:

\mathbf{v} = v(t) \mathbf{u}_t(s) \ ,

\mathbf{a} = \frac{dv}{dt} \mathbf{u}_t(s) + v \frac{d}{dt} \mathbf{u}_t(s) = \frac{dv}{dt} \mathbf{u}_t(s) - \frac{v}{\alpha} \mathbf{u}_n(s) \frac{ds}{dt} = \frac{dv}{dt} \mathbf{u}_t(s) - \frac{v^2}{\alpha} \mathbf{u}_n(s) \ ,

where it already is established that α = ρ. This acceleration is the standard result for non-uniform circular motion.
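Old Fool's derivation is easy to sanity-check numerically. The sketch below (my own illustration; the radius and step size are arbitrary) confirms the two key identities, unit speed and curvature 1/α, by central-difference differentiation:

```python
import math

alpha = 2.5   # circle radius (arbitrary)
h = 1e-5      # finite-difference step

def pos(s):
    """Arc-length parametrization of the circle of radius alpha."""
    return (alpha * math.cos(s / alpha), alpha * math.sin(s / alpha))

def deriv(f, s):
    """Central-difference derivative of a function returning a 2-vector."""
    (x1, y1), (x2, y2) = f(s - h), f(s + h)
    return ((x2 - x1) / (2 * h), (y2 - y1) / (2 * h))

s0 = 0.7
xp, yp = deriv(pos, s0)                        # x'(s), y'(s)
xpp, ypp = deriv(lambda s: deriv(pos, s), s0)  # x''(s), y''(s)

speed_sq = xp**2 + yp**2          # should be 1 (arc-length parametrization)
curvature = ypp * xp - yp * xpp   # should be 1/alpha

print(abs(speed_sq - 1.0) < 1e-6, abs(curvature - 1 / alpha) < 1e-3)
```

Both checks come out true for any s0, which is just the statement that the path really is a circle of radius α traversed at unit speed in s.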
Axiom of Parallels

The axiom in this section caused the most controversy and confusion of all. The axiom of parallels (which is also an incidence axiom) is:

Axiom of Parallels: Given a line and a point outside it, there is exactly one line through the given point which lies in the plane of the given line and point so that the two lines do not meet.

Note that, while asserting that there is a line through the given point that doesn't meet the given line, it also says there is only one such line. In other words, it also asserts that all the "other" lines co-planar with the given line meet that line. This motivates the introduction of the following (stronger and stranger) version of the Axiom of Parallels:

Projective Axiom of Parallels: Any pair of lines that lie in the same plane meet.

The idea behind this axiom is that even (apparently) parallel lines appear to meet at the horizon. We can demonstrate that this axiom is consistent with the axioms of Incidence by means of Linear Algebra as in the examples below.

Kapil H. Paranjape 2001-01-20
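A concrete linear-algebra demonstration in this spirit (the sketch is mine, not the author's worked examples): model the projective plane with homogeneous coordinates, where a line ax + by + c = 0 is the triple (a, b, c), a point lies on a line exactly when their dot product vanishes, and two distinct lines meet in the point given by their cross product. Even affinely parallel lines then meet, at a point whose last coordinate is zero (a "point at infinity"):

```python
def cross(u, v):
    """Cross product of 3-vectors: the meet of two projective lines."""
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# Two parallel affine lines: y = x  ->  (1, -1, 0);  y = x + 1  ->  (1, -1, 1)
l1, l2 = (1, -1, 0), (1, -1, 1)

p = cross(l1, l2)
print(p)                       # (-1, -1, 0): a point at infinity
print(dot(p, l1), dot(p, l2))  # 0 0 -> p lies on both lines
```

Since the cross product of two non-proportional triples is never the zero vector, any two distinct projective lines meet, which is exactly the Projective Axiom of Parallels.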
La Jolla Difference Set Repository

Here are the most wanted cases of open difference set parameters. The other tables are organized by the size of k. Baumert first compiled a list of cyclic difference set parameters with k ≤ 100. Lander gave a table for abelian difference sets with k ≤ 50, which Kopilovich extended to k ≤ 100. Lopez and Sanchez extended the list to parameters with k ≤ 150. Here is a list of cyclic difference set parameters with k ≤ 150. This table extends the list to parameters with k ≤ 300. This table contains certain known difference sets with k > 300, including cyclic Hadamard difference sets with v < 10000. This page gives details of computations showing that no planar cyclic difference sets of non-prime power order exist for orders up to 2*10^9. Many of the difference sets were produced using Magma. For other constructions and nonexistence proofs, see references in our paper. If you know of a set that's missing, please contact Dan Gordon at gordon@ccrwest.org.
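For context (my own illustration, not the repository's code): a (v, k, λ) difference set is a k-element subset D of the integers mod v in which every nonzero residue occurs exactly λ times as a difference of two distinct elements of D. A brute-force checker for small parameters:

```python
from itertools import permutations
from collections import Counter

def is_difference_set(D, v, lam):
    """True if every nonzero residue mod v arises exactly `lam` times
    as a difference a - b over ordered pairs of distinct elements of D."""
    diffs = Counter((a - b) % v for a, b in permutations(D, 2))
    return all(diffs[r] == lam for r in range(1, v))

# The classic (7, 3, 1) planar difference set (a line of the Fano plane):
print(is_difference_set({1, 2, 4}, 7, 1))         # True
# An (11, 5, 2) difference set (the quadratic residues mod 11):
print(is_difference_set({1, 3, 4, 5, 9}, 11, 2))  # True
```

Exhaustive checks like this are only feasible for tiny v; the repository's open cases are settled with far more sophisticated algebraic and computational tools.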
Yahoo Groups
Re: AI-GEOSTATS: Risk Assessment with Gaussian Simulation?

Hi Brian,

One hundred realizations are typically generated mainly for CPU reasons. You are perfectly right that this number is too small when looking at small probabilities like 0.05 or 0.01. That's why I wouldn't recommend using stochastic simulation to derive the probability of occurrence of events at pixel locations. Just use kriging to build your local probability distributions. Use simulation if you have a transfer function, such as a flow simulator, that requires a model of spatial uncertainty, or if you need to derive block probability distributions (upscaling or aggregation problems). More generally, there is more research to be done on the use of stochastic simulation for probabilistic assessment, including the question of the equi-probability of the realizations being generated.

Pierre Goovaerts
Assistant professor
Dept of Civil & Environmental Engineering
The University of Michigan
EWRE Building, Room 117
Ann Arbor, Michigan, 48109-2125, U.S.A
E-mail: goovaert@...
Phone: (734) 936-0141
Fax: (734) 763-2275

On Mon, 29 Apr 2002, Brian R Gray wrote:
> I am curious about the use of 100 realizations to generate a probability map. Is this a standard approach? If so, is a "small" p-value (such as .05) used? If so, it would seem like 100 iterations might be a smallish sample size for distinguishing, say, .05 (i.e. 5 outcomes out of 100) from, say, .01. Is 100 used because it seems like it is a reasonable number or because of the computer time restrictions?
> Do geostat folks treat these as realizations or as pseudo-realizations?
> brian
> ****************************************************************
> Brian Gray
> USGS Upper Midwest Environmental Sciences Center
> 575 Lester Avenue, Onalaska, WI 54650
> ph 608-783-7550 ext 19, FAX 608-783-8058
> brgray@...
> *****************************************************************

> From: Chaosheng Zhang <Chaosheng.Zhang@nuigalway.ie>
> Sent by: ai-geostats-list@unil.ch
> To: ai-geostats@...
> cc: Dave McGrath <dmcgrath@...>
> Date: 04/27/2002 10:25 AM
> Subject: AI-GEOSTATS: Risk Assessment with Gaussian Simulation?
> Please respond to Chaosheng Zhang
>
> Dear list,
>
> First, I would like to say thank you to Gregoire for keeping this list alive.
>
> I'm trying to do "risk assessment", and I have some questions about risk assessment with Gaussian Simulation:
>
> (1) How to produce a probability map?
>
> With Gaussian simulation, we can produce many maps/realisations, e.g., 100. Based on the 100 maps, a probability map of higher than a threshold can be produced. I wonder how to produce such a probability map? My understanding is that for each pixel, we just count how many values out of the 100 are >threshold, and the number is regarded as the "probability". Am I right? It seems that this is a time consuming procedure with GIS map algebra. Are there any suggestions for a quick calculation?
>
> (2) Is a probability map better than a Kriging interpolated map for the purpose of risk assessment?
>
> (3) Is "PCLASS" function in IDRISI 32 Release 2 better/easier than the probability map from Gaussian simulation?
>
> From the online help of IDRISI 32 R2, Section "Kriging and Simulation Notes", it says "If the final goal of simulated surfaces will be to directly reclassify the surfaces by a threshold value, and calculate a probability of occurrence for a process based on that threshold, conditional simulation may be unnecessary. Instead kriging and variance images may be created and then used together with PCLASS." Any comments?
> (4) How to carry out "PCLASS"?
>
> Following the above question, I have a problem in doing PCLASS. I cannot input the file name of Kriging variance to the field of "Value error" of the documentation file. It seems that this field only accepts a "value", not an "image file name" or anything in text. Anyone has the experience?
>
> Cheers,
>
> Chaosheng Zhang
> =================================================
> Dr. Chaosheng Zhang
> Lecturer in GIS
> Department of Geography
> National University of Ireland
> Galway
> IRELAND
> Tel: +353-91-524411 ext. 2375
> Fax: +353-91-525700
> Email: Chaosheng.Zhang@...
> ChaoshengZhang@...
> Web: http://www.nuigalway.ie/geography/zhang.html
> =================================================

* To post a message to the list, send it to ai-geostats@...
* As a general service to the users, please remember to post a summary of any useful responses to your questions.
* To unsubscribe, send an email to majordomo@... with no subject and "unsubscribe ai-geostats" followed by "end" on the next line in the message body. DO NOT SEND Subscribe/Unsubscribe requests to the list
* Support to the list is provided at http://www.ai-geostats.org

Chaosheng, I agree with Pierre that if your only goal is to generate a probability map, then IK is faster and more straightforward than simulation and that MG kriging will give the same results, faster, than MG simulation.
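The per-pixel counting asked about in question (1) of the quoted message is a single reduction over the stack of realizations rather than a long map-algebra sequence. A toy sketch (all data here is made up; in practice the realizations would come from the conditional simulation):

```python
import random

random.seed(0)
n_real, n_rows, n_cols = 100, 3, 3

# realizations[k][i][j]: value of pixel (i, j) in the k-th simulated map
realizations = [[[random.gauss(50, 10) for _ in range(n_cols)]
                 for _ in range(n_rows)] for _ in range(n_real)]

threshold = 60.0
# Probability map: per-pixel fraction of realizations above the threshold
prob_map = [[sum(r[i][j] > threshold for r in realizations) / n_real
             for j in range(n_cols)] for i in range(n_rows)]

print(all(0.0 <= p <= 1.0 for row in prob_map for p in row))  # True
```

With only 100 realizations the resulting probabilities are multiples of 0.01, which is exactly Brian's point about resolving tail probabilities like 0.05 versus 0.01.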
However, we have found a couple of practical reasons where it may be advantageous to use simulation for soil contamination studies, so I'll add my two cents worth to this discussion:

1) When trying to explain the concepts of spatial variability and uncertainty, we have found that showing example realizations of what the possible distribution of contaminants could look like gives the groups involved a more intuitive understanding of these ideas. People understand the idea of flipping a coin 100 times to get the probability of heads or tails, but have a hard time visualizing in their mind what a "coin flip" looks like in a 2-D soil contamination problem. Showing some example conditional realizations gives them a stronger feel for the nature of the answers geostats is providing to their questions.

2) A number of sites are in the process of designing chemical and/or mechanical treatment systems for the soil that will be removed from the site while the remediation map is being determined. One set of design parameters for these treatment systems is the best and worst case estimates of the total amount of contamination (curies, grams, etc.) contained in the soil at the site. These best/worst case estimates depend on the joint estimate of the contamination at all locations across the site. This is something simulation provides, but kriging doesn't.

3) For soils with radioactive contaminants, there are a number of different sensors (e.g., a gamma detector mounted several meters off the ground) being deployed at field sites that integrate the activity of the contaminant over a larger area/volume. Simulation of the fine scale distribution of the activity can be useful in looking at how these sensors scale up the activity values to the integrated measurement.
Also when looking at IK vs MG kriging (or simulation) keep in mind that rarely do the client, stakeholder(s) and regulator(s) have a single action level or threshold that they have all agreed to for application at the site. There are usually multiple thresholds corresponding to different future-land-use scenarios and different health risk models. If creating the probability maps through IK, then each different threshold requires a new set of indicator variograms. If you use MG kriging or simulation, you only need do the variography once. Keep in mind that the MG assumption does have other problems with connectivity of extreme values that may or may not be important in your application (this is generally a bigger concern in fluid flow problems than in soil contamination problems).

I'll add my thanks to Gregoire for 7 years of superb work!

Sean A. McKenna Ph.D.
Geohydrology Department
Sandia National Laboratories
PO Box 5800 MS 0735
Albuquerque, NM 87185-0735
ph: 505 844-2450

-----Original Message-----
From: Chaosheng Zhang [mailto:
Sent: Monday, April 29, 2002 3:57 AM
To: Pierre Goovaerts; Dave McGrath
Subject: Re: AI-GEOSTATS: Risk Assessment with Gaussian Simulation?

Thanks for the comments. It's my first time using Gaussian simulation to do something possibly useful, and I have also found the calculation quite slow even though the speed of my computer is not so bad. I'm using Idrisi 32 (with GStat), and the grid is about 500*500.

What I worry about is how useful these realizations are. Obviously they are not "realistic" even though some people say they want to produce a more realistic map, instead of the smoothed Kriging map. Another concern is that the probability map produced based on these realisations may not be so good as the PCLASS (available in Idrisi), as PCLASS may have a better probability background or clearer assumption. In PCLASS, the square root (not sure yet???)
of Kriging variances can be used as the RMS (root mean square) or standard deviation of the pixel corresponding to the Kriging map, and the probability > a threshold can be calculated based on the normal assumption. More comments and suggestions will give me more confidence in doing the risk assessment (heavy metal pollution in soils of a mine area). ----- Original Message ----- From: "Pierre Goovaerts" <goovaert@...> To: "Chaosheng Zhang" <Chaosheng.Zhang@...> Cc: <ai-geostats@...>; "Dave McGrath" <dmcgrath@...> Sent: Saturday, April 27, 2002 4:53 PM Subject: Re: AI-GEOSTATS: Risk Assessment with Gaussian Simulation? > Hello, > In the past few years stochastic simulation has > been increasingly used to produce probability maps. > To my opinion it's generally a waste of CPU time since > similar information can be retrieved using kriging, > either in a multiGaussian framework or applied to > indicator transforms. > The issue of when using simulation vs kriging > is further discussed in: > Goovaerts, P. 2001. > Geostatistical modelling of uncertainty in soil science. > Geoderma, 103: 3-26. > I take this opportunity to thank Gregoire > for a remarkable and often challenging job > of keeping this e-mail list alive through the years. > Pierre > ________ ________ > | \ / | Pierre Goovaerts > |_ \ / _| Assistant professor > __|________\/________|__ Dept of Civil & Environmental Engineering > | | The University of Michigan > | M I C H I G A N | EWRE Building, Room 117 > |________________________| Ann Arbor, Michigan, 48109-2125, U.S.A > _| |_\ /_| |_ > | |\ /| | E-mail: goovaert@... > |________| \/ |________| Phone: (734) 936-0141 > Fax: (734) 763-2275 > On Sat, 27 Apr 2002, Chaosheng Zhang wrote: > > Dear list, > > > > First, I would like to say thank you to Gregoire for keeping this list > > > > I'm trying to do "risk assessment", and I have some questions about risk assessment with Gaussian Simulation: > > > > (1) How to produce a probability map? 
> > > > With Gaussian simulation, we can produce many maps/realisations, e.g., 100. Based on the 100 maps, a probability map of higher than a threshold can be produced. I wonder how to produce such a probability map? My understanding is that for each pixel, we just count how many values out of the 100 are >threshold, and the number is regarded as the "probability". Am I right? It seems that this is a time consuming procedure with GIS map algebra. Are there any suggestions for a quick calculation? > > > > (2) Is a probability map better than a Kriging interpolated map for the purpose of risk assessment? > > > > (3) Is "PCLASS" function in IDRISI 32 Release 2 better/easier than the probability map from Gaussian simulation? > > > > >From the online help of IDRISI 32 R2, Section "Kriging and Simulation Notes", it says "If the final goal of simulated surfaces will be to directly reclassify the surfaces by a threshold value, and calculate a probability of occurrence for a process based on that threshold, conditional simulation may be unnecessary. Instead kriging and variance images may be created and then used together with PCLASS." Any comments? > > > > (4) How to carry out "PCLASS"? > > > > Following the above question, I have a problem in doing PCLASS. I cannot input the file name of Kriging variance to the field of "Value error" of the documentation file. It seems that this field only accepts a "value", not an "image file name" or anything in text. Anyone has the experience? > > > > Cheers, > > > > Chaosheng Zhang > > ================================================= > > Dr. Chaosheng Zhang > > Lecturer in GIS > > Department of Geography > > National University of Ireland > > Galway > > IRELAND > > > > Tel: +353-91-524411 ext. 2375 > > Fax: +353-91-525700 > > Email: Chaosheng.Zhang@... > > ChaoshengZhang@... 
> > Web: http://www.nuigalway.ie/geography/zhang.html
> > =================================================

My tuppence worth. The major advantages of simulation as a risk assessment tool lie in the cases where you are trying to derive some conclusion from the data rather than just look at the values themselves. For example, see Bill and my papers at the Battelle Conference 1987 or the paper at the Geostat Avignon in 1988. There are others. All of these are available in Word format for download at my page. We were trying to derive the travel path of a particle given the pressure of fluid in an aquifer. Not a linear transform by anyone's standards.

Isobel Clark

From: "McKenna, Sean A" <samcken@...>
> 1) When trying to explain the concepts of spatial variability and uncertainty, we have found that showing example realizations of what the possible distribution of contaminants could look like provides the groups involved to get a more intuitive understanding of these ideas.

Taking this a step further, there was a paper in the AAPG Stochastic Modeling and Geostatistics Volume entitled "The Visualization of Spatial Uncertainty" (R Mohan Srivastava) which proposes the use of probability field simulation to generate dynamic animations of different realizations. I have yet to see it being implemented in commercial software, although in concept I can see the benefit of having something like this to illustrate the "equiprobable" realizations. The idea was to generate smooth transitions of successive "frames" by sampling from adjacent columns of a set of probability values, for a movie-like effect.

Dear Syed, et al.,

I did much of what you described in the GRASS GIS a while back. (GRASS is public domain, not commercial, but it is a very good GIS.) The title of the paper is "Visualizing Spatial Data Uncertainty Using Animation" and a copy of it is located at: The special issue of Computers & Geosciences (Vol. 23, No. 4, pp. 387-395, 1997) included a CD-ROM that contained some of the animations in MPEG form. My web site includes the animations and instructions on how to construct them.
I used spherical interpolation to generate smooth transitions between realizations in order to keep the interpolations valid statistically. I have a more recent work that studies user perception of animated maps representing data and application uncertainty. An outline of that work from a conference presentation (with all equations and animations) is available at: The full paper is about to head out for peer review. sincerely, chuck Syed Abdul Rahman Shibli wrote: > >From: "McKenna, Sean A" <samcken@...> > > > >1) When trying to explain the concepts of spatial variability and > >uncertainty, we have found that showing example realizations of what the > >possible distribution of contaminants could look like provides the groups > >involved to get a more intuitive understanding of these ideas. > Taking this a step further, there was a paper in the AAPG Stochastic > Modeling and Geostatistics Volume entitled "The Visualization > of Spatial Uncertainty" (R Mohan Srivastava) which proposes the use > of probability field simulation to generate dynamic animations > of different realizations. I have yet to see it being implemented in > commercial software, although in concept I can see the benefit > of having something like this to illustrate the "equiprobable" > realizations. The idea was to generate smooth transitions of > successive "frames" by sampling from adjacent columns of a set of > probability values, for a movie-like effect. Chuck Ehlschlaeger N 40 46' 07.7", W 73 57' 54.4" Dep. of Geography 212-772-5321, fax: 212-772-5268 Hunter College 695 Park Ave. New York, NY 10021 "We should not be ashamed to acknowledge truth from whatever source it comes to us, even if it is brought to us by former generations and foreign people. 
"For whoever seeks the truth there is nothing of higher value than truth itself" - al-Kindi

Dear all,

Thanks for so many interesting replies and thoughtful discussion. This is not a summary yet, as I am expecting more to come. Just to express my feeling about Indicator Kriging. To produce a probability map, IK might be one of the choices. However, I always feel that too much information is lost when doing the indicator transformation. When I see so many "0"s in a dataset, I just feel the data quality is too poor. Well, the other method of combination of Kriging and Kriging variance for risk assessment has not been well discussed yet, and I would like to read more comments.

My last question "(4) how to carry out PCLASS" is now answered by the developer of Idrisi. The fact that the file name of Kriging variance cannot be entered (with the Metadata command) is a bug of the program, which will be corrected soon. At the present time, a text editor may be used to modify the image documentation file.

Now, let me discuss how I would like to make a probability map based on Kriging and Kriging variance. For each pixel of the Kriging interpolated map, there is a value of Kriging variance. The Kriging variance is a measure of uncertainty (which is related to sampling density and spatial variation, etc.???). If we assume that the value of the Kriging pixel follows a normal distribution and the standard deviation is equal to the SQRT of the Kriging variance, the probability of any threshold can be calculated.
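The calculation sketched here is just a normal tail probability per pixel. A minimal illustration (my own sketch, not Idrisi's PCLASS code), using only the standard library:

```python
import math

def prob_exceed(estimate, kriging_variance, threshold):
    """P(value > threshold), assuming the true value at the pixel is
    Normal(estimate, kriging_variance)."""
    sd = math.sqrt(kriging_variance)
    z = (threshold - estimate) / sd
    # Standard normal survival function, via the complementary error function
    return 0.5 * math.erfc(z / math.sqrt(2))

# A pixel kriged at 80 mg/kg with kriging variance 400 (sd = 20),
# checked against a 100 mg/kg action level (all numbers invented):
print(round(prob_exceed(80.0, 400.0, 100.0), 4))  # 0.1587
```

Applied to every pixel of the kriged map and its variance map, this yields the probability map in a single pass, with no simulation required.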
Furthermore, to make the risk assessment more realistic, I would like to include other errors, such as sampling error and laboratory analysis error into risk assessment. These errors can hardly be quantified, but if we say 10% or 20% of the pixel value (for soil samples), perhaps there is no objection. Therefore, the standard deviation of the pixel is increased by adding this kind of errors. I am not clear how to calculate the total standard deviation of the two sources, is it: Total standard deviation = SQRT (Kriging Variance + SQUARE (Sampling Errors) ) ? Any ideas and comments on this method? Chaosheng Zhang > On Sat, 27 Apr 2002, Chaosheng Zhang wrote: > Dear list, > First, I would like to say thank you to Gregoire for keeping this list > I'm trying to do "risk assessment", and I have some questions about risk assessment with Gaussian Simulation: > (1) How to produce a probability map? > With Gaussian simulation, we can produce many maps/realisations, e.g., > 100. Based on the 100 maps, a probability map of higher than a threshold > be produced. I wonder how to produce such a probability map? My > understanding is that for each pixel, we just count how many values out of > the 100 are >threshold, and the number is regarded as the "probability". > I right? It seems that this is a time consuming procedure with GIS map > algebra. Are there any suggestions for a quick calculation? > (2) Is a probability map better than a Kriging interpolated map for the > purpose of risk assessment? > (3) Is "PCLASS" function in IDRISI 32 Release 2 better/easier than the > probability map from Gaussian simulation? >From the online help of IDRISI 32 R2, Section "Kriging and Simulation > Notes", it says "If the final goal of simulated surfaces will be to > reclassify the surfaces by a threshold value, and calculate a probability > occurrence for a process based on that threshold, conditional simulation > be unnecessary. 
> Instead kriging and variance images may be created and then used together with PCLASS." Any comments?
>
> (4) How to carry out "PCLASS"?
>
> Following the above question, I have a problem in doing PCLASS. I cannot input the file name of Kriging variance to the field of "Value error" of the documentation file. It seems that this field only accepts a "value", not an "image file name" or anything in text. Anyone has the experience?
>
> Cheers,
>
> Chaosheng Zhang
> =================================================
> Dr. Chaosheng Zhang
> Lecturer in GIS
> Department of Geography
> National University of Ireland
> Galway
> IRELAND
> Tel: +353-91-524411 ext. 2375
> Fax: +353-91-525700
> Email: Chaosheng.Zhang@...
> ChaoshengZhang@...
> Web: http://www.nuigalway.ie/geography/zhang.html
> =================================================
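On the combined-error formula Chaosheng proposes above: if the sampling/laboratory error is independent of the interpolation error (an assumption, not something established in the thread), it is the variances, not the standard deviations, that add, which is exactly the form of his SQRT expression:

```python
import math

def total_sd(kriging_variance, sampling_error_sd):
    """Standard deviation of the sum of two independent error sources:
    variances add, so sd = sqrt(var1 + var2)."""
    return math.sqrt(kriging_variance + sampling_error_sd**2)

# Kriging variance of 400 (sd = 20) plus a 10% sampling/lab error on a
# pixel value of 80 mg/kg (sd = 8); all numbers invented for illustration:
print(total_sd(400.0, 8.0))  # ~21.54, only slightly wider than 20
```

The inflated standard deviation can then be fed into the same normal-assumption probability calculation discussed earlier in the thread.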
South Elgin Precalculus Tutor ...My students have ranged from middle school to college. I pride myself on being able to push past any difficulty a student is faced with.I love this subject and am extremely competent teaching it. I have dozens of creative ways to help students get past their barriers in understanding it. 21 Subjects: including precalculus, chemistry, calculus, statistics ...The Binomial Theorem. Arithmetic Skills & Concepts 1. Numbers, Symbols, and Variables 2. 17 Subjects: including precalculus, reading, calculus, geometry ...I look forward to working with you or your student.I taught Algebra 1 my entire teaching career. I have Bachelor of Science in Mathematics Education and a Master of Science in Applied Mathematics. I taught four years of High School Math (3 years teaching Algebra 2/Trig). I've been tutoring math since I was in high school 15 years ago. 10 Subjects: including precalculus, calculus, algebra 2, algebra 1 ...I am 52 years old and plan to teach part-time at a junior college when I approach retirement. Mathematics is not hard. I truly believe that and feel that there are not many good teachers in the 7 Subjects: including precalculus, algebra 1, prealgebra, algebra 2 ...In the past 5 years, I've written proprietary guides on ACT strategy for local companies. These guides have been used to improve scores all over the midwest. I've been tutoring test prep for 15 years, and I have a lot of experience helping students get the score they need on the ACT. 24 Subjects: including precalculus, calculus, physics, geometry
The Stanly News and Press (Albemarle, NC) September 19, 2012 Math success begins with algebra By Marianne Bright for the SNAP Wednesday, September 19, 2012 — Changes in society and new expectations of colleges and employers have revolutionized the math curriculum in schools nationwide. Success in algebra often correlates to success in college, so it is very important for today’s students to do their best with this critical subject. What practical steps can be taken to ease parental concerns and help families build confidence in this new approach to middle and high school mathematics? Students who take advanced mathematics courses during high school, and begin to study algebra during middle school, are at an advantage. Traditionally, students cannot take a higher-level mathematics class in high school until they have successfully completed Algebra 1. Encourage children to take algebra early in their educational careers, if they are academically ready. Students who do not take courses covering algebraic concepts early in their schooling risk missing important opportunities for growth. Some high schools require children to complete specific math requirements in order to graduate. By the end of junior and senior years, students who have not planned ahead have fewer options in what classes they can take and may not be able to complete prerequisite courses. This can restrict a student’s college options and limit their career aspirations. Persuade children to take additional math classes. Many students indicate that they do not plan to take math classes beyond their school requirements. Math classes offer critical learning skills that are needed throughout life. Success in algebra correlates with success in higher education and learning reasoning skills. Taking additional math classes helps children to become logical, independent thinkers. 
Technology should support math instruction and students should be encouraged to use all of the modern tools at their disposal to gain an understanding of the underlying reasoning and computations used in problem-solving. Educators believe that infusing learning aids and technology during in-school math instruction and homework completion provides an advantage at test time because it allows students to easily absorb and retain crucial math concepts.
Bounding probability of event relating to Poisson distribution
September 27th 2010, 11:25 AM

Let X have Poisson(λ) distribution and let Y have Poisson(2λ) distribution.

(i) Prove P(X ≥ Y) ≤ exp(−(3 − √8)λ) if X and Y are independent.

(ii) Find constants A < ∞, c > 0, not depending on λ, such that, without assuming independence, P(X ≥ Y) ≤ A exp(−cλ).

A hint says: Note that P(X ≥ Y) = P(tX ≥ tY) = P(exp tX ≥ exp tY) ∀ t > 0. Now try to bound the right-hand side using appropriate expectation inequalities and optimize over t.

So far I'm trying to do (i). I understand what the hint is trying to get at, but I'm not sure which inequalities to use. I tried Jensen's inequality and Chebyshev's inequality but I couldn't get it to work.
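For part (i), one standard route is Markov's inequality (rather than Jensen or Chebyshev) applied to the exponentiated difference, together with the Poisson moment generating function $\mathbb{E}\,e^{tX} = e^{\lambda(e^{t}-1)}$. A sketch:

```latex
\begin{aligned}
P(X \ge Y) &= P\big(e^{t(X-Y)} \ge 1\big) \le \mathbb{E}\, e^{t(X-Y)}
  && \text{(Markov's inequality, } t > 0\text{)}\\
&= \mathbb{E}\, e^{tX}\,\mathbb{E}\, e^{-tY}
  && \text{(independence)}\\
&= e^{\lambda(e^{t}-1)}\, e^{2\lambda(e^{-t}-1)}
  = \exp\!\big(\lambda\,(e^{t} + 2e^{-t} - 3)\big).
\end{aligned}
```

Minimizing $e^{t} + 2e^{-t}$ over $t > 0$ gives $e^{t} = \sqrt{2}$, so the exponent becomes $\lambda(2\sqrt{2} - 3) = -(3 - \sqrt{8})\lambda$, which is exactly the claimed bound.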
Simple integration

\int_{0}^{1}\int_{0}^{1} min(x,y) dxdy = \int_{0}^{1}\int_{y}^{1} y dxdy = \int_{0}^{1} y(1-y)dy = 1/6

What's wrong with it? Thank you.

Can you write this in LaTeX? I can't seem to follow what you have there.

On the square you can write the function $\text{min}(x,y)=\begin{cases} x, \quad \text{ if } y \ge x \\ y, \quad \text{ if } y \le x \end{cases}$ So you can break this into two different integrals over each triangle. Or you can do one of the integrals and multiply it by 2. Why does this work?
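The slip in the original post: min(x, y) equals y only on the triangle where x ≥ y, so the displayed integral is only one of the two pieces. By the symmetry the reply points at, the full value is 2 × 1/6 = 1/3. A quick numerical sanity check of that value (a midpoint Riemann sum over the unit square):

```python
# Midpoint Riemann sum of min(x, y) over [0,1] x [0,1]; the exact value is 1/3.
n = 400
h = 1.0 / n
est = sum(
    min((i + 0.5) * h, (j + 0.5) * h)
    for i in range(n)
    for j in range(n)
) * h * h
print(est)  # close to 0.3333...
```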
This information is part of the Modelica Standard Library maintained by the Modelica Association.

Rigid body with mass, inertia tensor, different shapes for animation, and two frame connectors (12 potential states)

Name | Type | Description
frame_a | Frame_a | Coordinate system fixed to the component with one cut-force and cut-torque
frame_b | Frame_b | Coordinate system fixed to the component with one cut-force and cut-torque

Name | Type | Default Value | Description
animation | Boolean | true | = true, if animation shall be enabled (show shape between frame_a and frame_b and optionally a sphere at the center of mass)
animateSphere | Boolean | true | = true, if mass shall be animated as sphere provided animation=true
r | Position[3] | | Vector from frame_a to frame_b resolved in frame_a
r_CM | Position[3] | | Vector from frame_a to center of mass, resolved in frame_a
m | Mass | | Mass of rigid body
I_11 | Inertia | 0.001 | (1,1) element of inertia tensor
I_22 | Inertia | 0.001 | (2,2) element of inertia tensor
I_33 | Inertia | 0.001 | (3,3) element of inertia tensor
I_21 | Inertia | 0 | (2,1) element of inertia tensor
I_31 | Inertia | 0 | (3,1) element of inertia tensor
I_32 | Inertia | 0 | (3,2) element of inertia tensor
angles_fixed | Boolean | false | = true, if angles_start are used as initial values, else as guess values
angles_start | Angle[3] | {0,0,0} | Initial values of angles to rotate frame_a around 'sequence_start' axes into frame_b
sequence_start | RotationSequence | {1,2,3} | Sequence of rotations to rotate frame_a into frame_b at initial time
w_0_fixed | Boolean | false | = true, if w_0_start are used as initial values, else as guess values
w_0_start | AngularVelocity[3] | {0,0,0} | Initial or guess values of angular velocity of frame_a resolved in world frame
z_0_fixed | Boolean | false | = true, if z_0_start are used as initial values, else as guess values
z_0_start | AngularAcceleration | {0,0,0} | Initial values of angular acceleration z_0 = der(w_0)
shapeType | ShapeType | "cylinder" | Type of shape
r_shape | Position[3] | {0,0,0} | Vector from frame_a to shape origin, resolved in frame_a
lengthDirection | Axis | r - r_shape | Vector in length direction of shape, resolved in frame_a
widthDirection | Axis | {0,1,0} | Vector in width direction of shape, resolved in frame_a
length | Length | Modelica.Math.Vectors.length(r - | Length of shape
width | Distance | length/world.defaultWidthFraction | Width of shape
height | Distance | width | Height of shape
extra | ShapeExtra | 0.0 | Additional parameter depending on shapeType (see docu of Visualizers.Advanced.Shape)
sphereDiameter | Diameter | 2*width | Diameter of sphere
enforceStates | Boolean | false | = true, if absolute variables of body object shall be used as states (StateSelect.always)
useQuaternions | Boolean | true | = true, if quaternions shall be used as potential states otherwise use 3 angles as potential states
sequence_angleStates | RotationSequence | {1,2,3} | Sequence of rotations to rotate world frame into frame_a around the 3 angles used as potential states

Modelica.Mechanics.MultiBody.Examples.Elementary.InitSpringConstant: Determine spring constant such that system is in steady state at given position
Modelica.Mechanics.MultiBody.Examples.Elementary.FreeBody: Free flying body attached by two springs to environment
Modelica.Mechanics.MultiBody.Examples.Systems.RobotR3.Components.MechanicalStructure: Model of the mechanical part of the r3 robot (without animation)
Modelica.Mechanics.MultiBody.Examples.Loops.Utilities.CylinderBase: One cylinder with analytic handling of kinematic loop
Modelica.Mechanics.MultiBody.Examples.Loops.Utilities.Cylinder_analytic_CAD: One cylinder with analytic handling of kinematic loop and CAD visualization
Modelica.Mechanics.MultiBody.Examples.Loops.Utilities.EngineV6_analytic: V6 engine with analytic loop handling
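The six I_ij parameters in the table populate a symmetric 3x3 inertia tensor, with I_21, I_31, I_32 as the (2,1), (3,1) and (3,2) off-diagonal elements. A small numeric illustration (in Python rather than Modelica, with hypothetical values) of what a physically meaningful choice must satisfy:

```python
import numpy as np

# Hypothetical parameter values (units kg*m^2); I_21, I_31, I_32 are the
# (2,1), (3,1), (3,2) elements, exactly as in the parameter table above.
I_11, I_22, I_33 = 0.002, 0.003, 0.004
I_21, I_31, I_32 = 0.0005, 0.0002, 0.0001

# The body's inertia tensor is the symmetric matrix built from the six values.
I = np.array([
    [I_11, I_21, I_31],
    [I_21, I_22, I_32],
    [I_31, I_32, I_33],
])

# A physically meaningful inertia tensor is symmetric and positive definite.
eigenvalues = np.linalg.eigvalsh(I)
```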
Here's the question you clicked on:

In the lab, Ahmad has two solutions that contain alcohol and is mixing them with each other. He uses twice as much Solution A as Solution B. Solution A is 19% alcohol and Solution B is 15% alcohol. How many milliliters of Solution A does he use, if the resulting mixture has 159 milliliters of pure alcohol?
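One way to set the problem up: let b be the milliliters of Solution B, so Solution A is 2b, and the pure alcohol adds up to 0.19(2b) + 0.15b = 159. A sketch of the arithmetic in exact fractions:

```python
from fractions import Fraction

# 0.19 * (2b) + 0.15 * b = 159  ->  0.53 * b = 159
alcohol_per_mL_of_B = Fraction(19, 100) * 2 + Fraction(15, 100)  # 0.53
b = Fraction(159) / alcohol_per_mL_of_B  # mL of Solution B
a = 2 * b                                # mL of Solution A
print(a)  # 600
```

So he uses 600 mL of Solution A (and 300 mL of Solution B), and indeed 0.19(600) + 0.15(300) = 114 + 45 = 159 mL of pure alcohol.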
Graphing Trig Functions: Examples (page 2 of 3)

Sections: Introduction, Examples with amplitude and vertical shift, Example with phase shift

• Graph one period of s(x) = –cos(3x)

The "minus" sign tells me that the graph is upside down. Since the multiplier out front is an "understood" –1, the amplitude is unchanged. The argument (the 3x inside the cosine) is growing three times as fast (because of the 3), so the period is one-third as long; the period for this graph will be (2/3)π.

Here is the regular graph of cosine:

I need to flip this upside down, so I'll swap the +1 and –1 points on the graph:

...and then I'll fill in the rest of the graph:

Copyright © Elizabeth Stapel 2010-2011 All Rights Reserved

And now I need to change the period. Rather than trying to figure out the points for the graph on the regular axis, I'll instead re-number the axis, which is a lot easier. The regular period is from 0 to 2π, but this graph's period goes from 0 to (2π)/3. Then the midpoint of the period is going to be (1/2)(2π)/3 = π/3, and the zeroes will be midway between the peaks and troughs. So I'll erase the x-axis values from the regular graph, and re-number the axis:

Notice how I changed the axis instead of the graph. You'll quickly get pretty good at drawing a regular sine or cosine, but the shifted and transformed graphs can prove difficult. Instead of trying to figure out all of the changes to the graph, just tweak the axis system.

• Graph at least one period of f(θ) = tan(θ) – 1

The regular tangent looks like this:

The graph for tan(θ) – 1 is the same shape, but shifted down by one unit. Rather than try to figure out the points for moving the tangent curve one unit lower, I'll just erase the original horizontal axis and re-draw the axis one unit higher:

Cite this article as: Stapel, Elizabeth. "Graphing Trig Functions: Examples." Purplemath.
Available from http://www.purplemath.com/modules/grphtrig2.htm. Accessed
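The two claims worked through above, a period of 2π/3 for –cos(3x) and a uniform downward shift of 1 for tan(θ) – 1, can be confirmed numerically (a quick sketch, nothing specific to the lesson):

```python
import math

s = lambda x: -math.cos(3 * x)
period = 2 * math.pi / 3

# -cos(3x) repeats every 2*pi/3 (one-third of the usual period) ...
for x in (0.0, 0.7, 1.9):
    assert abs(s(x) - s(x + period)) < 1e-12

# ... and is the upside-down cosine: -1 at x = 0, +1 at mid-period x = pi/3.
assert abs(s(0.0) + 1.0) < 1e-12
assert abs(s(period / 2) - 1.0) < 1e-12

# Shifting tan(theta) down by 1 moves its zero from theta = 0 to theta = pi/4.
f = lambda t: math.tan(t) - 1
assert abs(f(math.pi / 4)) < 1e-12
```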
When are non-quasi-coherent sheaves used?

Non-quasi-coherent sheaves of $\mathcal O_X$-modules on a scheme seem like a wild concept to me; are they actually used for something?

ag.algebraic-geometry sheaf-theory

Canonical flasque resolutions, infinite direct products, extension by zero from a locally closed set (see the discussion of excision early in SGA2), sheaf-Hom (and sheaf-Ext) between quasi-coherent sheaves, topological pullbacks of sheaves (even q-coh. ones) along scheme morphisms,... – BCnrd Nov 2 '10 at 15:08

If $X$ is a scheme defined over a base $S$, and $G$ is a group scheme over $S$, then we get a sheaf on $X$ induced by $G$ (namely the sheaf of $S$-morphisms from $X$ to $G$). This is not in general quasi-coherent. The sheaf induced by $G_m$ in particular occurs a lot in nature, for example $H^1(X, G_m) = Pic(X)$. – Daniel Loughran Nov 2 '10 at 15:17

Dear Daniel: that's not a sheaf of $O_X$-modules in most cases (e.g., not for $\mathbf{G}_m$). – BCnrd Nov 2 '10 at 15:34

General module sheaves appear as soon as you want to consider schemes as a full subcategory of ringed spaces. And this happens, of course, very often, for example when some constructions leave the category of schemes. – Martin Brandenburg Nov 2 '10 at 15:49

@BCnrd: Woops sorry I misread the question. Thanks for pointing that out! – Daniel Loughran Nov 2 '10 at 16:14

1 Answer

One can think of the adeles on a curve (or higher adeles on other spaces) as a sheaf of $\mathcal O$-algebras. That is, consider the sheaf $B(U)=\prod_{x\in U}\mathcal O_x$, where $\mathcal O_x$ is the completion of $\mathcal O$. Then the sheaf $A=B\otimes K$, where $K$ is the sheaf of rational functions, has the adeles as global sections. There is a short exact sequence $\mathcal O\to K\times B\to A$. One can tensor a quasicoherent sheaf with this to obtain a resolution to compute cohomology. Indeed, Weil introduced the adeles (after the earlier ideles) specifically to prove Riemann-Roch. I'm not sure when this was reinterpreted in terms of sheaves, which were only introduced later.
Solving singular second-order initial/boundary value problems in reproducing kernel Hilbert space

In this paper, we present a reproducing kernel method for computing singular second-order initial/boundary value problems (IBVPs). This method can deal with much more general IBVPs than those treated by previous researchers. In the first step of our approach, the analytical solution of the IBVPs is represented in the RKHS which we construct. Then, the analytic approximation is exhibited in this RKHS. Finally, the n-term approximation is proved to converge to the analytical solution. Some numerical examples are displayed to demonstrate the validity and applicability of the present method. The results obtained by using the method indicate that the method is simple and effective.

Mathematics Subject Classification (2000) 35A24, 46E20, 47B32.

1. Introduction

Initial and boundary value problems of ordinary differential equations play an important role in many fields. Various applications of boundary value problems to physical, biological, chemical, and other branches of applied mathematics are well documented in the literature. The main idea of this paper is to present a new algorithm for computing the solutions of singular second-order initial/boundary value problems (IBVPs) of the form

p(x)u''(x) + q(x)u'(x) + r(x)u(x) = F(x, u(x)),
a[1]u(0) + b[1]u'(0) + c[1]u(1) = 0,
a[2]u(1) + b[2]u'(1) + c[2]u'(0) = 0,    (1.1)

where, for x ∈ [0, 1], p ≠ 0 and p(x), q(x), r(x) ∈ C[0, 1]; a[1], b[1], c[1], a[2], b[2], c[2] are real constants which satisfy that a[1]u(0) + b[1]u'(0) + c[1]u(1) and a[2]u(1) + b[2]u'(1) + c[2]u'(0) are linearly independent; and F(x, u) is continuous.

Remark 1.1. Depending on the choice of these constants, the problems reduce to two-point BVPs, to initial value problems, to periodic BVPs, or to anti-periodic BVPs. Such problems have been investigated in many studies. In particular, the existence and uniqueness of the solution of (1.1) have been discussed in [1-5].
In recent years, a large number of special-purpose methods have also been proposed to provide accurate numerical solutions of special forms of (1.1), such as collocation methods [6], finite-element methods [7], Galerkin-wavelet methods [8], the variational iteration method [9], spectral methods [10], finite difference methods [11], etc. On the other hand, reproducing kernel theory has important applications in numerical analysis, differential equations, probability and statistics, machine learning and image processing. Recently, using the reproducing kernel method, Cui and Geng [12-16] have made much effort to solve some special boundary value problems. In the method presented in this paper, some reproducing kernel Hilbert spaces are constructed in the first step. In the second step, the homogeneous IBVPs are dealt with in the RKHS. Finally, an analytic approximation of the solutions of the second-order BVPs is given by the reproducing kernel method under the assumption that the solution to (1.1) is unique.

2. Some RKHS

In this section, we will introduce the RKHS and . Then we will construct a RKHS , in which every function satisfies the boundary conditions of (1.1).

2.1. The RKHS

Inner space is defined as is absolutely continuous real valued functions, u' ∈ L^2[0, 1]}. The inner product in is given by and the norm is denoted by . From [17,18], is a reproducing kernel Hilbert space and the reproducing kernel is
It is easy to know that γ[1], γ[2], γ[3 ]are linearly independent in Ker L. Then from [18,19], it is easy to know one of the inner products of and its corresponding reproducing kernel K[2](t, s). 2.3. The RKHS Inner space is defined as are absolutely continuous real valued functions, u"' ∈ L^2[0, 1], and, a[1 ]u(0) + b[1 ]u'(0) + c[1 ]u(1) = 0, a[2 ]u(1) + b[2]u'(1) + c[2]u'(0) = 0}. It is clear that is the complete subspace of , so is a RKHS. If P, which is the orthogonal projection from to , is found, we can get the reproducing kernel of obviously. Under the assumptions of Section 2, note Theorem 2.1. Under the assumptions above, P is the orthogonal projection from to . That means . At the same time, for any P is self-conjugate. And P is idempotent. So P is the orthogonal projection from to . The proof of the Theorem 2.1 is complete. Now, is a RKHS if endowed the inner product with the inner product below and the corresponding reproducing kernel K[3](t, s) is given in Appendix 4. 3. The reproducing kernel method In this section, the representation of analytical solution of (1.1) is given in the reproducing kernel space . Note Lu = p(x)u"(x) + q(x)u'(x) + r(x)u(x) in (1.1). It is clear that is a bounded linear operator. Put φ[i](x) = K[1](x[i], x), Ψ[i](x) = L*φ[i](x), where L* is the adjoint operator of L. Then Lemma 3.1. Under the assumptions above, if is dense on [0, 1] then is the complete basis . The orthogonal system of can be derived from Gram-Schmidt orthogonalization process of , and Theorem 3.1. If is dense on [0, 1] and the solution of (1.1) is unique, the solution can be expressed in the form Proof. From Lemma 3.1, is the complete system of . Hence we have and the proof is complete. The approximate solution of the (1.1) is If (1.1) is linear, that is F(x, u(x)) = F(x), then the approximate solution of (1.1) can be obtained directly from (3.3). 
Else, the approximate process could be modified into the following form:

Next, the convergence of u[n](x) will be proved.

Lemma 3.2. There exists a constant M satisfying , for all .

Proof. For all x ∈ [0, 1] and , there are

That is,

By Lemma 3.2, it is easy to obtain the following lemma.

Lemma 3.3. If , ||u[n]|| is bounded, x[n] → y (n → ∞) and F(x, u(x)) is continuous, then .

Theorem 3.2. Suppose that ||u[n]|| is bounded in (3.3) and (1.1) has a unique solution. If is dense on [0, 1], then the n-term approximate solution u[n](x) derived from the above method converges to the analytical solution u(x) of (1.1).

Proof. First, we will prove the convergence of u[n](x). From (3.4), we infer that

The orthonormality of yields that

That means ||u[n+1]|| ≥ ||u[n]||. Due to the condition that ||u[n]|| is bounded, ||u[n]|| is convergent and there exists a constant ℓ such that

If m > n, then

In view of (u[m] - u[m-1]) ⊥ (u[m-1] - u[m-2]) ⊥ ··· ⊥ (u[n+1] - u[n]), it follows that

The completeness of shows that u[n] → ū as n → ∞ in the sense of .

Secondly, we will prove that ū is the solution of (1.1). Taking limits in (3.2), we get

If n = 1, then

If n = 2, then

It is clear that

Moreover, it is easy to see by induction that

Since is dense on [0, 1], for all Y ∈ [0, 1], there exists a subsequence such that

It is easy to see that . Let j → ∞; by the continuity of F(x, u(x)) and Lemma 3.3, we have

At the same time, . Clearly, ū satisfies the boundary conditions of (1.1). That is, ū is the solution of (1.1). The proof is complete.

In fact, u[n](x) is just the orthogonal projection of the exact solution ū(x) onto the space .

4. Numerical example

In this section, some examples are studied to demonstrate the validity and applicability of the present method. We compute them and compare the results with the exact solution of each example.

Example 4.1. Consider the following IBVPs:

Where . The exact solution is . Using our method, take a[3] = 1, b[3] = c[3] = 0 and n = 21, 51, N = 5, .
The numerical results are given in Tables 1 and 2.

Table 1. Numerical results for Example 4.1 (n = 21, N = 5)

Table 2. Numerical results for Example 4.1 (n = 51, N = 5)

Example 4.2. Consider the following IBVPs:

where f(x) = π cos(πx) - sin(πx)(x^2 + (-1 + x) * x * sin^2(πx)). The true solution is u(x) = sin(πx) + 1. Using our method, take a[3] = 1, b[3] = c[3] = 0, and N = 5, n = 21, 51, . The numerical results are given in Figures 1, 2, 3, and 4.

Figure 1. The absolute error of Example 4.2 (n = 21, N = 5).
Figure 2. The relative error of Example 4.2 (n = 21, N = 5).
Figure 3. The absolute error of Example 4.2 (n = 51, N = 5).
Figure 4. The relative error of Example 4.2 (n = 51, N = 5).

Er Gao gives the main idea and proves most of the theorems and propositions in the paper. He also takes part in the numerical experiments for the main results. Xinjian Zhang suggests some ideas for the proofs of the main theorems. Songhe Song carries out most of the numerical experiments. All authors read and approved the final manuscript.
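The overall pattern of Section 3 (choose dense points x_i, build basis functions from a kernel, and determine an n-term approximation by collocation) can be sketched numerically. The Gaussian kernel and the toy problem u'' = -π² sin(πx), u(0) = u(1) = 0 used below are illustrative stand-ins, not the paper's W-space reproducing kernels:

```python
import numpy as np

n = 15
x = np.linspace(0.0, 1.0, n)   # dense collocation points in [0, 1]
centers = x                    # one kernel-based basis function per point
s = 0.15                       # kernel width (assumed)

def k(pts, c):
    # Gaussian kernel values k(pts_i, c_j)
    return np.exp(-(pts[:, None] - c[None, :]) ** 2 / (2 * s ** 2))

def k_xx(pts, c):
    # second derivative of the Gaussian kernel with respect to pts
    d = pts[:, None] - c[None, :]
    return (d ** 2 / s ** 4 - 1.0 / s ** 2) * np.exp(-d ** 2 / (2 * s ** 2))

# Collocation: enforce u'' = -pi^2 sin(pi x) at interior points plus the two BCs.
A = np.vstack([k_xx(x[1:-1], centers), k(x[:1], centers), k(x[-1:], centers)])
rhs = np.concatenate([-np.pi ** 2 * np.sin(np.pi * x[1:-1]), [0.0], [0.0]])
coef, *_ = np.linalg.lstsq(A, rhs, rcond=None)

# Evaluate the n-term approximation and compare with the exact solution sin(pi x).
xt = np.linspace(0.0, 1.0, 101)
err = np.max(np.abs(k(xt, centers) @ coef - np.sin(np.pi * xt)))
```

As in Theorem 3.2, refining the point set (larger n) drives the approximation toward the exact solution.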
Ingeniare. Revista chilena de ingeniería, vol. 18 Nº 1, 2010, pp. 44-52. ISSN 0718-3305 (on-line).

César San Martín^1, Edgar Estupiñán^2, Daniel San Martín^3

^1Departamento de Ingeniería Eléctrica. Universidad de La Frontera. Casilla 54-D. Temuco, Chile. E-mail: csmarti@ufro.cl
^2Escuela Universitaria de Ingeniería Mecánica. Universidad de Tarapacá. Casilla 6-D. Arica, Chile. E-mail: eestupin@uta.cl
^3Laboratorio de Vibraciones Mecánicas. Universidad Técnica Federico Santa María. Sede Concepción. Casilla 457. Concepción, Chile.

In this work, an effective methodology to detect early stage faults in rotating machinery is proposed. The methodology is based on the analysis of cyclostationarity, which is inherent to the vibration signals generated by rotating machines. Of particular interest are the second- and higher-order cyclostationary components, since they contain valuable information which can be used for the early detection of faults in rolling bearings and gear systems. The first step of the methodology consists in the separation of the first-order periodicity components from the raw signal, in order to focus the analysis on the residual part of the signal, which contains the second- and higher-order periodicities. Then, the residual signal is filtered and demodulated, using the frequency range of highest importance. Finally, the demodulated residual signal is auto-correlated, obtaining an enhanced signal that may contain clear spectral components related to the presence of a prospective localized fault. The methodology is validated analyzing experimental vibration data for two different cases. The first case is related to the detection of a crack in one of the teeth of a gearbox system, and the second case is related to the detection of pitting in the inner race of a rolling bearing.
The results show that the proposed method for the condition monitoring of rotating machines is a useful tool for the tasks of fault diagnosis, which can complement the analysis made using traditional diagnostic techniques.

Keywords: Cyclostationary analysis, fault diagnosis, vibration analysis, condition monitoring.

INTRODUCTION

Vibration signals generated by rotating machines may be considered as non-stationary processes that present periodic (i.e. cyclic) variations in the time domain in some of their statistics [1], which is the defining characteristic of the type of signals known as cyclostationary signals. A vibratory signal x(t) is said to be nth order cyclostationary with period Τ if its nth order moment exists and is periodic in the time domain with the period Τ. Typical examples of first order periodicity (FOP) vibration signals are generated by rotating machines with misaligned couplings and/or unbalanced rotors, whereas modulated vibratory signals generated by wear mechanisms, friction and impact forces are some examples of second order periodicity (SOP) processes.

In order to analyze FOP signals and to extract the required information for the fault detection tasks, classical spectral analysis is an adequate and practical tool that may be used for most of these cases. However, when SOP signals have to be analyzed (e.g. signals with amplitude and/or frequency modulations), the analysis should be carried out using more sophisticated tools, in order to be able to identify variations in the statistics of the signals containing meaningful information about the system under analysis [2]. In some cases, demodulation techniques may be satisfactorily used to analyze SOP signals, as long as either the resonant zones or the main frequency ranges of the expected faults can be known in advance. However, the efficacy of the demodulation techniques diminishes when the signal contains higher orders of cyclostationarity, as well as random noise components.

The basic idea behind the theory of cyclostationary analysis is to apply an appropriate quadratic transformation to a SOP signal in order to obtain a modified signal of FOP [3]. Then, the modified signal can be analyzed with traditional diagnostic techniques applied to the mechanical components under study. In this framework, the methodology proposed in this study incorporates time-frequency analysis of SOP signals in combination with traditional techniques typically used in fault detection (e.g. spectral analysis, enveloping analysis, etc.). First, the components of FOP are reduced using an adaptive filtering method. In this way, a residual signal containing the SOP components is also obtained. Then, the residual signal is filtered in order to highlight the SOP components of the signal. In this stage, the cutting frequencies of the filter are estimated by using a time-frequency transformation based on the cyclic auto-correlation function. Finally, the filtered residual signal is demodulated and auto-correlated, obtaining a resultant signal with useful information for fault diagnostic purposes. In summary, from the time-frequency analysis, appropriate filters are configured; then, using an enveloping detector, the residual signal is demodulated; and finally, in order to improve the signal-to-noise ratio (SNR), a matched filter based on the autocorrelation function is used.

Figure 1. a) Picture of an induced fault (of 10 mm length) on the tooth surface of the pinion of a single-stage spur gear transmission. b) Picture of an induced localized fault in the inner race of a radial ball bearing.

This procedure has been tested with two cases using experimental vibration data from two test rigs used to simulate faults in gears and rolling bearings respectively (Figure 1). The results show that the proposed methodology is an effective tool for the early detection and diagnosis of faults in rotating machinery. This work is organized as follows: firstly, the principles of cyclostationarity and the basics of the proposed method are presented and validated using experimental vibration data of a faulty gearbox and a faulty rolling bearing; finally, the main conclusions are drawn. A detailed tutorial on the principles of cyclostationarity, focused on mechanical applications, is given in [4]. However, for the completeness of this work, and to address the use of cyclostationarity towards the proposed method, the basics of cyclostationarity are included here.
The basic idea behind the theory of cyclostationary analysis is to apply an appropriate quadratic transformation to a SOP signal in order to obtain a modified signal of FOP [3]. Then, the modified signal can be analyzed with traditional diagnostic techniques applied to the mechanical components under study. In this framework, the methodology proposed in this study incorporates time-frequency analysis of SOP signals in combination with traditional techniques typically used in fault detection (e.g. spectral analysis, envelope analysis, etc.). First, the FOP components are reduced using an adaptive filtering method. In this way, a residual signal containing the SOP components is obtained. Then, the residual signal is filtered in order to highlight its SOP components. In this stage, the cut-off frequencies of the filter are estimated using a time-frequency transformation based on the cyclic auto-correlation function. Finally, the filtered residual signal is demodulated and auto-correlated, obtaining a resultant signal with useful information for fault diagnosis purposes. In summary, from the time-frequency analysis appropriate filters are configured; then, using an envelope detector, the residual signal is demodulated; and finally, in order to improve the signal-to-noise ratio (SNR), a matched filter based on the autocorrelation function is used. Figure 1. a) Picture of an induced fault (of 10 mm length) on the tooth surface of the pinion of a single-stage spur gear transmission. b) Picture of an induced localized fault in the inner race of a radial ball bearing. This procedure has been tested in two cases using experimental vibration data from two test rigs used to simulate faults in gears and rolling bearings, respectively (Figure 1). The results show that the proposed methodology is an effective tool for the early detection and diagnosis of faults in rotating machinery.
This work is organized as follows: first, the principles of cyclostationarity and the basics of the proposed method are presented and validated using experimental vibration data from a faulty gearbox and a faulty rolling bearing; finally, the main conclusions are drawn. A well-detailed tutorial on the principles of cyclostationarity, focused on mechanical applications, is given in [4]. However, for the completeness of this work, and to address the use of cyclostationarity in the proposed method, the basics of cyclostationarity are included here. A non-stationary signal can be considered as cyclostationary with FOP and SOP components only if its moments of first and second order are periodic, in other words, if the moments satisfy equations (1) and (2) [5]: where E is the expectation operator and T is the period or cycle of the signal x(t). A time-varying auto-correlation function can be associated with the signal x(t), which is given by: where τ is the time lag and β satisfies: In general, it is possible to assume that a vibration signal x(t) is composed of FOP, SOP and random noise, as shown in equation (4). Considering that the focus of the analysis is on the SOP components of the signal x(t), the first stage of the procedure consists of using an LMS (least mean squares) adaptive filter [6] in order to reduce the FOP components of the signal to be analyzed. In this way, the FOP components are separated from the raw signal and a residual signal (i.e. the error signal) containing the SOP components and random noise is obtained. If a typical vibration signal containing amplitude modulations (i.e. SOP components) is assumed, the residual signal can be expressed as in equation (5), where i = 1, 2, ..., N indexes the modulating signals, f_ci and f_0 are the modulating and modulated signals respectively, b is a constant and n(t) is white noise with unknown variance.
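To illustrate the FOP-separation stage, the sketch below implements an adaptive line enhancer in Python. Note that this is an assumption-laden sketch, not the authors' code: the paper uses a plain 500-coefficient LMS filter, whereas here a normalized-LMS update and smaller illustrative parameters are used for numerical stability.

```python
import numpy as np

def nlms_ale(x, n_taps=64, mu=0.5, delay=1):
    """Adaptive line enhancer (sketch): a normalized-LMS filter predicts
    the periodic (FOP) part of x from its own delayed samples; the
    prediction error is the residual (SOP components plus noise)."""
    w = np.zeros(n_taps)
    periodic = np.zeros_like(x)
    residual = np.zeros_like(x)
    for n in range(n_taps + delay, len(x)):
        u = x[n - delay - n_taps:n - delay][::-1]  # delayed tap vector
        y = w @ u                                  # predicted FOP part
        e = x[n] - y                               # unpredictable residual
        w += mu * e * u / (1e-8 + u @ u)           # normalized LMS update
        periodic[n] = y
        residual[n] = e
    return periodic, residual
```

Because a sinusoid is predictable from its own past while white noise is not, the filter output converges to the FOP component and the error signal to the SOP-plus-noise residual.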
Since the main interest here is to extract the information of the modulating signals from the signal that includes the SOP components (x_SOP), a simple demodulator (i.e. a low-pass filter with cut-off frequency f_0) can be used. In order to obtain a good estimate of the cut-off frequency f_0, a time-frequency distribution of the Cohen class [7] is used, which is given by equation (6): where Φ is an arbitrary function (kernel) and r_x corresponds to the instantaneous ACF given by equation (3). The type of time-frequency distribution is determined by the selected kernel Φ. For instance, if Φ is equal to 1, the Wigner-Ville distribution is obtained, which is given by (7), whereas if Φ is a cone-shaped kernel, which helps to reduce the cross terms in frequency, the Zhao-Atlas-Marks (ZAM) distribution is obtained [8], which is given by (8). Finally, in order to enhance the SOP signal, the autocorrelation function of the filtered signal is computed. The autocorrelation function of a signal x_c(t) is given by equation (9). In summary, the main steps of the proposed method are listed below:
- LMS adaptive filtering: separates the FOP components from the measured vibration signal, producing a residual signal with the SOP components.
- Time-frequency transformation: the estimation of f_0 (required for the subsequent digital filtering) is done using the ZAM distribution.
- Digital filtering: the residual signal is filtered using the cut-off frequencies identified in the previous stage.
- Noise reduction: the filtered residual signal, and especially its SOP components, are enhanced using the autocorrelation function (matched filter).
- Detection and diagnosis of faults: the spectrum of the enhanced residual signal is analyzed, looking for spectral components that might be related to the presence of a fault.
In this section, the proposed method is validated by analyzing experimental vibration data from two different cases (see Figures 2 and 3).
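The filtering, demodulation and noise-reduction steps of the method can be sketched as a single pipeline. This is a minimal FFT-based approximation with hypothetical function and parameter names: ideal spectral masks stand in for the paper's FIR band-pass and IIR demodulation filters.

```python
import numpy as np

def sop_enhance(residual, fs, band, lp_cut=200.0):
    """Sketch of steps 3-4: band-pass the residual around the excited
    resonance, envelope-demodulate (rectify, remove mean, low-pass),
    then autocorrelate to boost periodic SOP components over noise."""
    freqs = np.fft.rfftfreq(residual.size, d=1.0 / fs)

    # ideal band-pass filter around the resonant zone
    spec = np.fft.rfft(residual)
    spec[(freqs < band[0]) | (freqs > band[1])] = 0.0
    filtered = np.fft.irfft(spec, n=residual.size)

    # envelope demodulation: rectify, remove DC, ideal low-pass
    env = np.abs(filtered)
    env -= env.mean()
    espec = np.fft.rfft(env)
    espec[freqs > lp_cut] = 0.0
    env = np.fft.irfft(espec, n=env.size)

    # matched filtering via the (normalized) autocorrelation
    acf = np.correlate(env, env, mode="full")[env.size - 1:]
    return env, acf / acf[0]
```

On an amplitude-modulated carrier, the spectrum of the returned envelope shows a dominant line at the modulating frequency, which is exactly the fault signature the method looks for.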
The first case corresponds to the detection of a fault in a one-stage gearbox, and the second case corresponds to the detection of a localized pitting fault in a rolling bearing. Case I: A faulty one-stage gearbox. In this case, the experimental data are taken from a test rig, which consists of an asynchronous electrical motor controlled by a frequency converter and coupled to a single-stage spur gear transmission through a flexible coupling. The pinion has 17 teeth and the wheel has 28 teeth. The system is under a constant load, which is supplied by a DC generator, as illustrated in the sketch of Figure 4. The rotational frequencies and mesh frequencies of the gearbox are listed in Table 1. Table 1. Main characteristic frequencies of the gearbox. When a local fault of the cracked-tooth type occurs in one of the gears of the system, the vibration signature is expected to contain amplitude modulations of the fundamental gear mesh frequency and its harmonics, with a modulating frequency equal to the rotational frequency of the faulty gear [9]. Therefore, in this particular case, if spectral components at a frequency of 17 Hz and its multiples are identified in the spectrum of the vibration signal, they can be associated with a fault in the pinion. In contrast, if the spectral components are at the frequency of 10.32 Hz and its multiples, the fault can be associated with the wheel. The following analysis is done for vibration data taken from the test rig with a faulty pinion. The vibration data were acquired from two piezoelectric accelerometers mounted on supports A and B, shown in Figure 4, using a data acquisition system configured with a sampling frequency of 30 kHz. The time waveform and frequency spectrum of the acquired vibration signal are shown in Figures 2a and 2b, respectively. The main mesh frequency and some of its harmonics can be identified from the spectrum of Figure 2b. Figure 2.
Time waveform and spectrum of the raw vibration signal - case I. Figure 3. Time waveform and spectrum of the raw vibration signal - case II. Figure 4. Sketch of the test rig of case I - a one-stage gearbox with a fault in the pinion. In order to separate the FOP components, an LMS adaptive filter with 500 coefficients and a learning rate of 0.01 ms was used. The time waveform and spectrum of the filtered signal, which contains the FOP cyclostationary components, are shown in Figure 5. In the same manner, the time waveform and spectrum of the residual signal, which contains the SOP cyclostationary components and signal noise, are shown in Figure 6. From the spectra of Figures 5 and 6, it can be observed that the FOP components are predominant when compared to the other components, which is generally expected [1]. Analyzing the spectrum of Figure 6, two possible resonant zones of the system can be identified, in the frequency ranges between 1200 and 2600 Hz and between 2800 and 5800 Hz, approximately. To complement the analysis, and before filtering the signal containing the SOP components, the ZAM transform was applied to the residual signal, obtaining the time-frequency distribution shown in Figure 7. Although the frequency resolution is somewhat coarse (Δf ≈ 208 Hz), it is enough to visualize the variation in time of the two main resonant zones. It can be observed that the resonant zones are excited approximately every 0.003 s, which is close to the period of the fundamental mesh frequency (1/289 Hz ≈ 0.0035 s). In Figure 7, it can be seen that the impulsive variations are more clearly defined in the second frequency range (2800-5800 Hz); therefore, these frequencies are selected as the cut-off frequencies for the subsequent filtering stage of the residual signal. In order to filter the residual signal, a finite impulse response (FIR) filter was used.
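Stepping back briefly: the characteristic frequencies used above (Table 1) follow directly from the gear geometry, since the mesh frequency is the tooth count times the shaft speed. A minimal check, with an illustrative helper name:

```python
def gear_frequencies(f_shaft_pinion, teeth_pinion, teeth_wheel):
    """Characteristic frequencies of a single-stage gearbox: the mesh
    frequency is teeth x shaft speed, and a cracked tooth modulates it
    at the rotational frequency of the faulty gear."""
    f_mesh = f_shaft_pinion * teeth_pinion   # gear mesh frequency
    f_shaft_wheel = f_mesh / teeth_wheel     # wheel rotation rate
    return f_mesh, f_shaft_wheel

# Values for the test rig of case I (17/28 teeth, pinion at 17 Hz):
f_mesh, f_wheel = gear_frequencies(17.0, 17, 28)  # 289 Hz and ~10.32 Hz
```

These reproduce the 289 Hz mesh frequency and the 10.32 Hz wheel rotation frequency quoted in the text.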
The implemented band-pass filter has 400 coefficients, with a low cut-off frequency of 2500 Hz and a high cut-off frequency of 6500 Hz. The time waveform and spectrum of the filtered signal are shown in Figure 8. This signal has the typical pattern of an amplitude-modulated signal found in a mechanical system. The result of applying a demodulation process to the filtered signal is shown in Figure 9. Figure 5. Time waveform and spectrum of the filtered signal (FOP components) - case I. The demodulation technique applied is as follows: first, a high-pass filter with a cut-off frequency of 2800 Hz is applied; second, the signal is rectified and the mean value of the signal is subtracted; and third, a low-pass filter with a cut-off frequency of 200 Hz is applied. Both filters used for the demodulation are infinite impulse response (IIR) filters with 5 coefficients. From Figure 9, a fault in the pinion can be confirmed, since clear spectral components at 17 Hz and its first harmonics are present in the spectrum. Additionally, in order to enhance the main components of interest in the residual signal, the auto-correlation function can be used; the result is shown in Figure 10. This last step of the methodology could be avoided in cases such as this one, where the spectral components were already identified from the filtered and demodulated residual signal (Figure 9); however, in other cases where the vibration signals contain higher noise levels, the auto-correlation function is very useful to clean the signal and should therefore be included in the analysis. Case II: A faulty rolling bearing. In this case, the method is applied to experimental vibration data taken from a test rig with an incipient localized fault in the inner race of one of the bearings that support the shaft. The test rig consists of an asynchronous electrical motor controlled by a frequency converter, which drives a rotor shaft supported by two radial ball bearings.
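As an aside on the noise-reduction step used above: the SNR benefit of the autocorrelation can be seen in isolation with a small sketch. The unbiased normalization below is an assumption, since the paper does not specify which estimator it uses.

```python
import numpy as np

def autocorr_enhance(x):
    """Unbiased sample autocorrelation (sketch): periodic structure in x
    survives at every lag, while uncorrelated noise concentrates at lag
    zero, which raises the SNR of fault-related spectral components."""
    n = x.size
    x = x - x.mean()
    r = np.correlate(x, x, mode="full")[n - 1:]
    return r / np.arange(n, 0, -1)  # divide by the number of products per lag
```

For a noisy sinusoid, the estimate stays near +0.5 at lags equal to whole periods and near -0.5 at half periods, while the noise contributes essentially only at lag zero.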
A schematic drawing of the test rig is shown in Figure 11. A static load can be applied indirectly to the bearings by using a tensioner-pulley system mounted at the centre of the shaft. The vibration data were acquired using piezoelectric accelerometers mounted on supports A and B, and a data acquisition system with a sampling frequency of 30 kHz. The vibration data analyzed in this case correspond to a faulty bearing located at the motor side (bearing A). The main bearing fault frequencies for the bearing under study are listed in Table 2. Table 2. Main fault frequencies of the radial ball bearing. Figure 6. Time waveform and spectrum of the residual signal (SOP components and noise) - case I. Figure 7. ZAM time-frequency distribution of the residual signal - case I. Several studies have shown that, similarly to the case of faulty gears, localized faults in bearings generate spectral sidebands around the resonant frequencies, which are related to the source frequency of the fault [10]. However, when a fault is located in the inner race, it is particularly challenging to detect it at an early stage, due to the low amplitude of the spectral vibration components related to this fault (BPFI), which may be hidden by the background noise of the signal and by the FOP cyclostationary components. The time waveform and frequency spectrum of the vibration signal taken from the accelerometer mounted on bearing A are shown in Figures 3a and 3b, respectively. From the time waveform, some impulsive events can be identified, which seem to be modulated and periodic; however, it is not possible, either from the waveform or the spectrum, to identify precisely their periodicity and/or frequency of repetition, which should be equal to the BPFI frequency listed in Table 2. Figure 8. Residual signal: band-pass filtered (2800-6500 Hz) - case I. Figure 9. Residual signal: filtered and demodulated - case I. Figure 10.
Residual signal: filtered, demodulated and auto-correlated - case I. Figure 11. Sketch of the test rig of case II - bearing with a fault in the inner race. Figure 12. Time waveform and spectrum of the filtered signal (FOP components) - case II. Figure 13. Time waveform and spectrum of the residual signal (SOP components and noise) - case II. Continuing with the application of the proposed method, the FOP components were separated from the raw signal using an adaptive filtering scheme with the same filter parameters used in case I. The time waveform and spectrum of the filtered signal, which contains the FOP cyclostationary components, are shown in Figure 12, and the time waveform and spectrum of the residual signal, which contains the SOP cyclostationary components and signal noise, are shown in Figure 13. In contrast to the results obtained in case I for the filtered and residual signals (see Figures 5 and 6), in this case the SOP cyclostationary components are predominant when compared to the FOP components of the signal. This behavior can be due to several factors: the modulation of the load zone over the localized fault in the inner race (i.e. the inner race is rotating at the rotational shaft speed), slip motion between the rolling elements and the races, and random vibration components generated by friction mechanisms (e.g. the friction in the tensioner-pulley system). In order to identify an appropriate frequency range for the subsequent filtering stage, the ZAM transform was applied to the residual signal, obtaining the time-frequency distribution shown in Figure 14, computed with Δf ≈ 110 Hz. In this figure, the presence of short-duration events can be identified in the frequency range between 800 and 2000 Hz, which may involve resonant frequencies of the bearing races. Therefore, this frequency range is selected for the configuration of the band-pass filter used to filter the residual signal. Figure 14.
ZAM time-frequency distribution of the residual signal - case II. Figure 15. Residual signal: band-pass filtered (800-2000 Hz) - case II. The time waveform and spectrum of the filtered signal, using a FIR filter with 400 coefficients, are shown in Figure 15. Although the impulsive events are noticeable in the time waveform of the filtered signal, it is still not possible to determine their periodicity clearly. However, when the filtered signal is demodulated, the fault frequency at BPFI (44 Hz) can be precisely identified from the spectrum, as shown in Figure 16. Figure 16. Residual signal: filtered and demodulated - case II. Finally, in order to reconfirm the results obtained, the auto-correlation function is applied to the demodulated signal to further enhance the main components of interest, as can be seen in the results shown in Figure 17. In this way, the diagnosis of the fault is confirmed, and it is very precise, since the frequency of the main spectral component found is very close to the theoretical value of the BPFI frequency. The results obtained from the analysis of these two cases show the effectiveness of the proposed methodology when applied to the detection and diagnosis of localized faults, particularly in gear systems and rolling bearings. In this work, a practical procedure based on the cyclostationary analysis of vibration signals has been proposed, which can be used for the early detection of localized faults in mechanical components such as gears and bearings. Vibration signals can be assumed to be a combination of FOP and SOP components. Figure 17. Residual signal: filtered, demodulated and auto-correlated - case II. FOP signals are generated by rotating machines with misaligned couplings and/or unbalanced rotors, whereas SOP components correspond to the modulated vibratory signals generated by wear mechanisms, friction and impact forces.
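For reference, the theoretical BPFI used as the target in case II follows from the standard bearing kinematics. The sketch below uses hypothetical geometry values, since the paper reports only the resulting fault frequencies (e.g. BPFI = 44 Hz) and not the bearing dimensions.

```python
import math

def bearing_fault_frequencies(f_shaft, n_balls, d_ball, d_pitch, contact_deg=0.0):
    """Standard kinematic fault frequencies of a rolling bearing with a
    rotating inner race and a stationary outer race (textbook formulas;
    the geometry arguments here are illustrative, not from the paper)."""
    k = (d_ball / d_pitch) * math.cos(math.radians(contact_deg))
    bpfi = 0.5 * n_balls * f_shaft * (1.0 + k)   # ball-pass freq., inner race
    bpfo = 0.5 * n_balls * f_shaft * (1.0 - k)   # ball-pass freq., outer race
    ftf = 0.5 * f_shaft * (1.0 - k)              # cage (train) frequency
    bsf = (d_pitch / (2.0 * d_ball)) * f_shaft * (1.0 - k * k)  # ball spin
    return {"BPFI": bpfi, "BPFO": bpfo, "FTF": ftf, "BSF": bsf}
```

A useful sanity check on any such implementation is the identity BPFI + BPFO = n_balls × f_shaft, which holds for any geometry.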
To analyze FOP signals and to extract the information required for fault detection tasks, classical spectral analysis is an appropriate tool. For SOP signals, the analysis should be carried out using more sophisticated tools. In this work, a procedure to analyze SOP signals is presented and tested in two practical cases. From the time-frequency analysis, appropriate filters are configured in order to obtain the SOP signal; then, using an envelope detector, the useful signal is obtained. To improve the signal-to-noise ratio (SNR), a matched filter based on the autocorrelation function is used. The proposed method has been validated through the analysis of vibration data taken from two different laboratory test rigs. In order to extend these results to industrial applications, future work includes the analysis of vibration data taken from industrial rotating machinery, such as multi-stage gearboxes and rotors supported on rolling bearings. Additionally, the procedure can be adapted and modified to be applied in cases where the early detection of localized faults is even more challenging, for instance, the detection of faults in the rolling elements of bearings and, in general, in mechanical systems with variable load and speed conditions. Finally, when more than one fault occurs, the time-frequency analysis may detect several faults simultaneously; in that case, several SOP components are obtained. Nevertheless, this aspect is part of future research work. This paper was partially supported by Universidad de La Frontera, DIUFRO DI08-0048.
[1] J. Antoni, F. Bonnardot, A. Raad and M. El Badaoui. "Cyclostationary modelling of rotating machine vibration signals". Mechanical Systems and Signal Processing. Vol. 18 Nº 6, pp. 1285-1314. 2004.
[2] A.C. McCormick and A.K. Nandy. "Cyclostationarity in Rotating Machine Vibrations". Mechanical Systems and Signal Processing. Vol. 12, Issue 2, pp. 225-242. March 1998.
[3] F. Bonnardot, R.B. Randall and F. Guillet. "Extraction of second-order cyclostationarity sources - Application to vibration analysis". Mechanical Systems and Signal Processing. Vol. 19, Issue 6, pp. 1230-1244. 2005.
[4] Jérôme Antoni. "Cyclostationarity by examples". Mechanical Systems and Signal Processing. Vol. 23, Issue 4, pp. 987-1036. May 2009.
[5] C. Capdessus, M. Sidahmed and J.L. Lacoume. "Cyclostationary Processes: Applications in Gear Faults Early Diagnosis". Mechanical Systems and Signal Processing. Vol. 14, Issue 3, pp. 371-385. 2000.
[6] S.K. Lee and P.R. White. "The enhancement of impulsive noise and vibration signals for fault detection in rotating and reciprocating machinery". Journal of Sound and Vibration. Vol. 217 Nº 3, pp. 485-505. October 1998.
[7] A.D. Poularikas. "The Transforms and Applications Handbook". Second Edition. Boca Raton, Florida, USA. 2000.
[8] Yunxin Zhao, Les E. Atlas and Robert J. Marks. "The Use of Cone-Shaped Kernels for Generalized Time-Frequency Representations of Nonstationary Signals". IEEE Transactions on Acoustics, Speech, and Signal Processing. Vol. 38, pp. 1084-1091. 1990.
[9] R.B. Randall. "A new method of modeling gear faults". Journal of Mechanical Design. Vol. 104, pp. 259-267. 1982.
[10] Y.T. Su and S.J. Lin. "On initial fault detection of a tapered roller bearing: frequency domain analysis". Journal of Sound and Vibration. Vol. 155, pp. 75-84. 1992.
Finding Height
When a ball is thrown, its height in feet h after t seconds is given by the equation h = vt - 16t², where v is the initial upward velocity in feet per second. If v = 30 feet per second, find all values of t for which h = 13 feet. Do not round any intermediate steps; round the answer to 2 decimal places. (If there is more than one answer, enter additional answers with the "or" icon.)
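Setting h = 13 reduces the problem to the quadratic 16t² - 30t + 13 = 0, which the quadratic formula solves directly. A quick check (the helper name below is just illustrative):

```python
import math

def height_times(v, h):
    """Solve h = v*t - 16*t**2 for t (i.e. 16*t**2 - v*t + h = 0),
    returning, in ascending order, the times at which the ball passes
    the height h on the way up and on the way down."""
    disc = v * v - 4 * 16 * h
    if disc < 0:
        return []                       # the ball never reaches height h
    root = math.sqrt(disc)
    return sorted([(v - root) / 32.0, (v + root) / 32.0])

times = height_times(30, 13)
print([round(t, 2) for t in times])     # -> [0.68, 1.2], i.e. t = 0.68 s or 1.20 s
```

So the ball is at 13 feet at approximately t = 0.68 s or t = 1.20 s; note the maximum height reached is 30²/64 ≈ 14.06 ft, so heights above that give no solution.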
Shapiro, Bruce E. - Biological Network Modeling Center, California Institute of Technology
• Discrete Mathematics with Applications, 3rd Edition Susanna S. Epp
• ELEMENTS BOOK 1. Definitions
• Systems of Linear Equations in two variables (4.1)
• Math 326 -Exam 1 -14 Sept 2010 1. Rewrite the following statements formally.
• WARM UP EXERCISE The ozone level (in parts per billion) on a
• Math 326 Lecture Notes Bruce E. Shapiro
• \alpha \theta o o \tau \beta \vartheta \pi \upsilon
• Math 326 -Exam 1 -14 Sept 2010 Rules of the Exam
• Introduction to Functions Section 2.1
• 2005 BE Shapiro [Math 351 Spring 2005 Revised 3/3/05] Page 6.1 6. Method of Successive Approximations
• Annals of Mathematics A Set of Postulates for Plane Geometry, Based on Scale and Protractor
• 2005 BE Shapiro [Math 351 Spring 2005 Revised 7-May-2005] Page 11.1 11. Series solutions at ordinary points
• Math 326 -Exam 2 -12 Oct 2010 Rules of the Exam
• Math 103L: Exercise on Functions (Section 2.1) These problems are a sample of the kinds of problems that may appear on the
• Discrete Mathematics with Applications, 3rd Edition Susanna S. Epp
• Math 103L: Continuity (Section 10.2 and 10.4) These problems are a sample of the kinds of problems that may appear on the
• EUCLID'S ELEMENTS OF GEOMETRY The Greek text of J.L. Heiberg (1883-1885)
• 2005 BE Shapiro [Math 351 Spring 2005 Revised 16 Feb 2005] Page 3.1 3. Slopes and Flows
• by Wikibooks contributors Created on Wikibooks,
• The Computable Differential Equation
• Limits of rational functions at x Example: f(x) =
• Geometry Mathematics Content Standards Mathematics
• Discrete Mathematics with Applications, 3rd Edition Susanna S. Epp
• Multivariate Calculus in 25 Easy Lectures
• Developmental Simulations with Cellerator Bruce E. Shapiro* and Eric D. Mjolsness
• Warm-up: Price Demand equation Several companies make a 37 inch, Plasma HDTV. Right now, if
• Lecture Notes in Differential Equations
• Metric Postulates for Plane Geometry Author(s): Saunders MacLane
• Math 462 -Syllabus1 "Advanced Linear Algebra"
• Group Project Due 7 May 2009
• Discrete Mathematics with Applications, 3rd Edition Susanna S. Epp
• J. theor. Biol. (1998) 194, 551-559 Article No. jt980774
• MATHSBML AND SYSTEMS BIOLOGY SIMULATIONS BRUCE E. SHAPIRO(1)
• Depolarization Interstitial
• Journal of Computational Neuroscience 10, 99-120, 2001 © 2001 Kluwer Academic Publishers. Manufactured in The Netherlands.
• UNIVERSITY OF CALIFORNIA Los Angeles
• Introducing LATEX Bruce E Shapiro
• In CONGRESS, July 4, 1776. The unanimous Declaration of the thirteen united States of America,
• LATEX2 for authors © Copyright 1995-2005, LATEX3 Project Team.
• Foundations of Geometry1 Math 370 -Spring 2009
• Project Gutenberg's The Foundations of Geometry, by David Hilbert
• Metric Postulates for Plane Geometry Author(s): Saunders MacLane
• Exploring Advanced Euclidean with Geometer's Sketchpad
• Math 280 -Extra Credit Project Spring 2011 -Class 15089
• Lecture Notes in Differential Equations
• Using Computer Algebra for Developmental Modeling: Introduction to Signal Transduction, Cellerator, and the Computable Plant
• Euclidean Constructions This section outlines the basic Euclidean con-
• Foundations of Geometry Math 370 -Spring 2009 -Section 14974
• Introduction to Judith Hohenwarter and Markus Hohenwarter
• Project on the Origins of Geometry http://beshapiro.com/math370/origins-project.html
• 2005 BE Shapiro [Math 351 Spring 2005 Revised 6 Feb 2005] Page 1.1 1. Differential Equations
• 2005 BE Shapiro [Math 351 Spring 2005 Revised 25 Mar 2005] Page 2.1 2. Uniqueness
• 2005 BE Shapiro [Math 351 Spring 2005 Corrected 25 March 2005] Page 4.1 4. Methods for Solving First Order Equations
• 2005 BE Shapiro [Math 351 Spring 2005 Revised 24 April 2005] Page 7.1 7. Approximate and Numerical Solutions
• 2005 BE Shapiro [Math 351 Spring 2005 Revised 5 May 2005] Page 8.1 8. Linear Equations with Constant Coefficients
• 2005 BE Shapiro [Math 351 Spring 2005 Revised 18 May 2005] Page 12.1 12. Series Solutions at Regular Singularities
• 2005 BE Shapiro Page 14.1 14. Critical Points of Autonomous Linear Systems
• 2005 BE Shapiro Page A.1 A. Summary of Methods and Results
• Discrete Mathematics with Applications, 3rd Edition Susanna S. Epp
• Discrete Mathematics with Applications, 3rd Edition Susanna S. Epp
• Discrete Mathematics with Applications, 3rd Edition Susanna S. Epp
• Discrete Mathematics with Applications, 3rd Edition Susanna S. Epp
• Discrete Mathematics with Applications, 3rd Edition Susanna S. Epp
• Discrete Mathematics with Applications, 3rd Edition Susanna S. Epp
• Discrete Mathematics with Applications, 3rd Edition Susanna S. Epp
• Discrete Mathematics with Applications, 3rd Edition Susanna S. Epp
• Math 326 -Exam 3 -18 Nov 2010 Rules of the Exam
• 1. (15 points, 3 points each) Let A = {4, 5, 6}, B = {3, 4}, C = {7, 8, 9}. Find (a) A ∪ C = {4, 5, 6, 7, 8, 9}
• Math 103L: Exercise on Linear Equations (Section 1.1) These problems are a sample of the kinds of problems that may appear on the
• Math 103L: Graphing (Section 2.2) These problems are a sample of the kinds of problems that may appear on the
• Math 103L: Quadratics and general polynomials (Section 2.3) These problems are a sample of the kinds of problems that may appear on the
• Math 103L: Section 12.6. Exercise on Cost, Revenue, Profit and maximal profit. 1. A company manufactures and sells x things per week. The weekly price demand and
• Math 103L: Section 11.7. Exercise on elasticity, cost, revenue, profit, and maximal profit. The first few problems are the main exercise. The last 3 are just for practice.
• Math 103L: Solving Systems of Equations (Sections 4.1-3) Solve using substitution or elimination.
• Math 103L: Matrix Operations (Section 4.4) These problems are a sample of the kinds of problems that may appear on the final
• Math 103 Section 1.1: Linear Equations
• WARM UP EXERCISE Please take derivatives of the
• WARM UP EXERCSE A cable company has found that the total number
• 4.2 Systems of Linear equations and Augmented Matrices
• 4.3 Gauss Jordan Elimination Any linear system must have exactly one solution, no solution,
• 4.4 Matrices: Basic Operations Addition and subtraction of matrices
• 10.4 The Derivative The student will learn about
• California State University Northridge Lecture Notes for Math 481A
• The Computable Differential Equation
• 42 GRADES EIGHT THROUGH TWELVE--GEOMETRY The geometry skills and concepts developed in this discipline are useful to all
• Homework Set # 2 Ima Good Student
• 2005 BE Shapiro [Math 351 Spring 2005 revised 4 May 2005] Page 10.1 10. Linear Equations with Variable Coefficients
• Discrete Mathematics with Applications, 3rd Edition Susanna S. Epp
• Errata in Version 10.2 Last revised March 20, 2011
• Graphical Method for Force Analysis: Macromolecular Mechanics With Atomic Force Microscopy
• Math 326 -Exam 2 Solutions -12 Oct 2010 1. (a) (2 points) State precisely but concisely what it means when a|b.
• 2005 BE Shapiro Page 9.1 9. Higher Order Equations with Constant Coefficients
• 4.5 Inverse of a Square In this section, we will learn how to find an
• Mathematics Framework for California
• Math 103 Section 1.2: Linear Equations and
• Math 150B Spring 2011 Independent Study on Differential Equation
• WARM UP EXERCSE Roots, zeros, and x-intercepts.
• 2005 BE Shapiro [Math 351 Spring 2005 Revised 4 May 2005] Page 13.1 13. Linear Systems
• Second Order Linear Equations1 Linear Equation: General Form
• Adopted by the of Education
• "Advanced Linear Algebra" Math 462 -Fall 2009 -Section 15224
• High School Instructional Material Survey Thursday, October 30, 2008
• 2005 BE Shapiro [Math 351 Spring 2005 revised 3/25/05] Page 5.1 5. First Order Linear Equations
• Math 150A Exam 2 Answers Your Name Here Section 15608 2 October 2009
• Worksheet 6 For each function, (a) Find all aysmptotes; (b) Identify the critical points; (c) Identify possible inflection
• Exam 3 Solutions 1. Find the linearization of f(x) = 8 5x +
• Math 150A, Fall 2009 LectureNotes
• 1 Presented at the AAS/GSFC 13th International Symposium on Space Flight Mechanics
• "Advanced Linear Algebra" Math 462 -Fall 2009 -Section 15224
• Math 103L: Interest (Sections 3.1-2) These problems are a sample of the kinds of problems that may appear on the
• Grade-Level Considerations
• Note: This lesson uses pre-created tools. You don't have to do that, although you can if you want to. It is not necessary to incorporate technology into every lesson, and for some lessons it might not be appropriate.
• Math 462 Notes Last Revised: December 5, 2009
• 2002 Jet Propulsion Laboratory, California Institute of Technology. All rights reserved. Do not copy or distribution without written permission
• Copyright 2002 by the California Commission on Teacher Credentialing Permission is granted to make copies of this document for noncommercial use by educators.
• WARM UP EXERCSE A company makes "Notebook"
• Introducing LATEX Bruce E Shapiro
• Mathematics Introduction to Grades Eight Through Twelve
• Math 150A Exam 4 Answers 1. Find the value of x for which the slope of the curve y = 1 + 40x3
• Lecture Notes in Differential Equations
• Haskell Cheat Sheet This cheat sheet lays out the fundamental ele-
• Lagrange Interpolation Suppose we know the values of some function f(x) at n + 1 distinct grid points
• Error Analysis for Iterative Definition 12.1 (Order of Convergence). We say that a sequence pn converges
• Linear Systems In this section we will study the solution of a linear system of n equations with n
• Hermite Interpolation One of the problems with polynomial interpolation is that although it fits the points
• Theorems About Derivatives Definition 4.1 (Derivative). The derivative is given by either of the following two
• Fixed and Floating Point The two most common representations of numbers in computers are
• Roots and Bisection The first numerical problem we will face is root finding: given a function f(x), find a
• Synthetic Division and Horner's Definition 14.1 (Polynomial). Let a0, . . . , an be arbitrary constants. Then any
• Numerical Differentiation Recall the definition of a derivative
• Muller's Method Muller's method is based on the idea that if a straight line is good, then a parabola
• Newton Interpolation Newton's method for interpolation is derived by seeking a polynomial of the form
• Numerical Integration The simplest method for numerical integration is a direct implementation of Riemann
• Number Representation Number representations in computers are limited because they only store a finite
• Fixed Point Iteration Anyone who has every played with their calculator by typing in a number and then
• Polynomial Least Squares Fit B.E.Shapiro
• Richardson Extrapolation Richardson extrapolation gives a method to "accelerate" the convergence of a se-
• Secant Method The main problem with Newtons method is that we need to know both the function
• Limits and Continuity In the next sections we will make a brief review of some mathematical preliminaries
• Math 150 -Fall 2009 -Section 15608 http://beshapiro.com/math150A
• The Aitken-Steffensen Methods Definition 13.1. Let pn be a sequence. Then the first forward difference is
• Cubic Splines As before we are trying to find an interpolating function for a function that we know
• Newton's Method Suppose we already have an estimate p0 for the root of f(x). If we project the tangent
• We will frequently use iterative processes in our study of numerical analysis. In such a process, one computes a sequence of values, usually in a loop or other similar control
• As software designers we will need to understand the sources of error in a numerical calculation if we want to avoid disasters such as the one we discussed in lesson 1. To
• Mathematics 150A Semester Review Problems
• Getting Started With LATEX Getting Started With LATEX
• Lecture Notes in Differential Equations
• Computers and Operating Systems Computers and Operating Systems
• Applications in Linear Algebra Applications in Linear Algebra
• Getting Started With Linux Getting Started With Linux
• Introducing Bruce E Shapiro
• Math 382: Scientific Computing California State University, Northridge
• Data Representation Data Representation
• Numerical Solution of Differential Equations Initial Value Problem: Euler's Method
• The Onset of Chaos The Onset of Chaos
• "Scientific Computing" Math 382/L -Spring 2010 -Section 16077/16078
• Math 382/382L: Introduction to Python Math 382/382L Scientific Computing
• Using Linux Using Linux
• There Ain't No Free Beer! There Ain't No Free Beer!
• Root Finding Root Finding
• Fractals and Stuff Fractals and Stuff
• Lecture Notes in Differential Equations
• Data Fitting Data Fitting
• Math 382/382L Scientific Computing California State University Northridge
• Foundations of Geometry Lecture Notes for Math 370
A turbulent far-wake, beginning with a 'top hat' mean velocity profile, was simulated satisfactorily. This was the first temporal simulation with the AGE method. At the Center for Turbulence Research (CTR), Stanford University/NASA-Ames, the initial velocity field from a spectral simulation of the far-wake of a parallel flat plate was made available to start an AGE method simulation. Agreement in results was very close, but the AGE method was around 20 times faster. Initial results from a low Reynolds number turbulent channel flow agree quite well with earlier spectral simulations at the CTR, though in this case the AGE method is only about half the speed of the spectral method.

Future work will include: simulations of far-wakes from various initial conditions in larger domains; efforts to speed up the channel flow calculations; and simulation of a spatially developing boundary layer. Most of this work will be done at Ames, though it is interesting to note that the VPP at ANU is significantly faster (per processor) than the Ames Cray C90s.

What computational techniques are used?

The Advected Grid Explicit (AGE) method, developed by the investigator, is essentially a finite difference solution of the Navier-Stokes equations along with mass continuity and an equation of state. Vectorization on the VPP exceeds 98% in some cases.

R.A. Antonia, D.K. Bisset, P. Orlandi & B.R. Pearson, Reynolds number dependence of the second-order turbulent pressure structure function, Phys Fluids 11(1), 241-243 (1999).
D.K. Bisset, The AGE method for direct numerical simulation of turbulent shear flow, Int J Numer Meth Fluids 28, 1013-1031 (1998).
D.K. Bisset, Further development of the AGE method, in Numerical Methods for Fluid Dynamics VI, ed. M.J. Baines, Oxford University Computing Laboratory, Oxford 1998.
D.K. Bisset, Numerical simulation of heat transfer in turbulent mixing layers, in Proc 13th Australasian Fluid Mech Conf, eds. M.C. Thompson & K. Hourigan, Monash University, Clayton Vic 1998.
D.K. Bisset & R.A. Antonia, Three-dimensional simulations of turbulent planar jets, in Advances in Turbulence VII, ed. U. Frisch, Kluwer Academic, Dordrecht 1998.
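The AGE method itself is not reproduced here, but its basic ingredient, an explicit finite-difference time step for an advected quantity, can be sketched on a 1-D advection-diffusion model problem. Everything below (grid size, coefficients, the upwind and central stencils, the 'top hat' initial profile) is an illustrative choice, not taken from the publications above:

```python
import numpy as np

def step_advect_diffuse(u, c, nu, dx, dt):
    """One explicit finite-difference step for u_t + c*u_x = nu*u_xx
    on a periodic 1-D grid: first-order upwind advection (assumes c > 0)
    plus central-difference diffusion."""
    um = np.roll(u, 1)    # u[i-1]
    up = np.roll(u, -1)   # u[i+1]
    adv = -c * (u - um) / dx                  # upwind advection
    dif = nu * (up - 2.0 * u + um) / dx**2    # central diffusion
    return u + dt * (adv + dif)

# Advect and smooth a 'top hat' profile, loosely echoing the far-wake setup.
n = 200
dx = 1.0 / n
x = np.arange(n) * dx
u = np.where(np.abs(x - 0.5) < 0.1, 1.0, 0.0)   # top hat initial condition
c, nu = 1.0, 1e-3
dt = 0.4 * min(dx / c, dx**2 / (2 * nu))        # respect explicit stability limits
mass0 = u.sum() * dx                            # total "mass" at t = 0
for _ in range(500):
    u = step_advect_diffuse(u, c, nu, dx, dt)
```

With periodic boundaries both stencils sum to zero across the grid, so the scheme conserves the integral of u exactly (up to round-off), and with this time step the update is a convex combination of neighbours, so the profile's maximum cannot grow.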
Relation From Set A to B
October 30th 2012, 02:52 PM
Relation From Set A to B
$A = \{0, 1, 2, 3, 4\}$ and $B = \{0, 1, 2, 3\}$, with $a \in A$ and $b \in B$.
I need to find the relation such that $R = \{(a, b) \mid \gcd(a, b)=1\}$.
I included (0, 0) as one of my ordered pairs in the relation set, but apparently it isn't supposed to be in it. Why is that? Isn't 0 divisible by one?
And for the same two sets, A and B, I have to find the relation $R=\{(a,b) \mid \text{lcm}(a, b)=2\}$. What I found was $R=\{(2,2),(4,2)\}$, but the only correct ordered pair in that set is (2, 2). I'm not entirely sure what I did wrong...
October 30th 2012, 05:09 PM
Re: Relation From Set A to B
Hello, Bashyboy!
$A = \{0, 1, 2, 3, 4\}\text{ and }B = \{0, 1, 2, 3\},\;a\in A\text{ and }b \in B$
I need to find the relation such that: $R \:=\: \{(a, b)\,|\,\gcd(a, b)=1\}$
I included (0, 0) as one of my ordered pairs in the relation set, but apparently it isn't supposed to be in it. Why is that? Isn't 0 divisible by one?
By convention, we do not consider zero in discussions of GCDs and LCMs.
Recall the definition of a GCD. . . It is the greatest number that divides into two (or more) numbers.
What is the GCD of 0 and 17? Both 0 and 17 are divisible by 17.
. . Hence: . $\text{gcd}(0,17) \,=\,17$
What is the GCD of 0 and 0? Both 0 and 0 are divisible by $\text{any number} \ne 0$.
. . Hence: . $\text{gcd}(0,0) \,=\,\text{any number} \ne 0$
And for the same two sets, $A$ and $B$, I have to find the relation: $R\:=\:\{(a,b)\,|\,\text{lcm}(a, b)=2\}$. What I found was $R=\{(2,2),(4,2)\}$, but the only correct ordered pair in that set is (2, 2). I'm not entirely sure what I did wrong.
Your second pair is incorrect: . $\text{lcm}(4,2) = 4$
I would include: . $\begin{Bmatrix}\text{lcm}(1,2)\,=\,2 \\ \text{lcm}(2,1)\,=\,2 \end{Bmatrix}$
November 2nd 2012, 05:10 AM
Re: Relation From Set A to B
(1, 3) and (3, 1) are in the relation. This pertains to both transitivity and antisymmetry.
Rather, 0 and 0 do not have a greatest common divisor because any number is a divisor.
A stylistic remark: In this case, instead of "a ∈ A and b ∈ B," it should say "a ranges over A and b ranges over B." The former phrase raises the question whether you are talking about some specific a and b that you forgot to introduce, or you are considering some arbitrary a and b and, if so, for what reason. The latter phrase means that whenever a is used in what follows, the reader may assume that it is some element of A.
November 2nd 2012, 05:20 AM
Re: Relation From Set A to B
So, are you saying that, perhaps, the answer key is wrong, and that {(1,3), (1,4), (2,3), (2,4), (3,1), (3,4)} is transitive and antisymmetric? Also, thank you for your last remark concerning the phrasing of certain things.
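Both relations in the thread are small enough to enumerate by brute force. The sketch below uses Python's `math.gcd` and `math.lcm` (the latter needs Python 3.9+); note that `math.gcd(0, 0)` returns 0, one way of encoding the point above that no greatest common divisor of 0 and 0 exists, while whether pairs like (0, 1) belong depends on the convention adopted for zero:

```python
from itertools import product
from math import gcd, lcm

A = {0, 1, 2, 3, 4}
B = {0, 1, 2, 3}

# gcd(a, b) == 1: the coprime pairs. (0, 0) is excluded because
# math.gcd(0, 0) == 0 -- every nonzero integer divides 0, so there is
# no *greatest* common divisor to equal 1.
R_gcd = {(a, b) for a, b in product(A, B) if gcd(a, b) == 1}

# lcm(a, b) == 2: note lcm(4, 2) == 4, so (4, 2) does not qualify,
# matching the correction given in the thread.
R_lcm = {(a, b) for a, b in product(A, B) if lcm(a, b) == 2}
```

Running this gives `R_lcm = {(1, 2), (2, 1), (2, 2)}`, exactly the pairs the reply points to.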
Brr@: to access context-sensitive information within break-rewrite
Major Section: BREAK-REWRITE

(brr@ :target)      ; the term being rewritten
(brr@ :unify-subst) ; the unifying substitution

General Form:
(brr@ :symbol)

where :symbol is one of the following keywords. Those marked with * probably require an implementor's knowledge of the system to use effectively. They are supported but not well documented. More is said on this topic following the table.

:symbol (brr@ :symbol)
------- ---------------------
:target the term to be rewritten. This term is an instantiation of the left-hand side of the conclusion of the rewrite-rule being broken. This term is in translated form! Thus, if you are expecting (equal x nil) -- and your expectation is almost right -- you will see (equal x 'nil); similarly, instead of (cadr a) you will see (car (cdr a)). In translated forms, all constants are quoted (even nil, t, strings and numbers) and all macros are expanded.
:unify-subst the substitution that, when applied to :target, produces the left-hand side of the rule being broken. This substitution is an alist pairing variable symbols to translated (!) terms.
:wonp t or nil indicating whether the rune was successfully applied. (brr@ :wonp) returns nil if evaluated before :EVALing the rule.
:rewritten-rhs the result of successfully applying the rule or else nil if (brr@ :wonp) is nil. The result of successfully applying the rule is always a translated (!) term and is never nil.
:failure-reason some non-nil lisp object indicating why the rule was not applied or else nil. Before the rule is :EVALed, (brr@ :failure-reason) is nil. After :EVALing the rule, (brr@ :failure-reason) is nil if (brr@ :wonp) is t. Rather than document the various non-nil objects returned as the failure reason, we encourage you simply to evaluate (brr@ :failure-reason) in the contexts of interest. Alternatively, study the ACL2 function tilde-@-
:lemma * the rewrite rule being broken.
For example, (access rewrite-rule (brr@ :lemma) :lhs) will return the left-hand side of the conclusion of the rule.
:type-alist * a display of the type-alist governing :target. Elements on the displayed list are of the form (term type), where term is a term and type describes information about term assumed to hold in the current context. The type-alist may be used to determine the current assumptions, e.g., whether A is a CONSP.
:ancestors * a stack of frames indicating the backchain history of the current context. The theorem prover is in the process of trying to establish each hypothesis in this stack. Thus, the negation of each hypothesis can be assumed false. Each frame also records the rules on behalf of which this backchaining is being done and the weight (function symbol count) of the hypothesis. All three items are involved in the heuristic for preventing infinite backchaining. Exception: Some frames are ``binding hypotheses'' (equal var term) or (equiv var (double-rewrite term)) that bind variable var to the result of rewriting term.
:gstack * the current goal stack. The gstack is maintained by rewrite and is the data structure printed as the current ``path.'' Thus, any information derivable from the :path brr command is derivable from gstack. For example, from gstack one might determine that the current term is the second hypothesis of a certain rewrite rule.
In general brr@-expressions are used in break conditions, the expressions that determine whether interactive breaks occur when monitored runes are applied. See monitor. For example, you might want to break only those attempts in which one particular term is being rewritten or only those attempts in which the binding for the variable a is known to be a consp. Such conditions can be expressed using ACL2 system functions and the information provided by brr@.
Unfortunately, digging some of this information out of the internal data structures may be awkward or may, at least, require intimate knowledge of the system functions. But since conditional expressions may employ arbitrary functions and macros, we anticipate that a set of convenient primitives will gradually evolve within the ACL2 community. It is to encourage this evolution that brr@ provides access to the *'d data.
Advice on a "great" self-study book: PDE
Hello, for the first time this summer I won't be taking (at least I hope so) any classes. That will give me a lot of free time to do as I please. I will be a senior in college and I still have to take PDEs. The prof that teaches the course, at least from what I hear, is a maniac: he makes the class impossible, and the highest grade he gives out is a C. I definitely do not want a C, but I also would like to learn the material. I'm in need of a great self-study book, a book that goes through everything in detailed steps. Some previous books that I thought were good self-study books were div grad curl (I hope that's the correct order), apps of quantum mechanics by Zettili, and calculus an intuitive approach by Morris. Thank you for any input. Also, I would like to note that I have already purchased the Schaum's Outlines on Fourier analysis and PDEs; they are good, but not great for self-study.
Convergence of fixed point iteration question
April 19th 2010, 01:34 PM #1
Convergence of fixed point iteration question
Let p>0 and $x = \sqrt{p+\sqrt{p+\sqrt{p+ \cdots }}}$, where all the square roots are positive. Design a fixed point iteration $x_{n+1} = F (x_{n})$ with some F which has x as a fixed point. We prove that the fixed point iteration converges for all choices of initial guesses greater than -p+1/4.
Let $x_{n+1}=F(x_{n})= \sqrt{p+x_{n}}$, so x is a fixed point for F since F(x)=x. Now let $g(x)=\sqrt{p+x}$. We have $g'(x)=\frac{1}{2 \sqrt{p+x}}$. I can see that for $x > -p + 1/4$, we have that g'(x) < 1. From there I am not sure how to proceed to obtain convergence for $x_{0} > -p +\frac{1}{4}$.
Last edited by math8; April 19th 2010 at 01:55 PM.
April 20th 2010, 04:35 AM #2
Such a problem recently has been 'attacked' in...
Here the sequence is defined as...
$x_{n+1} = \sqrt {p+x_{n}} \rightarrow \Delta_{n}= x_{n+1}-x_{n} = \sqrt{p + x_{n}} - x_{n} = f(x_{n})$ (1)
The fixed point is the $x_{0}$ for which $f(x_{0})=0$, so that is...
$x_{0} = \frac{1 + \sqrt{1 + 4 p}}{2}$ (2)
... and that means that it must be $p > - \frac{1}{4}$...
Kind regards
Last edited by chisigma; April 20th 2010 at 04:58 AM.
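The convergence discussed in this thread is easy to check numerically. The sketch below picks p = 2 and a starting guess of 5.0 purely for illustration; the iterate should settle on the fixed point $(1+\sqrt{1+4p})/2$, which for p = 2 is exactly 2:

```python
import math

def iterate_sqrt(p, x0, n=100):
    """Fixed-point iteration x_{k+1} = sqrt(p + x_k) for the nested radical."""
    x = x0
    for _ in range(n):
        x = math.sqrt(p + x)
    return x

p = 2.0
fixed_point = (1 + math.sqrt(1 + 4 * p)) / 2   # positive root of x^2 - x - p = 0
x = iterate_sqrt(p, x0=5.0)                    # any x0 > -p + 1/4 works; 5.0 is arbitrary
```

Since $g'(x) = 1/(2\sqrt{p+x})$ is about 0.25 at the fixed point, the error shrinks by roughly a factor of four per step, so 100 iterations are far more than enough to reach machine precision.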
Jean Robert Argand
In 1806, Jean-Robert Argand put forward the idea that complex numbers describe points on a plane: given a reference, a tells us how far left or right the point is, whereas b tells us how far up or down it is.
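Argand's identification of a + bi with the point (a, b) is exactly how complex numbers behave in most programming languages; a quick illustration in Python (the particular number is an arbitrary choice):

```python
z = 3 + 4j                  # the complex number a + bi with a = 3, b = 4
point = (z.real, z.imag)    # Argand's point in the plane: (3.0, 4.0)
modulus = abs(z)            # distance from the origin, sqrt(3^2 + 4^2) = 5.0
```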
[FOM] strong hypotheses and the theory of N
Harvey Friedman friedman at math.ohio-state.edu
Sun Mar 14 20:28:52 EDT 2010
Here are two relevant observed facts.
1. Any two natural formal systems that interpret EFA = exponential function arithmetic, are comparable under interpretability.
2. Any two natural formal systems of set theory satisfying minimal requirements, are comparable under interpretations that preserve the ordered ring of natural number parts.
Much stronger forms of 2 are in fact observed.
Item 2 ensures that the provable arithmetic sentences are comparable.
Harvey Friedman
On Mar 14, 2010, at 5:19 AM, Monroe Eskew wrote:
It would seem a reasonable requirement that all strong hypotheses which set theorists explore or use should all agree on the theory of natural numbers. So then how do we know whether whatever large cardinal, forcing axiom, determinacy statement, etc. we're looking at will not say anything about omega that a different such hypothesis contradicts? Is there some computable property of theories \phi(T) that we can check in advance, to make sure that all T satisfying \phi have pairwise consistent theories about naturals? Here I want of course \phi to be useful in practice so that the standard strong hypotheses in set theory like large cardinals are able to satisfy it.
Thermodynamics: Structure Thermodynamics deals with large systems that consist of more particles than can be reasonably dealt with in a usual mechanics approach. We shift our focus from the variables that govern each individual particle to those that describe the system as a whole. In the first SparkNote, we delved into the quantum basis for a statistical approach to thermodynamics, and presented the four Laws that can be viewed as postulates or quantum-verified relations and truths. We developed two variables that can be used to describe a large system, namely the entropy and the temperature. We will pick up where we left off, defining more variables to describe a system. We will look at the pressure of a system and see how it relates to what we have already done. We will define the notion of the chemical potential. We will collect all of the variables needed to specify the state of a large system, and note the distinction between intensive and extensive variables. Having all of the variables before us, we will look at what is known as the thermodynamic identity, a crucial equation that we will use throughout our entire study of thermodynamics. We will utilize a mathematical tool known as the Legendre transform to assist in defining three other forms of energy, namely the free energy, the Gibbs free energy and the enthalpy in terms of the energy U and the thermodynamic variables. We will come to understand why there are so many formulations of the energy, and realize how useful these different forms can be for solving problems. We will revisit the thermodynamic identity and look at what each term represents. This analysis will become especially important when we look at engines later. Finally, we will utilize some clever mathematical tricks to obtain the Maxwell Relations.
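For reference, the central equations this chapter builds toward can be stated compactly. These are the standard textbook definitions (with $U$ the energy, $S$ entropy, $T$ temperature, $p$ pressure, $V$ volume, $\mu$ chemical potential, and $N$ particle number), not quotations from the chapter itself:

```latex
% Thermodynamic identity for a simple system
dU = T\,dS - p\,dV + \mu\,dN

% Legendre transforms of the energy U
F = U - TS        % (Helmholtz) free energy
H = U + pV        % enthalpy
G = U - TS + pV   % Gibbs free energy
```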