Search: english-A CLOCKWORK ORANGE
Number of results: 72,220

english-A CLOCKWORK ORANGE: I need to write a feminist criticism of the book A Clockwork Orange. Please give me an idea of what to do; I'm stuck. About the mistreatment of women? The rape of the woman in the house, the statutory rape of the two girls, the disrespect toward his mother, the derogatory terms used for women... Tuesday, January 23, 2007 at 7:42pm by Emily

Does anyone know where I can find the last chapter of A Clockwork Orange? Perhaps online? Thanks! Since that book is not out of copyright, you will not find it online. But this site has a good synopsis of the chapters: http://www.sparknotes.com/lit/clockworkorange/ If you ... Sunday, April 15, 2007 at 1:14pm by George

In the book "A Clockwork Orange", what motivates Alex to choose to be "good" by exercising his free will? I think it is his boredom that motivates him, but I need to expand upon that idea. Can you help me with this? What motivates him? Thank you very much. http://www.sparknotes.... Saturday, November 11, 2006 at 5:22pm by Existentialist

A liter of orange fruit drink contains 22% orange juice. How many milliliters of orange juice must be added to produce a mixture containing 50% orange juice? Monday, May 16, 2011 at 5:58am by Anonymous

english 3: First I would choose another book that had essentially the same focus. With Lord of the Flies I would think of a book or movie that involved the conflict of young people who are pretty much on their own. Think about A Clockwork Orange, Brave New World, The Catcher in the Rye, The ... Thursday, April 9, 2009 at 2:08pm by GuruBlue

8th Math - Probability: A basket contains 5 green lollipops, 12 red lollipops, and 7 orange lollipops. When a lollipop is taken from the basket, it is not replaced. What is P(orange, then orange)? I know the answer: it's 42/552. But HOW do I figure this out? I can do the first P(orange), but if it's ... Tuesday, April 2, 2013 at 4:50pm by L0velyLayna

I'm doing a 3000-4000 word essay on the book A Clockwork Orange. Here's my topic: An Analysis of the "Evil" Character Alex. Is this topic too broad? Should I narrow it down? Or is it just crap and I should come up with another one? Thanks for the help. First define "evil" as ... Monday, October 9, 2006 at 12:40pm by Andre

One canned juice drink is 15% orange juice; another is 10% orange juice. How many liters of each should be mixed together in order to get 5L that is 11% orange juice? How many liters of the 15% orange juice? How many liters of the 10% orange juice? Sunday, March 21, 2010 at 1:25pm by Tim

4th grade math: 1 apple slice, 3 orange slices; plus 1 apple slice, 3 orange slices; ... (the same pattern repeated eight times) ... Monday, February 7, 2011 at 11:31pm by Right Answer

One canned juice drink is 15% orange juice; another is 10% orange juice. How many liters of each should be mixed together in order to get 5L that is 11% orange juice? How many liters of the 15% orange juice and how many liters of the 10% orange juice should be in this mixture? Tuesday, March 16, 2010 at 9:44pm by Sue

math (help!!!!): One canned juice drink is 15% orange juice; another is 10% orange juice. How many liters of each should be mixed together in order to get 5L that is 11% orange juice? How many liters of the 15% orange juice and how many liters of the 10% orange juice should be in this mixture? Wednesday, March 17, 2010 at 4:37pm by sue

Math Inequalities: One canned juice drink is 25% orange juice; another is 5% orange juice. How many liters of each should be mixed together in order to get 20L that is 24% orange juice? How many liters of the 25% orange juice should be in the mixture? How many liters of the 5% orange juice should be ... Thursday, November 5, 2009 at 3:57pm by Anna

Math 116: One canned juice drink is 15% orange juice; another is 5% orange juice. How many liters of each should be mixed together in order to get 10L that is 9% orange juice? Saturday, August 16, 2008 at 3:59pm by Sam

One canned juice drink is 25% orange juice; another is 10% orange juice. How many liters of each should be mixed together in order to get 15L that is 12% orange juice? Sunday, January 23, 2011 at 11:18pm by Anonymous

One canned juice drink is 30% orange juice, another is 10% orange juice. How many liters of each should be mixed together in order to get 20L that is 18% orange juice? Wednesday, November 23, 2011 at 8:41pm by ms

Math (Algebra): Again, please NO answers! Only hints on how to solve. One canned juice drink is 20% orange juice; another is 5% orange juice. How many liters of each should be mixed together in order to get 15L that is 14% orange juice? Sunday, July 4, 2010 at 2:30pm by Lynn

A spherical 0.34 kg orange, 2.0 cm in radius, is dropped from the top of a building of height 31 m. After striking the pavement, the shape of the orange is a 0.60 cm thick pancake. Neglect air resistance and assume that the collision is completely inelastic. (a) Estimate how ... Thursday, November 4, 2010 at 11:53pm by Tired

One canned juice drink is 15% orange juice; another is 10% orange juice. How many liters of each should be mixed together in order to get 5L that is 11% orange juice? How many liters of the 15% orange juice should be in the mixture __ L? How many liters of the 10% orange juice... Wednesday, February 19, 2014 at 4:29pm by spacegazing

Algebra 2: How much of a 60% orange juice drink must be mixed with 30 gallons of a 10% orange juice drink to obtain a mixture that is 50% orange juice? Tuesday, May 1, 2012 at 5:52pm by Alex

I'm writing an essay on the book "A Clockwork Orange". I'm not sure what my research question or topic will be. It'll probably revolve around an analysis of the characteristics of the main character, Alex. I'll be talking about the good and evil in him and how he is vulnerable ... Friday, October 6, 2006 at 1:57am by anon

8th Math - Probability: Total = 24. P(orange, then orange) = 7/24 * 6/23 = 42/552. This is because the number of orange is 7, and when one of them is taken out 6 will remain and the total number will reduce to 23. Tuesday, April 2, 2013 at 4:50pm by Paul Ayogu M.D

Percent word problem: A pitcher holds 2 gallons of orange juice. The orange juice is made of 20% concentrated juice and 80% water. How much water is used in the orange juice? Friday, January 18, 2013 at 4:36pm by Jacob
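(An added worked form, not part of the original page: every one of these canned-juice mixture questions reduces to the same one-variable balance. Mixing x liters of a p1-strength drink with V - x liters of a p2-strength drink to get V liters of strength p requires p1*x + p2*(V - x) = p*V, so x = (p - p2)*V / (p1 - p2). In the 15%/10% version above, x = (0.11 - 0.10)*5 / (0.15 - 0.10) = 1 liter of the 15% juice and 4 liters of the 10% juice; check: 0.15*1 + 0.10*4 = 0.55 = 0.11*5.)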
One canned juice drink contains 30% orange juice; another is 10% orange juice. How many liters of each should be mixed together in order to get 20L that is 28% orange juice? I want step-by-step help. Wednesday, November 2, 2011 at 12:46pm by GATOR

You have 120 orange and purple marbles in a bag. The orange marbles make up 5/8 of the bag. What percent of the marbles are orange? Tuesday, June 4, 2013 at 7:28pm by Sue

Pick the box labeled mixed. If an apple is there, the box is apples. If an orange is there, it is oranges. If the first mixed box is apples, and the other two are wrong labels, then the apple box is mixed, and the orange box is apples. If the first mixed box was oranges, then the ... Sunday, August 21, 2011 at 12:21pm by bobpursley

A manufacturer of soft drinks advertises their orange soda as "naturally flavored," although it contains only 5% orange juice. A new federal regulation stipulates that to be called "natural" a drink must contain at least 10% fruit juice. How much pure orange juice must ... Thursday, January 31, 2013 at 9:19pm by Barry

A bag of buttons contains 3 red, 4 orange, 7 green and 10 blue buttons. How many orange buttons should be added so that orange makes up 1/4 of the total? (Remember, if you add more orange buttons, your total buttons will also increase.) Sunday, February 9, 2014 at 5:15pm by Nandani

A bag of buttons contains 3 red, 4 orange, 7 green and 10 blue buttons. How many orange buttons should be added so that orange makes up 1/4 of the total? (Remember, if you add more orange buttons, your total buttons will also increase.) Please help, due tomorrow. Sunday, February 9, 2014 at 9:58pm by anne

One canned juice drink is 25 percent orange juice; another is 10 percent orange juice. How many liters of each should be mixed together in order to get 15L that is 23 percent orange juice? Saturday, April 3, 2010 at 6:22pm by Michael

A basket contains 5 green lollipops, 12 red lollipops, and 7 orange lollipops. When a lollipop is taken from the basket, it is not replaced. What is P(orange, then orange)? A. 42/576 B. 42/552 C. 7/24 D. 14/48. I think it is C...? Sunday, March 17, 2013 at 10:39pm by Cassie

A spinner was spun 16 times. The results are shown in the table below: yellow = 5, white = 4, red = 0, blue = 2, green = 2, orange = 3. Which colors' experimental probability matches its theoretical probability? a. yellow, green and orange b. white, red and orange c. yellow, blue and ... Sunday, October 21, 2012 at 1:25pm by kyle

Orange color in non-pedigreed cats is controlled by the O locus. The non-orange phenotype, if not diluted, appears as black coat color. Perform the following reciprocal crosses: orange blotched female x black blotched male; black blotched female x orange blotched male. What ... Tuesday, February 10, 2009 at 4:51pm by claire

Does blue light or orange light have a larger index of refraction? Blue light is bent more by a prism than orange light. Does blue light or orange light have a larger index of refraction in glass? A. Blue B. Orange. If you know the answer to this, could you also explain why? ... Thursday, October 29, 2009 at 11:48pm by Jeff

College Math (2): From one, subtract the probabilities of: (1) zero or one orange, (2) zero or two yellow and (3) one orange and one yellow. Zero orange (all yellow): 0.0047. Zero yellow (all orange): 0.0163. One orange (4 yellow): 5 x (6/13)(5/12)(4/11)(3/10)(7/9) = 0.08159. One yellow (4 orange): 5... Saturday, February 28, 2009 at 11:48pm by drwls

Math - Weighted Average: How many quarts of pure orange juice should Mike add to a 10% orange drink to create 6 quarts of 40% orange juice mix? Let p represent the number of quarts of pure orange juice he should add to the orange drink. Monday, October 13, 2008 at 8:18pm by mike

college math HELP!!!!!: CalJuice Company has decided to introduce three fruit juices made from blending two or more concentrates. These juices will be packaged in 2-qt (64-oz) cartons. One carton of pineapple-orange juice requires 8 oz each of pineapple and orange juice concentrates. One carton of ... Saturday, February 22, 2014 at 7:10pm by Dee

A fruit drink is made up of 20% orange juice, 35% apple juice and the rest water. Calculate these ratios: a. orange juice to apple juice; b. water to orange juice. Saturday, May 28, 2011 at 12:45am by dhara

We are doing the flame test. We are using some metals like barium chloride, calcium chloride, copper, lithium, potassium, sodium and strontium. The flame colors are yellow-orange, red-orange, green-blue, red, pink, orange and red. We had to guess what color it is going to be. Tuesday, October 19, 2010 at 8:25pm by Kassandra

CIVICS PLEASE!!!!!: How did frozen orange juice concentrate revolutionize history? (Please, if you give me an answer or a link, do not have it be based on orange juice; I need FROZEN ORANGE JUICE CONCENTRATE.) Thanks! Wednesday, November 28, 2012 at 8:29pm by Kara Cunningham

A bag of 60 marbles has 8 red, 9 blue, 10 orange, and 33 green marbles. In drawing a marble from the bag 100 times, an orange marble was picked 13 times. What was the likelihood of an orange marble being picked? Is this more or less than the actual number? Wednesday, May 11, 2011 at 12:00pm by Anonymous

2 -- I drank some orange juice. Or: I drank a glass of orange juice. 3 -- either one. 5 -- I went to Aunt Mary and Uncle George's house (one house belonging to both of them together). 6 -- delete "back". 7 -- I ate a type of dessert... 8 -- Could I have it, too, please? 9 -- yes ... Wednesday, March 3, 2010 at 3:42pm by Writeacher

1 -- right. 2 -- "by" is right, "with" is incorrect. 3 -- yes, only "by". 4 -- Is this what you mean? >> In the morning I make myself toast with cheese and salami on it, a cup of tea, and a glass of orange juice. 5 -- All sounds OK except you need to remove "a" in front of "medium". 6... Sunday, March 20, 2011 at 5:12pm by Writeacher

At breakfast, Gino drank 2/3 of the orange juice in a container. He drank 15 ounces of orange juice. a) Write a multiplication equation that you can use to find the amount of orange juice that was in the container before Gino had breakfast. Friday, January 11, 2013 at 12:19am by Gray M.

Mr lol bought x oranges for $30 and sold them at a profit of 5 cents per orange. Write down (i) the cost price of an orange, (ii) the selling price of an orange. When he had sold all except 20 of the oranges, he found he had received $28. (iii) Form an equation in x and show that ... Saturday, June 18, 2011 at 12:51am by Smartypants

kuai or jai: A television station released 300 balloons. Of these, 3/4 were orange. How many were orange? Again, I'm trying to divide it and the answer I get does not match the one in the book. Monday, November 4, 2013 at 9:46pm by rian

There are two identical boxes of balls. In the first box are 2 blue and 5 orange balls, and in the second 3 blue and 6 orange balls. A box is chosen at random and 1 ball is pulled out. What is the probability that the ball taken is orange? Wednesday, August 22, 2012 at 7:20pm by hazard

A: {apple, orange, pineapple} B: {apple, orange, pineapple} C: {apple, orange, pineapple}. A⊆B ∧ B⊆C -> A⊂C. Is this true or false? Tuesday, February 22, 2011 at 10:47pm by MathMate

One canned juice is 20% orange juice; another is 10% orange juice. How many liters of each should be mixed together in order to get 10L that is 13% orange juice? How many liters of the 20%? How many liters of the 10%? Saturday, November 13, 2010 at 4:05pm by Melissa

Math algebra (urgent!): Help me! Thanks a lot!! (; Mr lol bought x oranges for $30 and sold them at a profit of 5 cents per orange. Write down (i) the cost price of an orange, (ii) the selling price of an orange. When he had sold all except 20 of the oranges, he found he had received $28. (iii) Form an equation in x and show that ... Saturday, June 18, 2011 at 1:00pm by Click6

Two shuffleboard disks of equal mass, one orange and the other yellow, are involved in a perfectly elastic glancing collision. The yellow disk is initially at rest and is struck by the orange disk moving initially to the right at 7.00 m/s. After the collision, the orange disk ... Monday, April 11, 2011 at 12:01am by alya

Probability and Statistics 2: Minute Maid orange juice company claims that 25% or more of all orange juice drinkers prefer its product. To try to disprove the validity of this claim, Tropicana orange juice company sampled 200 orange juice drinkers and found that 41 prefer the Minute Maid brand. At a = 0.05 ... Sunday, April 29, 2012 at 11:01pm by Kim

You have 4 blue crayons, 2 red and 3 orange. What is the probability of pulling a red crayon and then an orange, without replacement? Friday, December 10, 2010 at 1:42am by BB

Orange light has a frequency of 4.8 x 10^14 s^-1. What is the energy of one quantum of orange light? Wednesday, October 20, 2010 at 6:08pm by Raynique

2 cases: box 1, then orange; box 2, then orange. prob = (1/2)(5/7) + (1/2)(6/9) = 5/14 + 6/18 = 29/42. Wednesday, August 22, 2012 at 7:20pm by Reiny

Orange light has a frequency of 4.8 x 10^14 s^-1. What is the energy of one quantum of orange light? Monday, September 16, 2013 at 6:35pm by Sarah

Phenolphthalein vs Methyl Orange (check my reasoning): What's the difference between phenolphthalein and methyl orange in terms of titrating? Monday, June 12, 2006 at 3:59pm by Anonymous

A television station released 300 balloons. Of these, 3/4 were orange. How many were orange? 3/4 = x/300; I don't understand that. Monday, November 4, 2013 at 10:31pm by rian

One canned juice drink is 25% orange juice; another is 5% orange juice. How many liters of each should be mixed together in order to get 20L that is 24% orange juice? How many liters of the 25% orange juice should be in the mixture? How many liters of the 5% orange juice should be ... Thursday, November 5, 2009 at 4:34pm by Anna

If you have 10 orange marbles and 5 blue marbles in a bag, how many times will you pick up the orange ones in 60 tries? Thursday, April 5, 2012 at 7:22am by Anonymous

For the titration of HCl versus NaOH, suggest a better indicator than methyl orange. Why is methyl orange not the ideal choice for this application? Friday, November 1, 2013 at 10:46am by Hol

In a basket of 24 mangoes, 1/3 were yellow, 5/8 were green and the rest were orange. 1. What fraction of the mangoes were green and ripe? 2. How many mangoes were green? 3. How many mangoes were orange? 4. What fraction of the mangoes were orange? Thursday, February 28, 2013 at 7:49pm by Chamiqueko

John mixed 3/4 liter of yellow paint with 1 1/4 liters of red paint to make 2 liters of orange paint. He needed more orange paint. To make a new batch of orange paint, he used exactly 1 liter of red paint. Using the same ratio, how many liters of yellow paint should John ... Sunday, February 9, 2014 at 10:39am by Miki

Please help... I really need the answer as soon as possible. Two shuffleboard disks of equal mass, one orange and the other yellow, are involved in a perfectly elastic glancing collision. The yellow disk is initially at rest and is struck by the orange disk moving initially to ... Monday, April 11, 2011 at 9:20am by irna

Which color arrangement would create the deepest sense of space? A. red, orange, blue B. green, violet, blue C. yellow, orange, yellow-orange D. violet, blue-violet, red-violet. What are the double split complement colors for blue-green? Saturday, March 27, 2010 at 10:45pm by Jake

Color / Number of Marbles: Blue 4, Black 2, Red 5, Orange 4, Green 5. Rachel has 20 marbles in a jar. This table shows the number of blue, black, red, orange, and green marbles. What is the probability that if she randomly drew a marble out of the jar it would be orange? Thursday, May 16, 2013 at 6:18pm by tariq

A bag contains 7 blue marbles, 18 red marbles, and 16 orange marbles. Find the chance that neither draw is orange. Sunday, December 4, 2011 at 5:53pm by Fay

(Having used the metric system for the last 50 years, I had to recall all that gallon, pint and quart stuff.) Change all to pints: 3 quarts = 6 pints; 1 1/2 gallons = 12 pints. Let the amount of orange juice she used be x pints: 2 + 6 + x = 12, so x = 4 pints. She used 4 pints of orange... Tuesday, September 24, 2013 at 8:56pm by Reiny

purple, yellow, blue, orange?? Monday, March 21, 2011 at 6:36pm by Writeacher

A bag has 6 green marbles and 4 orange marbles. Find the least number of marbles of either color you would add to make the probability of picking an orange marble ... Monday, February 25, 2013 at 8:12pm by Nicole

CHEM - Le Chatelier's Principle: Look at the equilibrium reaction. It involves H^+. So, for example, we take K2CrO4 and add H^+, which drives the reaction to the right, changing CrO4^2- (which is yellow) to Cr2O7^2- (which is orange). Or we take Cr2O7^2- (orange) and add NaOH. The NaOH reacts with the H^+, ... Wednesday, March 4, 2009 at 7:37pm by DrBob222

One ball is drawn from a bag containing 4 green and 6 orange balls. Find the probability that it is: a) green b) orange. Wednesday, October 7, 2009 at 11:08pm by Sharon

pine glenn: If I have a total of 63 brushes and there are six times as many orange as white, how many orange do I have and how many white? How do I solve this? Tuesday, January 24, 2012 at 10:20pm by chris

There are 8 oranges in a box and 8 kids are waiting to get one. Each kid is given 1 orange, yet there is one orange left in the box. How? Tuesday, December 4, 2012 at 2:21pm by khan

english grammar: Ramesh generally drinks an orange juice. Tuesday, March 12, 2013 at 12:06pm by dennis

You can solve sequentially as follows: O = orange, B = banana, A = apple, P = pear. "There are 3 bananas": B = 3. "Eight pieces are either oranges or bananas": O + B = 8, so O = ? "One more orange than apples": O - A = 1, so A = ? "Twice as many apples as pears": A = 2P, so P = ? Sunday, December 13, 2009 at 12:30pm by MathMate

Say what energy changes are taking place: a) clockwork toy b) boy kicking a soccer ball c) boiling kettle on a gas ring d) person walking upstairs. What is the wasted energy in a), b), c) and d)? Friday, February 19, 2010 at 8:38am by Amir

An orange grower has 96 orange trees to plant in her grove. She wants to plant them equally in groups. Each group will have at least 2, but no more than 10. How many different ways can they be divided evenly into these groups? Thursday, November 29, 2012 at 7:19am by Anonymous

Rachel has 20 marbles in a jar. This table shows the number of blue, black, red, orange, and green marbles. What is the probability that if she randomly drew a marble out of the jar it would be ... Thursday, May 16, 2013 at 6:11pm by tariq

To be orange the compound must absorb in the blue. To be blue the compound must absorb in the orange. Therefore, the first compound, colored orange, must be absorbing the higher energy (blue is higher than orange). To become a blue compound it must absorb a lower energy; therefore, ... Wednesday, November 9, 2011 at 10:27pm by DrBob222

Chemistry Suspension Examples: Orange juice is a suspension because the particles of the juice (the pulp) will eventually settle down. That is why orange juice cartons always say "shake well". Wednesday, June 15, 2005 at 7:34am by TT2

No. 27 is half of 54. If half of the fruits are oranges and half are bananas, then the orange-to-banana ratio would be 1:1 (for every 1 orange there is 1 banana). That's not what you want. Thursday, February 4, 2010 at 2:57am by Anonymous

8th grade: Sam Squeeze has 10 quarts of a drink that is 60% orange juice. How much pure orange juice must he add to get a drink that is 75% orange juice? Tuesday, September 28, 2010 at 2:59am by Anonymous

Physics check: It would have to be of longer wavelength, because the energy gap would be less. Red would be one possibility, but there are others, such as yellow, orange, red-orange and infrared. Friday, April 11, 2008 at 12:41am by drwls

4th grade math: Mrs. Reid brought 32 orange and apple slices to her daughter's soccer practice. There were three times as many orange slices as there were apple slices. How many of each kind did she bring? Monday, February 7, 2011 at 11:31pm by Jennifer

Every yellow marble is expensive. One half of the orange marbles are expensive. One half of all expensive marbles are yellow. There are 40 yellow marbles and 25 expensive marbles that are neither yellow nor orange. How many orange marbles are there? Sunday, October 30, 2011 at 9:54pm by phil

Jill bought 32 orange and apple slices. There were three times as many orange slices as there were apple slices. How many of each did she have? Monday, October 15, 2012 at 8:25pm by lisa

A fifth grade class sold 25 liters of orange juice. The orange juice was sold in cups containing 200 milliliters and 300 milliliters. An equal number of cups containing 200 milliliters and 300 milliliters were sold. How many cups of orange juice did the class sell? Thursday, November 8, 2012 at 5:21am by Anonymous

Please help, I don't get how to do this! A spinner has three congruent sectors colored orange, green, and purple. Use the rules of probability for each event: a) landing on orange, then landing on purple b) landing on the same color 2 times in a row. Tuesday, April 16, 2013 at 11:05am by Cherie

green : yellow = 2:5; yellow : orange = 3:4; green : orange = 3:10. If 30 green, then 75 yellow and 100 orange; 30:100 = 3:10. (not a tutor) Monday, January 10, 2011 at 7:27pm by helper

Let's add x yellow marbles, so (x+8)/(x+13) = 2/3, giving 3x + 24 = 2x + 26, so x = 2: we must add 2 yellows. Or let's add y orange marbles: 8/(y+13) = 2/3, giving 2y + 26 = 24, so 2y = -2 and y = -1; that is, we could take out 1 orange marble. Proof: yellow = 8, orange = 4, total = 12, and the probability of yellow = 8/12 = 2/3. Thursday, December 9, 2010 at 7:47pm by Reiny
{"url":"http://www.jiskha.com/search/index.cgi?query=english-A+CLOCKWORK+ORANGE","timestamp":"2014-04-16T17:39:07Z","content_type":null,"content_length":"41274","record_id":"<urn:uuid:3cbcf457-b07d-4dcc-b229-0918addecfff>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00103-ip-10-147-4-33.ec2.internal.warc.gz"}
Nuclear model of the atom: an oxygen nucleus and the speed at which a proton must be fired toward it

Sorry for not including the brackets; I did mean 1/(4*pi*ε0). So I changed the way I approached this question... below is my revised attempt, which still ended up being wrong:

1/(4*pi*ε0) * (q_oxygen * q_proton) / r_min = (1/2) m v^2

9x10^9 * (8 * 1.602x10^-19 * 1.602x10^-19) / (2.91x10^-15 m + 1.11x10^-15 m) = (1/2)(9.109x10^-31) v^2
9x10^9 * (8 * 1.602x10^-19 * 1.602x10^-19) / 4.02x10^-15 = (1/2)(9.109x10^-31) v^2
4.5965x10^-13 = (1/2) m v^2
1.00923x10^18 = v^2
v = 1,004,605,104 m/s

This answer is incorrect. I decided to add the radius of the oxygen nucleus to the turning-point radius to get the separation between the two point charges. Maybe my radius should be something else, but the way I did it seems right to me... unfortunately it isn't right :(
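An editorial note on the numbers (not part of the original post): 9.109x10^-31 kg is the electron mass, while a proton's mass is about 1.673x10^-27 kg, which is why the speed above comes out faster than light. Below is a minimal Python sketch of the same energy-conservation balance, assuming (as the post does) that the relevant separation is the 4.02x10^-15 m sum of the two radii:

    # (1/2) m v^2 = k * q_O * q_p / r_min, solved for v.
    k = 8.988e9                  # Coulomb constant, N m^2 / C^2
    e = 1.602e-19                # elementary charge, C
    m_p = 1.673e-27              # proton mass, kg (not the electron mass)
    r_min = 2.91e-15 + 1.11e-15  # m, the sum of radii quoted in the post

    U = k * (8 * e) * e / r_min  # Coulomb energy at closest approach, J
    v = (2 * U / m_p) ** 0.5     # required initial speed, m/s
    print(f"U = {U:.3e} J, v = {v:.3e} m/s")  # about 4.59e-13 J and 2.3e7 m/s

Whether r_min should really be that sum depends on the original problem statement, which is not preserved here.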
{"url":"http://www.physicsforums.com/showthread.php?t=451904","timestamp":"2014-04-19T09:34:03Z","content_type":null,"content_length":"36563","record_id":"<urn:uuid:ab1e76ca-aca6-4cb5-b4c6-52e30e382f45>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00601-ip-10-147-4-33.ec2.internal.warc.gz"}
Voorhees Statistics Tutor

Find a Voorhees Statistics Tutor

...After all, math IS fun! In the past 5 years, I have taught differential equations at a local university. I hold degrees in economics and business and an MBA. I have been in upper management since 2004 and have had the opportunity to teach classes in international business, strategic management, and operations management at a local university.
13 Subjects: including statistics, calculus, geometry, algebra 1

...I recently was an Adjunct Professor at a local community college, having taught Introduction to Statistics for two years. I recently tutored a very worried graduate student (in a Masters-level nursing program) to success in an online Bio-Statistics course project. She earned an "A" for the course.
22 Subjects: including statistics, geometry, GRE, ASVAB

...I desire to tutor people who have the desire to do well on the SAT and do not feel like dealing with the snobbery that may occur with a professional tutor. I tutored many of my friends on the SAT and saw improvements in their scores. I believe that with help from me your SAT score could improve dramatically.
16 Subjects: including statistics, reading, chemistry, physics

...During this time, I taught Business Management at the University of Wisconsin and Chestnut Hill College. I hold BS, MS, and MBA degrees. After completing a 3-year certificate in Gestalt Psychotherapy, I am a trainer in communications and motivation.
35 Subjects: including statistics, English, reading, chemistry

...I was trained in adolescent and cognitive psychology and have a very strong practical Mathematics background. I have served as an educator in various roles, both part time and full time, spanning middle school and elementary school classroom settings. I have focused on working with class...
9 Subjects: including statistics, geometry, algebra 1, algebra 2
{"url":"http://www.purplemath.com/voorhees_statistics_tutors.php","timestamp":"2014-04-18T01:14:03Z","content_type":null,"content_length":"24146","record_id":"<urn:uuid:872907e7-7208-4191-8dfd-31609428be12>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00619-ip-10-147-4-33.ec2.internal.warc.gz"}
A note on reliable full-duplex transmission over half-duplex links
Results 1 - 10 of 138

1999. Cited by 2407 (62 self).
We describe a method for reducing the complexity of temporal logic model checking in systems composed of many parallel processes. The goal is to check properties of the components of a system and then deduce global properties from these local properties. The main difficulty with this type of approach is that local properties are often not preserved at the global level. We present a general framework for using additional interface processes to model the environment for a component. These interface processes are typically much simpler than the full environment of the component. By composing a component with its interface processes and then checking properties of this composition, we can guarantee that these properties will be preserved at the global level. We give two example compositional systems based on the logic CTL*.

ACM Transactions on Programming Languages and Systems, 1986. Cited by 1173 (58 self).
We give an efficient procedure for verifying that a finite-state concurrent system meets a specification expressed in a (propositional, branching-time) temporal logic. Our algorithm has complexity linear in both the size of the specification and the size of the global state graph for the concurrent system. We also show how this approach can be adapted to handle fairness. We argue that our technique can provide a practical alternative to manual proof construction or use of a mechanical theorem prover for verifying many finite-state concurrent systems. Experimental results show that state machines with several hundred states can be checked in a matter of seconds.

Formal Aspects of Computing, 1994. Cited by 247 (1 self).
We present a logic for stating properties such as, "after a request for service there is at least a 98% probability that the service will be carried out within 2 seconds". The logic extends the temporal logic CTL by Emerson, Clarke and Sistla with time and probabilities. Formulas are interpreted over discrete time Markov chains. We give algorithms for checking that a given Markov chain satisfies a formula in the logic. The algorithms require a polynomial number of arithmetic operations, in the size of both the formula and the Markov chain. A simple example is inc... (This research report is a revised and extended version of a paper that appeared under the title "A Framework for Reasoning about Time and Reliability" in the Proceedings of the 10th IEEE Real-Time Systems Symposium, Santa Monica, CA, December 1989. This work was partially supported by the Swedish Board for Technical Development (STU) as part of ESPRIT BRA Project SPEC, and by the Swedish Telecommunication Administration.)

Information and Computation, 1992. Cited by 176 (35 self).
The research on algorithmic verification methods for concurrent and parallel systems has mostly focussed on finite-state systems, with applications in e.g. communication protocols and hardware systems. For infinite-state systems, e.g. systems that operate on data from unbounded domains, algorithmic verification is more difficult, since most verification problems are in general undecidable. In this paper, we consider the verification of a particular class of infinite-state systems, namely systems consisting of finite-state processes that communicate via unbounded lossy FIFO channels. This class is able to model e.g. link protocols such as the Alternating Bit Protocol and HDLC. The unboundedness of the channels makes these systems infinite-state. For this class of systems, we show that several interesting verification problems are decidable by giving algorithms for verifying the following classes of properties.

In Proceedings of the Workshop on Automatic Verification Methods for Finite State Machines, 1991. Cited by 102 (3 self).
Abstract: The Concurrency Workbench is an automated tool for analyzing networks of finite-state processes expressed in Milner's Calculus of Communicating Systems. Its key feature is its breadth: a variety of different verification methods, including equivalence checking, preorder checking, and model checking, are supported for several different process semantics. One experience from our work is that a large number of interesting verification methods can be formulated as combinations of a small number of primitive algorithms. The Workbench has been applied to the verification of communications protocols and mutual exclusion algorithms and has proven a valuable aid in teaching and research. 1 Introduction: This paper describes the Concurrency Workbench [11, 12, 13], a tool that supports the automatic verification of finite-state processes. Such tools are practically motivated: the development of complex distributed computer systems requires sophisticated verification techniques to guarantee correctness, and the increase in detail rapidly becomes unmanageable without computer assistance. Finite-state systems, such as communications protocols and hardware, are particularly suitable for automated analysis because their finitary nature ensures the existence of decision procedures for a wide range of system properties.

Proceedings of CAV '98, 1998. Cited by 98 (5 self).
We present a method for computing abstractions of infinite state systems compositionally and automatically. Given a concrete system S = S_1 ∥ ··· ∥ S_n of programs and given an abstraction function α, using our method one can compute an abstract system S^α = S^α_1 ∥ ··· ∥ S^α_n such that S simulates S^α. A distinguishing feature of our method is that it does not produce a single abstract state graph but rather preserves the structure of the concrete system. This feature is a prerequisite to benefit from the techniques developed in the context of model-checking for mitigating the state explosion. Moreover, our method has the advantage that the process of constructing the abstract system does not depend on whether the computation model is synchronous or asynchronous.

Chicago Journal of Theoretical Computer Science, 1995. Cited by 87 (12 self).
Two aspects of reliability of distributed protocols are a protocol's ability to recover from transient faults and a protocol's ability to function in a dynamic environment. Approaches for both of these aspects have been separately developed, but have drawbacks when applied to an environment that has both transient faults and dynamic changes. This paper introduces definitions and methods for addressing both concerns in the design of systems. A protocol is superstabilizing if it is (i) self-stabilizing, meaning that it is guaranteed to respond to an arbitrary transient fault by eventually satisfying and maintaining a legitimacy predicate, and (ii) it is guaranteed to satisfy a passage predicate at all times when the system undergoes topology changes starting from a legitimate state. The passage predicate is typically a safety property that should hold while the protocol makes progress towards re-establishing legitimacy following a topology change. Specific contributions of the paper inc...

Distributed Computing, 1988. Cited by 85 (28 self).
We present a formal model that captures the subtle interaction between knowledge and action in distributed systems. We view a distributed system as a set of runs, where a run is a function from time to global states and a global state is a tuple consisting of an environment state and a local state for each process in the system. This model is a generalization of those used in many previous papers. Actions in this model are associated with functions from global states to global states. A protocol is a function from local states to actions. We extend the standard notion of a protocol by defining knowledge-based protocols, ones in which a process' actions may depend explicitly on its knowledge. Knowledge-based protocols provide a natural way of describing how actions should take place in a distributed system. Finally, we show how the notion of one protocol implementing another can be captured in our model. Some material in this paper appeared in preliminary form in [HF85]. An abridge...

In CAV '96, LNCS 1102. Cited by 83 (7 self).
Bernard Boigelot, Université de Liège, Institut Montefiore, B28, 4000 Liège Sart-Tilman, Belgium. Email: boigelot@montefiore.ulg.ac.be. Patrice Godefroid, Lucent Technologies -- Bell Laboratories, 1000 E. Warrenville Road, Naperville, IL 60566, U.S.A. Email: god@bell-labs.com. Abstract: We study the verification of properties of communication protocols modeled by a finite set of finite-state machines that communicate by exchanging messages via unbounded FIFO queues. It is well known that most interesting verification problems, such as deadlock detection, are undecidable for this class of systems. However, in practice, these verification problems may very well turn out to be decidable for a subclass containing most "real" protocols. Motivated by this optimistic (and, we claim, realistic) observation, we present an algorithm that may construct a finite and exact representation of the state space of a communication protocol, even if this state space is infinite. Our algorithm performs a loo...

1996. Cited by 73 (12 self).
Communication protocols pose interesting and difficult challenges for verification technologies. The state spaces of interesting protocols are either infinite or too large for finite-state verification techniques like model checking and state exploration. Theorem proving is also not effective, since the formal correctness proofs of these protocols can be long and complicated. We describe a series of protocol verification experiments culminating in a methodology where theorem proving is used to abstract out the sources of unboundedness in the protocol to yield a skeletal protocol that can be verified using model checking. Our experiments focus on the Philips bounded retransmission protocol originally studied by Groote and van de Pol and by Helmink, Sellink, and Vaandrager. First, a scaled-down version of the protocol is analyzed using the Murφ state exploration tool as a debugging aid and then translated into the PVS specification language. The PVS verification of the generalized prot...
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=227579","timestamp":"2014-04-17T18:49:38Z","content_type":null,"content_length":"40791","record_id":"<urn:uuid:c7686702-ef63-4a54-9839-5acbab267474>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00175-ip-10-147-4-33.ec2.internal.warc.gz"}
Creating Dynamic Named Ranges in Excel® using OFFSET & COUNTA

Following on from Creating Named Ranges in Excel®, this blog post describes how to create dynamic named ranges. If you're not familiar with named ranges, I'd really suggest Google-searching "named ranges in excel" or reading the post linked to above.

As mentioned many times on this site, named ranges are really useful; however, there are situations where the data you need to reference is not in a static cell range – that is, the cell range is dynamic. To combat this, we can use the OFFSET and COUNTA functions.

First, let's look at COUNTA – it is a function that most Excel users would be familiar with. The COUNT function returns the number of numbers in a cell range. For instance, with column A holding ten numbers and column B holding ten text values (shown as a screenshot in the original post):

=COUNT($A$1:$A$10) would equal 10
=COUNTA($A$1:$A$10) would equal 10
=COUNT($B$1:$B$10) would equal 0
=COUNTA($B$1:$B$10) would equal 10

Basically, COUNTA counts the number of cells in a range that have at least some value – not just numbers. With only five of the ten cells in column B populated (again shown as a screenshot in the original post):

=COUNTA($B$1:$B$10) would now equal 5

It is important to understand this, as it will be instrumental in determining the height and width of our dynamic named range.

The next function to investigate is the OFFSET function. The definition of OFFSET is:

=OFFSET(reference, row, column, [height], [width])

For those of you who don't know, the square brackets (i.e. "[" and "]") surrounding the parameters of a function indicate that the parameter is optional. This is fairly standard coding convention. To break this down:

• reference corresponds to the starting cell position
• row corresponds to the number of rows we are going to move downwards, relative to the starting cell (or upwards, if we use a negative value)
• column corresponds to the number of columns we are going to move to the right, relative to the starting cell (or to the left, if we use a negative value)
• [height] corresponds to the height (or number of rows) of the range starting at the adjusted position
• [width] corresponds to the width (or number of columns) of the range starting at the adjusted position

An example will almost certainly make this easier… We previously defined the three (3) named ranges "Origins", "Destinations" and "Distances" (in the original post the static definitions appear as an image: Origins = $A$2:$A$4, Destinations = $B$1:$D$1, Distances = $B$2:$D$4). But by using OFFSET and COUNTA we can update our definitions to be dynamic:

• For Origins, =$A$2:$A$4 becomes =OFFSET($A$1,1,0,COUNTA($A:$A)-1,1)
We start at cell $A$1, then move down 1 cell and across 0 cells to reach our starting position (i.e. $A$2 – you could just as easily have had $A$2,0,0 as your first three (3) parameters). Now, this is the trickiest (not tricky, but trickiest) part: we specify a height of COUNTA($A:$A)-1. What does this mean? Well, there are four (4) cells with values in all of Column A ($A:$A) in this example, but we don't want to include the heading, as it doesn't constitute an "Origin". We simply subtract one (1) to ignore it, and what we have left over is the height. The final parameter value of 1 simply indicates that the dynamic named range "Origins" is one (1) column wide.

• For Destinations, =$B$1:$D$1, the same premise holds. This becomes =OFFSET($A$1,0,1,1,COUNTA($1:$1)-1)
We start at cell $A$1, then move down 0 cells and across 1 cell to reach our starting position (i.e. $B$1 – you could just as easily have had $B$1,0,0 as your first three (3) parameters). Now the (not so tricky anymore) tricky part: we specify a height of 1 and a width of COUNTA($1:$1)-1. This simply counts up all the values in Row 1, subtracts the row heading in $A$1, and sets the dynamic named range "Destinations".
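A quick aside (these usage examples are mine, not from the original post): ranges defined this way behave like any other named range. =INDEX(Origins, 2) returns the second origin, =COUNTA(Destinations) always reflects the current number of destination columns, and a Data Validation list with its Source set to =Origins gives a dropdown that grows automatically as rows are added. Once Distances is defined in the next step, a two-way lookup such as =INDEX(Distances, MATCH(E1, Origins, 0), MATCH(F1, Destinations, 0)) keeps working no matter how many rows or columns are appended (here E1 and F1 are just hypothetical cells holding an origin and a destination).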
Now that we've done these, we can do something particularly fancy to define "Distances". This may be a slight point of difference from other tutorials on this topic. The same premise follows for the other named ranges.

• For Distances, =$B$2:$D$4 becomes =OFFSET($A$1,1,1,COUNTA(Origins),COUNTA(Destinations))

How cool is that! Straight away you may have noticed something special about this definition, but we'll go through it parameter by parameter, just for completeness (because that's the kind of guy I am!). We start at cell $A$1, then move down 1 cell and across 1 cell to reach our starting position (i.e. $B$2 – you could just as easily have had $B$2,0,0 as your first three (3) parameters). Now the (not so tricky anymore) tricky part: we specify a height of COUNTA(Origins) and a width of COUNTA(Destinations). We have already done the hard work defining our row and column "headings", so we can just count them up! We are using dynamic named ranges in another of our dynamic named ranges.

I must admit, I used to use a longer formula (shown as an image in the original post). Now, while that also works, it isn't as concise and doesn't demonstrate what we are trying to define with our named range.

With these definitions, adding more rows and columns will simply increase the COUNTA($A:$A) and COUNTA($1:$1) counts, thus dynamically expanding the range defined using the OFFSET function.

There are certainly aspects of the functions that I have omitted. I'd welcome any comments or criticism, so please feel free to add them. If you require further clarification on any part of this article, please leave a comment, or you can email me at: ryan@kirgs.com

Ryan Kirgan is from Sydney, Australia.

12 Comments

1. That's a very good tip. Thank you for sharing this with us. Even now, there are still many who aren't familiar with using Excel to make their work more efficient.

2. Your blog tips on OFFSET & COUNTA helped me a great deal. I have a problem that I think I figured out, but I would like to hear if you have an explanation for why it works, or if there's a better way to do it. For instance: Column A: Mar 11, Dec 10, Sep 10, Jun 10, Mar 10, Dec 09, Sep 09, Jun 09, Mar 09. Column B: [the figures were shown in the original comment]. As you can see, I'm adding the most recent figures by inserting a row above row 3. For my chart, I need only the most recent eight figures to appear. For now, this Name seems to work: [formula not preserved in the archive]. When I insert a row above 3, this is my new Name [also not preserved], but it still updates the chart correctly. Can you explain why? I'm absolutely new at this. Thanks for any help you can offer.

3. @Jen: The general premise is OK, but I think that the parameters may not necessarily be in the correct order. Are you trying to SUM the 8 most recent values in the column? If that understanding is correct, the formula you have listed will only list the most recent value. Breaking down the OFFSET formula, you have: =OFFSET(reference, rows, cols, [height], [width]). Having $B$2, this means your starting point is the cell that has "Figure" (a good place to start). The next parameter tells OFFSET how many rows to move to reach the beginning of what you want the range to be. I'd make this simply 1, as you are always going to want to start at the cell just beneath the heading. You don't want to move across any columns, so the next value will simply be 0. You want the range to be 8 cells high to capture the 8 most recent numbers, so the 4th parameter (for "height") will be 8. The final parameter, for the width of the range, will be 1 (as the range is only 1 cell wide). Therefore, =OFFSET($B$2,1,0,8,1) will define your range. Assuming you want the sum of these 8 values, you need to enter =SUM(OFFSET($B$2,1,0,8,1)). Hope this helps (sorry if I've misunderstood what you were after).

P.S. – The reason the formula didn't change was because when you inserted the row, the cell references updated ($B$3:$B$11 to $B$4:$B$12); the count of both will always equal 9 (provided there are values in there), so your value for the second parameter – the number of rows to move – was always 9 – 8 = 1!

4. @Ryan – I was hoping you'd write back! Sorry I wasn't more clear. My chart is creating 8 bars of data with Date on the X axis and Figure on the Y axis. The 8 bars feed from rows 3-10 of columns A and B. So, imagine a bar graph with the first bar: Mar 11 and 10 units; second bar: Dec 10 and 15 units; ... all the way to the eighth bar (and 10th row of data): Jun 09, 45 units. As I update data, I insert a row above row 3. So, I still need rows 3-10 to feed the chart, but now the chart would go from Jun 11, Mar 11, ... Sep 10, and the data from Jun 09 would no longer be on the chart. Thanks for your expertise! All the best from Denver, CO, USA!

5. @Ryan – =OFFSET($B$2,1,0,8,1) works perfectly for my needs. Thanks again!

6. @Jen: I'm glad to hear you had some success. An interesting thing to note is that if you do use a named range as the source data for a chart, you need to reference it by including the worksheet name as well as the named range name. For example, if the named range was "dates", you'd need to reference it as "Sheet1!dates" if that is where the data resides. I've attached the example from your earlier email here: Chart Example using OFFSET for Jen.xls. Hope this helps.

7. You're pretty awesome... thanks for the example!

8. Why not just use a table that expands its range automatically? Or, if a named range is needed, name the range, then make it a table. This seems much simpler than entering all these formulas.

9. I created a named range covering the cells (A4,B4) in Excel. To read a cell value we use sh.cell(row, col), but how do I get the named-range value for that cell?

10. Polly, have you tried:

Dim NRValue
NRValue = ThisWorkbook.Names("<named range>").RefersToRange.Value
MsgBox (NRValue(1, 1))

This should work when the range spans more than one cell (Value comes back as a 1-based 2D array); if the name refers to a single cell, Value is a scalar, so MsgBox (NRValue) will do.

11. Hi Ryan, this explanation helped me a lot. I already defined the name and it updates the values with no problem. How would I keep it alphabetically sorted?

12. Hi Alex, I couldn't think of a really elegant way to do this without using VBA, but if the named range contains only numbers, you could try the formula from the original post (it was included as an image and is not preserved here). It's an array formula, so you need to press CTRL + SHIFT + ENTER when entering it. You then simply drag the formula down until it is the same size as your unsorted list. Let me know if you are looking to sort text, as that will require a different solution that I can't think of off the top of my head!

1 Trackback

1. [...] Once you've mastered named ranges, you'll find that there are many more applications than what has been listed here. You might also find that there are limitations, not least that you have to keep redefining your range every time you add more data (excluding inserting rows/columns in the middle of the range, as that expands the cell range reference automatically). If this is the case, you're now ready for Dynamic Named Ranges: Creating Dynamic Named Ranges in Excel® using OFFSET & COUNTA [...]
{"url":"http://blog.kirgs.com/2010/05/04/creating-dynamic-named-ranges-in-excel%C2%AE-using-offset-counta/","timestamp":"2014-04-17T12:29:39Z","content_type":null,"content_length":"49528","record_id":"<urn:uuid:998b795e-44c0-4fa6-b8e0-e4608a88f811>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00296-ip-10-147-4-33.ec2.internal.warc.gz"}
Here is a (prompt!) reply from Mark Girolami corresponding to the earlier post: In preparation for the Read Paper session next month at the RSS, our research group at CREST has collectively read the Girolami and Calderhead paper on Riemann manifold Langevin and Hamiltonian Monte Carlo methods, and I hope we will again produce a ...

A couple of hours' work and we now have animations of the global anomalies, created with the animation package in R. The code examples were a bit terse about some of the details, but after fiddling about I was able to get the program to output an HTML animation complete with Java-based playback controls.

Visualising questionnaires
Last week I was shown the results of a workplace happiness questionnaire. The plots were ripe for a makeover. Most obviously, the pointless 3D effect needs removing, and the colour scheme is badly ...

Damn Close 5.0
Code will be in the drop box in a bit, once I shower. This is a wholesale replacement of previous versions, completely rewritten in raster. It will be the base going forward. All of the analysis routines will be rewritten using raster. For time series functionality I will continue to use zoo, as that ...

Connecting to a MongoDB database from R using Java
It would be nice if there were an R package, along the lines of RMySQL, for MongoDB. For now there is not – so, how best to get data from a MongoDB database into R? One option is to retrieve JSON via the MongoDB REST interface and parse it using the rjson package. Assuming, for ...

Effective sample size
In the previous days I have received several emails asking for clarification of the effective sample size derivation in "Introducing Monte Carlo Methods with R" (Section 4.4, pp. 98-100). Formula (4.3) gives the Monte Carlo estimate of the variance of a self-normalised importance sampling estimator (note the change from the original version of Introducing Monte ...

Global done!
Over the past few weeks I've been working at getting Moshtemp to work entirely in the raster package. I've been aided greatly by the author of that package, Robert, who has been turning out improvements to the package with regularity. For a while I was a bit stymied by some irregularities in getting source from ...

Monte Carlo Statistical Methods third edition
Last week, George Casella and I worked around the clock on starting the third edition of Monte Carlo Statistical Methods by detailing the changes to make and designing the new table of contents. The new edition will not see a revolution in the presentation of the material but rather a more mature perspective on what ...

R tee-shirt
I gave my introduction-to-R course in a crammed amphitheatre of about 200 students today. I had to wear my collectoR tee-shirt from Revolution Analytics, even though it only made the kids pay attention for about 30 seconds... The other few "lines" that worked were using the Procter & Gamble "car 54" poster and ...

Saptarshi Guha on Hadoop, R
Saptarshi Guha (author of the Rhipe package) joins the likes of eBay, Yahoo, Twitter and Facebook as one of just 37 presenters at the Hadoop World conference. (Revolution Analytics is proud to sponsor Saptarshi's presence at this event, which takes place in New York on October 12.) He'll be talking about using R and Hadoop to analyze Voice-over-IP...
{"url":"http://www.r-bloggers.com/search/Twitter/page/170/","timestamp":"2014-04-20T08:38:36Z","content_type":null,"content_length":"39147","record_id":"<urn:uuid:3d557a9c-0f86-4ecf-be3b-78754a7f03db>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00178-ip-10-147-4-33.ec2.internal.warc.gz"}
[SciPy-user] howto dot(a, b.T), some a or b coords are zeros
Nathan Bell wnbell@gmail....
Wed Jan 2 14:47:36 CST 2008

On Jan 2, 2008 1:28 PM, dmitrey <dmitrey.kroshko@scipy.org> wrote:
> hi all,
> I have 2 vectors a and b of shape (n, 1) (n can be 1...10^3, 10^4, maybe
> more); some coords of a or b usually are zeros (or both a and b, but
> b is more often); getting matrix c = dot(a, b.T) is required (c.shape =
> (n,n)).
> What's the best way to speed up the calculations (w/o using scipy, only numpy)?
> (I intend to use the feature to provide a minor enhancement for the NLP/NSP
> ralg solver.)

How much more expensive is dot(a,b.T) than zeros((n,n))? Is outer() any faster? What proportion of a and b are zero?

You could remove all zeros from a and b, compute that outer product, and then paste the results back into an n by n matrix. I doubt this would be any faster though, since the outer product doesn't do many operations anyway.

I know you don't want to use scipy, but time the following:

from scipy.sparse import *
asp = csr_matrix(a)
bsp = csr_matrix(b.T)
c = asp * bsp  # time this

Nathan Bell wnbell@gmail.com

More information about the SciPy-user mailing list
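For reference, here is a NumPy-only sketch of the "remove the zeros, form the small outer product, paste it back" idea suggested above; the function name is mine and modern NumPy is assumed:

import numpy as np

def sparse_outer(a, b):
    """Compute dot(a, b.T) for column vectors a, b of shape (n, 1),
    forming the dense outer product only over the nonzero entries."""
    a = np.asarray(a).ravel()
    b = np.asarray(b).ravel()
    n = a.size
    ia = np.flatnonzero(a)          # indices of nonzero entries of a
    ib = np.flatnonzero(b)          # indices of nonzero entries of b
    c = np.zeros((n, n))
    # paste the small (len(ia) x len(ib)) outer product into the big matrix
    c[np.ix_(ia, ib)] = np.outer(a[ia], b[ib])
    return c

Whether this beats a plain np.outer(a, b) depends on the fraction of zeros; allocating the n-by-n result usually dominates, which is exactly why the sparse csr_matrix timing above is worth running.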
{"url":"http://mail.scipy.org/pipermail/scipy-user/2008-January/014994.html","timestamp":"2014-04-18T20:50:45Z","content_type":null,"content_length":"3892","record_id":"<urn:uuid:50872730-5605-40c4-83e1-7fbfca54880c>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00236-ip-10-147-4-33.ec2.internal.warc.gz"}
Here's the question:

Andreas received a new book for his birthday. He read 3/8 of the book in the first week, then 53 more pages in the second week. If he still has 117 pages to read, how many total pages are in his book?

Best Response:

Let the total number of pages in the book be x.
- In the first week, he reads 3/8 of the book, so in the first week he reads (3/8)x pages.
- In the second week, he reads 53 pages.
- After all this, he has 117 pages left.

So basically you can divide the book into three sections: first week, second week, and pages left after the second week. All of these, when added, equal the total number of pages in the book. So from this relation you can draw the following equation:

\[\frac{3x}{8} + 53 + 117 = x\]

Separate variable and non-variable terms, and you get:

\[53 + 117 = x - \frac{3x}{8}\]
\[170 = \frac{8x - 3x}{8}\]
\[170 \cdot 8 = 5x\]
\[1360 = 5x\]

Now solve for x; this gives you x = 272. The book has 272 pages.
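A quick numeric check of the solution above, in plain Python with exact rational arithmetic:

from fractions import Fraction

week2, remaining = 53, 117
# x - (3/8)x = 53 + 117  ->  (5/8)x = 170
x = Fraction(week2 + remaining) / (1 - Fraction(3, 8))
print(x)                                              # 272
print(Fraction(3, 8) * x + week2 + remaining == x)    # True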
{"url":"http://openstudy.com/updates/4ff394cce4b01c7be8c7401a","timestamp":"2014-04-21T12:43:22Z","content_type":null,"content_length":"28406","record_id":"<urn:uuid:77f46566-bcab-4dd3-8342-78a33f34e17f>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00386-ip-10-147-4-33.ec2.internal.warc.gz"}
this is my code which is not performing the for loop!

12-09-2006 #1
Registered User, Join Date: Dec 2006

Write a program that calculates how much money you'll end up with if you invest an amount of money at a fixed interest rate, compounded yearly. Have the user furnish the initial amount, the number of years, and the yearly interest rate in percent. Some interaction with the program might look like this:

Enter initial amount: 3000
Enter number of years: 10
Enter interest rate (percent per year): 5.5
At the end of 10 years, you will have 5124.43 dollars.

At the end of the first year you have 3000 + (3000 * 0.055), which is 3165. At the end of the second year you have 3165 + (3165 * 0.055), which is 3339.08. Do this as many times as there are years. A for loop makes the calculation easy.

#include <iostream>
using namespace std;

int main()
{
    int i, y;
    char ch;
    float r, ans, n;
    cout << "enter the initial amount:";
    cin >> n;
    cout << "enter the no of years:";
    cin >> y;
    cout << "enter the rate of interest (percent per year):";
    cin >> r;
    for (ans = 1; ans <= n; ans++)
        ans = ans + (ans * r) / 100;
    cout << "at the end of the year the balace willbe:" << ans;
    cout << "do you want to continue(y/n)";
    cin >> ch;
    return 0;
}

1) Wrong forum
2) Maybe some, say, NEWLINES!!

Silence is better than unmeaning words. - Pythagoras
My blog

Moved to C++ forum

If you dance barefoot on the broken glass of undefined behaviour, you've got to expect the occasional cut.
If at first you don't succeed, try writing your phone number on the exam paper.
I support http://www.ukip.org/ as the first necessary step to a free Europe.

> this is my code which is not performing the for loop!
Because ans is being used to control the loop AND perform the calculations. Besides, shouldn't you be looping for the number of years, not the initial amount? You know, using meaningful variables like initialAmount and years rather than cryptic single letters would make the code read all wrong from the outset.

for(ans=1; ans<=initialAmount; ans++)

would simply make no sense as you wrote it.

for(i=0; i<years; i++)

would seem far more plausible.

i will try this out thanks a lot!

i have tried this too but not working. actually the loop is not repeating

Post all of your latest code, not just random single lines. Are you STILL using ans inside the loop for something else as well? Because that will mess you up.

why do you mess with "ans" inside the loop?
I guess that is what you are looking for:

#include <iostream>
using namespace std;

int main()
{
    int i, y;
    float r, ans, n;
    cout << "enter the initial amount:";
    cin >> n;
    cout << "enter the no of years:";
    cin >> y;
    cout << "enter the rate of interest (percent per year):";
    cin >> r;
    ans = n;
    for (i = 0; i < y; i++)          // loop over the years, not the amount
        ans = ans + (ans * r) / 100; // add one year's interest
    cout << "at the end of the year the balace will be:" << ans << endl;
    return 0;
}

My suggestion: name your variables something useful, like iamount, years, interest, balance. This is going to help you find errors in your code.
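As a sanity check on the loop logic, the same yearly compounding in a few lines of Python reproduces the 5124.43 figure from the problem statement:

amount, rate, years = 3000.0, 5.5, 10
for _ in range(years):
    amount += amount * rate / 100   # add one year's interest
print(round(amount, 2))             # 5124.43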
{"url":"http://cboard.cprogramming.com/cplusplus-programming/86384-my-code-not-performing-loop.html","timestamp":"2014-04-16T20:10:06Z","content_type":null,"content_length":"77143","record_id":"<urn:uuid:8c91d6bc-10a0-4120-8ba3-d18694ee00e2>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00630-ip-10-147-4-33.ec2.internal.warc.gz"}
CMS Winter 2005 Meeting

Let K be an o-symmetric convex body in d-dimensional Euclidean space and let |x| denote the norm of any point (vector) in d-space with respect to the norm generated by the centrally symmetric convex body K. Then let a(K) denote the minimum of the sum of the norms of points whose convex hull contains K. (Note that a(K) coincides with the illumination parameter of smooth convex bodies introduced by K. Bezdek in 1991.) In the talk we present lower and upper bounds on a(K). This is a joint work with A. Litvak (Univ. of Alberta).

With Helly's Theorem as the starting point, Transversal Theory has been one of the areas of constant research during the past fifty years in Discrete and Combinatorial Geometry. The central problem is to determine conditions under which there is a line (the transversal) that intersects every member of a family of compact convex sets in the Euclidean plane. We present a very brief history of the subject, concluding with recent developments of the past year.

In the projective plane W over a field F let S be a set of points. Assume also that there is a 1-1 [injective] mapping f from S into the lines of W satisfying the following two properties. A. For P in S, f(P) does not contain P. B. If P, Q are distinct points of S, then the points P, Q and R [which is the intersection of f(P) with f(Q)] are collinear. Then S is called a CO-TANGENCY set in W. In this lecture we present a structural result for co-tangency sets. Following this we present some applications including a classical result due initially to M. O'Nan.

There is an interesting question as to whether a Voronoi polytope can be written as the Minkowski sum of Voronoi polytopes in complementary subspaces. It is convenient to say that a Voronoi polytope is reducible if it can be written as such a Minkowski sum, but irreducible otherwise. This situation can be characterized in terms of the Venkov Graph, which will be defined in the course of the talk: the Voronoi polytope is irreducible if and only if the corresponding Venkov Graph is connected. I will describe how the question of reducibility relates to the theory of metrical forms for lattices, the question of the number of distinct tilings that can be constructed from a given Voronoi polytope, and the Scaling Theorem of Matroid Theory.

The classical Erdos distance problem says that N points in R^d determine at least $N^{2/d}$ Euclidean distances. We shall discuss an analog of this problem in vector spaces over finite fields. Estimates for classical Kloosterman sums play an important role.

General experience shows that narrower roadways are harder to traverse for vehicles with a bounded turning radius. One way to quantify this is to establish a sharp width threshold t such that (i) every roadway of width at least t (independent of its layout) is guaranteed to have a unit curvature-bounded traversal, and (ii) for any width w < t there exist roadways of width w that admit no such traversal. I will discuss the threshold t, extremal roadways, and related questions: if a given roadway has width less than t, how hard is it to determine its traversability; if a traversal exists, how hard is it to construct? Applications to cutting logs (as opposed to log-factors) will also be mentioned.

We prove a Pach-Sharir type incidence theorem for a class of algebraic curves in R^n and algebraic surfaces in R^3. Joint work with Jozsef Solymosi.

For points p, q ∈ R^n, $[p,q]_1$ denotes the intersection of all the unit balls that contain p and q.
A set of diameter at most two is called 1-convex if, for any pair p, q of its points, it contains $[p,q]_1$. The intersection of finitely many unit balls is called a ball-polytope. In this talk we examine which properties of convex sets and convex polytopes can be translated to the language of 1-convex sets and ball-polytopes. This is joint work with K. Bezdek, M. Naszodi and P. Papez.

An antipodal set in Euclidean n-space is a set of points with the property that through any two of them there is a pair of parallel hyperplanes supporting the set. In this talk, I will present two research topics that are connected by the idea of antipodality. The first part of the talk will focus on the extension of the above concept to hyperbolic n-space. This is joint work with Károly Bezdek and Deborah Oliveros. In the second part, the maximum number of touching positive homothetic copies of a convex body in Euclidean n-space will be discussed. According to a conjecture of Károly Bezdek and János Pach, this number should be 2^n; which bound, if it holds, is sharp as it is attained by cubes. The previously known bound was 3^n, which I improved to 2^(n+1).

We investigate a geometric problem motivated by sensor networks, which have emerged as a model for ubiquitous computing and monitoring of the physical world. If sensor networks are to act as our remote "eyes and ears", then we need to ensure that any significant failure (natural or adversarial) suffered by the network is promptly and efficiently detected. In this talk we will consider a concrete problem of detecting linear cuts that isolate at least an ε fraction of the nodes from the base station. We show that the base station can detect whenever an ε-cut occurs by monitoring the status of just O(1/ε) nodes in the network. Our scheme is deterministic and it is free of false positives: no reported cut has size smaller than εn/2. Besides this combinatorial result, we also propose efficient algorithms for finding the O(1/ε) nodes that should act as sentinels, and report on our simulation results, comparing the sentinel algorithm with two natural schemes based on sampling. This is joint work with Nisheeth Shrivastava and Csaba Toth.

In this survey talk we consider how many edges a geometric graph with n vertices may have if it does not contain some specific forbidden configuration as a subgraph. Here a geometric graph is a graph whose vertices are points in general position in the plane and whose edges are straight line segments. The simplest forbidden configuration to consider is a pair of crossing edges: if no pair of edges crosses, we have a planar graph with at most 3n-6 edges. Agarwal et al. considered three pairwise crossing edges and proved the number of edges is still linear if this forbidden configuration is not contained. Recently Eyal Ackerman extended the same result to four pairwise crossing edges. With Ackerman we found the maximal number of edges in a geometric graph not containing three pairwise crossing edges within a small additive constant. It is a challenge to extend the linear bound to the forbidden configuration of five (or more) pairwise crossing edges. At present Valtr's O(n log n) bound is the best. Another natural choice for a forbidden configuration is the self-crossing drawings of a small (planar) graph. With Pach, Pinchasi, and Tóth we proved that the maximal number of edges in a geometric graph not containing a self-crossing path of three edges is Θ(n log n).
For longer self-crossing paths as forbidden patterns the exact order of magnitude is not known, but it is larger than linear (as shown by a randomized pruning procedure) and is o(n log n). For forbidden self-intersecting cycles of length 4 we proved an $O(n^{3/2}\log n)$ bound on the number of edges with Adam Marcus, which is almost tight as abstract graphs with $\Omega(n^{3/2})$ edges and no $C_4$ subgraphs exist. This result found many applications in other parts of combinatorial geometry. Finding the maximal number of edges for other types of forbidden patterns (and tighter estimates for some of the above patterns) raises many interesting research problems.

Determining the maximum number of unit distances determined by n points in the plane is one of the notoriously hard Erdös problems in combinatorial geometry. It is easy to give a tight bound on the number of occurrences of the minimum and maximum distance, which is at most $3n - O(\sqrt{n})$ and n, respectively. Finding the maximum number of unit area triangles determined by n points in the plane is similarly hard as the unit distance problem. It is known, however, that the minimum and maximum triangle areas can occur O(n^2) and O(n) times, and both bounds are tight. We pursue the analogous problems in space, and find bounds on the maximal number of unit, minimum, and maximum volume tetrahedra determined by n points in three dimensions, along with some new techniques.

Put 2n points on a circle, then join them in pairs by n chords. It is well known that the number of non-crossing configurations is $\frac{1}{n+1}\binom{2n}{n}$, the n-th Catalan number. A natural extension is to enumerate all configurations by the number of crossings of the chords, and it is proved that the distribution of the number of crossings is identical to that of nestings. In this talk we extend the above result. We introduce the notion of crossing number and nesting number for a chord-configuration, and prove that these two statistics are distributed symmetrically over all chord configurations. A similar result also holds over all set partitions.
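The Catalan-number count of non-crossing chord configurations in the last abstract is easy to verify computationally. A short sketch (names mine): fixing one endpoint, its chord splits the remaining points into two independent arcs, which gives the classic recurrence:

from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def noncrossing(n):
    """Number of non-crossing perfect matchings of 2n points on a circle."""
    if n == 0:
        return 1
    # the chord from a fixed point leaves k pairs on one side, n-1-k on the other
    return sum(noncrossing(k) * noncrossing(n - 1 - k) for k in range(n))

# agrees with the closed form (1/(n+1)) * binom(2n, n)
assert all(noncrossing(n) == comb(2 * n, n) // (n + 1) for n in range(12))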
{"url":"http://cms.math.ca/Events/winter05/abs/dcg.html","timestamp":"2014-04-17T06:46:57Z","content_type":null,"content_length":"23297","record_id":"<urn:uuid:af486101-8a35-47dd-aea2-abbde203df46>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00022-ip-10-147-4-33.ec2.internal.warc.gz"}
I do want to ask though: is a right frustum just a straight, generic one, or is it one that is tilted? Also, I checked out the link, and it seems that the full surface area (as in, the whole surface of it, including the bases) is the formula given there, where the dot is multiplication. Am I right? I am kind of confused. The volume formula, though, is easy for me to understand.
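Since the formula images did not survive in this copy, for comparison here are the standard formulas for a right conical frustum (one whose axis is perpendicular to both bases, i.e. straight, not tilted), with base radii $R$ and $r$, height $h$ and slant height $s$:

\[s = \sqrt{(R - r)^2 + h^2}, \qquad A_{\text{total}} = \pi (R + r)\,s + \pi R^2 + \pi r^2, \qquad V = \frac{\pi h}{3}\left(R^2 + Rr + r^2\right)\]

The first term of $A_{\text{total}}$ is the lateral surface; the other two terms are the areas of the two bases.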
{"url":"http://www.mathisfunforum.com/post.php?tid=19072&qid=256876","timestamp":"2014-04-18T03:26:36Z","content_type":null,"content_length":"23029","record_id":"<urn:uuid:b46404c8-43f2-4731-9a47-55e9965f64ef>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00051-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Help
April 8th 2010, 06:03 AM #1

Hi all, I am trying to figure the following out: Let Z, X be random variables and Y = X + Z. For a known f(X) and g(Z), and a given data set of Y, it is required to find the parameters of the f and g functions. I wonder if this is at all possible. If f and g are Normal then the distribution of Y is also Normal, with the sum of the means and variances of f and g. In this case I presume it is not possible to derive the means of X and Z separately. Is this true? And if it is, is this true for any arbitrary pair of functions f and g? With thanks,

Are Z and X independent?

Yes, X and Z are independent. Say, f is lognormal and g is Weibull or gamma. Is it possible to find unique parameters of these two distributions based on the observations of Y, say, using MLE?!

OK, thank you for having a look at it.

I just noticed the densities of X and Z are known. Then you can definitely use the MGF. Simply factorize $E(e^{Yt})$ into the product of the two known MGF's and your parameters will be apparent. Of course I'm assuming the generating functions exist...

Thank you for your suggestion. I tried using Weibull for both X and Z. It looks like it is not possible to find the parameters of f and g uniquely based on observations of Y. More like trying to solve 5 = x + z for x and z.

What information do you have about $Y$? Can you fit the data? If so, the above task should be easy enough. Try it, I don't see why it wouldn't work. You know, as long as $f_Y$ is integrable...

The information on Y is in the form of a set of observations, say: Y = {24, 15, 78 ...}. To be honest I am not entirely sure how to fit the data on $M_Y = M_X M_Z$. And my main issue with this is: if the convolution of two Normal distributions is also Normal, then you would fit the data using MLE for f(y) ~ Normal($m_Y$, $\mathrm{Var}_Y$). Since $m_Y = m_X + m_Z$ there is no way to deduce a unique $m_X$ or $m_Z$, is there?! Same for the variance. Hence, for other distributions a similar problem may hold true?!

You know X and Z to be normal? Then clearly, we can fit Y with a normal distribution. Determine its mean and variance empirically...

Surely we can fit the data to Y, as we know it is Normal because X and Z are Normal. And we will get f(y), which is a Normal distribution with mean $m_Y$ and variance $\mathrm{Var}_Y$. However, I want to find the means and variances of Z and X. Thanks for trying to help.

Okay I see the issue now...
Well, you have a bound on the sum of your means, and you can standardize one of them to draw some inference on the other... Obviously there are an infinite number of possible combinations for the parameters of X and Z, though.

My main question was whether this generalises to all combinations of different forms of f(X) and g(Z). Say, if the resulting f(Y) is bimodal, can we find unique parameters for f(X) and g(Z)?
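To summarise the MGF argument from the thread in one display (for independent $X$ and $Z$):

\[M_Y(t) = E\!\left(e^{t(X+Z)}\right) = M_X(t)\,M_Z(t),\]

so in principle the parameters appear once $M_Y$ is factorised into the two known families. The normal case shows why uniqueness can still fail: if $X \sim N(\mu_X, \sigma_X^2)$ and $Z \sim N(\mu_Z, \sigma_Z^2)$, then

\[M_Y(t) = \exp\!\left((\mu_X + \mu_Z)t + \tfrac{1}{2}(\sigma_X^2 + \sigma_Z^2)t^2\right),\]

which determines only the sums $\mu_X + \mu_Z$ and $\sigma_X^2 + \sigma_Z^2$: exactly the "5 = x + z" situation described above.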
{"url":"http://mathhelpforum.com/advanced-statistics/137907-convolution.html","timestamp":"2014-04-21T06:09:16Z","content_type":null,"content_length":"67521","record_id":"<urn:uuid:6353bc2c-8551-4a4b-a460-c780a43f3648>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00469-ip-10-147-4-33.ec2.internal.warc.gz"}
Brazilian Journal of Chemical Engineering
Print version ISSN 0104-6632
Braz. J. Chem. Eng. vol.29 no.3 São Paulo July/Sept. 2012

PROCESS SYSTEMS ENGINEERING

Mathematical modeling of a three-phase trickle bed reactor

J. D. Silva^I,*; C. A. M. Abreu^II

^I Polytechnic School, UPE, Laboratory of Environmental and Energetic Technology, Phone: + (81) 3183-7515, Rua Benfica 455, Madalena, CEP: 50750-470, Recife - PE, Brazil. E-mail: jornandesdias@poli.br
^II Department of Chemical Engineering, Federal University of Pernambuco (UFPE), Phone: + (81) 2126-8901, R. Prof. Artur de Sá, 50740-521, Recife - PE, Brazil. E-mail: cesar@ufpe.br

ABSTRACT

The transient behavior in a three-phase trickle bed reactor system (N₂/H₂O-KCl/activated carbon, 298 K, 1.01 bar) was evaluated using a dynamic tracer method. The system operated with liquid and gas phases flowing downward with constant gas flow Q_G = 2.50 x 10^-6 m^3 s^-1 and the liquid phase flow (Q_L) varying in the range from 4.25 x 10^-6 m^3 s^-1 to 0.50 x 10^-6 m^3 s^-1. The evolution of the KCl concentration in the aqueous liquid phase was measured at the outlet of the reactor in response to the concentration increase at the reactor inlet. A mathematical model was formulated and the solutions of the equations fitted to the measured tracer concentrations. The orders of magnitude of the axial dispersion, liquid-solid mass transfer and partial wetting efficiency coefficients were estimated based on a numerical optimization procedure in which the initial values of these coefficients, obtained from empirical correlations, were modified by comparing experimental and calculated tracer concentrations. The final optimized values of the coefficients were calculated by the minimization of a quadratic objective function. Three correlations were proposed to estimate the parameter values under the conditions employed. By comparing experimental and predicted tracer concentration step evolutions under different operating conditions, the model was validated.

Keywords: Trickle Bed; KCl Tracer; Modeling; Transient; Validation

INTRODUCTION

Mathematical modeling of three-phase trickle bed reactors (TBR) considers the mechanisms of forced convection, axial dispersion, interphase mass transport, intraparticle diffusion, adsorption and chemical reaction. These models are formulated by relating each phase to the others (Silva et al., 2003; Iliuta et al., 2002; Latifi et al., 1997; Burghardt et al., 1995). The trickle-bed reactor is a three-phase catalytic reactor in which liquid and gas phases flow concurrently downward through a fixed bed of solid catalyst particles where the reactions take place. These systems have been extensively used in hydrotreating and hydrodesulfurization in petroleum refining, in petrochemical hydrogenation and oxidation processes, and in biochemical treatment and detoxification of industrial waste water (Al-Dahhan et al., 1997; Dudukovic et al., 1999; Liu et al., 2008; Ayude et al., 2008; Rodrigo et al., 2009; Augier et al., 2010). The flow regimes occurring in a trickle-bed reactor depend on the liquid and gas mass flow rates, the properties of the fluids and the geometrical characteristics of the packed bed. A fundamental understanding of the hydrodynamics of trickle-bed reactors is indispensable to their design and scale-up and to predicting their performance (Charpentier and Favier, 1975; Specchia and Baldi, 1977).
The purpose of this work was to evaluate the transient behavior of a three-phase trickle bed reactor using a dynamic tracer method to estimate the magnitude of the hydrodynamic parameters related to the operations, including the axial dispersion coefficient in the liquid phase, the liquid-solid mass transfer coefficient and the partial wetting efficiency. A dynamic phenomenological model was proposed and validated with experimental reaction data.

To represent the dynamic behavior of the tracer component, a one-dimensional mathematical model was formulated considering the effects related to axial dispersion, liquid-solid mass transfer, partial wetting and chemical reaction. The model was adopted for KCl, considered to be the tracer component in the liquid phase, and was restricted to the following hypotheses: (i) isothermal operation; (ii) constant gas and liquid flow rates throughout the reactor; (iii) moderate intraparticle diffusion resistance; (iv) the chemical reaction rate within the catalytic solid is equal to the liquid-solid mass transfer rate, at any position of the reactor.

The mass balance for the tracer (A_L) in the liquid phase is written as Eq. (1); the initial and boundary conditions for Eq. (1) are given as Eqs. (2)-(4). The equality of the mass transfer and reaction rates can be expressed as Eq. (5). The kinetic model for the reaction was based on a first-order reaction, Eq. (6) (Colombo et al., 1976), where r_KCl is the consumption rate of the reactant, A_s(z, t) is the reactant concentration at the surface of the solid phase and k_r is the first-order rate constant. Combining Equations (5) and (6), the rate of mass transfer is equal to the rate of reaction at the surface of the solid phase, Eq. (7).

Equations (1) to (4) and (7) can be analyzed by employing the dimensionless variables in Table 1. Expressed in the dimensionless variables, the equations and the initial and boundary conditions can be rewritten as Eqs. (8) to (12). Equations (8) to (12) include the dimensionless concentration and the dimensionless parameters defined from the variables in Table 1.

Applications of the Laplace Transform (LT) to dynamic transport problems in three-phase trickle bed reactors with tracer (liquid, gas) are employed to solve the linear differential equations. To complete the solution, the Laplace Transform inversion method is indicated, where numerical inversion is often employed. In the present work, the LT technique was applied to the partial differential equation, Eq. (16), yielding Eq. (18), where the overbar and "s" indicate the LT and its domain variable, respectively. The initial and boundary conditions in the Laplace domain are given as Eqs. (19)-(21).

Eq. (18) is a second-order non-homogeneous ordinary differential equation. Its solution is expressed by Eq. (22) and is composed of the general solution of the homogeneous ordinary differential equation plus a particular solution of the non-homogeneous equation. The second-order homogeneous ordinary differential equation, Eq. (23), has its general solution given by Eq. (24); in terms of hyperbolic functions, Eq. (24) was written as Eq. (25), where f_1(s) and f_2(s) are expressed by f_1(s) = C_1(s) - C_2(s) and f_2(s) = C_1(s) + C_2(s). The particular solution was given by Eq. (26).

The general solution has been presented as Eq. (22), in which f_1(s) and f_2(s) are two arbitrary integration constants. Applying the boundary conditions from Eqs. (20) and (21) to the general solution, Eq. (27), led to the algebraic equations needed to find the arbitrary integration constants f_1(s) and f_2(s) in terms of known parameters.
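The article's Eqs. (1)-(4) were rendered as images in the original page and are lost in this copy. For orientation only, a generic axially dispersed tracer balance of the kind described (the actual equation in the paper may differ in detail, for example in how the holdup and wetting terms enter) reads:

\[\varepsilon_L \frac{\partial C_L}{\partial t} = D_{ax}\,\frac{\partial^2 C_L}{\partial z^2} - V_{SL}\,\frac{\partial C_L}{\partial z} - F_M\,k_{LS}\,a_{LS}\,\left(C_L - C_s\right),\]

with Danckwerts-type boundary conditions at the bed inlet and outlet. Here $\varepsilon_L$ is the liquid holdup, $a_{LS}$ the liquid-solid interfacial area per unit volume and $C_s$ the tracer concentration at the particle surface; these symbols are assumptions of this sketch, not the paper's notation.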
The expressions for these two constants have been found as Eqs. (28) and (29). Eqs. (28) and (29) were introduced into Eq. (27) to obtain the general solution for the tracer concentration in the liquid phase. To obtain the concentration evolution of the tracer at the exit of the trickle-bed reactor, the numerical fast Fourier transform (NFFT) technique was employed; in the NFFT operations, the Laplace variable "s" was changed to a complex frequency.

The transient behavior in a three-phase trickle bed reactor system (N₂/H₂O-KCl/activated carbon, 298 K, 1.01 bar) was evaluated by using a dynamic tracer method. The experiments were realized in a stainless steel reactor consisting of a fixed bed (0.22 m in height, 0.030 m inner diameter) of spherical catalytic pellets of activated carbon (d_p = 0.00045 m, CAQ 12/UFPE). The bed was in contact with a concurrent gas-liquid downward flow carrying the tracer in the liquid phase. Experiments were performed at constant gas flow Q_G = 2.50 x 10^-6 m^3 s^-1 and with the liquid phase flow (Q_L) varying in the range from 4.25 x 10^-6 m^3 s^-1 to 0.50 x 10^-6 m^3 s^-1. Under these conditions, the low interaction regime was guaranteed (Ramachandran and Smith, 1983; Silva et al., 2003). The evolution of the KCl concentration was measured at the exit of the reactor as the response to a concentration step at the reactor inlet. Continuous analysis of the KCl tracer, fed at the reactor top at a concentration of 0.05 M, was performed by using a refractive index detector (HPLC detector, Varian ProStar) at the exit of the fixed bed. The results were expressed in terms of the tracer concentration versus time.

The methodology applied to evaluate the order of magnitude of the axial dispersion, the liquid-solid mass transfer coefficient and the partial wetting efficiency for the N₂/H₂O-KCl/activated carbon system was:

• Comparison of the experimental concentrations with the predicted concentrations based on the solutions of Eq. (33), developed for the system;
• Evaluation of the initial values of the parameters D_ax, k_LS and F_M from the correlations in Table 2;
• Numerical optimization of the values of the model parameters employing, as the criterion, the minimization of a quadratic objective function expressed in terms of experimental and calculated concentrations, given by Eq. (34).

The operating conditions and the characteristics of the trickle-bed system are presented in Table 3. Experiments were performed at constant gas flow Q_G = 2.50 x 10^-6 m^3 s^-1 and with the liquid phase flow (Q_L) varying in the range from 0.50 x 10^-6 m^3 s^-1 to 4.25 x 10^-6 m^3 s^-1. The experiments carried out with liquid phase flows of (0.50, 0.75, 1.25, 1.75, 2.25, 2.75, 3.25, 3.75, 4.25) x 10^-6 m^3 s^-1 were employed to fit the model equations, while operations with liquid phase flows of (1.00, 1.50, 2.00, 2.50, 3.00, 3.50, 4.00) x 10^-6 m^3 s^-1 were used for the model validation. Corresponding to the gas and liquid phase flows, the following superficial velocities were employed in the model equations: for the gas phase (nitrogen), V_SG was maintained at 10^-3 m s^-1, and for the liquid aqueous solution of KCl, V_SL ranged from 2 x 10^-4 m s^-1 to 17 x 10^-4 m s^-1.

The values of the axial dispersion, the liquid-solid mass transfer coefficient and the partial wetting efficiency were determined simultaneously by comparing experimental and predicted concentration data obtained at the exit of the fixed bed, subject to the minimization of the quadratic objective function (F), Eq. (34).
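A minimal sketch of the parameter-estimation step described above, assuming SciPy; `model` here is hypothetical and stands in for the solver of Eq. (33). The paper itself used the complex method of Box (1965), so this illustrates the idea rather than the exact procedure:

import numpy as np
from scipy.optimize import minimize

def objective(params, t, c_exp, model):
    """Quadratic objective in the spirit of Eq. (34)."""
    d_ax, k_ls, f_m = params
    c_pred = model(t, d_ax, k_ls, f_m)     # predicted tracer concentrations
    return float(np.sum((c_exp - c_pred) ** 2))

# initial guesses from empirical correlations (Table 2), then refine:
# res = minimize(objective, x0=[1e-7, 1e-6, 0.5],
#                args=(t, c_exp, model), method='Nelder-Mead')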
The numerical procedure used to optimize the values of the parameters involved the solution of Eq. (34) associated with an optimization subroutine (Silva et al., 2003; Box, 1965). The procedure iterated from initial values of the parameters until the final values were obtained, considered to be the optimized values of the three parameters when the quadratic objective function was minimized. The magnitudes of the parameters at different liquid phase flows are reported in Table 4. The axial dispersion, the liquid-solid mass transfer coefficient and the wetting efficiency are influenced by changes in the liquid flow. To represent the behavior of D_ax, k_LS and F_M, their optimized values were employed and empirical correlations were formulated as Eqs. (35), (36) and (37). These are restricted to the operational conditions stated above (constant gas flow, with liquid flows between 0.50 x 10^-6 and 4.25 x 10^-6 m^3 s^-1).

The parameter correlations were fitted by the least-squares method. The mean relative errors (MRE) between the predicted and experimental parameter values of D_ax, k_LS and F_M in the k experiments were computed for each parameter p = D_ax, k_LS and F_M. Figures 1, 2 and 3 present parity plots of the correlated results. The mean relative errors of D_ax, k_LS and F_M at different liquid flows are shown in Table 5.

A model validation procedure was established by comparing the predicted concentrations, obtained with the values of the parameters from the proposed correlations (Eqs. (35), (36) and (37)), and experimental data not employed in the model adjustment. Table 6 presents the values of the parameters. Figures 4 to 6 represent the model validations for three different operating conditions, where the parameter values were obtained from Eqs. (35), (36) and (37).

CONCLUSIONS

The transient behavior of the three-phase trickle bed system N₂/H₂O-KCl/activated carbon was evaluated via an experimental dynamic method and via predictions of a phenomenological mathematical model. Operating at 298 K under 1.01 bar with liquid and gas phases flowing downward under constant gas flow Q_G = 2.50 x 10^-6 m^3 s^-1 and the liquid phase flow (Q_L) varying in the range from 4.25 x 10^-6 m^3 s^-1 to 0.50 x 10^-6 m^3 s^-1, the concentration of KCl was measured at the exit of the reactor in response to a concentration step at the reactor inlet. The solutions of the model equations predicted concentration profiles of the tracer employing optimized values of the parameters for the axial dispersion coefficient in the liquid phase, the liquid-solid mass transfer coefficient and the partial wetting efficiency. The magnitudes of the parameters were in the following ranges: D_ax = 6.986 x 10^-7 m^2 s^-1 to 0.572 x 10^-7 m^2 s^-1, k_LS = 6.109 x 10^-6 m s^-1 to 0.286 x 10^-6 m s^-1 and F_M = 0.581 to 0.465. These results led to the proposal of three empirical correlations to quantify the influence of liquid phase flow rate changes on the axial dispersion, liquid-solid mass transfer and wetting efficiency in the low interaction regime. Based on the values of the parameters indicated by the correlations, the model was validated by comparing its predictions with those obtained in different three-phase operations, with mean quadratic deviations between experimental and predicted concentrations on the order of 10^-4.

ACKNOWLEDGEMENT

The authors would like to thank CNPq (Conselho Nacional de Desenvolvimento Científico e Tecnológico) for financial support (Process 483541/07-9).

REFERENCES

Al-Dahhan, M. H., Larachi, F., Dudukovic, M. P. and Laurent, A., High-pressure trickle bed reactors: A review.
Industrial Engineering Chemical Research, 36, 3292-3314 (1997).
Augier, F., Koudil, A., Muszynski, L. and Yanouri, Q., Numerical approach to predict wetting and catalyst efficiencies inside trickle bed reactors. Chemical Engineering Science, 65, 255-260 (2010).
Ayude, A., Cechini, J., Cassanello, M., Martínez, O. and Haure, P., Trickle bed reactors: Effect of liquid flow modulation on catalytic activity. Chemical Engineering Science, 63, 4969-4973 (2008).
Box, P., A new method of constrained optimization and a comparison with other methods. Computer Journal, 8, 42-52 (1965).
Burghardt, A., Grazyna, B., Miczylaw, J. and Kolodziej, A., Hydrodynamics and mass transfer in a three-phase fixed bed reactor with concurrent gas-liquid downflow. Chemical Engineering Journal, 28, 83-99 (1995).
Burghardt, A., Kolodziej, A. S., Jaroszynski, M., Experimental studies of liquid-solid wetting efficiency in trickle-bed cocurrent reactors. Chemical Engineering Journal, 28, 35-49 (1990).
Charpentier, J. C., Favier, M., Some liquid holdup experimental data in trickle bed reactors for foaming and non foaming hydrocarbons. AIChE Journal, 21, 1213-1221 (1975).
Colombo, A. J., Baldi, G. and Sicardi, S., Solid-liquid contacting effectiveness in trickle-bed reactors. Chemical Engineering Science, 31, 1101-1108 (1976).
Dudukovic, M. P., Larachi, F., Mills, P. L., Multiphase reactors - revisited. Chemical Engineering Science, 54, 1975-1995 (1999).
Iliuta, I., Bildea, S. C., Iliuta, M. C. and Larachi, F., Analysis of trickle bed and packed bubble column bioreactors for combined carbon oxidation and nitrification. Brazilian Journal of Chemical Engineering, 19, 69-87 (2002).
Lange, R., Gutsche, R. and Hanika, J., Forced periodic operation of a trickle-bed reactor. Chemical Engineering Science, 54, 2569-2573 (1999).
Latifi, M. A., Naderifar, A. and Midoux, N., Experimental investigation of the liquid-solid mass transfer at the wall of trickle bed - influence of Schmidt number. Chemical Engineering Science, 52, 4005-4011 (1997).
Liu, G., Zhang, X., Wang, L., Zhang, S. and Mi, Z., Unsteady-state operation of trickle-bed reactor for dicyclopentadiene hydrogenation. Chemical Engineering Science, 36, 4991-5001 (2008).
Rodrigo, J. G., Rosa, L. and Quinta-Ferreira, M., Turbulence modelling of multiphase flow in high-pressure trickle reactor. Chemical Engineering Science, 64, 1806-1819 (2009).
Ramachandran, P. A. and Chaudhari, R. B., Three Phase Catalytic Reactors. Gordon and Breach, New York, USA, Chap. 7 (1983).
Silva, J. D., Lima, F. R. A., Abreu, C. A. M., Knoechelmann, A., Experimental analysis and dynamic modeling of the mass transfer processes for a fixed bed three-phase reactor in trickle bed regime. Brazilian Journal of Chemical Engineering, 20, n. 4, 375-390 (2003).
Specchia, V., Baldi, G., Pressure drop and liquid holdup for two phase cocurrent flow in packed beds. Chemical Engineering Science, 32, 515-523 (1977).
Tsamatsoulis, D. and Papayannakos, N., Simulation of non ideal flow in a trickle-bed hydrotreater by the cross-flow model. Chemical Engineering Science, 50, 3685-3691 (1995).

Submitted: August 5, 2010
Revised: July 27, 2011
Accepted: April 16, 2012

* To whom correspondence should be addressed
{"url":"http://www.scielo.br/scielo.php?script=sci_arttext&pid=S0104-66322012000300014&lng=en&nrm=iso&tlng=en","timestamp":"2014-04-20T11:09:57Z","content_type":null,"content_length":"53353","record_id":"<urn:uuid:ff7e4652-ac97-4c43-9d41-f2a6d6bd53f2>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00110-ip-10-147-4-33.ec2.internal.warc.gz"}
Posts about math on Girls' Angle Tag Archives: math Recently at the Girls’ Angle club, a girl asked: Has the Pythagorean theorem been proven? That was a terrific question! Unfortunately, I didn’t have time to discuss it as well or as fully as I would have liked. A lot of … Continue reading The latest issue of the Bulletin covers a wide variety of mathematics from combinatorics to topology. In the print version, there’s also an interview with MIT Ph.D. biostatistician Dana Pascovici who works at the Australian Proteome Analysis Facility in Sydney. … Continue reading I sometimes get science educator envy. I watch them convey ideas with hatching chickadees, snapping plants, fiery explosions, counter-intuitive gyroscopic dances, color-changing liquids, erupting volcanoes, whirling water spouts, sizzling arcs, glistening gems, Timothy Gowers, the mathematician who compiled the incredible Princeton Companion to Mathematics, has begun a series of blog posts directed toward students who aspire to become mathematicians. It’s really quite amazing! In math contests, participants match wits against humans (the contest designers). In mathematics, participants match wits against the mathiverse!
{"url":"http://girlsangle.wordpress.com/tag/math/","timestamp":"2014-04-17T18:24:11Z","content_type":null,"content_length":"41771","record_id":"<urn:uuid:1e25ba63-010e-441f-a618-a7fa6bf5de1f>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00238-ip-10-147-4-33.ec2.internal.warc.gz"}
Chemistry: Acid-Strong Base Reactions Video | MindBites

Chemistry: Acid-Strong Base Reactions

About this Lesson
• Type: Video Tutorial
• Length: 10:17
• Media: Video/mp4
• Use: Watch Online & Download
• Access Period: Unrestricted
• Download: MP4 (iPod compatible)
• Size: 111 MB
• Posted: 07/14/2009

This lesson is part of the following series:
Chemistry: Full Course (303 lessons, $198.00)
Chemistry: Final Exam Test Prep and Review (49 lessons, $64.35)
Chemistry: Equilibrium in Aqueous Solution (21 lessons, $31.68)
Chemistry: Reactions of Acids and Bases (3 lessons, $4.95)

This lesson was selected from a broader, comprehensive course, Chemistry, taught by Professor Harman, Professor Yee, and Professor Sammakia. This course and others are available from Thinkwell, Inc. The full course can be found at http://www.thinkwell.com/student/product/chemistry. The full course covers atoms, molecules and ions, stoichiometry, reactions in aqueous solutions, gases, thermochemistry, Modern Atomic Theory, electron configurations, periodicity, chemical bonding, molecular geometry, bonding theory, oxidation-reduction reactions, condensed phases, solution properties, kinetics, acids and bases, organic reactions, thermodynamics, nuclear chemistry, metals, nonmetals, biochemistry, organic chemistry, and more.

Dean Harman is a professor of chemistry at the University of Virginia, where he has been honored with several teaching awards. He heads the Harman Research Group, which specializes in the novel organic transformations made possible by electron-rich metal centers such as Os(II), Re(I), and W(0). He holds a Ph.D. from Stanford University. Gordon Yee is an associate professor of chemistry at Virginia Tech in Blacksburg, VA. He received his Ph.D. from Stanford University and completed postdoctoral work at DuPont. A widely published author, Professor Yee studies molecule-based magnetism. Tarek Sammakia is a Professor of Chemistry at the University of Colorado at Boulder, where he teaches organic chemistry to undergraduate and graduate students. He received his Ph.D. from Yale University and carried out postdoctoral research at Harvard University. He has received several national awards for his work in synthetic and mechanistic organic chemistry.

About this Author
Founded in 1997, Thinkwell has succeeded in creating "next-generation" textbooks that help students learn and teachers teach. Capitalizing on the power of new technology, Thinkwell products prepare students more effectively for their coursework than any printed textbook can. Thinkwell has assembled a group of talented industry professionals who have shaped the company into the leading provider of technology-based textbooks. For more information about Thinkwell, please visit www.thinkwell.com or visit Thinkwell's Video Lesson Store at http://thinkwell.mindbites.com/. Thinkwell lessons feature a star-studded cast of outstanding university professors: Edward Burger (Pre-Algebra through...

Transcript

We're now pretty comfortable with the idea that if we take a weak acid and put it in water, an equilibrium is established between that weak acid and its conjugate base. We know how to calculate the pH for that solution. We know how to calculate the equilibrium concentration of the weak base as well as of the weak acid.
We know how to do the same exact type of problem if we start with a weak base and put that in aqueous solution. Now, we're going to continue with the theme of looking at equilibria in aqueous solution in general, but now look at situations where we involve a reaction of an acid and a base. In particular, we're going to look at what happens when we react a weak acid with a strong base or with a weak base. In fact, we'll look at all of these combinations of acids and bases. We'll start with the simplest case, which is actually a strong acid and a strong base. We'll make sure that we're feeling comfortable with that neutralization reaction, except we'll quantify the final equilibrium that we get, and then we'll go on to weak acids and weak bases. So once again, our theme now will be the reaction of an acid with a base in aqueous solution.

So Case 1 would then be a strong acid and a strong base. In this case, we'll look at hydrochloric acid, which we know, when it dissolves in water, fully dissociates and gives us H⁺ and Cl⁻, and sodium hydroxide, which again, when it dissolves in water, fully dissociates to give us sodium plus and hydroxide. So overall, the reaction we're expecting between a strong acid and a strong base is a neutralization reaction giving us water and also giving us the spectator ions that were not involved in the reaction: sodium and the chloride, in this particular case.

Our first step then in describing this is to write out the individual ions in solution and identify the reaction. So here we go: H₃O⁺ and Cl⁻, that would be when we dissolve the HCl in the water, and then sodium plus and hydroxide. And those combine together again to give us water and sodium plus and chloride minus. Okay, now what we're going to do is remove the spectator ions from this and, again, write down just the actual reaction that we're interested in. And it simplifies just to H₃O⁺ plus hydroxide goes to two water.

Now, let me just remind you that when we write down H₃O⁺, this is a solvated proton. You may see in other textbooks, or your instructor may write, H⁺ instead of H₃O⁺, or H⁺ aqueous. It all means exactly the same thing. But if we're going to write down formally H₃O⁺, we have to account for that extra water in our product. So, again, you may see this final reaction as H⁺ plus OH⁻ goes to one H₂O. And it will mean exactly the same thing as this, just to clarify that point.

So once again, what we're interested in is this reaction. And we haven't seen that equilibrium, but we've seen something very similar to it. We've seen the opposite equilibrium. We've seen that H₂O goes to H₃O⁺ plus OH⁻, or rather 2 H₂O goes to H₃O⁺ plus hydroxide. Now, that's just the auto-ionization of water, or the dissociation of water. We know that that's K_w. It has a special name. The equilibrium constant at room temperature is 1.0 times 10^-14. So we have a value for the reverse of this reaction, and we remember from chemical equilibria that if we take that reaction and reverse it, what we must do to the K is take the reciprocal of the K. So in this case, the equilibrium constant describing this reaction as written is one over the equilibrium constant for the reverse reaction, and that's again K_w. And so, in this case, we end up with 1 x 10^14. Well, what does this tell us? 1 x 10^14 is a huge number.
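Written out, the bookkeeping in this passage is:

2 H₂O ⇌ H₃O⁺ + OH⁻    K_w = 1.0 x 10^-14
H₃O⁺ + OH⁻ ⇌ 2 H₂O    K = 1/K_w = 1.0 x 10^14

so the neutralization is just the auto-ionization of water run backwards, with the reciprocal equilibrium constant.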
It says that this equilibrium must lie very far to the right, and therefore, again, essentially we have complete neutralization, as long as we started with equal amounts of acid and base, of the H₃O⁺ and the hydroxide. We could ask, "All right, what's the concentration of H₃O⁺ or of hydroxide at equilibrium?" We know how to calculate that, because that's exactly the same thing as if we'd started just with water and let this system run to equilibrium. And we know then that, governed by this equilibrium (or, if we wanted to write it in reverse and flip the K, to give us K_w), the concentration of each of these must be 10^-7 if our final equilibrium constant is 1 x 10^14. And just to clarify that for you: with this written as is, remember that the equilibrium constant for this reaction is defined as the concentration of products (but we just have a liquid on the product side) over the concentration of reactants, and if each of these is 10^-7 molar, then that turns out to be 1 x 10^14.

So again, this may look different, but you've seen this before. This is just the exact reverse of water dissociating to give hydroxide and H₃O⁺.

Now the more interesting problem comes when we look at the reaction of a weak acid and a strong base. In this case, let's consider formic acid; this is a model for formic acid. Formic acid, in particular again, has got this O-H bond that's very easily cleaved, and the resulting negative charge here is resonance stabilized. So we have an acidic molecule, but it's not a strong acid. We know that when it's just dissolved in water, it's primarily in the form of formic acid, not of formate and H⁺. Nonetheless, when we combine that with a strong base, the question is: will that, in fact, go to formate plus water? This would be the reaction we'd be worried about. And where, in fact, does this equilibrium lie?

Well, the answer lies entirely with how weak this acid is and how strong this base is. There are some acids that are going to be so weak, for instance methane: it's going to be so weak that whether you throw a strong base at it or not, you're not going to get any significant reaction. So we have to now turn to the numbers, to K_a, which gives us a quantitative explanation of just how strong this acid and just how strong this base is, to see if this reaction actually will occur.

Now K_a, again, is what describes how strong this acid is. 1.8 x 10^-4 tells us it's a weak acid. And we're going to, now, want to write out the equation that we do know, that this number describes, and see if we can get from that equation to the overall chemical reaction that we're interested in. Okay, once again, we're trying to get an equilibrium constant for this reaction. Okay, well, what we do know is formic acid reacts, to some degree, to give us H₃O⁺ and formate, and that is described by the equilibrium constant K_a. We have a value for that. How do we go from here down to the equation in green, which is the one we just looked at? Well, we know that H₃O⁺ can combine with hydroxide, and remember, that's what we're adding, to give us water. And we know that equilibrium. That's just 1/K_w. Once again, we just saw this equilibrium, if you remember. So if we combine those two steps together, we end up with the reaction we're interested in. So, remember now, when we combine reactions to get a new reaction, we must take the product of those equilibrium constants. K_a divided by K_w is going to be our answer. In other words, K_a x (1/K_w) gives us the K that will describe that equilibrium.
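The two-step combination being described, written out (with HA = formic acid and A⁻ = formate):

HA + H₂O ⇌ H₃O⁺ + A⁻     K_a
H₃O⁺ + OH⁻ ⇌ 2 H₂O       1/K_w
----------------------------------------
HA + OH⁻ ⇌ A⁻ + H₂O      K = K_a x (1/K_w) = K_a / K_w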
Plugging in numbers: K_a is 1.8 x 10^-4; K_w is 1 x 10^-14. This gives us a K of 1.8 x 10^10. So in this particular case, this equilibrium lies far to the right. Remember that a large K tells us so. The equilibrium lies far to the right. So this is a strong enough weak acid (let me say that again: a strong enough weak acid) to react with this strong base, for the reaction to go far to this side. But again, that is not a general result. Sometimes this will work. Sometimes it won't. It depends on just how weak this acid actually is. In this case, again, we calculated that this is the dominant species. So if you were presented with a problem like: you mix these guys together in equal amounts, you establish this equilibrium, what is the pH? You can do this. Because you know that the dominant species in solution now will be sodium formate. This will all be consumed, and sodium formate in water is a problem you've dealt with before. That's a K_b problem. You know the K_b for sodium formate. You know how much sodium formate you have. And we already know how to calculate what the pH would be. If you want a little bit of a review on that, then go back to the earlier tutorials on how to do K_b problems.

So, again, just summarizing: when we're dealing with a reaction of an acid and a base, what we need to do first is assess where the equilibrium lies. Is it on the left or the right side? And in some cases, it's not easy to decide that by inspection. We need to actually go to the numbers and look at the K_a for just how strong that acid is, and combine that with our knowledge of how good the base is that we're interested in, and that's going to tell us where the equilibrium lies.
{"url":"http://www.mindbites.com/lesson/4795-chemistry-acid-strong-base-reactions","timestamp":"2014-04-20T08:22:37Z","content_type":null,"content_length":"61866","record_id":"<urn:uuid:cf5ed033-f336-44ee-81ed-dfabea79371e>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00279-ip-10-147-4-33.ec2.internal.warc.gz"}
evaluate determinant
September 19th 2005, 11:33 AM #1

Can anyone help me to evaluate the determinant by using properties? The determinant is:

| x^3   y^3   z^3 |
| x^2   y^2   z^2 |
| y+z   x+z   x+y |

We have to prove that the value of this determinant is equal to -(x-y)^2 (y-z)^2 (z-x)^2, by using properties.
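Before hunting for a proof by row operations, it is worth checking the claimed value symbolically; a minimal SymPy sketch:

import sympy as sp

x, y, z = sp.symbols('x y z')
M = sp.Matrix([[x**3, y**3, z**3],
               [x**2, y**2, z**2],
               [y + z, x + z, x + y]])
print(sp.factor(M.det()))
claimed = -(x - y)**2 * (y - z)**2 * (z - x)**2
print(sp.simplify(M.det() - claimed) == 0)   # does the claim hold?

A quick spot check by hand at (x, y, z) = (0, 1, 2) gives a determinant of -12, while the claimed product evaluates to -4, so the statement as typed may contain a typo in either the matrix or the target expression.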
{"url":"http://mathhelpforum.com/advanced-algebra/939-evaluate-determinant.html","timestamp":"2014-04-19T06:15:32Z","content_type":null,"content_length":"28961","record_id":"<urn:uuid:172b55d2-117f-44ea-a573-dbe07d3ffd74>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00237-ip-10-147-4-33.ec2.internal.warc.gz"}
find an equation of the line containing the given pair of points (2,3) and (6,5)?

If we take the rise over run of these two points, we can find the slope of the line: up 2 and over 4, so the slope is 2/4 = 1/2.

So if y = mx + b is the equation of a line, where m = slope and b = y-intercept, we can plug in the slope: y = (1/2)x + b.

Now we just need to know where the line crosses the y-axis. Step along the line using the slope until you find the point where the x value is 0: from the point (2,3) we go down 1 and to the left 2 to get (0,2), which gives the y-intercept as 2, since the value of y is 2 when the value of x is 0.

So the equation of the line is y = (1/2)x + 2.

Here is a graph of the line.
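The same computation as a tiny function (two points to slope-intercept form), handy for checking other point pairs:

def line_through(p, q):
    (x1, y1), (x2, y2) = p, q
    m = (y2 - y1) / (x2 - x1)   # slope: rise over run
    b = y1 - m * x1             # y-intercept: value of y when x = 0
    return m, b

print(line_through((2, 3), (6, 5)))   # (0.5, 2.0)  ->  y = (1/2)x + 2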
{"url":"http://mathhomeworkanswers.org/851/find-equation-the-line-containing-the-given-pair-points-and","timestamp":"2014-04-16T19:09:21Z","content_type":null,"content_length":"177097","record_id":"<urn:uuid:f24b2d92-d8b7-4ada-9e3f-5943bf714336>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00309-ip-10-147-4-33.ec2.internal.warc.gz"}
[Maxima] bfloat hangs computer
Raymond Toy raymond.toy at ericsson.com
Thu Mar 19 10:42:55 CDT 2009

Edwin Woollett wrote:
> On Wed, Mar 18, 2009 Robert Dodier wrote:
>> 1b<foo> attempts to call (EXPT 10 <foo>) so when <foo>
>> is a large integer (positive or negative), it is trying to compute
>> a very large integer or 1/(very large integer). I'm guessing that
>> when Maxima appears to hang, it is actually running EXPT and
>> just taking a very long time.
> In fact Maxima has no problem returning exp(-10^n) for n equal
> to 8 or larger. The problem is when Maxima is asked to
> add (or subtract) the number 1 to that tiny number
> using bfloat(1 + e) where e has been defined as exp(-bfloat(10^j))
> for j = 8 or larger.

Here is the fix that was checked in. Just place this in some file and load it. (Or compile and load it.) Then you can continue your work. It doesn't fix the issue about 1b<foo>, but it does fix the bfloat addition issue.

(in-package "MAXIMA")

(defun haipart (x n)
  (let ((x (abs x)))
    (if (< n 0)
        ;; If the desired number of bits is larger than the actual
        ;; number, just return the number. (Prevents gratuitously
        ;; generating a huge bignum if n is very large, as can happen
        ;; with bigfloats.)
        (if (< (integer-length x) (- n))
            x
            (logand x (1- (ash 1 (- n)))))
        (ash x (min (- n (integer-length x)) 0)))))

More information about the Maxima mailing list
{"url":"http://www.ma.utexas.edu/pipermail/maxima/2009/016201.html","timestamp":"2014-04-18T05:30:46Z","content_type":null,"content_length":"3997","record_id":"<urn:uuid:620b9964-1f6f-4f5c-a219-a1374e6ec12d>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00124-ip-10-147-4-33.ec2.internal.warc.gz"}
not 3x3 tic tac toe game AI
How do you make the AI for a tic tac toe game which isn't a 3x3 game but of unlimited size, where it takes 5 in a row to win? Which algorithms should I use?

You need to concentrate on where the points are. This is gomoku, right?

How 'bout this: If you can win in 1 move, place a stone in the place that would make you win. If the other player can win in 1 move, put your stone in the place that would make him win. Otherwise, check each of your stones, and if there are 4 open spaces in a line of which that stone is a part, then add a stone somewhere in that line. Pretty simple, but it shouldn't be too bad.

Or you could use a brute force approach: Given board size X * Y, and this configuration of pieces on the board, which moves have I had the best win-to-loss ratio with in the past? (its "memory" is a file) Of equally ranked moves, choose one at random. By "training" this against an entirely random opponent overnight, you should end up with a pretty good AI. And the longer it trains, the better it gets.

Brute force is too large even on a 20x20 board. This should show you how to build the AI.

A 20x20 board would be 3 ^ (20 * 20) = 3 ^ 400 = 7.05508e+190 states. And if each state has two bytes in a file, one representing wins and the other representing losses, then that would be, assuming my equation parser is working correctly, 1.31411e+182 gigabytes. A little too big, you're right! ;)

I got an idea. I always figured you pick a random number for x, and a random number for y:
x = // random number between the x coords on your board
y = // random number between the y coords on your board
then just
if // it is not one point from winning
board[x][y] = // X or O depending on whose turn it is
That might work; I think my logic is at least right.

I made a simple Tic Tac Toe program with a (bad) AI, and it will move into a winning position if it sees one; otherwise it will block you from winning. And if neither of those is the case, then it moves randomly. It works against stupid people. But anybody who's any good at Tic Tac Toe is sure to beat it. So I was trying to think of something better. You could probably think of it as a pathfinding problem, actually, where "winning" is the destination, the current state is the start, and all board states that can be reached from a given state are "adjacent tiles."

I actually coded a gomoku game, 20x20 tiles, which used alpha-beta. It could see ahead 3 or so moves but that's not enough to play well. I lost the source code to it; it was pretty bad. The best way, besides reading http://boardgames.about.com/gi/dyna...tor/thesis.html (you will probably have to download Ghostscript, and I think WinZip handles Unix-style .Z files), is to concentrate on the stones that are near to each other. You can narrow the possible moves to perhaps 20 instead of 400 and recognize forced positions.

Thanks, I've got to check that out. I've tested one which uses a good AI, which makes it almost impossible to win on the hardest level. Yes, maybe alpha-beta is the way to go.
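(Editorial aside, not from the thread: a compact Python sketch of the win-in-one / block-in-one heuristic described above; all names are illustrative, and a serious gomoku AI would need much more.)

```python
from typing import Optional, Tuple

def has_run(board, player, r, c, need=5):
    """True if `player` having (r, c) yields `need` in a row in any direction."""
    size = len(board)
    for dr, dc in ((0, 1), (1, 0), (1, 1), (1, -1)):
        count = 1
        for sign in (1, -1):                      # walk both ways along the line
            rr, cc = r + sign * dr, c + sign * dc
            while 0 <= rr < size and 0 <= cc < size and board[rr][cc] == player:
                count += 1
                rr, cc = rr + sign * dr, cc + sign * dc
        if count >= need:
            return True
    return False

def winning_move(board, player) -> Optional[Tuple[int, int]]:
    """Return a cell that completes five in a row for `player`, if one exists."""
    size = len(board)
    for r in range(size):
        for c in range(size):
            if board[r][c] is None:
                board[r][c] = player              # try the move
                won = has_run(board, player, r, c)
                board[r][c] = None                # undo it
                if won:
                    return (r, c)
    return None

def choose_move(board, me, opponent):
    # 1. win immediately if possible; 2. block the opponent's immediate win;
    # 3. otherwise fall back to the first empty cell (a real AI would do better).
    move = winning_move(board, me) or winning_move(board, opponent)
    if move:
        return move
    size = len(board)
    return next((r, c) for r in range(size) for c in range(size)
                if board[r][c] is None)

board = [[None] * 20 for _ in range(20)]   # empty 20x20 gomoku board
print(choose_move(board, "X", "O"))        # (0, 0) on an empty board
```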
{"url":"http://cboard.cprogramming.com/game-programming/443-not-3x3-tic-tac-toe-game-ai-printable-thread.html","timestamp":"2014-04-16T13:22:08Z","content_type":null,"content_length":"11492","record_id":"<urn:uuid:1a869795-cd1f-4329-9152-eeae43bb30d5>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00662-ip-10-147-4-33.ec2.internal.warc.gz"}
Quiz 3: Lines and Planes

Question 1
Find the equation of the line joining points $P(2,1,-1)$ and $Q(0,3,1)$, in vector form.
a) $x=\begin{bmatrix}2\\1\\-1\end{bmatrix}+t\begin{bmatrix}0\\3\\1\end{bmatrix}$
b) $x=\begin{bmatrix}2\\4\\0\end{bmatrix}+t\begin{bmatrix}2\\1\\-1\end{bmatrix}$
c) $x=\begin{bmatrix}2\\1\\-1\end{bmatrix}+t\begin{bmatrix}-2\\2\\2\end{bmatrix}$
d) $x=\begin{bmatrix}-2\\2\\2\end{bmatrix}+t\begin{bmatrix}2\\1\\-1\end{bmatrix}$
Not correct. Choice (a) is false.
Not correct. Choice (b) is false.
Your answer is correct. A vector parallel to the line is $\vec{PQ}=\vec{OQ}-\vec{OP}=\begin{bmatrix}-2\\2\\2\end{bmatrix}$. Hence the line can be represented by the vector equation $x=\vec{OP}+t\,\vec{PQ}$. Note that there are many other ways of giving a vector equation for this line. For example, instead of using the position vector of the known point $P$ in the equation, we could have used the position vector of $Q$ to give another version of the equation: $x=\begin{bmatrix}0\\3\\1\end{bmatrix}+t\begin{bmatrix}-2\\2\\2\end{bmatrix}$.
Not correct. Choice (d) is false.

Question 2
A line has parametric equations $x=2+3t,\quad y=1+2t,\quad z=-6-t$. A vector parallel to the line is:
There is at least one mistake. For example, choice (a) should be true. When the equations of a line are given in parametric form we can identify the coordinates of two points on the line by letting the parameter take two different values, such as $t=0$ and $t=1$. Hence $(2,1,-6)$ is on the line, and so is $(5,3,-7)$. The vector from the first to the second point, namely $\begin{bmatrix}3\\2\\-1\end{bmatrix}$, is therefore parallel to the line. The vector $\begin{bmatrix}6\\4\\-2\end{bmatrix}$ is a scalar multiple of $\begin{bmatrix}3\\2\\-1\end{bmatrix}$ and is also parallel to the line.
There is at least one mistake. For example, choice (b) should be false.
There is at least one mistake. For example, choice (c) should be true. The point $(2,1,-6)$ is on the line (corresponding to $t=0$), and so is $(5,3,-7)$ (corresponding to $t=1$). The vector from the first to the second point, namely $\begin{bmatrix}3\\2\\-1\end{bmatrix}$, is therefore parallel to the line.
There is at least one mistake. For example, choice (d) should be false.
Your answers are correct.
1. True. When the equations of a line are given in parametric form we can identify the coordinates of two points on the line by letting the parameter take two different values, such as $t=0$ and $t=1$. Hence $(2,1,-6)$ is on the line, and so is $(5,3,-7)$. The vector from the first to the second point, namely $\begin{bmatrix}3\\2\\-1\end{bmatrix}$, is therefore parallel to the line. The vector $\begin{bmatrix}6\\4\\-2\end{bmatrix}$ is a scalar multiple of it and is also parallel to the line.
2. False.
3. True. The point $(2,1,-6)$ is on the line (corresponding to $t=0$), and so is $(5,3,-7)$ (corresponding to $t=1$). The vector from the first to the second point, namely $\begin{bmatrix}3\\2\\-1\end{bmatrix}$, is therefore parallel to the line.
4. False.

Question 3
Given the parametric equations of a line, $x=3+5t,\quad y=2-t,\quad z=3+2t$, find the equation of the same line in vector form.
a) $x=\begin{bmatrix}5\\-1\\2\end{bmatrix}+t\begin{bmatrix}3\\2\\3\end{bmatrix}$
b) $x=\begin{bmatrix}5\\-1\\2\end{bmatrix}+t\begin{bmatrix}-3\\-2\\-3\end{bmatrix}$
c) $x=\begin{bmatrix}8\\1\\5\end{bmatrix}+t\begin{bmatrix}5\\1\\2\end{bmatrix}$
d) $x=\begin{bmatrix}3\\2\\3\end{bmatrix}+t\begin{bmatrix}5\\-1\\2\end{bmatrix}$
Not correct. Choice (a) is false.
Not correct. Choice (b) is false.
Not correct. Choice (c) is false.
Your answer is correct. In vector form, the equation of a line is written $x=p+td$, where $p$ is the position vector (relative to the origin) of a point $P$ on the line and $d$ is a direction vector for the line. The components of the direction vector are simply the coefficients of $t$ in the parametric equations.

Question 4
Find the equation of the line through the point $(1,2,1)$ which is parallel to the line given by the parametric equations $x=2+3t,\quad y=1+t,\quad z=5+4t$.
a) $x=\begin{bmatrix}1\\2\\1\end{bmatrix}+t\begin{bmatrix}2\\1\\5\end{bmatrix}$
b) $x=\begin{bmatrix}1\\2\\1\end{bmatrix}+t\begin{bmatrix}6\\2\\8\end{bmatrix}$
c) $x=\begin{bmatrix}2\\1\\5\end{bmatrix}+t\begin{bmatrix}3\\1\\4\end{bmatrix}$
d) $x=\begin{bmatrix}1\\2\\1\end{bmatrix}+t\begin{bmatrix}3\\-1\\4\end{bmatrix}$
Not correct. Choice (a) is false. The line $\ell$ must be parallel to $\begin{bmatrix}3\\1\\4\end{bmatrix}$. The line given in this option is parallel to $\begin{bmatrix}2\\1\\5\end{bmatrix}$, which is not a direction vector for $\ell$.
Your answer is correct. A vector parallel to the line is $\begin{bmatrix}3\\1\\4\end{bmatrix}$ (from the coefficients of $t$ in the parametric equations). So the line is also parallel to $\begin{bmatrix}6\\2\\8\end{bmatrix}$. Therefore, since the line passes through the point $(1,2,1)$, it has vector equation $x=\begin{bmatrix}1\\2\\1\end{bmatrix}+t\begin{bmatrix}6\\2\\8\end{bmatrix}$.
Not correct. Choice (c) is false. This line is parallel to the vector $\begin{bmatrix}3\\1\\4\end{bmatrix}$, but does not contain the point $(1,2,1)$. If it did, there would be a value of the parameter $t$ satisfying all three of the following equations simultaneously: $1=2+3t,\quad 2=1+t,\quad 1=5+4t$. It is easy to see that no such $t$ exists!
Not correct. Choice (d) is false. The line given in this option is parallel to the vector $\begin{bmatrix}3\\-1\\4\end{bmatrix}$, which is not parallel to the line given in the question.

Question 5
Suppose that $P$ and $Q$ are two distinct points in 3-dimensional space. How many planes are there which contain both $P$ and $Q$?
Not correct. Choice (a) is false.
Not correct. Choice (b) is false.
Not correct. Choice (c) is false.
Your answer is correct. There are infinitely many planes containing two distinct points. To see this, visualise the line joining the two points as the spine of a book, and the infinitely many planes as pages of the book.

Question 6
Find the general equation of the plane which goes through the point $(3,1,0)$ and is perpendicular to the vector $\begin{bmatrix}1\\-1\\2\end{bmatrix}$.
Not correct. Choice (a) is false.
Not correct. Choice (b) is false.
Your answer is correct. The general equation of the plane through the point $(p,q,r)$ perpendicular to the vector $\begin{bmatrix}a\\b\\c\end{bmatrix}$ is $a(x-p)+b(y-q)+c(z-r)=0$. In this particular case, the equation becomes $1(x-3)-1(y-1)+2(z-0)=0$, that is, $x-y+2z=2$.
Not correct. Choice (d) is false.
Not correct. Choice (e) is false.

Question 7
Find the equation of the unique plane through the three points $A=(3,-2,1)$, $B=(1,1,5)$, $C=(-2,4,0)$.
Not correct. Choice (a) is false. First find a vector perpendicular (normal) to the plane by calculating $\vec{AB}\times\vec{BC}$. Then remember that the formula $ax+by+cz=d$ for the equation of a plane gives us the information that the vector $[a,b,c]$ is normal to the plane. So the only unknown constant left to find is the constant $d$. This can be evaluated by substituting the coordinates of any of the points $A$, $B$, $C$ into the equation.
Not correct. Choice (b) is false. First find a vector perpendicular (normal) to the plane by calculating $\vec{AB}\times\vec{BC}$. Then remember that the formula $ax+by+cz=d$ for the equation of a plane gives us the information that the vector $[a,b,c]$ is normal to the plane. So the only unknown constant left to find is the constant $d$. This can be evaluated by substituting the coordinates of any of the points $A$, $B$, $C$ into the equation.
Not correct. Choice (c) is false. First find a vector perpendicular (normal) to the plane by calculating $\vec{AB}\times\vec{BC}$. Then remember that the formula $ax+by+cz=d$ for the equation of a plane gives us the information that the vector $[a,b,c]$ is normal to the plane. So the only unknown constant left to find is the constant $d$. This can be evaluated by substituting the coordinates of any of the points $A$, $B$, $C$ into the equation.
Your answer is correct. The vector $\vec{AB}$ equals $[-2,3,4]$ and the vector $\vec{BC}$ equals $[-3,3,-5]$. These two vectors are parallel to the plane and so their cross product is perpendicular to the plane. We find that $\vec{AB}\times\vec{BC}=[-27,-22,3]$. The equation of the plane has the form $-27x-22y+3z=d$, where we can find $d$ by substituting the coordinates of any of the three original points. This gives $d=-34$ and the answer follows.

Question 8
Find a vector perpendicular to the two lines $x=\begin{bmatrix}2\\-2\\1\end{bmatrix}+t\begin{bmatrix}1\\0\\3\end{bmatrix}$ and $x=\begin{bmatrix}0\\2\\7\end{bmatrix}+t\begin{bmatrix}4\\-2\\7\end{bmatrix}$.
Not correct. Choice (a) is false. The first line is parallel to $[1,0,3]$ and the second is parallel to $[4,-2,7]$. A vector perpendicular to both lines is therefore the cross product of these two vectors.
Not correct. Choice (b) is false. The first line is parallel to $[1,0,3]$ and the second is parallel to $[4,-2,7]$. A vector perpendicular to both lines is therefore the cross product of these two vectors.
Your answer is correct.
Not correct. Choice (d) is false. The first line is parallel to $[1,0,3]$ and the second is parallel to $[4,-2,7]$. A vector perpendicular to both lines is therefore the cross product of these two vectors.
Not correct. Choice (e) is false. The first line is parallel to $[1,0,3]$ and the second is parallel to $[4,-2,7]$. A vector perpendicular to both lines is therefore the cross product of these two vectors.

Question 9
The vectors $u$ and $v$ are non-parallel. Which of the following vectors are perpendicular to $u\times v$?
There is at least one mistake. For example, choice (a) should be true.
There is at least one mistake. For example, choice (b) should be false. This is just a scalar multiple of $u\times v$ and is therefore a vector in the same or opposite direction, not perpendicular to $u\times v$.
There is at least one mistake. For example, choice (c) should be false. This is the negative of $u\times v$ and is therefore a vector in the opposite direction to $u\times v$.
There is at least one mistake. For example, choice (d) should be true.
There is at least one mistake. For example, choice (e) should be true.
There is at least one mistake. For example, choice (f) should be false. The properties of the cross product give $(u+v)\times v=u\times v+v\times v=u\times v+0=u\times v$. Therefore this option is certainly not perpendicular to $u\times v$ – it is equal to it.
Your answers are correct.
1. True.
2. False. This is just a scalar multiple of $u\times v$ and is therefore a vector in the same or opposite direction, not perpendicular to $u\times v$.
3. False. This is the negative of $u\times v$ and is therefore a vector in the opposite direction to $u\times v$.
4. True.
5. True.
6. False. The properties of the cross product give $(u+v)\times v=u\times v+v\times v=u\times v+0=u\times v$. Therefore this option is certainly not perpendicular to $u\times v$ – it is equal to it.

Question 10
Find the acute angle between the planes $3x+y+z=0$ and $x-2y+z=3$.
Not correct. Choice (a) is false. Hint: Find the angle between the normals to the two planes.
Your answer is correct. The two planes are not parallel, since the first plane is perpendicular to $n=\begin{bmatrix}3\\1\\1\end{bmatrix}$ and the second plane is perpendicular to $m=\begin{bmatrix}1\\-2\\1\end{bmatrix}$, and $n$ is not parallel to $m$. The angle between two planes is the same as the angle between the normals to the planes, so we need to find the angle between the vectors $n$ and $m$. This angle is given by the now well-known formula $\cos\theta=\frac{n\cdot m}{\|n\|\,\|m\|}=\frac{2}{\sqrt{11}\sqrt{6}}$. So the angle between the two planes is the (acute) angle $\theta=\cos^{-1}\frac{2}{\sqrt{66}}\approx 1.32$ radians.
Not correct. Choice (c) is false. Hint: Find the angle between the normals to the two planes.
Not correct. Choice (d) is false. Hint: Find the angle between the normals to the two planes.
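(An editorial check, not part of the quiz page: the computations in Questions 7 and 10 verified numerically, assuming numpy is available.)

```python
import numpy as np

# Question 7: plane through A, B, C via the cross product of two edge vectors.
A, B, C = np.array([3, -2, 1]), np.array([1, 1, 5]), np.array([-2, 4, 0])
n = np.cross(B - A, C - B)     # normal vector to the plane
d = n @ A                      # substitute A to get the constant term
print(n, d)                    # [-27 -22   3] -34  ->  -27x - 22y + 3z = -34

# Question 10: the acute angle between planes is the angle between normals.
n1, n2 = np.array([3, 1, 1]), np.array([1, -2, 1])
cos_t = (n1 @ n2) / (np.linalg.norm(n1) * np.linalg.norm(n2))
print(np.arccos(cos_t))        # about 1.32 radians
```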
{"url":"http://www.maths.usyd.edu.au/u/UG/JM/MATH1014/Quizzes/quiz3.html","timestamp":"2014-04-21T04:48:24Z","content_type":null,"content_length":"122246","record_id":"<urn:uuid:3bcb2f2d-96b5-4060-b43e-75cd74f0bfd1>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00026-ip-10-147-4-33.ec2.internal.warc.gz"}
Please verify answer on this set question October 19th 2011, 11:35 AM #1
The answer I get is that the intersection of all 3 is <= -5! But it should be a positive integer in my mind. I have worked this out about 10 times now and I get -5 every time! Any help much appreciated!

Re: Please verify answer on this set question
The problem does not make much sense to me. Suppose the first 15 out of 25 attend session one. Suppose the 10 others plus 8 of the first 15 attend session two (so that 7 of the ones that attended the first session do not attend the second). Then you could have that the 10 others again and two of the 7 attend the third. That way, no one attends all three. The answer would be zero.

Re: Please verify answer on this set question
It might help you to know that $n(A \cup B \cup C) = n(A) + n(B) + n(C) - n(A\cap B) - n(A\cap C) - n(B \cap C) + n(A \cap B \cap C)$

Re: Please verify answer on this set question
Hi, I used this too and got -5 also.

Re: Please verify answer on this set question
The trouble is, the estimates on the two-set intersections are all done independently, and they're not truly independent of each other. For example, it's easy to establish that |A∩B| must be at least 8, but if we assume it IS 8, we get "higher estimates" for A∩C and B∩C. However, the minimum, as the problem is stated, is indeed 0. Assume A∩B = 8. Then 7 people attended the 1st but not the 2nd session, and 10 people attended the 2nd but not the 1st session. All of the people who attended the 3rd session (12 people) must then have attended the 1st and 3rd only, the 2nd and 3rd only, or all 3. No more than 7 of these 12 could have attended just the 1st and 3rd, because only 7 attended the 1st but not the 2nd. The remaining 5 could have attended just the 2nd and 3rd, giving the following breakdown:
8 people attended the 1st and 2nd only
7 people attended the 1st and 3rd only
5 people attended the 2nd and 3rd only
5 people attended only the 2nd.
Summing, we have: 25 = |A∪B∪C| = |A|+|B|+|C| - |A∩B| - |A∩C| - |B∩C| + |A∩B∩C| = 15+18+12-8-7-5+0 = 45-20 = 25, so we see it is possible that no one attended all 3.

Re: Please verify answer on this set question
Thanks! I thought it would be zero but the -5 was throwing me off. Thanks!!
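(Editorial aside, not from the thread: a quick Python check that the constructed arrangement above satisfies all the stated counts.)

```python
# Verify the arrangement: 25 people total, sessions of 15, 18 and 12
# attendees, and nobody attending all three.
arrangement = {
    frozenset({1, 2}): 8,   # attended the 1st and 2nd sessions only
    frozenset({1, 3}): 7,   # 1st and 3rd only
    frozenset({2, 3}): 5,   # 2nd and 3rd only
    frozenset({2}): 5,      # 2nd only
}
total = sum(arrangement.values())
per_session = {s: sum(n for grp, n in arrangement.items() if s in grp)
               for s in (1, 2, 3)}
all_three = arrangement.get(frozenset({1, 2, 3}), 0)
print(total, per_session, all_three)   # 25 {1: 15, 2: 18, 3: 12} 0
```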
{"url":"http://mathhelpforum.com/discrete-math/190803-please-verify-answer-set-question.html","timestamp":"2014-04-18T10:15:25Z","content_type":null,"content_length":"51067","record_id":"<urn:uuid:71ba9ce5-fecc-49cb-a95f-ca2ccd95c580>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00052-ip-10-147-4-33.ec2.internal.warc.gz"}
Apolo Ohno Physics

It is winter Olympics time and time for physics. The event that always gets me thinking about physics is short track speed skating. It is quite interesting to see these skaters turn and lean at such high angles. All it needs is a little sprinkling of physics for flavor. Check out this image of Apolo (apparently, it is not Apollo).

How about I start with a force diagram? I know what you are thinking… F_cent… what force is that? Yes, I am going to use the centrifugal force in this case – but remember that sometimes fake forces are awesome. In short, if I want to pretend like Apolo is not accelerating then I need to add the fake centrifugal force (which is in the opposite direction as the actual acceleration). Remind me later and I will re-visit this problem without using fake forces. Anyway, the centrifugal force will have the magnitude:

F_cent = m v^2 / r

Here v is the speed that Apolo (or any skater) is moving and r is the radius of the circle he (or she) is moving in. I drew this vector for the centrifugal force as acting at the center of mass of the skater. This isn't exactly true. The problem is that different parts of the skater are moving in circles of different radii. However, the difference is probably (but I will look at it later) not so large that it matters.

For the other forces, notice that the ice exerts two forces (well, one force that I broke into two components). There is a component parallel to the ice. This is a static friction force where the skate blades cut into the ice. Also, the ice pushes up perpendicular to the ice. This is the normal force. I assumed that the kinetic friction force (which would be into the page, opposite the direction of motion) is small enough to be ignored. Really, that is the cool thing about ice skating. Ice needs to be low friction in the direction the skate moves and high friction perpendicular to the blade.

Back to the force diagram, there are two things to consider. The forces must add up to the zero vector (because I am assuming the reference frame of the skater is not accelerating). Also, the torque must be zero about any point. For this case, I will choose the point where the skates touch the ice. This will give three scalar equations (two for the forces and one for the torque). Forgive me, but I am not going to go into all the torque details for now. Wait – I forgot one more parameter – the distance from the point where the skates contact the ice to the center of mass. I will call this distance s (for no particular reason).

Now I can make a substitution for both the centrifugal force (which I wrote above) and the frictional force. I will assume a coefficient of static friction of mu_s. I will also assume that the skater is just at the point of slipping. This means that the static frictional force is the greatest it can be (so there will be an equal sign and not a less-than-or-equal sign). Substituting in for the friction and the centrifugal force in the x-direction force equation:

mu_s N = m v^2 / r

And again for the y-direction substituting for the centrifugal force:

N - m g = 0

There is another important relationship here. I am going to assume that the sum of the frictional and normal forces must be directed towards the center of mass. This means that:

tan(theta) = N / F_friction = 1 / mu_s

And now, using this in the x-direction force equation to eliminate mu, I get:

v = sqrt(g r / tan(theta))

This gives me the speed of the skater in terms of the angle he (or she) is at and the radius of the circle the skater is moving in.
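(Editorial aside, not from the original post: the result above as a few lines of Python, so the numbers that follow are easy to reproduce; small differences from the post's quoted 42 and 24 mph come from rounding of the angle and radius.)

```python
import math

def skater_speed(theta_deg: float, r: float, g: float = 9.8) -> float:
    """Speed (m/s) at which a skater leaning theta degrees from the ice
    can hold a circle of radius r meters: v = sqrt(g * r / tan(theta))."""
    return math.sqrt(g * r / math.tan(math.radians(theta_deg)))

for r in (25.0, 8.0):                      # the misread and corrected radii
    v = skater_speed(33, r)
    print(f"r = {r} m: {v:.1f} m/s = {v * 2.237:.0f} mph")
# r = 25.0 m: 19.4 m/s = 43 mph;  r = 8.0 m: 11.0 m/s = 25 mph
```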
It turns out I get the same thing if I solve the y-direction force equation (and that would have been a little simpler). Does this result seem reasonable?
• Do the units work? If g is in N/kg (same as m/s^2), then g*r will be m^2/s^2. When I take the square root of this, I get units of m/s – that is good.
• If r is constant, what should happen as theta gets larger? This should be a slower speed. It is sort of difficult to see from that function, so let me make a quick plot.

That plot looks pretty good. For an angle that approaches 90 degrees, the skater's speed would be smaller. A skater wouldn't have to lean at all if the skater was stopped. As the angle gets smaller (approaching zero), the skater would have to be going faster and faster. That is just what that graph shows.

So, let me see if this works. What is the radius of a short track? According to the ISU (International Sk8ing Union), the inner radius must be 8 – 8.5 meters (I originally wrote 25 – 26 meters; see the correction in the comments). What about the angle? From the picture of Apolo above, I get about 33 degrees (0.6 radians). Using these values, I get a speed of:

v = sqrt((9.8 m/s^2)(8 m) / tan(33°)) ≈ 11 m/s ≈ 24 mph

This is the part where I discuss why the speed was too great. I originally used the 25 meter radius and calculated a speed of around 42 mph. I will leave this part in here even though it is wrong. Clearly, this is way too fast. Apolo's best time in the 500 meter race is 41.5 seconds. This gives an average speed of 12 m/s. Is the angle the problem? I don't think so. Looking at the plot of the function above, the angle would have to be around 50-60 degrees for the speed to be 12 m/s. Is it because he is pushing on the ice with his hand? Again, I don't think this is the case because sometimes they don't touch the ground. What about the radius? Even moving the radius down to 23 meters doesn't make that big of a difference. The problem must be with one of my assumptions. I suspect the assumption that the "center of the centrifugal force" was at the same location as the center of mass. This would make a difference. Now I guess I will have to calculate that.

You should give me some credit for knowing something was wrong. According to commenter Milan, the radius is around 8 – 8.5 meters. You can take off some points in my internet looking-up scores. This gives a speed of 24 mph – that I am much happier with. I will fix the figures above.

#1 Dave X February 17, 2010
Might it have something to do with part of the non-accelerating assumption or the kinetic friction force? He has to do an Olympian level of work climbing towards the center in order to maintain that 12 m/s.

#2 Len Bonacci February 17, 2010
The 25-m radius is for the regular track, I think — short track has a smaller radius.

#3 len Bonacci February 17, 2010
Found it — the radius for the short track is around 8.5 m. That gives you a speed of about 11 m/s.

#4 bsci February 17, 2010
Not related to your math here, but one of the talking sports heads mentioned that short track blades are actually slightly curved and not symmetrical on the bottom of the skate. Here's a link I just found: It says the asymmetry is to allow the skater to lean more without the boot touching the ice, but it must also move the blades slightly closer to the center of gravity during the leans. The curved blade probably decreases friction in the desired direction of motion and adds more surface area and friction perpendicular to the blade. There must be some fun calculations behind the blade locations and curvatures.
#5 Milan Merhar February 18, 2010
The inner radius of the turns on a 111 meter short track course is 8.0 m, although the "measured track" radius is 8.5 m. Top competitors try to hug the 8.0 m track edge, but that's very difficult at their highest speeds, and in timed trials they often swing wider on turn entry and exit, approximating a wider turn radius. Blades are offset on the boot to provide clearance on turns, and are both ground to a slight radius (tips higher than middle) and curved slightly longitudinally (center to right of tips) to maximize edge on ice on turns.

#6 Rhett Allain February 18, 2010
I was thinking about the radius last night. After watching the Olympics, I was sure I had the wrong value – thanks for your input.

#7 Claire February 18, 2010
Wondered if the skater changing leg alters the amount of time friction is exerted on the ice – on a flat blade and forward. Look what happens when they change leg. You know when you see slow motion of animals running, at some point they are airborne – less friction, so more area covered over time. In this sense, is the curve of the skater also a result of the changes of friction by each change of leg on the ice surface?

#8 Claire C Smith February 18, 2010
I meant the path of the curve the skater takes, not the angle of the skater.

#9 Ian Kemmish February 19, 2010
Please do curling. The most I could find on the Internet was an explanation of why a curling stone curls in the opposite direction to, say, an upturned tumbler on a glass surface. But I don't understand where the sideways force on the upturned tumbler comes from either….

#10 Sebastian February 20, 2010
Concerning the discussion about why the skater makes a curve: doing inline speed skating, I find that the skate starts the curve as soon as it isn't perpendicular to the floor. Even with only one foot on the ground for the whole corner (thus no air time) you can make it around the corner (see short track skaters especially). I assume that cornering is caused by slight drifting.

#11 rubbertree February 22, 2010
I love your work here, Rhett. I have a suggestion for a future post, perhaps during the NBA finals: "Spudd Webb Physics." Because who among us didn't ask ourselves about him "How Dat?"

#12 Birger Johansson February 23, 2010
On terminology: Simply call it "the centrifugal pseudoforce" and get on with it…. the readers will understand what it means.

#13 Jen February 25, 2010
I have nothing of importance to contribute here, but I just stumbled across this and had to say how awesome it is.

#14 Craig February 25, 2010
I just stumbled across this page. Very cool! As a former short track speedskater with a degree in biomechanics I can confirm everything posted here. For those who are still curious, I'll throw in some more data. The radius of the actual track layout is exactly 8 m. Obviously they don't skate ON the layout. The actual radius that anyone skates ranges from as low as 8 m to as high as 16 m, all within the same turn. It is never consistent through the entire turn. So any scientific calculations could only be specific to the instantaneous sample. 12 m/s is a very typical velocity for short track. The actual question to be asked with these calculations is "What radius is he skating at this exact point?" The blades are both curved and offset towards the center of the corner. In most cases the heel will be centered. On the right foot the front of the blade passes approximately between the big toe and second toe.
On the left foot, depending on the width of the skater's foot, it will pass +/- through the center of the second smallest toe. The "rocker" of a blade ranges from 8 m to about 12 m radius. The "rocker" is the normal curve in the blade you would expect. A few skaters will have a consistent rocker of only one radius. Most elite skaters have a compound radius, flatter in the center and more round at the front and back. Flatter glides better in the straights. Rounder allows tighter corners and pivots. Tweaking the ankle at different points of the track positions the force through a different part of the blade and thus a different rocker value. The "bend" of the blade is a curvature that allows for more blade contact while cornering. Both the right and left blades have a slight concave bend as referenced from the center of the track radius. As the skater leans over, more blade contacts the ice. This curve ranges from about 18 m to around 26 m radius. These two parameters combined result in some very complex relationships with the ice. In real life the numbers are far too complex. Thus elite skaters spend most of their careers going through trial and error to find the best setup. Once they find something they like, it becomes a hugely guarded secret. As an aside, if you look closely at any pictures that show the bottom of someone's skate you will see tape next to where the blades attach to the boot. This tape has little pen marks to help the skater remember past positioning. In other words, all of these things are constantly being tweaked. EVERYONE has a wrench with them while on the ice during practice.

#15 Claire C Smith February 25, 2010
To comment 14, Craig: hey, thanks for that. All a bit clearer. I wasn't suggesting that the skater becomes airborne when changing leg – the animal analogy was to highlight what more can be seen when slowing down a video. By the way, I posted my first comment on this page a few days ago thinking I was still on the New York Times science section and commenting on that – I was reading an article there, then clicked here thinking it was still the same site. So, now accidentally quite glad I saw this kewl site for the first time.

#16 Ethan March 5, 2010
I like ice cream

#17 Shaner September 13, 2010
I am a short tracker myself. I just wanted to note the hand touches the ice only when we are near top speed and need extra support.

#18 Brett September 28, 2010
I don't know if you know this, but Apolo has angled cups on his skates. The blade isn't quite perpendicular to his foot, which means that the angle you're measuring might be off.

#19 Craig September 30, 2010
The angle remains the same, since what is being measured is the angle that is created from the line that starts at the point of contact on the ice and passes through the body's center of mass. Angled cups don't affect this measurement. Just the contact angle of the blade to the ice.

#20 Barraldinho December 6, 2010
Nice breakdown of the basic forces. But I think you overlook some things:
1. The lean angle is not the same all the way through the corner – you fall into a maximum lean at the apex, then pop back out at the exit – it's an orbital tube: think of a swept volume tracing where all the body parts have been over time.
2. The COM is not fixed as the body moves – as you say, the different body limbs have different orbital radii, and it really is a big difference – and all have different moments of inertia, especially the parts that swing, like legs and arms.
3.
You ignore the Coriolis force (yes, pseudo-forces are awesome, and are very real), especially on trailing limbs with reference to the first-person frame of reference.
4. You ignore Newton's law of conservation of angular momentum, I₁ω₁ = I₂ω₂ – this is a key equation for objects that change their orbit; it's what gives good skaters that "slingshot" effect when they tighten their track. It's also why core stability is so important: without it you get no efficient conversion of angular momentum to radial spin/torque. Newton says "in a rigid system angular momentum is conserved", so rigidity is important; even when you are moving a lot, you have to keep some rigidity/core stability. Also there are many internal torques in the body that act to store and release rotational and compression energy (like plyo effects from muscles and tendons); when skaters fall they sometimes spin out anticlockwise – the rotations are contained by the centripetal force of the blade path, then released when they fall. I'd say there is a bunch of Euler transforms involved and you would need a bunch of quaternion math to solve it, especially the blade dynamics. The rotational dynamics are very complex, and are the hidden forces that can be quite nonlinear, as the blade behaves dynamically depending on angle of lean, blade profile, and the torque and vertical loading applied to it. This might be interesting to you. Great thread – I have some other writing on this I will dig out.

#21 Barraldinho December 6, 2010
Found this on the web – can't remember where.

Generating Angular Momentum
An object does not typically just have angular momentum. Recall Newton's first law, that an object in motion tends to stay in motion. Well, if a figure skater is just skating straight down the ice and then needs to perform a spin or jump with several rotations in the air, he or she needs to generate angular momentum. Angular momentum is generated by the skater applying a force against the ice. The ice then applies a ground reaction force on the skater. This ground reaction force gives the skater angular momentum. The point of application and line of action of this force is critical. If the line of action of the force is directed through the skater's axis of rotation, then he or she won't spin. The force must cause a torque, or moment, which means it must be applied some distance from the axis of rotation AND have a line of action which does not go through the axis of rotation. The larger the force, or the farther the force is from the axis of rotation, the larger the torque. The larger the torque, the greater the angular momentum. Another key consideration in generating angular momentum is the object's moment of inertia. The larger an object's moment of inertia, the more angular momentum the object can obtain. For example, if a figure skater wants to generate a lot of angular momentum, they should have their arms spread wide, which increases their moment of inertia. In this position, while the skater will have to have a large torque to start rotating, his or her angular momentum will be larger due to the large I. A skater who starts spinning with his arms at his side, with the same angular velocity, will have a smaller angular momentum. Moreover, this skater will not be able to increase his speed in the spin, because he will not be able to reduce his moment of inertia to increase his angular velocity. Two animated figures were provided to illustrate this idea. The larger the moment of inertia, the more torque it takes to start the object spinning.
Thus, there is a trade-off between moment of inertia and angular velocity when generating angular momentum. In figure skating, the skaters do not usually have a problem with producing large enough torque to start spinning. Accordingly, it is to their advantage to start every spin, or rotational trick, with a large moment of inertia. They accomplish this by having their arms and free leg held away from their body. Some skaters reach rotation speeds of 7 rev/s during a jump. This corresponds to 420 rpm (revolutions per minute). This is about as fast as the idling speed of the engine on some cars!!!! Given the following moments of inertia and angular velocities of the skaters initiating spins, calculate their angular momentum and answer the questions that follow. Note that when calculating angular momentum, it is important to convert any angular velocity to radians/s before performing the calculation.

#22 Barraldinho December 6, 2010
Also this is interesting, and often overlooked in ST technique.

The Physics of Ice Skating – Angular Momentum
The angular component of linear momentum is angular momentum. When an object rotates around a fixed axis, the force acting on the object is called the centripetal force. This force points inward, toward the center of the circle traced by the rotation. The velocity of the object points tangential to the circle traced. This is illustrated by swinging a ball on a string around your head (don't hit any lamps though). If the ball becomes detached from the string, it goes flying in a straight line. The vector for angular momentum points perpendicular to the velocity and force vectors. It goes according to the "right hand rule." This is just a simple way of remembering where the angular momentum vector is pointing. Angular momentum is represented by the equation L = Iω, where I equals the moment of inertia and ω is the angular rotation, or 2π divided by the period of rotation. The moment of inertia depends on the mass of an object and also the distribution of that mass around the axis of rotation. So a skater can have a different moment of inertia based on whether their arms are extended or not. This can be compared to linear momentum, where p = mv, or linear momentum equals mass times velocity. Angular momentum is conserved when no outside torques act on an object. As, say, the moment of inertia decreases, the angular rotation has to increase to keep the same angular momentum. This is most evident when a figure skater spins. A skater starts the spin with arms outstretched (a large moment of inertia). As the skater brings the arms in (decreasing the moment of inertia), the rotational speed increases. This is how those incredible spins of skaters like Paul Wylie, Todd Eldridge and Kristi Yamaguchi are accomplished. Along with many long years of practice. Most of the spins done by world class figure skaters are edge turns, meaning they are spinning while remaining on an edge. For beginners, often the first spin learned is the two-footed spin. A skater rides a large curve with most of their weight on an outside edge. As the curve spirals into the center, the skater rises up on the flats and begins to spin. One of the most important aspects of a spin is how to "center" a spin. This refers to the property that the spin should stay in one place and not travel all over the ice (which is quite hazardous). This requires converting all of the linear momentum into angular momentum.
(Another conservation law) Another example of conservation of angular momentum occurs when a massive star (meaning several times the mass of our sun) dies. As the star, which is already rotating, begins to collapse, it becomes a smaller sphere, which decreases its moment of inertia. Since the star is an isolated system with no torques acting upon it, the angular momentum must be conserved and the rotation rate of the star increases. If the star (known now as a neutron star) is emitting a beam of radiation, its rotational motion makes this beam appear to us like pulses. These stars are known as pulsars.
Back to the Physics of Ice Skating by Karen Knierman and Jane Rigby

The Physics of Ice Skating – Torque
Torque is a rotational force; in fact, the word itself comes from the Latin for "to twist". Torque, in a sense, causes rotation about an axis. Torque involves both the force applied to an object and the distance from the rotation axis at which you apply the force. Perhaps some examples will help. In order to open a heavy door, you need to apply a force. But force alone will not do the job. Where the force is applied and in what direction is also important. If you apply a force close to the hinge of the door rather than out by the doorknob, it is much harder to move the door. That's why doorknobs are located at the opposite side of the door to the hinges; it's much easier to move the door out there. The definition of torque is the product of the distance from the axis of rotation (often called the lever arm) with the force that is perpendicular to the lever arm. (If you pull on a door parallel to the plane of the door, you do not rotate the door.) Another example of torque occurs when you turn a screw or bolt. Using a screwdriver (the non-electric kind) is often hard and time consuming, since you must apply a large force in order to turn the screw (small lever arm). However, if you use a wrench for tightening bolts, you only need to apply a small force since you have a long lever arm. That's why wrenches used to turn large bolts have much longer handles compared to those that turn small bolts. This enables the user to use less force since they have a long lever arm. Of course, the user must apply that force over a longer distance. So there's a tradeoff between force and distance. So how do we get from tools to ice skating? A skater, in order to rotate, must exert a torque on his body by pushing against the ice. In edge spins, the skater pushes one foot against the ice to start the turn. You also see this in multiple rotation edge jumps. In these jumps, the skater takes off from the ice, turning the skate as he does so, which creates a torque. Thus, the skater spins! You can think about the physics of moving objects in two different and equally acceptable ways: in terms of forces, or in terms of energy. Which you use depends on which is more convenient; both will always work. In this section, we'll look at energy in ice skating. It's an energetic section!

Kinetic and Potential Energy Explained
Kinetic energy is the energy of motion ("kinetic" means motion). So an ice skater flying across the ice has lots of kinetic energy. When he slams into the boards, that is transferred abruptly into thermal energy, sound, and work done on the skater (i.e. compressing his chest, rearranging his face, etc.) Seriously, to measure kinetic energy (KE), just measure the mass of the object and its velocity. KE = 1/2 mass*velocity*velocity, or 1/2 mv^2.
If the movement is rotational instead of straight-line, then the equations are similar. (See Karen's section on rotation.) The idea is the same – how fast the skater is spinning is directly related to her kinetic energy. It's easy to "see" kinetic energy in the motion of an object. But what about potential energy? Potential energy is stored energy. Energy can be stored in chemicals, by compressing a spring, or by doing work against gravity (ex: by placing an object on a higher shelf). Muscles act like springs, so the chemical potential energy of muscles is converted, by the muscle applying a force, into kinetic energy of motion.

Energy Conversions in Jumping
In a jump, the skater uses chemical potential energy (muscle power) to gain speed across the ice. When she jumps, she's also converting her chemical potential energy into kinetic energy. As she flies upward, her kinetic energy is converted into gravitational potential energy; as she slows, she gains height above the ice. At the top of her jump, she has no kinetic energy (for a moment!) There, all of her former kinetic energy is now gravitational potential energy. As she falls back down, her potential energy is converted back into kinetic energy. At the moment she hits the ice, all the gravitational potential energy she had at the jump's peak is again kinetic energy, so she hits pretty hard. If you measure the height of her jump, you can determine how hard she pushed off, which is also how hard she smacked down.

The Physics of Ice Skating – Isaac's Third Law
Newton's third law is one of the most-quoted in physics: for every action, there is an equal and opposite reaction. Lots of times, this is used out of context (to describe politicians, for example). The basic stroke in ice skating provides a good example of Isaac's 3rd law in action. When you "stroke" (the basic push in ice skating), you apply a backwards force to the ice. The ice applies an equal, forward force on you, so you go forward. Skaters stroke at an angle, so part of the stroke is wasted. You're pushing forward and to the side. The side push is resisted by the edge of your other blade. The forward push is resisted only by ice/blade friction, so you go forward. Actually, this is exactly why sailboats go forward, not sideways. The wind pushes the boats forward and to the side. The sideways push is resisted by the long keel, but the forward push is relatively unresisted. Boats are designed to be aerodynamic (actually, "hydrodynamic") to forward motion and are intentionally unhydrodynamic to sideways motion. Note that these forces apply only to direct pairs of objects. I push on the ice; it pushes on me. As I push in on the wall, it pushes me outward. Third law reactions never involve a third object.

#23 Barraldinho December 6, 2010
For reference

#24 messerman February 15, 2011
Surprising that so much mental muscle has gone into accepting the fact that we lean round corners. Even my horse knows that (which can be daunting as he leans to go round a tree and near wipes you out). And the fact of maybe coming out of a turn faster than entry is clearly nothing to do with some imaginary sling-shot effect. Quite simply, if your linear speed out > linear speed in, then you have inserted extra energy (more than required to overcome resistive forces), which is what they do. QED. Of much greater interest – and perhaps not such risk of confounding by minutiae – is the question of the "slalom" style of propulsion of long track, long distance (10 km Olympics, for example) speed skaters.
They constantly skate a curved path (to left and right) up their 100 m straight. Putting aside the illusory sling-shot model, how do they insert forward thrust into their bodies by swerving side to side? In no other sport (motorcycle racing, even slalom ski racers on their final straight swoosh to the finish) does anybody believe that swerving increases speed.

#25 FirstLoser July 20, 2011
Wow – thanks – this is great!

#27 Mark December 13, 2011
I stumbled across this photo looking for images of skaters to put into a biomechanics exam that I am preparing. Funny that the photo I clicked on happened to be associated with the exact problem that I was formulating!
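(Editorial aside, not part of the comment thread: the conservation-of-angular-momentum claim I₁ω₁ = I₂ω₂ quoted in comment #20 above, checked with made-up numbers.)

```python
# A skater pulling the arms in: angular momentum L = I * w is conserved
# when no external torque acts, so reducing I raises w.
I_out, I_in = 3.0, 1.0    # hypothetical moments of inertia, kg*m^2
w1 = 2.0                  # initial spin rate with arms out, rad/s
w2 = I_out * w1 / I_in    # from I_out * w1 = I_in * w2
print(w2)                 # 6.0 rad/s -- three times faster with arms in
```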
{"url":"http://scienceblogs.com/dotphysics/2010/02/17/apolo-ohno-physics/","timestamp":"2014-04-20T01:07:42Z","content_type":null,"content_length":"106872","record_id":"<urn:uuid:e7da1d5b-670d-455d-beee-3c2db0966ace>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00614-ip-10-147-4-33.ec2.internal.warc.gz"}
Mplus Discussion >> Two-part modeling and SEM

Sanjoy Bhattacharjee posted on Wednesday, October 24, 2007 - 2:59 pm
Dear Prof. Muthen, I have a cross-sectional dataset and am fitting the following standard SEM model, where Y1 is binary, Y2-Y7 are ordered categorical indicators, the X's are strictly exogenous variables, and the etas are continuous latent variables.
Y1 = g(eta1)
Y2, Y3, Y4 = g(eta2)
Y5, Y6, Y7 = g(eta3)
Eta1 = f(eta2, eta3, X1); Eta2 = f(X2); Eta3 = f(X3);
I am using MPlus 4.0 and everything is fine with the estimation result. However, I want to modify the model introducing a (Duan-Manning type) two-part rationale. In particular, we have the following: Y0, a binary variable, indicates whether or not an individual participates in a program. Now, when an individual says yes to the participation question, we ask the question to generate Y1. It is likely that the latent variables eta2 and eta3 influence Y0 as well. How could I do it in Mplus? Is there any theoretical paper which introduced factor analytic/SEM rationale in a Duan-Manning type setup? Thanks and regards

Bengt O. Muthen posted on Thursday, October 25, 2007 - 9:17 am
I think Duan-Manning worked in a regression setting, and there I think the 2 parts of the model can be estimated separately because you can't identify a random effect correlation between them like you can in a hierarchical data setting such as the longitudinal data analysis considered in Olsen & Schafer's JASA article on 2-part growth modeling. Thinking out loud, it seems the question then is if the SEM setting changes the independence of the 2 parts. I guess if Y0 is influenced by the same etas as Y1, then their factor loadings should be held equal across the 2 parts and therefore a simultaneous analysis of the 2 parts should be done. I guess this calls for doing 2-part modeling - see the Mplus DATA TWOPART option. I haven't seen such an application. We have a Kim & Muthen paper on our web site which does 2-part mixture factor analysis, but there again you have hierarchical data in that you have multiple indicators - you have only one Y1 - so that is different.

Sanjoy Bhattacharjee posted on Thursday, October 25, 2007 - 12:31 pm
Thank you Prof. Muthen. I just read your working paper on the "two-part factor mixture model". It is really helpful and I believe this one should help to estimate my model. Besides, my model is simpler than yours; I am not dealing with the mixture portion. Following is my model statement, where Y0 and Y1 are binary, Y2-Y7 are ordered categorical, and X0, X1, X2 and X3 are the vectors of exogenous variables. Each of the X's has unique elements necessary for the model to be estimated.
ETA2 BY Y2 Y3 Y4;
ETA3 BY Y5 Y6 Y7;
ETA2 ON X2;
ETA3 ON X3;
Y1 ON ETA2 ETA3 X1;
(Now adding the two-part rationale, where Y0 indicates the response to the participation question: 1 = yes and 0 otherwise, and Y1 is valid only if Y0 is 1)
Y0 ON ETA2 ETA3 X0;
Q1: What should I write in the model section NOW to make Y1 "conditioned on" Y0? On page 11 of your article you wrote "correlated". I might be confused, but I don't think we are doing a simple correlation here.
Q2: The reviewer will ask for the likelihood function or the final sets of equations that Mplus calculates for the model when we combine regular SEM with Y0 in a Duan-Manning type framework. Is there any reference for that? Thanks and regards

Bengt O. Muthen posted on Friday, October 26, 2007 - 10:03 am
Q1. You need to create the data in line with "DATA TWOPART" described in the User's Guide. This specifies missing on y1 when y0=0.
There is no correlation that can be estimated here.
Q2. A related likelihood function is given in the Olsen-Schafer article.

Sanjoy Bhattacharjee posted on Monday, October 29, 2007 - 9:18 am
Thank you professor. Regards, Sanjoy

Moira Haller posted on Saturday, October 15, 2011 - 3:53 pm
I want to simultaneously examine trauma exposure (binary) and PTSD symptoms as mediators of several covariate effects on an outcome variable. Rather than model trauma and PTSD symptoms separately, I want to examine them within the same model so that I can identify the effect of PTSD symptoms on the outcome, over and above the influence of trauma exposure. However, PTSD symptoms are conditioned on trauma exposure (PTSD symptoms are only valid if trauma=1). Is it sufficient to code PTSD symptoms as missing for cases with trauma=0? Also, because trauma exposure and PTSD symptoms may share predictors other than the covariates, I believe I need to specify the residual covariance between them. Is the syntax that I've used below an appropriate way to do this? I was also considering specifying the model using the TWOPART feature as you described above. Any advice on how to handle this issue would be much appreciated!
Categorical is trauma;
Missing are ALL (-99);
ptsdsx on gen advers ethinc pathol;
trauma on gen advers ethinc pathol;
y on trauma ptsdsx gen advers pathol;
f1 BY ptsd trauma;
f1@1; [f1@0];

Bengt O. Muthen posted on Saturday, October 15, 2011 - 6:07 pm
I think a Twopart approach is a bit more transparent than working with missing data approaches in this case. The two parts can have different predictors and different effects on the outcome. I think a residual covariance can be specified as you do it here. But I don't think the residual covariance is identifiable, as it is in growth models. Perhaps a sensitivity analysis can be carried out, fixing it to different values to capture the effects of potential left-out covariates.

Moira Haller posted on Sunday, October 16, 2011 - 1:00 pm
Thanks so much for your help. When you recommend a sensitivity analysis, do you mean setting the covariance to different values and seeing which has the best fit (e.g., by inspecting the loglikelihood)? When I allow the model to estimate the covariance using the above syntax, the covariance is negative and most of the covariate effects on trauma exposure and PTSD symptoms are no longer significant (they are when the covariance is not estimated). Substantively, I would expect the covariance to be positive. Perhaps this means the model estimates are unreliable and I am better off constraining the covariance to a certain value as you suggested? By the way, there are two reasons I thought it may be best to stay away from the twopart approach. First, I thought that I should try specifying PTSD symptoms as a count rather than a continuous variable. Second, from the user's guide, it looks like when setting up the data to use TWOPART, cases with 0 PTSD symptoms would be coded as 0 on the binary part (no trauma exposure). However, some cases with trauma have no PTSD symptoms.

Bengt O. Muthen posted on Sunday, October 16, 2011 - 8:43 pm
You can do two-part modeling with count outcomes - this is called hurdle modeling. The "continuous" part is a zero-truncated Poisson variable. I didn't think the covariance you estimated was identified, so please send your output to Support.
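(Editorial aside, not part of the thread: a sketch in pandas of what the DATA TWOPART rearrangement amounts to for a semicontinuous outcome; the log transform shown is a common choice for the continuous part, not necessarily right for every analysis.)

```python
import numpy as np
import pandas as pd

# Split a semicontinuous y into a binary part u (any use vs. none) and a
# continuous part y1 that is set to missing whenever u = 0.
df = pd.DataFrame({"y": [0.0, 3.2, 0.0, 1.5, 7.8]})
df["u"] = (df["y"] > 0).astype(int)
df["y1"] = np.log(df["y"].where(df["y"] > 0))   # NaN (missing) when y == 0
print(df)
```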
Alexander Kapeller posted on Thursday, March 01, 2012 - 10:16 am
In my two-part model output there is the line:
Chi-Square Test for MCAR under the Unrestricted Latent Class Indicator Model
Could you please explain what the results tell me: Value 100.355, Degrees of Freedom 130, p-Value 0.9749. Is this the result for the continuous part?

Linda K. Muthen posted on Thursday, March 01, 2012 - 12:52 pm
This is a test of whether the data are missing completely at random. This cannot be rejected with a p-value of .9749. For further information about this test, see the Little and Rubin book.

Alexander Kapeller posted on Saturday, March 03, 2012 - 11:02 am
Thanks Linda. So is it right that I have no test statistic or descriptive measure of fit within the two-part model at all?

Alexander Kapeller posted on Saturday, March 03, 2012 - 11:06 am
Hi Linda, I have a special question related to the Mplus two-part procedure with mediation. Having a mediation on a binary Y which results from a two-part data set, is the default a probit or a logit model? Concerning the standardization procedure to compare a*b and c-c': there is already a STDY section in the output; can't this be used, or do I have to hand-calculate the standardized effects via pi squared / 3 and the variances? Next, after standardizing via the variance to compare a*b with c-c' and testing the significance of the effect: do I have to standardize the s.e. of the parameters as well to do the Sobel test? Is the formula for the standardization of variance(a*b) the same as for a*b? Thanks in advance.

Bengt O. Muthen posted on Saturday, March 03, 2012 - 4:47 pm
The default is a logit model. Note that the literature on indirect effects with a binary outcome y says that a*b will be different from c-c' and that c-c' is the wrong quantity to use. You don't want to use a_stand * b_stand, but instead compute a*b and then standardize it by dividing by the estimated y SD and multiplying it by the x SD. That calculation can be done in Model Constraint using parameter labels given in Model. Model Constraint then also gives the significance, with the SE calculated automatically by the Delta method (of which the Sobel formula is a special case).

Alexander Kapeller posted on Sunday, March 04, 2012 - 2:46 pm
Thanks Bengt, this all puzzles me seriously. 1) I am aware of the a*b and c-c' problem. But I thought that when standardizing both to the same scale they would be equal again (MacKinnon/Dwyer 1993)? Is this no longer true? Could you please give a literature hint? 2) For the calculation: I am reading your 2011 paper "Applications of causally defined direct ...". At the bottom of page 25 you state that "a latent mediator approach using logistic regression is not yet available in Mplus". My X and M are latent constructs and y is a single binary variable; does this affect my calculations so that I have to switch to probit? 3) Could you please give a reference for the standardization you describe? Your help is appreciated.

Alexander Kapeller posted on Sunday, March 04, 2012 - 3:46 pm
Hi Bengt, I also tried to get bias-corrected bootstrap CIs. Mplus tells me:
*** ERROR in ANALYSIS command BOOTSTRAP is not allowed with ALGORITHM=INTEGRATION.
even after I switched to EMA as the algorithm.

Bengt O. Muthen posted on Sunday, March 04, 2012 - 6:07 pm
1) Look at the more recent paper MacKinnon, D.P., Lockwood, C.M., Brown, C.H., Wang, W., & Hoffman, J.M. (2007). The intermediate endpoint effect in logistic and probit regression. Clinical Trials, 4, 499-513, which is on our web site. Also look at the Imai and Vanderweele references in my paper. 2) I was not talking about a latent variable construct, but the case of an observed categorical variable m. The question was whether the observed categorical m or the latent response variable m* behind m was the mediator of interest. 3) I don't know about a reference - it simply uses first principles for standardization, where you always divide by the DV's SD and multiply by the IV's SD. The DV is y and the IV is x in an x-m-y mediation model. Mplus does not yet offer bootstrap when numerical integration is needed. You can use Bayes if you are worried about non-normality of the a*b product.
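As a rough numerical illustration of the standardization Bengt describes (compute a*b, then multiply by the SD of x and divide by the model-implied SD of the latent response behind the binary y), here is a hedged Python sketch; every number, including the pi^2/3 logistic residual variance, is a hypothetical stand-in, not output from any real model.

    import numpy as np

    # Hypothetical path estimates and SEs for an x -> m -> y (binary y, logit) model.
    a, b = 0.40, 0.55          # a: x -> m, b: m -> y (log-odds scale)
    se_a, se_b = 0.10, 0.12
    sd_x = 1.3                 # SD of the predictor
    var_ystar = 2.1 + np.pi**2 / 3   # assumed model-implied variance + logistic residual pi^2/3

    ab = a * b                               # indirect effect on the y* scale
    ab_std = ab * sd_x / np.sqrt(var_ystar)  # standardize: multiply by SD(x), divide by SD(y*)

    # Delta-method (Sobel) SE for the raw product, assuming cov(a_hat, b_hat) = 0:
    se_ab = np.sqrt(b**2 * se_a**2 + a**2 * se_b**2)
    print(f"a*b = {ab:.3f}, standardized = {ab_std:.3f}, z = {ab / se_ab:.2f}")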
Alexander Kapeller posted on Tuesday, March 06, 2012 - 8:56 am
Thanks for your advice. Best, Alex

Alexander Kapeller posted on Monday, April 09, 2012 - 11:21 am
I am using a two-part model in Mplus as my data show a high number of zeros. Focusing on the quantitative part: the zeros in the binary part are missings in the quantitative part. In this quantitative part I specified a latent factor. As I use FIML, the missingness handling will take information from the distributions in the data. I am concerned that these missings are not like missing data from a questionnaire - they have a meaning. So is FIML imputing the right way? On the other hand, deleting the missing cases would reduce the dataset to zero cases. Can you give some comments on this issue please? Thanks a lot.

Bengt O. Muthen posted on Tuesday, April 10, 2012 - 8:37 am
The missing on the continuous part due to the binary part being zero is not missingness of the regular kind, but simply a way to arrange the data in order to obtain the likelihood of the model, in line with what is described in the Olsen-Schafer (2001) JASA paper that originated two-part growth modeling. They don't use the data arrangement that Mplus uses, but the two approaches have the same likelihood. On top of this "structural" missingness you can have missing on the continuous variables even when the binary part is one. That kind of missingness is handled via "FIML" under MAR as usual. Note that two-part modeling is different from ZIP modeling. The latter is a special 2-class model, whereas the former is not. Note also that two-part modeling with counts also goes under the name hurdle modeling, often used in the literature. Mplus can handle ZIP and hurdle models.

Bengt O. Muthen posted on Tuesday, April 10, 2012 - 8:44 am
Note also the interesting discussion in Olsen-Schafer (2001) at the top of the right-most column of page 742. This concerns counterfactuals and the expected response on the continuous variable under different values of the binary variable.

Alexander Kapeller posted on Tuesday, April 10, 2012 - 9:40 am
Hi Bengt, this is hard to understand, puh. So my interpretation is: there is no bias from the structural zeros in the continuous part, because the structural zero information is left out in calculating the likelihood. To put this into practice: it is not like estimating the model just without the cases with the structural zeros. If I have 413 cases total, with around 70% structural zeros mixed around in six y indicators of a latent variable eta, then the relationship beta between ksi and eta is based on the 30% remaining information in y connected to a) all information of the 413 cases in ksi, b) only those ksi cases which have a corresponding y (this I think is not the case), c) ... - and there my understanding stops. Is it also correct to say that there is not much information left to calculate that beta?
Alexander Kapeller posted on Tuesday, April 10, 2012 - 9:58 am
Hi Bengt, going on, concerning the ZIP: when I want to model an interaction effect with my continuous part, Mplus has real trouble with ZIP or ZINB. I guess this comes from the 2-class structure within the ZIP; in the two-part model it is much more stable. Considering a two-part model with a Poisson truncated at zero: how do I tell Mplus that I have a truncated Poisson in the continuous part? Isn't that a 2-class model then? As I understand it, truncation is different from censoring, so the censoring (bi) specification should not be adequate, as my y are positive continuous data.

Bengt O. Muthen posted on Tuesday, April 10, 2012 - 6:38 pm
Yes, 70% who have the binary indicator = 0 is a large percentage. This reduces your power and also makes the analysis rely more heavily on model assumptions. It is not clear to me whether your continuous part corresponds to a continuous variable or a count variable. Only for count variables would ZIP, ZINB, or truncated Poisson be relevant. A two-part model for a count outcome can be specified using the negative binomial hurdle model, specified in Mplus using Count = u(nbh). See page 493 in the V6 UG.
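For intuition about what such a hurdle likelihood looks like, here is a simplified Python sketch of a Poisson hurdle (not the negative binomial version that Mplus's nbh option fits), with made-up data: a logit for zero versus positive, and a zero-truncated Poisson for the positive counts.

    import numpy as np
    from scipy.optimize import minimize
    from scipy.special import gammaln

    # Poisson hurdle negative log-likelihood: a logit for P(y > 0) plus a
    # zero-truncated Poisson for the positive counts.
    def hurdle_negloglik(params, y):
        logit_p, log_lam = params
        p = 1.0 / (1.0 + np.exp(-logit_p))       # P(y > 0)
        lam = np.exp(log_lam)
        pos = y > 0
        ll = np.sum(~pos) * np.log(1.0 - p)      # binary part: zeros
        yp = y[pos]
        ll += pos.sum() * np.log(p)              # binary part: positives
        ll += np.sum(yp * np.log(lam) - lam - gammaln(yp + 1)
                     - np.log1p(-np.exp(-lam)))  # truncated-at-zero Poisson part
        return -ll

    y = np.array([0, 0, 0, 1, 2, 1, 3, 0, 4, 2])
    fit = minimize(hurdle_negloglik, x0=[0.0, 0.0], args=(y,))
    print(fit.x)   # [logit of P(y > 0), log of the Poisson rate]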
Describe the relation between impulse and motion. Describe the relationship between pairs of forces.
angle between eigenvector and X-axis

On Feb 19, 1:41 pm, "Roger Stafford" <ellieandrogerxy...@mindspring.com.invalid> wrote:
> "Serena Frittoli" <xe...@hotmail.com> wrote in message <gnhi7a$jd...@fred...>
> > Thanks for the reply. There is a problem ...
> > If I find theta from V(1,1) using acos and from V(2,1) using asin ... I get two different values ...
> > I have another question:
> > the matrix V from the command [U,S,V] = svd(M) ... what does this matrix mean?
> > It is possible that sometimes it is [cos -sin; sin cos] and other times it is [cos sin; sin -cos]...
> > I'm very confused!!!
> > Thanks
>
> Let's tackle one problem at a time. Suppose you have a vector v = [vx,vy] and you wish to know the angle between the positive x-axis and vector v. Even that question has ambiguities to it. One interpretation is that it is an angle which lies in the range from 0 to pi regardless of whether v lies counterclockwise or clockwise from the positive x-axis. This would be the way angles within a triangle would be interpreted. Another interpretation is that we must rotate strictly counterclockwise from the positive x-axis until first reaching v. That would produce an angle somewhere between 0 and 2*pi. Yet a third interpretation is to rotate either counterclockwise or clockwise from the x-axis by no more than pi radians, with the counterclockwise direction considered positive and clockwise negative. This angle would therefore lie between -pi and +pi.
>
> In MATLAB these three possible angle interpretations are best found by using 'atan2' in the following respective ways:
> 1) angle = atan2(abs(vy),vx);
> 2) angle = mod(atan2(vy,vx),2*pi);
> 3) angle = atan2(vy,vx);
> The method I gave you earlier is in accordance with the third interpretation here. You will have to decide which kind of angle it is you are seeking.
>
> You can also use 'asin' or 'acos' to find these, but they suffer a loss of accuracy for certain values. Also, they require a more complicated procedure to produce the correct values in cases 2) and 3), since each function gives results which span no more than a pi width, whereas these cases require a span of 2*pi.
>
> As to applying this to your eigenvectors, you should be aware that each eigenvector from 'eig' is arbitrary as to its sign. Either direction can occur, and in terms of angles that makes a difference of pi in the angle value. Also, there is nothing in the documentation of 'eig' that specifies that the largest eigenvalue must come first, so there is some doubt about which eigenvector you are finding the angle for. It makes a difference of pi/2 in the answer.
>
> Finally, there is a large ambiguity which occurs in case both eigenvalues are equal. In that event the eigenvectors are absolutely arbitrary as long as they are orthogonal and of unit length. Any angle may occur depending on the vagaries of the programming in 'eig'. This is no fault of MATLAB; it is inherent in the very mathematical definition of eigenvectors.
>
> As to the 'svd' function, there is a definite relationship between the results given by it and those of 'eig' if you are using Hermitian matrices, as is indeed true in your case. I refer you to the Wikipedia article
> http://en.wikipedia.org/wiki/Singular_value_decomposition
> In particular, read the section on "Relation to eigenvalue decomposition".
>
> I suspect that these uncertainties are not what you wanted to hear, but that is simply the way things are. It just means you have to work harder at deciding precisely what it is you want to accomplish.
> Roger Stafford

And there's another option:
4. angle = mod(atan2(vx,vy)*180/pi,360);
to give a bearing (i.e., degrees clockwise from True North).
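The same conventions carry over directly to other environments. As an unofficial illustration, a numpy sketch (np.arctan2 plays the role of MATLAB's atan2):

    import numpy as np

    def angle_0_to_pi(vx, vy):
        # 1) unsigned angle to the positive x-axis, in [0, pi]
        return np.arctan2(abs(vy), vx)

    def angle_0_to_2pi(vx, vy):
        # 2) counterclockwise from the positive x-axis, in [0, 2*pi)
        return np.mod(np.arctan2(vy, vx), 2 * np.pi)

    def angle_signed(vx, vy):
        # 3) signed angle, counterclockwise positive, in (-pi, pi]
        return np.arctan2(vy, vx)

    def bearing_deg(vx, vy):
        # 4) compass bearing: degrees clockwise from true north
        return np.mod(np.degrees(np.arctan2(vx, vy)), 360.0)

    vx, vy = -1.0, -1.0
    print(angle_0_to_pi(vx, vy), angle_0_to_2pi(vx, vy),
          angle_signed(vx, vy), bearing_deg(vx, vy))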
Frequency and Severity of Belgian Road Traffic Accidents Studied by State-Space Methods

ELKE HERMANS ^1 GEERT WETS ^2 * FILIP VAN DEN BOSSCHE ^3

In this paper we investigate the monthly frequency and severity of road traffic accidents in Belgium from 1974 to 1999. We describe the trend in the time series, quantify the impact of explanatory variables, and make predictions. We found that laws concerning seat belts, speed, and alcohol have proven successful. Furthermore, road safety increases with freezing temperatures while sun has the opposite effect, and precipitation and thunderstorms particularly influence accidents with light injuries. Economic conditions have a limited impact. State-space methodology is used throughout the analysis. We compared the results of this study with those of earlier research that applied a regression model with autoregressive moving average errors to the same data. Many similarities were found between these two approaches.

KEYWORDS: Road safety, time series, trend, seasonal, explanatory model, state-space methodology, prediction.

Every year, Belgium has about 70,000 road deaths and injuries (BIVV 2001). During the past decade, the steady increase in traffic volume has resulted in a steady growth in traffic problems. The negative impact of these problems on our society highlights the need for an effective road safety policy. In order to take appropriate actions that will increase the level of road safety, we need to understand the underlying processes that result in traffic problems and their causes. This requires gathering extensive and reliable data over a long time period, together with modeling techniques suitable for describing, interpreting, and forecasting safety developments (EC 2004, 7). We studied the frequency and severity of traffic accidents in Belgium from 1974 through 1999. Data in economics, engineering, and medicine are often collected in the form of time series, a sequence of observations taken at regular intervals of time (Peña et al. 2001, 1). This data collection method was also used here. From the broad category of time series model construction methods, we applied state-space methods in this study. This methodology will be explained in detail later in this paper. However, it is important to note here that one of the key characteristics of state-space time series models is that observations are regarded as comprising distinct components, such as trend, seasonal, and regression elements, each of which is modeled separately (Durbin and Koopman 2001, vii) and has a direct interpretation. Furthermore, the components are allowed to change in time, and stationarity of the series is not required. The increasing interest in road safety is evident in the literature. An important class of road safety models is based on time series analysis. The succession of data points in time is a fundamental aspect of this analysis. Models are used to describe the behavior of the data, to explain the behavior of the time series in terms of exogenous variables, and to forecast (Aoki 1987, v). The most relevant ideas highlighting developments in road safety in this stream of work are described in the COST329 report of the European Commission (2004). In addition to giving a description of the trend in traffic data, many models test the influence of explanatory factors.
A simple, well-known example of such a time series model is the classical linear regression, which assumes a linear relationship between a criterion or dependent variable ($y_t$) and one or more predictor or independent variables ($x_t$). Explanatory models describe how the target variable depends on the explanatory variables and interventions. One special and prominent class of explanatory models in road safety analysis is known as the DRAG (Demande Routière, les Accidents et leur Gravité) family, extensively described in Gaudry and Lassarre (2000). DRAG models are structural explanatory models that include a relatively large number of explanatory variables whose partial effects on the exposure, the frequency, and the severity of accidents are estimated by means of econometric methods (EC 2004, 174). The COST329 report (EC 2004, 47) mentions two main classes of univariate dynamic models: ARIMA models, studied by Box and Jenkins, and unobserved components models, which are called structural models by Harvey. In a structural model, each component or equation is intended to represent a specific feature or relationship in the system under study (Harvey and Durbin 1986, 188). The models used here, state-space methods, belong to the latter group. To date, Box-Jenkins methods for time series analysis are applied more widely and are more popular than state-space methods, but this study will show the strengths of the state-space methodology. Both classes are concerned with the decomposition of an observed time series into a certain number of components. ARMA models decompose the series into an autoregressive (AR) process, a moving average (MA) process, and a random process. Unobserved components models decompose a series into a trend, a seasonal, and an irregular part. An important characteristic is that the components can be stochastic. Moreover, explanatory variables can be added and intervention analysis carried out. The principal structural time series models are, therefore, nothing more than regression models in which the explanatory variables are functions of time and the parameters are time-varying (Harvey 1989, 10). The key to handling structural time series models is the state-space form, with the state of the system representing the various unobserved components. Once in state-space form, the Kalman filter (Kalman 1960) may be applied, and this in turn leads to estimation, analysis, and forecasting. Harvey (1989, 22-23) wrote comprehensively on structural time series models (primarily applied to economic time series), presenting a historical overview of the technique. A rapid growth of interest has ensued in recent years. Nowadays, the technique of unobserved components models is used in several studies: Flaig (2002) applied it to quarterly German Gross Domestic Product (GDP), Cuevas (2002) to real GDP and imports in Venezuela, and Orlandi and Pichelmann (2000) to unemployment series. Other than those economic applications, this technique (more specifically an intervention analysis) was also used in traffic-related research (Balkin and Ord 2001; Harvey and Durbin 1986). The state-space methodology forms a well-used approach for modeling road accidents in a number of countries, for example, the Netherlands (Bijleveld and Commandeur 2004), Sweden (Johansson 1996), and Denmark (Christens 2003). This paper presents the results of the first state-space analysis on Belgian data.
The data used in this study are monthly observations from January 1974 through December 1999; 12 observations each year over a period of 26 years equals 312 observations. All data have been gathered from governmental ministries and official documents published by the Belgian National Institute for Statistics. In addition to four dependent traffic-related variables, we studied the effect of 16 independent variables. These 16 explanatory factors can be divided into 3 groups: juristic, climatologic, and economic variables. Table 1 gives an overview of all the variables used in this study. The four dependent variables in our data are the number of accidents with persons killed or seriously injured (NACCKSI), the number of accidents with minor injuries (NACCLI), the number of persons killed or seriously injured (NPERKSI), and the number of persons with minor injuries (NPERLI). The evolution in time of these variables is displayed in figures 1a and 1b. In order to make a comparison between the results of the state-space method and the regression model with ARMA errors, the same variables, data, and time periods were used. In accordance with the study of the regression model with ARMA errors, the logarithms of the dependent variables were modeled, written respectively as LNACCKSI, LNACCLI, LNPERKSI, and LNPERLI. As figure 1a reveals, the variables concerning killed or seriously injured persons (NACCKSI and NPERKSI) show a decreasing trend over the period. This is less obvious in the case of lightly injured casualties (figure 1b). Another aspect is the recurring pattern in the data. Thirdly, some months have an extremely low value. The first group of explanatory variables contains laws and regulations. Five dummy variables were included in the model to study the effect of policy measures introduced in Belgium at a certain date within the scope of our analysis. These variables are equal to zero before the introduction and have a value of one from the moment of introduction. Table 1 describes the laws. Weather conditions form the second group of explanatory factors. All meteorological variables were gathered by the Belgian Royal Meteorological Institute and published by the National Institute for Statistics. The quantity of precipitation (in mm) was measured as an average for the whole country. The other variables were measured at the climatologic center in Ukkel (in the center of Belgium). Thirdly, the influence of four indicators of the economic climate will be investigated. According to several studies (e.g., Fridstrøm et al. 1995, 12; OECD 1997, 16), exposure is a key variable in traffic research. In this study, the frequency and severity of accidents will be explained by many variables, but the impact of exposure is not measured. We cannot describe this effect because adequate monthly data on the total number of kilometers covered on the whole Belgian road system are not available. Population-related exposure statistics could be a solution, but these data are only available on a yearly basis, and no key for distributing them over the months is at hand. Although we are aware that this is a serious limitation, valid models can be constructed and a good fit obtained even without an exposure variable. (For more details, refer to Van den Bossche et al. 2005.) Other factors possibly omitted are assumed to be taken into account to some extent by the unobserved components framework. In this study, state-space models are constructed using STAMP software (Koopman et al. 2000).
With state-space models, we were able to obtain an explicit description of the series in terms of trend and seasonal. It was also possible to quantify the impact of explanatory factors. For example, the effect of road safety measures over time can be checked by adding so-called intervention variables to the model. Apart from these purposes, state-space models can easily be used for forecasting. (For a technical discussion of state-space models, see the methodological appendix at the end of this paper.) The objective here is to find the model that best describes the data. For each of the four dependent variables, we constructed several state-space models, each with their specific components. To be able to choose the best model, we used the Akaike Information Criterion (AIC), a measure of fit that takes the number of parameters into account (Akaike 1973, 267-281; Koopman et al. 2000, 180). We conclude this section with a discussion of some of the advantages of state-space models compared with classical regression. An interesting characteristic of state-space methods is the possibility of modeling the variation in the various components stochastically. Contrary to classical regression models, where components are fixed or unchangeable in time, a component can also vary in time. This is an advantage because variation in time makes it easier to follow the fluctuations in the data. Secondly, when the time dependency between observations is taken into account (which is not the case in classical regression analysis), the observation errors will generally be closer to independent random values. This makes significance tests of explanatory variables more reliable. Furthermore, state-space methods can easily handle missing observations, multivariate data, and (stochastic) explanatory variables. A last advantage is that the components can be modeled separately and interpreted directly.
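The model class is not tied to STAMP. As a hedged illustration (simulated data, not the paper's series or its exact STAMP specification), a stochastic level with a fixed monthly seasonal and an explanatory regressor can be specified in Python with statsmodels:

    import numpy as np
    import statsmodels.api as sm

    # Simulate 312 monthly observations: slowly drifting level, fixed
    # monthly pattern, one explanatory variable, and observation noise.
    rng = np.random.default_rng(1)
    n = 312
    level = 7.0 + np.cumsum(rng.normal(0, 0.02, n))
    season = np.tile(rng.normal(0, 0.10, 12), n // 12)
    x = rng.normal(0, 1, n)
    y = level + season + 0.05 * x + rng.normal(0, 0.05, n)

    # Stochastic level + deterministic (fixed) seasonal + regressor,
    # estimated by maximum likelihood via the Kalman filter.
    model = sm.tsa.UnobservedComponents(
        y,
        level="local level",
        seasonal=12,
        stochastic_seasonal=False,
        exog=x.reshape(-1, 1),
    )
    res = model.fit(disp=False)
    print(res.summary())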
The inclusion of correction variables has algebraically been presented in the model formulation (see the methodological appendix). In general, two main intervention effects can be distinguished (Sridharan et al. 2003), namely a pulse intervention and a step intervention. The first effect is used to capture single special events because they may cause outlying observations that the pulse regression variable accounts for. The variable takes value 1 if t is the month that needs correction for a special event and has value 0 otherwise. The second intervention called a step intervention or level shift is added to the model to capture events such as the introduction of new policy measures. Laws and regulations can be incorporated in a model as this second type of intervention. Before its introduction, the variable has value 0, but from the moment of introduction it has value 1. Our focus is on the first type, the temporal pulse intervention. As could be seen on the graphs of the actual data (figures 1a and 1b) as well as on the graph of the residuals (figure 2), the number of accidents and casualties was unexpectedly low during some months. Either these months indeed had extremely low values or some registration error was left in the accident statistics. The following are extreme values for which correction is necessary. January 1979, January 1984 (only for LNPERLI, so a registration error probably occurred here), January 1985, and February 1997 are outliers. There are some indications for a very severe winter in 1979 and 1985 (BIVV 2001, 5). We explicitly correct for those four months by adding pulse intervention variables to the model, which are coded one during the month they represent and zero elsewhere. We are convinced that the most striking shocks must be excluded in order to fulfill the error terms conditions: no autocorrelation, homoscedasticity, and normality. In the end, we want to obtain a correct parameter interpretation. The inclusion of these correction variables lowers the difference between the predicted and the real series and thus improves the quality of the estimations. All tested correction variables are highly statistically significant. The exact t -values are given in table 2 under "correction variables." Taking these outliers into account, the fit of the models improves. The last step in the construction of the final model consists of the significance tests of the explanatory variables. An explanatory variable must have a significant influence at least at the 90% confidence level to be included in the final model. Each model was re-estimated after dropping the nonsignificant variables such that the ultimate model for every dependent variable consists of a stochastic level, a deterministic seasonal, and significant correction and explanatory variables. The addition of significant explanatory variables further improves the fit. Table 2 gives an overview of all significant combinations of variables. The parameter estimates and the t -statistics (between brackets) of the significant explanatory and correction variables according to the state-space method on the one hand and the regression model with ARMA errors on the other hand are presented. At first sight, there are a lot of similarities between the results of the two methods. Note that the majority of explanatory variables is statistically significant at least at the 95% confidence interval. 
In the remainder of this section, we interpret the significant explanatory variables according to the state-space method, per category. The results for laws and regulations are instructive and interesting. Three of the five variables originally included in the model proved to be significant for at least two dependent variables. Their introduction has been of major importance for road safety. This is reflected by the magnitude of the coefficients. The negative signs are as expected, because laws are established to enhance road safety. The introduction of the law of June 1975 (LAW0675), the mandatory use of seat belts in the front seats, resulted in a considerable and highly significant increase in road safety. This law reduced all kinds of accidents and casualties. Several empirical studies (Hakim et al. 1991, 392; Harvey and Durbin 1986) have shown that seat belt legislation significantly reduces the number of fatalities and the severity of injuries. The introduction of a speed limit of 50 km/h in urban areas and 90 km/h on road sections with at least 2 x 2 lanes without separation (LAW0192) proved significant for two dependent variables. The literature confirms the positive effect on road safety of a reduction in the speed limit. Severity of injuries appears to be positively related to the allowed speed (Van den Bossche and Wets 2003, 15; Hakim et al. 1991, 390). Yet another promising effect can be noted for the regulations and fines on the maximum blood alcohol concentration (LAW1294). They played an important role in the decrease in the number of serious accidents and the number of persons killed or seriously injured. The results confirm the hypothesis that drunk drivers often cause serious or fatal accidents. Amongst others, Gaudry (2000, 1-36) studied the effect of the consumption of alcohol on road safety and found that the relative accident probability, as a function of blood alcohol concentration, is J-shaped. In our models, it is assumed that the introduction of a law results in a sudden and permanent decrease in the dependent variable. This assumption of a step-based intervention is not always a natural one (Van den Bossche et al. 2004, 8). The significant impact of laws and regulations may be better described as "something changed at that time," instead of attributing the whole effect to the law itself. Nevertheless, it makes sense to test whether these changes are indeed substantial. As one would expect intuitively, the weather plays an important role in explaining the number of accidents and casualties (especially for the variables concerning lightly injured persons). In terms of direction, we can make a distinction between precipitation, sun, and thunderstorms on the one hand and freezing temperatures on the other hand. In addition to precipitation (QUAPREC and PDAYPREC) and thunderstorms (PDAYTHUN), sun (HRSSUN) is a factor tied to an increase in accidents. It is plausible to assume reduced visibility in stormy weather and, on sunny days, a greater likelihood of being blinded by the sun. The only weather variable that has a positive effect on road safety is the monthly percentage of days with freezing temperatures (PDAYFROST). A possible explanation is that drivers adjust their driving habits (steer more slowly and prudently and concentrate more) because they perceive driving in freezing conditions as dangerous (which is not the case with rain and thunderstorms). Thus, it seems that road users compensate for the higher risk imposed by freezing temperatures.
This result is in line with other studies (Fridstrøm et al. 1995, 9) wherein it is mentioned that exposure to traffic is lower in winter and that the average driving capacity increases because less proficient drivers prefer to avoid driving on slippery roads. The impact of freezing road conditions (PDAYFROST) and sun (HRSSUN) is noticeable for all dependent variables. The quantity of precipitation (QUAPREC) and the monthly percentage of days with thunderstorms (PDAYTHUN) are only relevant for the variables concerning lightly injured casualties. Eisenberg (2004, 641) noticed that in adverse weather conditions, persons possibly drive more slowly and that therefore, on average, accidents are less severe. Concerning the quantity of precipitation (QUAPREC) on the killed or seriously injured outcomes, it is possible that two effects canceled each other out. As also found in Gaudry and Lassarre (2000, 67-96), the onset of rain has a larger and more general impact than the amount of rain (habituation can lead to more risky driving behavior). A concluding remark on the explanatory capacity of weather conditions is that the effect of weather data is strongly related to the geographical properties of the area of concern and the level of aggregation. Concerning the economically related variables, two economic indicators proved to be significant, namely the number of unemployed (LNUNEMP) and the number of car registrations (LNCAR), for the variable LNPERKSI. They have opposite signs and both imply that a better economy, with less unemployment and more car registrations, decreases the number of killed or seriously injured casualties. In the literature, the findings about the direction of this effect are very diverse (Hakim et al. 1991, 384). In this study, the number of car registrations is used as one of the indicators of the economic climate. The assumption we make is that when the economy does well, more cars will be bought and the average quality of the vehicles on the road increases. In the future, more variables (e.g., disposable income) should be included in the analysis to better assess the explanatory capacity of economic variables and their impact. The third objective of this study is predicting accident data with state-space methods for the years 2000 and 2001. Future values of the explanatory variables are available; only the values of QUAPREC and PDAYTHUN for 2001 have to be estimated. This is done with a simple univariate state-space model based on the data from 1974 through 2000.^1 We use the final model, which contains a stochastic level, a deterministic seasonal, and significant explanatory and correction variables, to forecast the values of the out-of-sample dataset for 2000 and 2001 and compare them to the actual observations. To depict the uncertainty involved, 95% prediction intervals are provided. The graphs (see figure 3) show us that the predictions are close to the actual observations, so we are able to capture a great part of the fluctuations in the series. Only a few points lie outside the prediction intervals. Apart from a visual presentation, we also quantified the forecasting precision. We interpreted the results of the Failure Chi-squared test and computed the mean squared error (MSE). Those tests confirmed our conclusion of accurate predictions.
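A sketch of the same kind of out-of-sample exercise (with simulated data; the paper's own forecasts and MSE values are in table 2 and figure 3): fit on the estimation sample, forecast 24 months ahead with 95% intervals, and score the forecasts with the MSE.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(2)
    n, h = 312, 24                     # fit on 312 months, forecast 24 ahead
    t = n + h
    y = (7.0 + np.cumsum(rng.normal(0, 0.02, t))
         + np.tile(0.1 * np.sin(2 * np.pi * np.arange(12) / 12), t // 12)
         + rng.normal(0, 0.05, t))

    res = sm.tsa.UnobservedComponents(
        y[:n], level="local level", seasonal=12, stochastic_seasonal=False,
    ).fit(disp=False)

    fc = res.get_forecast(steps=h)
    mse = np.mean((fc.predicted_mean - y[n:]) ** 2)
    ci = fc.conf_int(alpha=0.05)       # 95% prediction intervals
    print(f"out-of-sample MSE: {mse:.5f}")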
Comparison with ARMA regression model

In addition to the interesting characteristics of state-space models already mentioned in the methodology section, we discuss an important disadvantage of ARIMA models here. It is not possible to explicitly describe a time series in terms of the different components, because ARIMA models require the time series to be stationary (Harvey and Durbin 1986, 188). In those models, the trend and/or seasonal are treated as a problem and therefore removed from the series by a procedure called differencing (in order to transform the series into a stationary one) before any analysis can be performed. But few economic and social time series are stationary, and there is no overwhelming reason to suppose that they can necessarily be made stationary by differencing (Harvey and Shephard 1993, 266). In 2003, a study on intervention time series analysis of crime rates (Sridharan et al.) showed that the estimated effects of a piece of legislation on different kinds of crimes were very similar between the ARIMA model and the structural time series model. Both the coefficients and the t-values were very similar. A comparison with the regression model with ARMA errors, however, showed different results. Earlier, Harvey and Todd (1983) compared the results of the prediction of a number of economic time series done by the basic structural model with those obtained using Box-Jenkins models. They concluded that the forecasts given by both methods are comparable. In this study, we investigate the differences and similarities in explanatory and predictive analysis between the state-space method and the regression model with ARMA errors. Table 2 shows that the outcomes of these two approaches are comparable. The same correction variables proved significant, and the juristic and climatologic variables also matched quite well. A difference from the results of the regression model with ARMA errors is the fact that two of the four economic variables are significant. A possible reason is that the evolution in economic factors is a very slow one. In the regression model with ARMA errors, differences are taken, resulting in an almost constant series; differencing possibly cancels out the already small variation over time. Next, the estimated parameters of the two methods have the same (expected) sign and are of the same order of magnitude. Both methods forecasted the data for the year 2000, so we are able to assess and compare the quality of the predictions. The measure used is the MSE, and the values of the two methodologies for the four variables are reported in the last row of table 2. The lower the MSE, the better the prediction. The values are of the same order of magnitude. The predictions for the two variables concerning killed or seriously injured persons from the regression model with ARMA errors are more accurate. The state-space method better predicts the values of the variables concerning lightly injured persons. In the case of killed or seriously injured persons, the decreasing level is more important than the recurring seasonal pattern. In contrast, for light injuries, with values fluctuating around the average, the seasonal effect is more important. Because the seasonal effect is explicitly modeled in the state-space model, this model possibly predicts more accurately in the case of lightly injured persons than the regression model with ARMA errors. In this study, state-space models were developed to describe the developments in the frequency and severity of accidents and casualties in Belgium from 1974 through 1999. Furthermore, the impact of laws, weather, and economic conditions was measured. In the third place, an out-of-sample forecast of the dependent variables for 24 months was made.
The results were compared with those obtained from a regression model with ARMA errors, based on the same data. For each of the four dependent variables we built several models. The model that described all data best consisted of a level that is allowed to vary over time and a seasonal. Explanatory and correction variables were added to this descriptive model. The fact that accidents happen can, to a certain extent, be attributed to juristic, meteorological, and economic factors. Due to data and multicollinearity issues, and for reasons of comparison, we tested the influence of 16 independent variables. Additionally, correction variables for January 1979, January 1984 (only for LNPERLI), January 1985, and February 1997 were significant. From this study we can conclude that there is a lot of similarity between the results of the state-space method and the regression model with ARMA errors. Both methods labeled (more or less) the same explanatory variables as significant, and their influence was at all times in the same direction and of comparable magnitude. Several laws had a clear positive effect. Apart from those, the weather elements precipitation, sun, freezing temperatures, and thunderstorms were important. Nevertheless, note the difference between the two methods on the subject of the economic variables. The forecasting capacity of the methods was tested quantitatively and was shown to be approximately the same. The models developed in this text show large potential for describing long-term trends in road safety. On the one hand, they can isolate the effect of phenomena that cannot be influenced, but certainly act on road safety (for example, the weather). Similarly, macroeconomic and sociodemographic evolutions could be added to the model. On the other hand, the efficiency of policy decisions (for example, laws) or time-specific interventions can be tested. These are the direct tools for increasing the level of road safety. Moreover, forecasts can be made, uncertainty estimated, and ruptures in the time series detected. Furthermore, some advantages of state-space methods over regression and ARIMA models were reported. We conclude with some topics for model improvement and further research. In this study the variable exposure was not included. In the future, monthly observations of the total mileage covered on the Belgian road system could be taken into account in order to measure this effect. Secondly, because the number of variables in our models is limited, the effect of more explanatory factors could be tested, for example income or public transportation. Improving data quality and availability, together with the development of extensive but statistically sound models, should lead to high quality results.

Akaike, H. 1973. Information Theory and an Extension of the Maximum Likelihood Principle. Second International Symposium on Information Theory. Edited by P.N. Petrov and F. Csaki. Budapest: Akadémiai Kiadó.
Aoki, M. 1987. State Space Modeling of Time Series. New York, NY: Springer-Verlag.
Balkin, S. and J.K. Ord. 2001. Assessing the Impact of Speed-Limit Increases on Fatal Interstate Crashes. Journal of Transportation and Statistics 4(1):1-26.
Belgisch Instituut voor de Verkeersveiligheid (BIVV). 2001. Verkeersveiligheid Statistieken 2001. Available at http://www.bivv.be/main/PublicatieMateriaal/Statistieken.shtml.
Bijleveld, F.D. and J.J.F. Commandeur. 2004. The Basic Evaluation Model, paper presented at the ICTSA meeting, INRETS, Arcueil, France, May 27-28, 2004.
Christens, P.F. 2003. Statistical Modelling of Traffic Safety Development, IMM-PHD-2003-119. Available at http://www.imm.dtu.dk.
Cuevas, M.A. 2002. Demand for Imports in Venezuela: A Structural Time Series Approach, World Bank Policy Research Working Paper No. 2825. Available at http://ssrn.com/abstract=313423.
Durbin, J. and S.J. Koopman. 2001. Time Series Analysis by State-Space Methods. Oxford, England: Oxford University Press.
Eisenberg, D. 2004. The Mixed Effects of Precipitation on Traffic Crashes. Accident Analysis and Prevention 36:637-647.
European Commission (EC). 2004. COST Action 329: Models for Traffic and Safety Development and Interventions. Luxembourg: European Communities.
Flaig, G. 2002. Unobserved Components Models for Quarterly German GDP, CESifo Working Paper No. 681. Available at http://www.cesifo.de.
Fridstrøm, L., J. Ifver, S. Ingebrigtsen, R. Kulmala, and L.K. Thomsen. 1995. Measuring the Contribution of Randomness, Exposure, Weather and Daylight to the Variation in Road Accident Counts. Accident Analysis and Prevention 27(1):1-20.
Gaudry, M. and S. Lassarre. 2000. Structural Road Accident Models: The International DRAG Family. Oxford, England: Elsevier Science Ltd.
Hakim, S., D. Shefer, A.S. Hakkert, and I. Hocherman. 1991. A Critical Review of Macro Models for Road Accidents. Accident Analysis and Prevention 23(5):379-400.
Harvey, A.C. 1989. Forecasting, Structural Time Series Models and the Kalman Filter. Cambridge, England: Cambridge University Press.
Harvey, A.C. and J. Durbin. 1986. The Effect of Seat Belt Legislation on British Road Casualties: A Case Study in Structural Time Series Modeling. Journal of the Royal Statistical Society A 149(3):187-227.
Harvey, A.C. and N. Shephard. 1993. Structural Time Series Models. Handbook of Statistics 11:261-302.
Harvey, A.C. and P.H.J. Todd. 1983. Forecasting Economic Time Series with Structural and Box-Jenkins Models: A Case Study. Journal of Business and Economic Statistics 1(4):299-315.
Johansson, P. 1996. Speed Limitation and Motorway Casualties: A Time Series Count Data Regression Approach. Accident Analysis and Prevention 28(1):73-87.
Kalman, R.E. 1960. A New Approach to Linear Filtering and Prediction Problems. Journal of Basic Engineering D 82:35-45.
Koopman, S.J., A.C. Harvey, J.A. Doornik, and N. Shephard. 2000. STAMP: Structural Time Series Analyser, Modeller and Predictor. London, England: Timberlake Consultants Ltd.
Organization for Economic Cooperation and Development (OECD). 1997. Road Safety Principles and Models: Review of Descriptive, Predictive, Risk and Accident Consequence Models. Paris, France.
Orlandi, F. and K. Pichelmann. 2000. Disentangling Trend and Cycle in the EUR-11 Unemployment Series, ECFIN/27/2000-EN, No. 140. Available at http://europa.eu.int.
Peña, D., G.C. Tiao, and R.S. Tsay. 2001. A Course in Time Series Analysis. New York, NY: John Wiley & Sons.
Sridharan, S., S. Vujic, and S.J. Koopman. 2003. Intervention Time Series Analysis of Crime Rates, TI 03-040/4. Available at http://www.tinbergen.nl.
Van den Bossche, F. and G. Wets. 2003. Macro Models in Traffic Safety and the DRAG Family: Literature Review, RA-2003-08. Available at http://www.steunpuntverkeersveiligheid.be/en.
Van den Bossche, F., G. Wets, and T. Brijs. 2004. A Regression Model with ARMA Errors to Investigate the Frequency and Severity of Road Traffic Accidents. Proceedings of the 83rd Annual Meeting of the Transportation Research Board, Washington, DC, January 11-15, 2004, pp. 1-15.
______. 2005.
The Role of Exposure in the Analysis of Road Accidents: A Belgian Case-Study. Proceedings of the 84th Annual Meeting of the Transportation Research Board, Washington, DC, January 9-13, pp. 1-16.

In this appendix, state-space models are discussed in more detail. The overall objective of the state-space analysis is to study the development of the state over time using observed values (Durbin and Koopman 2001, 11). More specifically, we want to obtain an adequate description of this development and to find explanations for it. Furthermore, these models have the ability to predict developments of a series into the future. The state is the unobserved value of the true development at time t. The collection (or space) of possible values of the state is called the state-space of the process. The state consists of several components: on the one hand a level, slope, and seasonal that give a description of the time series, and on the other hand explanatory and intervention variables that give an explanation of the actual development in the series. A state-space model consists of an observation or measurement equation and one or more state equations (depending on the number of components). The first one contains the unobserved state at time t and an observation residual ($\varepsilon_t$), which is white noise. In the state equation, time dependencies in the observed time series are dealt with by letting the state at time t+1 be a direct function of the state at time t, and the state error is also white noise. Algebraically, the final state-space model used in this analysis can be written as:

$y_t = \mu_t + \gamma_t + \sum_{j=1}^{k} \beta_{jt} x_{jt} + \sum_{i=1}^{l} \lambda_{it} w_{it} + \varepsilon_t$ (Eq. 1)

$\mu_{t+1} = \mu_t + \xi_t$ (Eq. 2)

$\gamma_{t+1} = -\sum_{s=0}^{10} \gamma_{t-s} + \omega_t$ (Eq. 3)

$\beta_{j,t+1} = \beta_{jt} + \tau_{jt}$ (Eq. 4)

$\lambda_{i,t+1} = \lambda_{it} + \rho_{it}$ (Eq. 5)

for t = 1,..., n; j = 1,..., k; and i = 1,..., l. The observation equation (Eq. 1) relates the values of the dependent variable $y_t$ to the level $\mu_t$, the seasonal component $\gamma_t$, explanatory variables $x_{jt}$ (j = 1,..., k), intervention variables $w_{it}$ (i = 1,..., l), and an observation error $\varepsilon_t$. Each component has its own state equation (Eq. 2 to Eq. 5, respectively). All (observation and state) errors are assumed to be mutually independent and normally distributed with mean zero and variances $\sigma^2_\varepsilon$, $\sigma^2_\xi$, $\sigma^2_\omega$, $\sigma^2_\tau$, and $\sigma^2_\rho$, respectively. $\beta_j$ is the unknown regression coefficient of the j-th explanatory variable. One type of intervention is the temporal pulse intervention: a correction of an unusually high or low value occurs during only one time point. In this paper, four correction variables of this type were used. Concerning these variables, $w_{it} = 1$ if t is the month of correction, and 0 otherwise. $\lambda_i$ is the coefficient of the i-th correction variable. The error variances are used in order to obtain the most parsimonious model that describes the data best. Each component can be chosen deterministically or stochastically. Deterministic implies one parameter estimate during the whole time period, while stochastic implies that the estimate is updated at every time point. However, this last option requires more parameters. Whether a state component should be treated deterministically or stochastically can be determined by evaluating the error variance of the component when analyzed stochastically. If the error variance of the stochastic component is very small (i.e., almost zero), this indicates that the corresponding state component should be handled deterministically. Because we consider only deterministic explanatory variables, the corresponding errors $\tau_{jt}$ are equal to zero. In state-space methods, the value of the unobserved state at the beginning of the time series (t = 1) is unknown.
Using diffuse initialisation (Durbin and Koopman 2001, 28), estimates for the unknown parameters are obtained. Also, none of the observation and state error variances is known. The estimates of all these parameters can be obtained with an iterative process using the maximum likelihood principle.

^1 One could question the correctness of using estimated values in the prediction, but we can assume that the estimates of these two weather variables will be in line with the actual unknown values due to the small variation from year to year and the strong seasonal pattern.

^1 E. Hermans, Transportation Research Institute, Hasselt University, Campus Diepenbeek, Wetenschapspark 5 bus 6, 3590 Diepenbeek, Belgium. E-mail: elke.hermans@uhasselt.be
^2 Corresponding author: G. Wets, Transportation Research Institute, Hasselt University, Campus Diepenbeek, Wetenschapspark 5 bus 6, 3590 Diepenbeek, Belgium. E-mail: geert.wets@uhasselt.be
^3 F. Van den Bossche, Transportation Research Institute, Hasselt University, Campus Diepenbeek, Wetenschapspark 5 bus 6, 3590 Diepenbeek, Belgium. E-mail: filip.vandenbossche@uhasselt.be
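For readers who want to see the filter behind the appendix, here is a bare-bones local level Kalman recursion in Python: a didactic sketch with fixed, assumed variances, in which the diffuse initialisation is approximated by a very large initial state variance.

    import numpy as np

    # Local level model: y_t = mu_t + eps_t,  mu_{t+1} = mu_t + xi_t.
    def local_level_filter(y, var_eps, var_xi, a1=0.0, p1=1e7):
        a, p = a1, p1                   # large p1 mimics a diffuse initial state
        filtered = np.empty(len(y))
        for t, yt in enumerate(y):
            f = p + var_eps             # innovation (prediction error) variance
            k = p / f                   # Kalman gain
            a = a + k * (yt - a)        # filtered level
            p = p * (1.0 - k) + var_xi  # predicted state variance for next period
            filtered[t] = a
        return filtered

    y = 5.0 + np.cumsum(np.random.default_rng(4).normal(0, 0.1, 100))
    print(local_level_filter(y, var_eps=0.05, var_xi=0.01)[-5:])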
Olympia, WA Geometry Tutor
Find an Olympia, WA Geometry Tutor

...I enjoy working one on one with students, whether helping them with homework or preparing for an exam. I am willing to create practice tests for students to ensure their success. I have a love for mathematics, but I understand that not everyone shares my passion.
19 Subjects: including geometry, English, statistics, SAT math

With my teaching experience of all levels of high school mathematics and the appropriate use of technology, I will do everything to find a way to help you learn mathematics. I can not promise a quick fix, but I will not stop working if you make the effort. -Bill
16 Subjects: including geometry, calculus, statistics, GRE

...With sufficient 'vocabulary', the individual topics blend into a mosaic that makes sense, is informative, and useful in many fields such as nursing, bioengineering, biology and other related fields. I look forward to meeting you and assisting you in your efforts to understand and appreciate this...
12 Subjects: including geometry, chemistry, algebra 1, algebra 2

...Regardless of the subject, I would say I am effective at recognizing patterns. I love sharing any shortcuts or tips that I discover. I have taken 2 quarters of Discrete Structures (Mathematics) at University of Washington, Tacoma. I earned a 4.0 each quarter.
16 Subjects: including geometry, chemistry, French, calculus

...English is both the most important and the most difficult language in the modern world. As a professional editor and writer since the 1960s, I've had to know 1001 things about English for the hundreds of books I've worked on--but there's only about a dozen that a person needs to master to survive in the real world. Wherever your skills are at, I can improve them.
38 Subjects: including geometry, English, writing, GRE
Eccentricity, trouble w/ the proof

July 9th 2011, 11:54 AM #1
I'm trying to understand the proof for eccentricity; I can get about halfway through it. The problem I'm having is summarized below.
If e < 1, you get the equation of an ellipse of the form $\frac{(x-h)^2}{a^2}+\frac{y^2}{b^2}=1$, where
Eq. 4: $h= -\frac{e^2d}{1-e^2}$, $a^2= \frac{e^2d^2}{(1-e^2)^2}$, $b^2= \frac{e^2d^2}{1-e^2}$
The foci of an ellipse are at a distance c from the center, where
Eq. 5: $c^2 = a^2-b^2 = \frac{e^4d^2}{(1-e^2)^2}$
This shows that $c= \frac{e^2d}{1-e^2}=-h$.
It follows from equations 4 and 5 that the eccentricity is given by $e=\frac{c}{a}$.
If someone could help explain how $e=\frac{c}{a}$ is derived, or really what it means, I'd be so grateful. I'm using Calculus Early Transcendentals 6th Ed. by Stewart (Ch. 10, Sect. 6).

July 9th 2011, 01:46 PM #2
Re: Eccentricity, trouble w/ the proof
The conclusion is that the eccentricity of an ellipse is $e=\frac{c}{a}$. Now, if you write $e^2=\frac{c^2}{a^2}$ and use $c^2=a^2-b^2$, you get
$e^2 = \frac{a^2-b^2}{a^2} = 1-\frac{b^2}{a^2}$.
Substituting $b^2$ and $a^2$ from Eq. 4 gives
$1-\frac{e^2d^2/(1-e^2)}{e^2d^2/(1-e^2)^2} = 1-(1-e^2) = e^2$.
Now take the square root: $\sqrt{e^2}=e$.
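If it helps, the algebra can also be checked symbolically; a small sympy sketch, assuming 0 < e < 1 and d > 0:

    import sympy as sp

    e, d = sp.symbols("e d", positive=True)
    a2 = e**2 * d**2 / (1 - e**2)**2      # a^2 from Eq. 4
    b2 = e**2 * d**2 / (1 - e**2)         # b^2 from Eq. 4

    c2 = sp.simplify(a2 - b2)             # c^2 = a^2 - b^2, equals e^4 d^2 / (1-e^2)^2
    print(c2)
    print(sp.simplify(sp.sqrt(c2 / a2)))  # c/a simplifies to e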
{"url":"http://mathhelpforum.com/calculus/184331-eccentricity-trouble-w-proof.html","timestamp":"2014-04-20T05:50:01Z","content_type":null,"content_length":"36011","record_id":"<urn:uuid:449c0087-0c2c-444d-932f-e6f3f2233d08>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00277-ip-10-147-4-33.ec2.internal.warc.gz"}
Results 11 - 20 of 58

, 1994
"In geometric range searching, algorithmic problems of the following type are considered: Given an n-point set P in the plane, build a data structure so that, given a query triangle R, the number of points of P lying in R can be determined quickly. Problems of this type are of crucial importance in computational geometry, as they can be used as subroutines in many seemingly unrelated algorithms. We present a survey of results and main techniques in this area."
Cited by 46 (2 self)

- In Proceedings of the 11th Annual IEEE Conference on Computational Complexity, 1996
"This is a survey of space-bounded probabilistic computation, summarizing the present state of knowledge about the relationships between the various complexity classes associated with such computation. The survey especially emphasizes recent progress in the construction of pseudorandom generators that fool probabilistic space-bounded computations, and the application of such generators to obtain deterministic simulations."
Cited by 36 (0 self)

- In Proc. 42nd Annu. IEEE Sympos. Found. Comput. Sci., 2003
"Given a set of moving points in R^d, we show how to cluster them in advance, using a small number of clusters, so that at any time this static clustering is competitive with the optimal k-center clustering at that time. The advantage of this approach is that it avoids updating the clustering as time passes. We also show how to maintain this static clustering efficiently under insertions and deletions."
Cited by 29 (5 self)

- Journal of Operational Research Society, 1994
"Dynamic load balancing in multicomputers can improve the utilization of processors and the efficiency of parallel computations through migrating workload across processors at runtime. We present a survey and critique of dynamic load balancing strategies that are iterative: workload migration is carried out through transferring processes across nearest-neighbor processors. Iterative strategies have become prominent in recent years because of the increasing popularity of point-to-point interconnection networks for multicomputers. Key words: dynamic load balancing, multicomputers, optimization, queueing theory, scheduling. INTRODUCTION: Multicomputers are highly concurrent systems that are composed of many autonomous processors connected by a communication network. To improve the utilization of the processors, parallel computations in multicomputers require that processes be distributed to processors in such a way that the computational load is evenly spread among the processors..."
Cited by 21 (3 self)

- Proc. of the 2nd SODA, 1991
"It has been shown previously that sorting n items into n locations with a polynomial number of processors requires Ω(log n/log log n) time. We sidestep this lower bound with the idea of Padded Sorting, or sorting n items into n + o(n) locations. Since many problems do not rely on the exact rank of sorted items, a Padded Sort is often just as useful as an unpadded sort. Our algorithm for Padded Sort runs on the Tolerant CRCW PRAM and takes Θ(log log n/log log log n) expected time using n log log log n/log log n processors, assuming the items are taken from a uniform distribution. Using similar techniques we solve some computational geometry problems, including Voronoi Diagram, with the same processor and time bounds, assuming points are taken from a uniform distribution in the unit square. Further, we present an Arbitrary CRCW PRAM algorithm to solve the Closest Pair problem in constant expected time with n processors regardless of the distribution of points. All of these algorithms achieve linear speedup in expected time over their optimal serial counterparts." (Research done while at the University of Michigan and supported by an AT&T Fellowship.)
Cited by 20 (3 self)

- Computational Complexity, Protein Structure Prediction and the Levinthal Paradox, 1994
"The task of determining the globally optimal (minimum-energy) conformation of a protein given its potential-energy function is widely believed to require an amount of computer time that is exponential in the number of soft degrees of freedom in the protein. Conventional reasoning as to the exponential time complexity of this problem is fallacious---it is based solely on the size of the search space---and for some variants of the protein-structure prediction problem the conclusion is likely to be incorrect. Every problem in combinatorial optimization has an exponential number of candidate solutions, but many such problems can be solved by algorithms that do not require exponential time. We present a critical review of efforts to characterize rigorously the computational requirements of global potential-energy minimization for a polypeptide chain that has a unique energy minimum corresponding to the native structure of the protein. An argument by Crippen (1975) demonstrated that an algor..."
Cited by 19 (0 self)

- Communications of the ACM, 1983
"...foremost recognition of technical contributions to the computing community. The citation of Cook's achievements noted that "Dr. Cook has advanced our understanding of the complexity of computation in a significant and profound way. His seminal paper, The Complexity of Theorem Proving Procedures, presented at the 1971 ACM SIGACT Symposium on the Theory of Computing, laid the foundations for the theory of NP-completeness. The ensuing exploration of the boundaries and nature of the NP-complete class of problems has been one of the most active and important research activities in computer science for the last decade. Cook is well known for his influential results in fundamental areas of computer science. He has made significant contributions to complexity theory, to time-space tradeoffs in computation, and to logics for programming languages. His work is characterized by elegance and insights and has illuminated the very nature of computation." During 1970-1979, Cook did extensive work under grants from the..."
Cited by 17 (0 self)

- In Proc. 11th Annu. European Sympos. Algorithms, volume 2832 of Lect. Notes in Comp. Sci., 2003
"We consider the problem of finding, for a given n-point set P in the plane and an integer k ≤ n, the smallest circle enclosing at least k points of P. We present a randomized algorithm that computes such a circle in O(nk) expected time, improving over previously known algorithms."
Cited by 16 (3 self)

- 1992
"Abstractions for Constructing Dependable Distributed Systems. Shivakant Mishra and Richard D. Schlichting, TR 92-19. Abstract: Distributed systems, in which multiple machines are connected by a communications network, are often used to build highly dependable computing systems. However, constructing the software required to realize such dependability is a difficult task since it requires the programmer to build fault-tolerant software that can continue to function despite failures. To simplify this process, canonical structuring techniques or programming paradigms have been developed, including the object/action model, the primary/backup approach, the state machine approach, and conversations. In this paper, some of the system abstractions designed to support these paradigms are described. These abstractions, which are termed fault-tolerant services, can be categorized into two types. One type provides functionality similar to standard hardware or operating system services, but with improved ..."
Cited by 15 (3 self)
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=699518&sort=cite&start=10","timestamp":"2014-04-20T14:10:31Z","content_type":null,"content_length":"36698","record_id":"<urn:uuid:85f633e0-8008-4008-991a-85326e8595c9>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00323-ip-10-147-4-33.ec2.internal.warc.gz"}
INT function

This article describes the formula syntax and usage of the INT function (function: a prewritten formula that takes a value or values, performs an operation, and returns a value or values; use functions to simplify and shorten formulas on a worksheet, especially those that perform lengthy or complex calculations) in Microsoft Excel.

Rounds a number down to the nearest integer.

The INT function syntax has the following argument (argument: a value that provides information to an action, an event, a method, a property, a function, or a procedure):

● Number  Required. The real number you want to round down to an integer.

The example may be easier to understand if you copy it to a blank worksheet.

1. Select the example in this article. If you are copying the example in Excel Online, copy and paste one cell at a time. Important: Do not select the row or column headers. (Screenshot: Selecting an example from Help)
2. Press CTRL+C.
3. Create a blank workbook or worksheet.
4. In the worksheet, select cell A1, and press CTRL+V. If you are working in Excel Online, repeat copying and pasting for each cell in the example. Important: For the example to work properly, you must paste it into cell A1 of the worksheet.
5. To switch between viewing the results and viewing the formulas that return the results, press CTRL+` (grave accent), or on the Formulas tab, in the Formula Auditing group, click the Show Formulas button.

After you copy the example to a blank worksheet, you can adapt it to suit your needs.

    Formula        Description (Result)
    =INT(8.9)      Rounds 8.9 down (8)
    =INT(-8.9)     Rounds -8.9 down (-9)
    =A2-INT(A2)    Returns the decimal part of a positive real number in cell A2 (0.5)

Applies to: Excel 2010, Excel Web App, SharePoint Online for enterprises, SharePoint Online for professionals and small businesses
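INT rounds toward negative infinity rather than truncating toward zero, which is why =INT(-8.9) returns -9 and not -8. As an added illustration (not part of the Excel documentation), the same behavior can be reproduced in Python with math.floor; the value 19.5 for cell A2 is a hypothetical example:

    import math

    # INT-style rounding: toward negative infinity (floor), not toward zero.
    print(math.floor(8.9))        # 8,  matches =INT(8.9)
    print(math.floor(-8.9))       # -9, matches =INT(-8.9)

    a2 = 19.5                     # hypothetical cell value
    print(a2 - math.floor(a2))    # 0.5, matches =A2-INT(A2)

    # Contrast with truncation toward zero:
    print(math.trunc(-8.9))       # -8, which is NOT what INT returns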
{"url":"http://office.microsoft.com/en-us/excel-help/int-function-HP010342625.aspx?CTT=5&origin=HA010342655","timestamp":"2014-04-18T20:50:38Z","content_type":null,"content_length":"24440","record_id":"<urn:uuid:54eac6ad-5aff-4541-99be-158348cbd5a9>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00138-ip-10-147-4-33.ec2.internal.warc.gz"}
How to Tune Self-Regulating Control Loops | Emerson Process Experts

How to Tune Self-Regulating Control Loops

I've highlighted a few Emerson presentations for this week's ChemInnovations conference in New Orleans. Emerson's James Beall, a 30-year veteran in process control and chairman of the ISA 75.25 committee on control valve performance testing, shared his ChemInnovations presentation with me as well. The subject is a look at the fundamentals of modern loop tuning. With a high demand for automation professionals and more new engineers joining the ranks every day, James' primer is well timed. I'll highlight some of the guidance he offers.

He opens by defining the proportional, integral, and derivative terms of a PID loop:

Proportional - the output contribution is "proportional" to the difference between Set Point and PV (the error); that is, the output contribution is "gain" times error. Proportional action alone will leave an "offset" between Set Point and PV. Action is expressed as gain.

Integral - the output contribution is the "integration" of the difference between Set Point and PV (the error). Therefore, it always tries to make PV equal the Set Point, which eliminates offset. Action is expressed as integral time, Tr, in seconds/repeat.

Derivative - the output contribution is based on the derivative (rate of change) of the difference between Set Point and PV; in practice it is usually based on PV, not error. Action is expressed in seconds.

James noted that the PID algorithm comes in different forms: parallel, series, and standard. The form significantly impacts the actual tuning values. The tuning rules he presents are based on the series and standard forms of the PID algorithm. Historically, the older tuning methods tried to tune as fast as possible, and loops were tuned independently, without coordinating the impact of the associated process dynamics.

James advised a four-step procedure. First, determine the basic type of process, such as integrating or self-regulating. Next, determine the process dynamics. Next, choose the desired closed-loop response time, known as Lambda (λ). Finally, calculate the tuning constants.

James first looked at a self-regulating process. To determine the process dynamics, put the controller in manual mode and bump the controller output. This bump test gives you the % change in output (Δ%output), the dead time Td (from when the change is made until the PV starts to respond), the time T98 until the process variable PV reaches 98% of its final value, and the resulting % change in PV (Δ%PV). The illustration shows how to arrive at these values and calculate the process gain, Kp, and time constant, Tau.

He offers the Lambda tuning rules for a self-regulating process. A recommended starting point for Lambda, to ensure robustness, is 3 × (the larger of Td or Tau). This results in stable tuning even if the dead time and process gain double. The closed-loop "Time to Steady State" (T98) for a set point change is approximately 4 × Lambda, assuming "P" is on error. The reset time Tr is equal to Tau, and the controller gain is calculated as Kc = Tr / (Kp × (λ + Td)). James notes that Tr stays the same and only Kc changes with Lambda.

James addressed a concern that Lambda tuning is slow by noting, "Compared to what?" We addressed some of these concerns in an earlier post, Lambda Tuning-Yeah or Neah?

Now that we have a handle on the process for a single self-regulating loop, what do you do about interacting loops? The key is to coordinate the loop tuning based on the Lambda values from lowest to highest.
This animation shows a distillation column example, beginning with a couple of flow controllers, moving on to level controllers, and finally a temperature controller. I'll save James' discussion of tuning integrating processes for another post.

I hope this helps if you're new to process automation. I'd also suggest visiting the Control Loop Foundation site for more background.
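To make the arithmetic concrete, here is a small Python sketch of the self-regulating Lambda tuning rules described above. This is an added illustration rather than code from James' presentation; the variable names follow the article, and the step that extracts Tau from the bump test assumes a first-order process response (T98 ≈ Td + 4 × Tau), which the article leaves to its illustration.

    def process_dynamics(delta_pv, delta_out, dead_time, t98):
        """Identify process gain and time constant from a manual bump test.

        delta_pv  : % change in PV once the response settles
        delta_out : % change in controller output (the bump)
        dead_time : Td, seconds until the PV starts to respond
        t98       : seconds until the PV reaches 98% of its final value
        """
        kp = delta_pv / delta_out           # process gain
        tau = (t98 - dead_time) / 4.0       # assumes a first-order response
        return kp, tau

    def lambda_tuning(kp, tau, dead_time, lam=None):
        """Lambda tuning for a self-regulating loop (series/standard PID)."""
        if lam is None:
            lam = 3.0 * max(dead_time, tau) # robust starting point from the article
        tr = tau                            # reset time Tr equals Tau
        kc = tr / (kp * (lam + dead_time))  # Kc = Tr / (Kp * (lambda + Td))
        return kc, tr

    # Made-up bump-test numbers, for illustration only:
    kp, tau = process_dynamics(delta_pv=5.0, delta_out=10.0, dead_time=12.0, t98=212.0)
    kc, tr = lambda_tuning(kp, tau, dead_time=12.0)
    print("Kp=%.2f Tau=%.0fs Kc=%.2f Tr=%.0fs/repeat" % (kp, tau, kc, tr))

Note how the choice of Lambda only scales Kc: a larger Lambda gives a slower, more robust loop with a smaller controller gain, while Tr stays pinned to the process time constant.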
{"url":"http://www.emersonprocessxperts.com/2012/11/how-to-tune-self-regulating-control-loops/","timestamp":"2014-04-18T08:38:02Z","content_type":null,"content_length":"60438","record_id":"<urn:uuid:7332ffcb-97ef-4942-90a4-4c04f5762ae2>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00413-ip-10-147-4-33.ec2.internal.warc.gz"}
Forest Knolls Prealgebra Tutor

Find a Forest Knolls Prealgebra Tutor

...This includes (but is not limited to): the animal kingdoms (mammals, amphibians, reptiles, insects, etc.), and examples of animals from each. Habitats and environments (such as marine life and wetlands) also appear. The same can be said for the different types of rocks (igneous, sedimentary, me...
17 Subjects: including prealgebra, English, reading, grammar

...I believe in working with each student in a customized way that fits their individual needs and builds their confidence. It is important to emphasize the progress a student makes, in addition to what they still need to work on. In my experience, when students realize what they have accomplished, they also realize that they are capable of achieving much more than they thought they could.
11 Subjects: including prealgebra, reading, English, ESL/ESOL

...After working with me, you'll believe it too. Many of my students were referred to me through federally-funded free tutoring. To accept these funds, I was required to administer standardized tests to the students before and after tutoring.
29 Subjects: including prealgebra, English, reading, geometry

...As the author of more than 1000 technical and business documents, I am well-versed in the proper use of the English language. I consistently score at or near the top in SAT and similar tests involving writing, grammar, vocabulary, reading comprehension, and linguistics. I have taught professional courses in technical and business writing for several years.
25 Subjects: including prealgebra, reading, English, writing

...I worked as a math tutor for a year between high school and college and continued to tutor math and physics throughout my undergraduate career. I specialize in tutoring high school mathematics, such as geometry, algebra, precalculus, and calculus, as well as AP physics. In addition, I have sign...
25 Subjects: including prealgebra, physics, calculus, statistics
{"url":"http://www.purplemath.com/Forest_Knolls_Prealgebra_tutors.php","timestamp":"2014-04-18T15:52:22Z","content_type":null,"content_length":"24325","record_id":"<urn:uuid:206bc613-a7d3-4af7-ad19-1efadb21436d>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00539-ip-10-147-4-33.ec2.internal.warc.gz"}
[SciPy-user] optimization question
Volker Lorrmann lorrmann@physik.uni-wuerzburg...
Tue Jul 3 17:18:23 CDT 2007

Hello list,

I want to fit a function to some measured data points T_m(x_i). The function I want to fit is something like T_fit(a, b, c, d, x_i), where a, b, c, d are the fitting parameters. Fitting this with scipy.optimize.leastsq would be easy (by the way, is there another way to fit this, like fmin, fmin_powell, ...?). The problem is that a and b are _fixed_ scalar parameters, but c and d are variables that depend on x_i: c = c(x_i) and d = d(x_i). And in fact, c(x_i) and d(x_i) are the variables I'm mainly interested in. (a and b are nearly exactly known, so I can reduce the fitting function to T(c(x_i), d(x_i), x_i).)

I hope you can see what my problem is; it's late here and I'm tired, so maybe I haven't explained it very well. Maybe the following will help:

    minimize  sum_i [ T_meas(x_i) - T(a, b, c(x_i), d(x_i), x_i) ]^2

is what I'm looking for. Is this possible with scipy.optimize.leastsq, or should I use some other routine for this? And if so, which one?

Thanks so far
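One common answer to this kind of question is to give c(x) and d(x) a low-dimensional parameterization (for example, low-order polynomials in x) and fit those coefficients with leastsq; fitting a fully independent c and d at every data point would leave more unknowns than measurements. The sketch below is an added illustration of that idea, not a reply from the list: the model T_fit, the quadratic parameterization, and the sample data are all invented, and a and b are held fixed as the poster describes.

    import numpy as np
    from scipy.optimize import leastsq

    a, b = 1.3, 0.7                        # fixed, "nearly exactly known" scalars

    def T_fit(a, b, c, d, x):              # hypothetical model function
        return a * np.exp(-c * x) + b * d * x

    def residuals(p, x, t_meas):
        # Model c(x) and d(x) as quadratics; p holds their 3+3 coefficients.
        c = np.polyval(p[:3], x)
        d = np.polyval(p[3:], x)
        return t_meas - T_fit(a, b, c, d, x)

    x = np.linspace(0.1, 5.0, 50)
    t_meas = T_fit(a, b, 0.5 + 0.1 * x, 2.0 - 0.2 * x, x)   # synthetic "measurement"

    p0 = np.ones(6)                         # initial guess for the 6 coefficients
    p_opt, ier = leastsq(residuals, p0, args=(x, t_meas))

    c_est = np.polyval(p_opt[:3], x)        # recovered c(x_i) ...
    d_est = np.polyval(p_opt[3:], x)        # ... and d(x_i)

The fmin/fmin_powell routines the poster mentions would also work, by minimizing the sum of squared residuals directly, but leastsq exploits the least-squares structure and usually converges faster.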
{"url":"http://mail.scipy.org/pipermail/scipy-user/2007-July/012827.html","timestamp":"2014-04-19T05:10:16Z","content_type":null,"content_length":"3511","record_id":"<urn:uuid:d351222a-9652-4568-99e8-f9933b2c2e03>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00006-ip-10-147-4-33.ec2.internal.warc.gz"}
Positive and Negative Exponents

Date: 07/27/97 at 14:26:29
From: Anonymous
Subject: Exponents

Dr. Math,

Could you please explain why (-1)^n = 1 for any even number n, and why (-17)^-8 is positive?

Thank you for your help,

Date: 07/28/97 at 13:21:43
From: Doctor Beth
Subject: Re: Exponents

Good question! The general idea of raising a number to a positive integer exponent "n" is to multiply that number by itself n times; for example, (-1)^4 = (-1)*(-1)*(-1)*(-1) = 1. Remember that two negatives multiply to be a positive, so that if "n" is even, all the negatives can be paired with another negative, and the result is positive. So that's why any number to an even positive power is positive. (Incidentally, that's why you can't find a real number that is the even root of a negative number; for example, there is no real number that is the square root of -1, because to be the square root of -1, the number squared would have to be -1, which we just decided can't happen.)

Now for your second question. Negative exponents are a bit tricky at first - they mean that you have to put the number in the denominator and take a positive exponent. In symbols, this is the same as saying a^(-b) = 1/(a^b). So (-17)^(-8) = 1/(-17)^8, and since 8 is even, (-17)^8 is positive, so that 1/(-17)^8 is positive.

Thanks for the question!

-Doctor Beth, The Math Forum
Check out our web site! http://mathforum.org/dr.math/
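As a quick added check (not part of the original answer), both rules are easy to confirm in Python:

    # An even number of negative factors pairs off, so the product is positive:
    print((-1) ** 4)      # 1
    print((-1) ** 7)      # -1: an odd exponent leaves one negative unpaired

    # A negative exponent puts the base in the denominator: a**(-b) = 1/(a**b)
    print((-17) ** 8)     # 6975757441, positive because 8 is even
    print((-17) ** -8)    # about 1.43e-10, also positive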
{"url":"http://mathforum.org/library/drmath/view/58228.html","timestamp":"2014-04-20T16:19:15Z","content_type":null,"content_length":"6309","record_id":"<urn:uuid:0b2a3193-9d0d-4e8f-9f15-b72be005c7a2>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00210-ip-10-147-4-33.ec2.internal.warc.gz"}
Braingle: 'Two Digit Numbers' Brain Teaser

Two Digit Numbers
Math brain teasers require computations to solve.
Puzzle ID: #5101  Category: Math  Submitted By: mad-ade  Corrected By: boodler

Take the 2-digit number 45.
Square it (45 x 45) to make 2025.
Split this 4-digit number in half to make 20 and 25.
Add them (20 + 25) to make 45,
which is what you started with.
Can you find another 2-digit number which does the same?

Piffle (Jul 09, 2002): This is a good one, I think. Good job.
peppermintwist (Jul 09, 2002): Clever!
george1978 (Jul 12, 2002): I liked this one.
rufio (Aug 21, 2002): "Can you find another 2-digit number which does the same?" The correct answer, in my case, was "No." TECHNICALLY, it was the right answer.
Smithy (Aug 20, 2003): D'oh. Rufio - I was gonna say that. You beat me to it by about a year. Except I was gonna say yes.
lessthanjake789 (Oct 14, 2006): No one has commented in just about forever on this teaser. Clever and tough. I'd be surprised if you figured that out all on your own, unless you have a system for it. I'd be interested to hear it, actually. If it's just a fact then - now I know, and thanks!
Jimbo (May 11, 2009): It's easily done with a spreadsheet, but if you try it algebraically, it is hard to deal with the bit where you split the number in half. This step has a fraction in it which can be removed in a spreadsheet by the 'Round Down' function, but it is much more difficult to deal with the remainder in algebra. I believe that teasers like this belong in the 'Trivia' category unless you accept that creating a spreadsheet is a legitimate way to solve mathematical teasers.
c0forerunner0 (Feb 18, 2013): How about 10? Take the 2-digit number 10. Square it to make 100. Split this number to make 10 and 0. Add them to make 10, which is what you started with.
spikethru4 (Feb 18, 2013): Well, if you split 100 according to the rules, you'd end up with 01 and 00, which sums to 1, so no.
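For anyone who, like Jimbo, would rather search than do the algebra, here is a short added Python brute force; it is not part of the original page, and running it prints the answers, so skip it if you want to solve the teaser yourself.

    # Find every 2-digit n whose square, padded to 4 digits and split in half,
    # has halves summing back to n (e.g. 45 -> 2025 -> 20 + 25 = 45).
    for n in range(10, 100):
        s = str(n * n).zfill(4)       # pad 3-digit squares with a leading 0
        if int(s[:2]) + int(s[2:]) == n:
            print(n, n * n)

The zfill handles spikethru4's point: 10 squared becomes 0100, which splits into 01 and 00 and sums to 1, not 10.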
{"url":"http://www.braingle.com/brainteasers/teaser.php?id=5101&comm=1","timestamp":"2014-04-20T13:58:58Z","content_type":null,"content_length":"29183","record_id":"<urn:uuid:34433e84-9ddf-44c8-ad6c-6005ec30d310>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00540-ip-10-147-4-33.ec2.internal.warc.gz"}
Decimal to Binary Conversion by Excel Formula

The example below shows how to convert a decimal number to its binary equivalent using Excel's DEC2BIN formula. The first argument of this formula is the number to be converted to binary. The second argument, places, is the number of binary digits to use in the result and is useful for padding the return value with leading 0s. The most significant bit is the sign bit and the rest are magnitude bits.

The following rules apply when performing the conversion:

1. The number must range between -512 and 511; otherwise DEC2BIN() returns the #NUM! error value.
2. The inputs number and places must be numeric; otherwise DEC2BIN() returns the #VALUE! error value.
3. The places value is truncated if it is a fractional value.
4. Places must not be a negative number.
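As an added illustration of those rules (a sketch of the documented behavior, not Microsoft's implementation), here is a rough Python analogue. Negative inputs use the 10-bit two's-complement form that DEC2BIN returns, in which case places is ignored:

    def dec2bin(number, places=None):
        """Rough Python analogue of Excel's DEC2BIN."""
        if not isinstance(number, (int, float)):
            return "#VALUE!"                       # rule 2: inputs must be numeric
        number = int(number)                       # fractional input is truncated
        if number < -512 or number > 511:
            return "#NUM!"                         # rule 1: 10-bit range only
        if number < 0:
            return format(number & 0x3FF, "010b")  # 10-bit two's complement
        bits = format(number, "b")
        if places is None:
            return bits
        places = int(places)                       # rule 3: places is truncated
        if places < 0 or len(bits) > places:
            return "#NUM!"                         # rule 4, and places too small
        return bits.zfill(places)                  # pad with leading zeros

    print(dec2bin(9, 4))    # '1001'
    print(dec2bin(-100))    # '1110011100'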
{"url":"http://nscraps.com/Windows/1429-decimal-to-binary-conversion-excel-formula.htm","timestamp":"2014-04-20T16:33:16Z","content_type":null,"content_length":"12389","record_id":"<urn:uuid:fbfd0faa-b7cd-4056-a9fe-c4b12dd9b6de>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00572-ip-10-147-4-33.ec2.internal.warc.gz"}
A smooth semicircular wire is fixed in the vertical plane. A particle of mass M is released at angle = 37°. The centripetal force on the particle when the angle is 53° is of what magnitude?
• one year ago
{"url":"http://openstudy.com/updates/50b5fb40e4b0badab8d5f853","timestamp":"2014-04-24T11:40:11Z","content_type":null,"content_length":"177587","record_id":"<urn:uuid:d1554f8f-6fe7-4ba6-be76-7434effadc99>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00057-ip-10-147-4-33.ec2.internal.warc.gz"}
The Million Dollar Problem

Everyone knows that computers are getting more powerful and better at doing almost anything. Finding you the fastest route cross-country is easy. Translating a page of prose from one language to another is harder, but it's getting better all the time. Finding the shortest route that will get you to all of five different cities, no problem; finding the provably shortest route that will get you to all of a thousand cities - that's a toughie. It's so hard that perhaps no computer, no matter how big or how fast, can ever do it. Perhaps.

Are there tasks beyond computing? It is a deep question bridging mathematics and computer science, and it is the subject of The Golden Ticket: P, NP, and the Search for the Impossible (Princeton University Press) by Lance Fortnow. The question is so hard, and so important, that it is one of the seven Millennium Problems for which the Clay Mathematics Institute will give you one million dollars when you prove it. (Programming genius Donald Knuth will also give you a turkey.) This is deeper mathematical territory than most of us will ever penetrate, but Fortnow, a professor of computer science, keeps the explanations light, knowing that those of us reading this sort of book aren't really in the running for the prize, but at the same time showing how important the answer to the question might be for the future of computing.

It is best to call it the P/NP problem; the abbreviation P comes from "polynomial", and in giving us the second, Fortnow jokes, "NP (which stands for 'nondeterministic polynomial time,' if you really need to know)." He does not get much deeper into polynomials, but P is the group of problems we know computers can solve quickly. NP is a possibly separate group of problems that cannot be solved quickly by any computer program we have now, but if P = NP, then a powerful computer could solve those NP problems as easily as computers are currently solving the P ones.

It would be very nice if P = NP, because then we could expect efficient algorithms that would do all the problems we now think of as NP. Fortnow's second chapter imagines a world where P = NP: cancer prediction and treatment all come from a simple blood draw, weather is predicted accurately a year in advance, and Schubert's Eighth Symphony is finally completed. It's not all good news: the public-key cryptography we all now use for electronic financial transactions would be easily cracked, and there would be tremendous losses of jobs because of all the new stuff computers could do. The programs for these intricate problems would still have to be written, but if P = NP, they could be written, and undoubtedly would be.

One of the important parts of Fortnow's book is that he shows that the P/NP problem is not something just of interest to mathematicians and computer scientists. It is a critical question in fields as diverse as biology, economics, medicine, and physics. Kurt Gödel famously showed that there are mathematical truths that cannot be proved even though they are true; now can someone show in some similar way that NP problems really are different from P?

One chapter here gives the history of the problem, a history bifurcated because of the Cold War in the seventies. The problem was first defined in the West in 1971; Russian mathematics at the time was dragged down by strong central politics within the Russian mathematical community, but eventually P/NP became a worthy subject of research there as well.
Mathematicians on both sides are now doing research on the problem, but not in the sort of isolation that used to be; the collapse of the Soviet Union and the worldwide reach of the internet have meant that solving P/NP is a globally communal effort.

A good example of what we know to be an NP problem is the Traveling Salesman Problem, the one about getting the absolute minimum travel distance for a salesman who wants to visit a hundred cities, or a thousand, or a million. If you only want to go to the 48 state capitals in the lower US, you could have a computer look at lists of cities in every order and total up the mileage, but there are so many possible orderings that the fastest of computers would take longer than the age of the universe to solve the problem. Programmers do solve versions of the Traveling Salesman Problem, but they do so by approximation; they cannot be sure if the answer they get is the real minimum distance or just very close to it. This is the same sort of issue with problems having to do with protein folding in biology or finding minimal energy states in physics, and if one of these NP-complete problems can be shown to be in P, then all the rest of them are, too, and our computers can grind out answers to all NP problems efficiently.

No one has been able to come up with an efficient algorithm that solves any NP-complete problem, which seems to indicate there is no such thing, and that P is not equal to NP. It would be a real surprise if P = NP, but right now there is no proof either way.

There are plenty of people working on it. Some of them are the same sort of people who are sure they have proved the classic (and provably impossible) problem of trisecting an angle. One computer journal has ruled that it will accept such P/NP proofs from any one author no more often than every two years, because most such attempts are "unreadable or clearly wrong." Fortnow encourages readers to try proving P/NP, "for you cannot truly understand the difficulty of a problem without attempting to solve it," and while his book does not give formal definitions of P/NP that would be the basis for your proof, it has website citations that could start you off. But on the other hand: "Suppose you have actually found a solution to the P versus NP problem," he writes. "How do you get your $1 million check from the Clay Mathematics Institute? Slow down. You almost surely don't have a proof. Realize why your proof doesn't work, and you will have obtained enlightenment."

Not only are you unlikely to get a proof, Fortnow is pessimistic that any mathematician is going to be coming up with one anytime soon. He knows the state of current research on the problem, and says that there is no known line of attack currently being pursued that could lead to a successful proof. Things seem to be at a standstill. He reminds us that it took 357 years to get a proof of Fermat's Last Theorem. While we may continue to butt our heads up against NP problems with merely approximate answers, and while P will increasingly seem not to equal NP, there may be no proof out there ever. Fortnow's book does a fine job of showing why the tantalizing question is an important one, with implications far beyond just computer science.
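The brute-force approach the review describes, looking at the cities in every order and totaling the mileage, is easy to write down and hopeless to run at scale. This added Python sketch makes the factorial blowup concrete; the five-city distance matrix is invented for illustration:

    from itertools import permutations
    from math import factorial

    # Symmetric distances between 5 hypothetical cities.
    dist = [
        [0, 2, 9, 10, 7],
        [2, 0, 6, 4, 3],
        [9, 6, 0, 8, 5],
        [10, 4, 8, 0, 6],
        [7, 3, 5, 6, 0],
    ]
    n = len(dist)

    def tour_length(order):
        # Length of the closed tour visiting cities in the given order.
        return sum(dist[order[i]][order[(i + 1) % n]] for i in range(n))

    best = min(permutations(range(n)), key=tour_length)
    print(best, tour_length(best))      # provably optimal for 5 cities

    # Why this cannot scale: the number of orderings grows factorially.
    for cities in (5, 10, 20, 48):
        print(cities, "cities:", factorial(cities - 1), "orderings with a fixed start")

Five cities means 120 orderings; 48 state capitals means 47!, a number sixty digits long, which is why real solvers settle for approximations.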
{"url":"http://www.cdispatch.com/robhardy/article.asp?aid=23267","timestamp":"2014-04-20T21:43:38Z","content_type":null,"content_length":"26324","record_id":"<urn:uuid:861b913a-faea-4527-8739-bfe55be1422e>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00121-ip-10-147-4-33.ec2.internal.warc.gz"}
Please integrate

Here is what the Integrator gives: Wolfram Mathematica Online Integrator. Type in sin[a*x]*sec[x]. It ain't pretty.

Well, I am presently using a phone; that's why I did not use the Wolfram online integrator. But if you think the solution is not pretty, then I think it's not so easy to do it manually right now. Thanks.
{"url":"http://mathhelpforum.com/calculus/72836-please-intigrate.html","timestamp":"2014-04-20T17:49:08Z","content_type":null,"content_length":"30792","record_id":"<urn:uuid:96f5c320-0dc5-4427-9426-d0ab32a13b3f>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00282-ip-10-147-4-33.ec2.internal.warc.gz"}
Human Behaviour and the Principle of Least Effort

Results 1 - 10 of 245

, 1996
"Many commercial database systems maintain histograms to summarize the contents of relations and permit efficient estimation of query result sizes and access plan costs. Although several types of histograms have been proposed in the past, there has never been a systematic study of all histogram aspects, the available choices for each aspect, and the impact of such choices on histogram effectiveness. In this paper, we provide a taxonomy of histograms that captures all previously proposed histogram types and indicates many new possibilities. We introduce novel choices for several of the taxonomy dimensions, and derive new histogram types by combining choices in effective ways. We also show how sampling techniques can be used to reduce the cost of histogram construction. Finally, we present results from an empirical study of the proposed histogram types used in selectivity estimation of range predicates and identify the histogram types that have the best overall performance."
Cited by 239 (20 self)

, 1997
"The result size of a query that involves multiple attributes from the same relation depends on these attributes' joint data distribution, i.e., the frequencies of all combinations of attribute values. To simplify the estimation of that size, most commercial systems make the attribute value independence assumption and maintain statistics (typically histograms) on individual attributes only. In reality, this assumption is almost always wrong and the resulting estimations tend to be highly inaccurate. In this paper, we propose two main alternatives to effectively approximate (multi-dimensional) joint data distributions: (a) using a multi-dimensional histogram; (b) using the Singular Value Decomposition (SVD) technique from linear algebra. An extensive set of experiments demonstrates the advantages and disadvantages of the two approaches and the benefits of both compared to the independence assumption."
Cited by 198 (12 self)

"Random graph theory is used to examine the "small-world phenomenon": any two strangers are connected through a short chain of mutual acquaintances. We will show that for certain families of random graphs with given expected degrees, the average distance is almost surely of order log n / log d~, where d~ is the weighted average of the sum of squares of the expected degrees. Of particular interest are power law random graphs, in which the number of vertices of degree k is proportional to 1/k^β for some fixed exponent β. For the case of β > 3, we prove that the average distance of the power law graphs is almost surely of order log n / log d~. However, many Internet, social, and citation networks are power law graphs with exponents in the range 2 < β < 3, for which the power law random graphs have average distance almost surely of order log log n, but have diameter of order log n (provided some mild constraints on the average distance and maximum degree). In particular, these graphs contain a dense subgraph, which we call the core, having n^(c/log log n) vertices. Almost all vertices are within distance log log n of the core, although there are vertices at distance log n from the core."
Cited by 191 (13 self)

, 2002
"We propose a novel index structure, termed XTrie, that supports the efficient filtering of XML documents based on XPath expressions. Our XTrie index structure offers several novel features that make it especially attractive for large-scale publish/subscribe systems. First, XTrie is designed to support effective filtering based on complex XPath expressions (as opposed to simple, single-path specifications). Second, our XTrie structure and algorithms are designed to support both ordered and unordered matching of XML data. Third, by indexing on sequences of element names organized in a trie structure and using a sophisticated matching algorithm, XTrie is able to both reduce the number of unnecessary index probes as well as avoid redundant matchings, thereby providing extremely efficient filtering. Our experimental results over a wide range of XML document and XPath expression workloads demonstrate that our XTrie index structure outperforms earlier approaches by wide margins."
Cited by 172 (12 self)

- Contemporary Physics, 2005
"When the probability of measuring a particular value of some quantity varies inversely as a power of that value, the quantity is said to follow a power law, also known variously as Zipf's law or the Pareto distribution. Power laws appear widely in physics, biology, earth and planetary sciences, economics and finance, computer science, demography and the social sciences. For instance, the distributions of the sizes of cities, earthquakes, solar flares, moon craters, wars and people's personal fortunes all appear to follow power laws. The origin of power-law behaviour has been a topic of debate in the scientific community for more than a century. Here we review some of the empirical evidence for the existence of power-law forms and the theories proposed to explain them."
Cited by 170 (0 self)

- In VLDB, 1999
"In many applications, users specify target values for certain attributes, without requiring exact matches to these values in return. Instead, the result to such queries is typically a rank of the "top k" tuples that best match the given attribute values. In this paper, we study the advantages and limitations of processing a top-k query by translating it into a single range query that traditional relational DBMSs can process efficiently. In particular, we study how to determine a range query to evaluate a top-k query by exploiting the statistics available to a relational DBMS, and the impact of the quality of these statistics on the retrieval efficiency of the resulting scheme. Introduction: Internet search engines rank the objects in the results of selection queries according to how well these objects match the original selection condition. For such engines, query results are not flat sets of objects that match a given condition. Instead, query results are ranked starting..."
Cited by 139 (4 self)

- In VLDB, 2002
"The duplicate elimination problem of detecting multiple tuples which describe the same real-world entity is an important data cleaning problem. Previous domain-independent solutions to this problem relied on standard textual similarity functions (e.g., edit distance, cosine metric) between multi-attribute tuples. However, such approaches result in large numbers of false positives if we want to identify domain-specific abbreviations and conventions. In this paper, we develop an algorithm for eliminating duplicates in dimensional tables in a data warehouse, which are usually associated with hierarchies. We exploit hierarchies to develop a high-quality, scalable duplicate elimination algorithm, and evaluate it on real datasets from an operational data warehouse."
Cited by 112 (3 self)

- Proceedings of the 25th VLDB Conference, 1999
"The subject of this paper is the creation of knowledge bases by enumerating and organizing all web occurrences of certain subgraphs. We focus on subgraphs that are signatures of web phenomena such as tightly-focused topic communities, webrings, taxonomy trees, keiretsus, etc. For instance, the signature of a webring is a central page with bidirectional links to a number of other pages. We develop novel algorithms for such enumeration problems. A key technical contribution is the development of a model for the evolution of the web graph, based on experimental observations derived from a snapshot of the web. We argue that our algorithms run efficiently in this model, and use the model to explain some statistical phenomena on the web that emerged during our experiments. Finally, we describe the design and implementation of Campfire, a knowledge base of over one hundred thousand web communities."
Cited by 103 (2 self)

- In Proceedings of international conference, 2007
"The debate within the Web community over the optimal means by which to organize information often pits formalized classifications against distributed collaborative tagging systems. A number of questions remain unanswered, however, regarding the nature of collaborative tagging systems, including whether coherent categorization schemes can emerge from unsupervised tagging by users. This paper uses data from tagged sites on the social bookmarking site del.icio.us to examine the dynamics of collaborative tagging systems. In particular, we examine whether the distribution of the frequency of use of tags for "popular" sites with a long history (many tags and many users) can be described by a power law distribution, often characteristic of what are considered complex systems. We produce a generative model of collaborative tagging in order to understand the basic dynamics behind tagging, including how a power law distribution of tags could arise. We empirically examine the tagging history of sites in order to determine how this distribution arises over time and patterns prior to a stable distribution. Lastly, by focusing on the high-frequency tags of a site where the distribution of tags is a stabilized power law, we show how tag co-occurrence networks for a sample domain of tags can be used to analyze the meaning of particular tags given their relationship to other tags."
Cited by 95 (3 self)
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=175390","timestamp":"2014-04-21T13:28:42Z","content_type":null,"content_length":"38828","record_id":"<urn:uuid:9c837362-e8f7-4bac-ae7a-37502ed5677d>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00498-ip-10-147-4-33.ec2.internal.warc.gz"}
Chart In Focus: Long Term Haurlan Index Divergence
July 15, 2011

The Long Term Haurlan Index is showing a big divergence now, similar to ones that we have seen at past instances of impending stock market turmoil. That is the important part of the story in this week's chart. But the history of this indicator and its creator is also a really interesting story.

The Haurlan Index is now a little-known indicator used by a handful of technicians, but when it was introduced in the 1960s it got a lot of attention. It arguably still deserves that same amount of attention, even though we now have many more tools we can choose among.

P.N. (Pete) Haurlan was an actual rocket scientist who worked for the Jet Propulsion Laboratory in Pasadena, CA in the early 1960s. Haurlan served as manager of the Advanced Technical Studies section under Fred Felberg, and helped envision several future missions including Venus/Mercury, Jupiter Gravity-Assist, and near-asteroid visits. He also loved the stock market, and so during the late-night hours when JPL's computer was not busy, he used it to do analysis on stock prices. Try to imagine punching in price data onto a stack of IBM cards (remember hanging chads?), and then loading those into the hopper to have the computer read them and do calculations. This was the state of the art for computerized technical analysis in the 1960s.

Haurlan's work led him to start the Trade Levels newsletter in the mid-1960s. It differed from the other newsletters of the day in that it had computer-generated charts and statistics. Trade Levels was the sponsor of Gene Morgan's end-of-day TV program called "Charting The Market" on KWHY-TV in Los Angeles, a show which introduced thousands of people to the idea that a person could look at a chart of price movements and derive useful information from it. That idea ran contrary to the conventional wisdom at that time.

Haurlan was also the first person (to our knowledge) to introduce the use of exponential moving averages for tracking stock prices. It was a piece of mathematics which he borrowed from his work in rocketry. EMAs were employed in the design of analog tracking circuitry because they made for easier designs. To calculate a 50-day simple moving average, for example, one must keep track of the last 50 data points, which was a lot of work for the early computers. For an exponential moving average calculation, the computer just needs to know the current data value, the prior EMA value, and the smoothing constant to apply to the new data. For more background, see: Who First Came Up With Moving Averages? and Exponential Moving Average Calculation.

In his Trade Levels Report, Haurlan included an indicator he called the Haurlan Index. There were actually 3 versions of this indicator, for short, intermediate, and long term timing purposes. Each one looked at the daily Advance-Decline (A-D) difference, and used different smoothing factors to incorporate that data into an indicator. From a 1972 issue of the Trade Levels report, Pete Haurlan defined the Haurlan Index as follows:

HI_today = I + K x (P - I)

where
I = previous day's Haurlan Index
P = NYSE Advances minus Declines
K = 50% for Short Term
    10% for Intermediate Term
    1% for Long Term

So each day's value for the Haurlan Index changes by a fraction of the difference between the A-D number and yesterday's Haurlan Index value.
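To make the recurrence concrete, here is a short added Python sketch (not from the original article) of the Haurlan Index as just described: each day the index moves a fraction K of the way from its prior value toward the day's advance-decline difference. The daily A-D numbers are invented for illustration.

    def haurlan_index(ad_diffs, k=0.01, start=0.0):
        """Haurlan Index: HI_today = I + K * (P - I)."""
        hi, out = start, []
        for p in ad_diffs:
            hi = hi + k * (p - hi)    # equivalently: k*p + (1 - k)*hi
            out.append(hi)
        return out

    # Hypothetical daily NYSE advances-minus-declines values:
    ad = [520, -310, 150, 760, -420, 95]
    print(haurlan_index(ad, k=0.01))  # Long Term (1% smoothing)
    print(haurlan_index(ad, k=0.50))  # Short Term (50% smoothing)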
The chart at the top of this article shows the Long Term Haurlan Index, which uses the 1% smoothing factor. It should be understood that the math involved is similar to the calculation of an exponential moving average, but it is not quite the same.

In preparing this article, I ran into an interesting discovery. We keep exponential moving averages for several indicators, including the cumulative Advance-Decline Line shown below with its 1% Trend (199-day EMA). And so I wanted to see what the similarities were between the Long Term Haurlan Index, which does 1% smoothing on the daily A-D difference, and the 1% Trend of the A-D Line, which is calculated on the values of the cumulative A-D Line.

The 1% Trend is a nice long-term moving average for the A-D Line, and for price indices as well. One additional tool that we like to look at is the distance between the A-D Line and the 1% Trend, or indeed any other moving average. We call this distance the "1% Trend Deviation", meaning how far prices have deviated away from that moving average. The chart below shows the 1% Trend Deviation for the NYSE A-D Line. For comparison, it also shows the same Long Term Haurlan Index shown above.

This is the interesting discovery: it turns out that the two indicators are nearly identical. On a chart, they are indistinguishable, and in the calculations the two are very close together, if we apply a multiplication factor of 100. This is part of the magic of the math involved with calculations using exponential moving averages.

The Key Takeaway

While the NYSE's A-D Line is still making higher highs right now, the Long Term Haurlan Index is showing a significant divergence relative to prices. This is similar to divergences we are seeing elsewhere, such as in the McClellan A-D Summation Index that is featured in our twice-monthly newsletter. These divergences are saying that even though the A-D Line may be making higher highs, it is not doing so with the same degree of vigor, which can be a setup for a meaningful intermediate- to long-term correction.

Pete Haurlan passed away in the late 1970s, and for several years the Trade Levels Report was continued by the late David Holt, who later went on to be research chief at Wedbush Morgan. It says something interesting about the era that in a February 1980 issue, the authors still included this caveat: "Data has been prepared largely by computer. Because of schedule requirements and the magnitude of the task, it is not possible to check thoroughly. Therefore, there may be errors and omissions." These days, we do not think much about the idea that a few decades ago, there was far less trust of data that came out of a computer without any human eyes making sure it was all okay. Times surely have changed. The computer data may or may not be any more accurate now than it was in 1980, but we are all more accustomed to the idea of consuming such data.

Paul Carroll wrote about the 3 versions of the Haurlan Index in an article which appeared in the January 1994 issue of Technical Analysis of Stocks and Commodities.

Tom McClellan
Editor, The McClellan Market Report
{"url":"http://www.mcoscillator.com/learning_center/weekly_chart/long_term_haurlan_index_divergence/","timestamp":"2014-04-20T10:48:15Z","content_type":null,"content_length":"24520","record_id":"<urn:uuid:0097d684-5e05-4d58-a453-d155428e2f03>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00242-ip-10-147-4-33.ec2.internal.warc.gz"}
Some better Jokes. Here are some.

Teacher: What is 2k + k? Student: 3000!

Q: What do you get if you divide the circumference of a jack-o-lantern by its diameter? A: Pumpkin Pi!

A math student is pestered by a classmate who wants to copy his homework assignment. The student hesitates, not only because he thinks it's wrong, but also because he doesn't want to be sanctioned for aiding and abetting. His classmate calms him down: "Nobody will be able to trace my homework to you: I'll be changing the names of all the constants and variables: a to b, x to y, and so on." Not quite convinced, but eager to be left alone, the student hands his completed assignment to the classmate for copying. After the deadline, the student asks: "Did you really change the names of all the variables?" "Sure!" the classmate replies. "When you called a function f, I called it g; when you called a variable x, I renamed it to y; and when you were writing about the log of x+1, I called it the timber of y+1."

Q: How does one insult a mathematician? A: You say: "Your brain is smaller than any e>0!"
{"url":"http://www.mathisfunforum.com/viewtopic.php?pid=254999","timestamp":"2014-04-17T12:36:06Z","content_type":null,"content_length":"22167","record_id":"<urn:uuid:ee65f35f-8c2c-4b28-9e63-2e7431f70869>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00325-ip-10-147-4-33.ec2.internal.warc.gz"}
We are still both novices with the program, but have seen its benefits nonetheless.
John Kattz, WA

I do not have any issues. I just wanted to let you know that I am glad I purchased your product. I also appreciate the updates as they not only make for a better looking product, but things seem to be more user friendly now.
A.R., Arkansas

It was hard to go back to school as an adult, especially when I had to redo math courses because it had been two decades since graduation. I needed help badly and thankfully, your product delivered. I can't thank you enough.
Trish Cooper, CO

My daughter is in 10th grade and son is in 7th grade. I used to spend hours teaching them arithmetic, equations and algebraic expressions. Then I bought this software. Now this algebra tutor teaches my children and they are improving at a better pace.
Gwen Ferber, TN

To watch my daughter, who just two years ago was so frustrated by algebra, accepting the highest honors in her entire school for her Outstanding Academic Achievement in Mathematics, was no doubt one of the proudest moments of my life. Thank you, Algebrator!
T.G., Florida
{"url":"http://roots-and-radicals.com/radicals-homework/how-to-use-the-ti-30x-iis--to-.html","timestamp":"2014-04-21T13:04:28Z","content_type":null,"content_length":"23275","record_id":"<urn:uuid:60b72af3-ad91-4b29-9e9f-c72267a866c2>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00660-ip-10-147-4-33.ec2.internal.warc.gz"}
An algebra question

June 16th 2010, 10:12 AM
Question: Simplify the following expression: ((2 x 5^n)^2 + 25^n) / 5^n
Answer: 5^(n+1)
Remarks: I have tried doing this question twice but both attempts were unsuccessful. Please help.

June 16th 2010, 10:28 AM
Use the laws of powers:
$\frac{(2 \cdot 5^n)^2 + 25^n}{5^n} = \frac{4 \cdot 5^{2n} + 5^{2n}}{5^n}= \frac{5 \cdot 5^{2n}}{5^n}=5 \cdot 5^n = 5^{n+1}$

June 16th 2010, 10:33 AM
Slight question: I understand most of it, but how did you move along from the 1st step of your answer to the second one? That is, in specific terms, where did the 4 go?

June 16th 2010, 10:39 AM
Considering the numerator (top of fraction) only:
$4 \cdot 5^{2n} + 5^{2n} = (4+1) \cdot 5^{2n} = 5 \times 5^{2n} = 5^{2n+1}$

June 16th 2010, 10:41 AM
Thanks for the message. But I guess you misinterpreted the sign of this ".". It is not 4.5 (4 decimal 5) but 4 . 5, meaning 4 x 5 = 4 times 5. Hope this clears up the misconception.

June 16th 2010, 10:47 AM
Still don't understand this step. Where did the 1 come from? What about the positive sign? Can someone explain? Thanks!

June 16th 2010, 11:18 AM
It is only factorising; I was not interpreting your dot as a decimal. In general: 4x + x = (4+1)x = 5x. You have $4 \times 5^{2n} + 1 \times 5^{2n}$ = $5 \times 5^{2n}$, which says: "4 lots of $5^{2n}$ plus 1 lot of $5^{2n}$" equals "5 lots of $5^{2n}$".

June 17th 2010, 01:03 PM
Hi ecolover, the answer is correct: 5^(n+1). Follow the exponent rules carefully; the 4 disappears naturally because 4 + 1 = 5. You should get it from this clue.
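A quick numeric spot-check of the simplification (a small Python script, not part of the original thread): for several values of n the original expression and 5^(n+1) agree exactly.

for n in range(1, 8):
    numerator = (2 * 5**n)**2 + 25**n   # (2 x 5^n)^2 + 25^n
    assert numerator % 5**n == 0        # divides evenly by 5^n
    assert numerator // 5**n == 5**(n + 1)
print("((2 x 5^n)^2 + 25^n) / 5^n == 5^(n+1) for n = 1..7")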
{"url":"http://mathhelpforum.com/algebra/148707-algebra-question-print.html","timestamp":"2014-04-17T21:26:47Z","content_type":null,"content_length":"9142","record_id":"<urn:uuid:71c38ad1-8475-4ed3-a0a1-ea746546fdf4>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00406-ip-10-147-4-33.ec2.internal.warc.gz"}
Basis for Subspace

March 21st 2007, 04:27 PM
In the vector space of all real-valued functions, find a basis for the subspace spanned by {sin(t), sin(2t), sin(t)cos(t)}.
I can clearly see that sin(2t) can be written as 2*sin(t)cos(t), so that means that sin(2t) and sin(t)cos(t) are linearly dependent. Am I right so far? So where do I go from here?

Since sin(t) and sin(2t) are linearly independent, it looks to me like your basis is {sin(t), sin(2t)} or {sin(t), sin(t)cos(t)}.

Of course. Two possible bases for a 2D Euclidean space are the familiar: {i, j} <-- Cartesian basis; {r, theta} <-- plane polar basis (where all of the above are unit vectors).

I want to add to topsquark's post, since he does not explain why those two are linearly independent (and hence form a basis for the subspace). It is because they are not constant multiples of each other, using the special theorem for exactly two vectors.

Are they really linearly independent?? Are sin(t) and sin(t)*cos(t) really linearly independent??? Because when t=0 both are zero, and so is the linear combination, and they are not scalar multiples...??
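On the last question: vanishing at a single point is not enough. A relation a*sin(t) + b*sin(2t) = 0 must hold for every t before the functions are linearly dependent. A small numeric illustration (our own addition, with arbitrarily chosen sample points) shows that evaluating at just two values of t already forces a = b = 0.

import math

t1, t2 = 1.0, 2.0
# The pair of equations a*sin(ti) + b*sin(2*ti) = 0 (i = 1, 2) has only
# the solution a = b = 0 when its determinant is nonzero.
det = math.sin(t1) * math.sin(2 * t2) - math.sin(2 * t1) * math.sin(t2)
print(det)  # about -1.46, nonzero, so sin(t) and sin(2t) are independent

Since sin(2t) = 2 sin(t)cos(t), the same determinant also settles the independence of sin(t) and sin(t)cos(t).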
{"url":"http://mathhelpforum.com/advanced-algebra/12833-basis-subspace.html","timestamp":"2014-04-20T01:47:59Z","content_type":null,"content_length":"52978","record_id":"<urn:uuid:03d1a630-b8ff-46a8-89c6-b005f25dcc44>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00377-ip-10-147-4-33.ec2.internal.warc.gz"}
Real Number

January 6th 2009, 08:15 PM
Good evening all. Our class is having an online debate with the following question; any feedback would be appreciated:
In the Real Number realm, ab = 0 => a = 0 or b = 0. Is the same theorem true in the Complex Number realm? (Why or why not?)

January 6th 2009, 08:24 PM
In the complex number 'realm', for a complex number to be considered equal to $0$, both its real and imaginary parts must be equal to zero!
$z_1 = a+ib$
$z_2 = c+id$
$z_1 \times z_2 = ac+adi+cbi-bd = (ac-bd) + i(ad+cb) = 0 + 0i$
Hence $ac-bd = 0$ and $ad+cb=0$ for $z_1 \times z_2=0$.
So let's look at the case where $z_1 = a+ib = 0+0i$. For this, $a=0$ and $b = 0$, which means that $ac-bd = (0)c-(0)d = 0$, so the first equation is satisfied; and $(0)d+(0)c = 0$, so the second is satisfied.
Now look at the case where $z_2 = c+id = 0+0i$. For this, $c=0$ and $d = 0$, which means that $ac-bd = a(0)-b(0) = 0$, so the first equation is satisfied; and $a(0)+(0)b = 0$, so the second is satisfied.

January 6th 2009, 09:11 PM
Ahhh, I see what you were asking. But the question remains: are there any combinations of $z_1$ and $z_2$ for which their product is zero, but for which neither of them is 0?!
Well, let's try this then. For a complex number to be zero, its modulus must be zero, yes? So let's find the modulus of our product.
$|z_1.z_2| = |(ac-bd) + i(ad+bc)|$
$= \sqrt{(ac-bd)^2 + (ad+bc)^2}$
$= \sqrt{(ac)^2-2abcd+(bd)^2 + (ad)^2+2abcd+(bc)^2}$
$= \sqrt{(ac)^2+(bd)^2 + (ad)^2+(bc)^2} = 0$
Clearly, if this is zero, then the expression inside the square root is zero!
$(ac)^2+(bd)^2 + (ad)^2+(bc)^2 = 0$
$a^2(c^2+d^2)+b^2(c^2+d^2) = 0$
$(a^2+b^2)(c^2+d^2) = 0$
Hence, by the logic of REAL numbers (a, b, c and d must all be real, remember!) either:
$a^2+b^2 = 0$
$c^2+d^2 = 0$
For these to be true: $a = \pm \sqrt{-b^2}$ and $c = \pm \sqrt{-d^2}$. Since $b^2$ and $d^2$ are nonnegative, these equations have no nonzero real solutions: for real numbers, $a^2+b^2 = 0$ forces $a = b = 0$, and $c^2+d^2 = 0$ forces $c = d = 0$, which makes the corresponding complex number itself zero. And by the definition of the complex numbers $z_1$ and $z_2$, a, b, c and d must be REAL numbers. Hence there are no two non-zero complex numbers whose product is zero, and hence if $z_1.z_2 = 0+0i$ then $z_1 = 0+0i$ or $z_2 = 0+0i$.
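A numeric spot-check of the key factorization step above, (ac - bd)^2 + (ad + bc)^2 = (a^2 + b^2)(c^2 + d^2), over random real inputs (an illustration, not part of the original thread):

import random

random.seed(0)
for _ in range(1000):
    a, b, c, d = (random.uniform(-10, 10) for _ in range(4))
    lhs = (a*c - b*d)**2 + (a*d + b*c)**2
    rhs = (a*a + b*b) * (c*c + d*d)
    assert abs(lhs - rhs) < 1e-9 * max(1.0, rhs)
print("identity holds on 1000 random samples")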
{"url":"http://mathhelpforum.com/algebra/67123-real-number-print.html","timestamp":"2014-04-17T06:11:43Z","content_type":null,"content_length":"13127","record_id":"<urn:uuid:15ca5f12-130a-41f4-9508-f6564c464472>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00545-ip-10-147-4-33.ec2.internal.warc.gz"}
The digraph module implements a version of labeled directed graphs. What makes the graphs implemented here non-proper directed graphs is that multiple edges between vertices are allowed. However, the customary definition of directed graphs will be used in the text that follows.

A directed graph (or just "graph") is a pair (V, E) of a finite set V of vertices and a finite set E of directed edges (or just "edges"). The set of edges E is a subset of V × V (the Cartesian product of V with itself). In this module, V is allowed to be empty; the so obtained unique graph is called the empty graph. Both vertices and edges are represented by unique Erlang terms.

Graphs can be annotated with additional information. Such information may be attached to the vertices and to the edges of the graph. A graph which has been annotated is called a labeled graph, and the information attached to a vertex or an edge is called a label. Labels are Erlang terms.

An edge e = (v, w) is said to emanate from vertex v and to be incident on vertex w. The out-degree of a vertex is the number of edges emanating from that vertex. The in-degree of a vertex is the number of edges incident on that vertex. If there is an edge emanating from v and incident on w, then w is said to be an out-neighbour of v, and v is said to be an in-neighbour of w.

A path P from v[1] to v[k] in a graph (V, E) is a non-empty sequence v[1], v[2], ..., v[k] of vertices in V such that there is an edge (v[i],v[i+1]) in E for 1 <= i < k. The length of the path P is k-1. P is simple if all vertices are distinct, except that the first and the last vertices may be the same. P is a cycle if the length of P is not zero and v[1] = v[k]. A loop is a cycle of length one. A simple cycle is a path that is both a cycle and simple. An acyclic graph is a graph that has no cycles.

new(Type) -> graph() | {error, Reason}

Type = [cyclic | acyclic | public | private | protected]
Reason = {unknown_type, term()}

Returns an empty graph with properties according to the options in Type:

cyclic: Allow cycles in the graph (default).
acyclic: The graph is to be kept acyclic.
public: The graph may be read and modified by any process.
protected: Other processes can only read the graph (default).
private: The graph can be read and modified by the creating process only.

If an unrecognized type option T is given, then {error, {unknown_type, T}} is returned.

delete(G) -> true

Deletes the graph G. This call is important because graphs are implemented with ets. There is no garbage collection of ets tables. The graph will, however, be deleted if the process that created the graph terminates.

info(G) -> InfoList

G = graph()
InfoList = [{cyclicity, Cyclicity}, {memory, NoWords}, {protection, Protection}]
Cyclicity = cyclic | acyclic
Protection = public | protected | private
NoWords = integer() >= 0

Returns a list of {Tag, Value} pairs describing the graph G. The following pairs are returned:
□ {cyclicity, Cyclicity}, where Cyclicity is cyclic or acyclic, according to the options given to new.
□ {memory, NoWords}, where NoWords is the number of words allocated to the ets tables.
□ {protection, Protection}, where Protection is public, protected or private, according to the options given to new.

add_vertex(G, V, Label) -> vertex()
add_vertex(G, V) -> vertex()
add_vertex(G) -> vertex()

G = graph()
V = vertex()
Label = label()

add_vertex/3 creates (or modifies) the vertex V of the graph G, using Label as the (new) label of the vertex. Returns V. add_vertex(G, V) is equivalent to add_vertex(G, V, []).
add_vertex/1 creates a vertex using the empty list as label, and returns the created vertex. Terms of the form ['$v' | N], where N is an integer >= 1, are used for representing the created vertices.

vertex(G, V) -> {V, Label} | false

G = graph()
V = vertex()
Label = label()

Returns {V, Label}, where Label is the label of the vertex V of the graph G, or false if there is no vertex V of the graph G.

no_vertices(G) -> integer() >= 0

Returns the number of vertices of the graph G.

vertices(G) -> Vertices

G = graph()
Vertices = [vertex()]

Returns a list of all vertices of the graph G, in some unspecified order.

del_vertex(G, V) -> true

Deletes the vertex V from the graph G. Any edges emanating from V or incident on V are also deleted.

del_vertices(G, Vertices) -> true

G = graph()
Vertices = [vertex()]

Deletes the vertices in the list Vertices from the graph G.

add_edge(G, E, V1, V2, Label) -> edge() | {error, Reason}
add_edge(G, V1, V2, Label) -> edge() | {error, Reason}
add_edge(G, V1, V2) -> edge() | {error, Reason}

G = graph()
E = edge()
V1 = V2 = vertex()
Label = label()
Reason = {bad_edge, Path} | {bad_vertex, V}
Path = [vertex()]

add_edge/5 creates (or modifies) the edge E of the graph G, using Label as the (new) label of the edge. The edge is emanating from V1 and incident on V2. Returns E.

add_edge(G, V1, V2, Label) is equivalent to add_edge(G, E, V1, V2, Label), where E is a created edge. Terms of the form ['$e' | N], where N is an integer >= 1, are used for representing the created edges. add_edge(G, V1, V2) is equivalent to add_edge(G, V1, V2, []).

If the edge would create a cycle in an acyclic graph, then {error, {bad_edge, Path}} is returned. If either of V1 or V2 is not a vertex of the graph G, then {error, {bad_vertex, V}} is returned, V = V1 or V = V2.

edge(G, E) -> {E, V1, V2, Label} | false

G = graph()
E = edge()
V1 = V2 = vertex()
Label = label()

Returns {E, V1, V2, Label}, where Label is the label of the edge E emanating from V1 and incident on V2 of the graph G. If there is no edge E of the graph G, then false is returned.

edges(G, V) -> Edges

G = graph()
V = vertex()
Edges = [edge()]

Returns a list of all edges emanating from or incident on V of the graph G, in some unspecified order.

no_edges(G) -> integer() >= 0

Returns the number of edges of the graph G.

edges(G) -> Edges

G = graph()
Edges = [edge()]

Returns a list of all edges of the graph G, in some unspecified order.

del_edge(G, E) -> true

Deletes the edge E from the graph G.

del_edges(G, Edges) -> true

G = graph()
Edges = [edge()]

Deletes the edges in the list Edges from the graph G.

out_neighbours(G, V) -> Vertices

G = graph()
V = vertex()
Vertices = [vertex()]

Returns a list of all out-neighbours of V of the graph G, in some unspecified order.

in_neighbours(G, V) -> Vertices

G = graph()
V = vertex()
Vertices = [vertex()]

Returns a list of all in-neighbours of V of the graph G, in some unspecified order.

out_edges(G, V) -> Edges

G = graph()
V = vertex()
Edges = [edge()]

Returns a list of all edges emanating from V of the graph G, in some unspecified order.

in_edges(G, V) -> Edges

G = graph()
V = vertex()
Edges = [edge()]

Returns a list of all edges incident on V of the graph G, in some unspecified order.

out_degree(G, V) -> integer() >= 0

Returns the out-degree of the vertex V of the graph G.

in_degree(G, V) -> integer() >= 0

Returns the in-degree of the vertex V of the graph G.

del_path(G, V1, V2) -> true

G = graph()
V1 = V2 = vertex()

Deletes edges from the graph G until there are no paths from the vertex V1 to the vertex V2. A sketch of the procedure employed: Find an arbitrary simple path v[1], v[2], ..., v[k] from V1 to V2 in G. Remove all edges of G emanating from v[i] and incident to v[i+1] for 1 <= i < k (including multiple edges). Repeat until there is no path between V1 and V2.
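The procedure lends itself to a compact implementation. Below is a minimal sketch in Python (rather than Erlang) of the algorithm just described, assuming a plain adjacency-set representation with at most one edge per vertex pair; the real module also removes multiple edges between a pair in one step.

def find_path(graph, v1, v2):
    # Depth-first search for a simple path from v1 to v2, or None.
    stack = [(v1, [v1])]
    while stack:
        v, path = stack.pop()
        for w in graph.get(v, set()):
            if w == v2:
                return path + [w]
            if w not in path:
                stack.append((w, path + [w]))
    return None

def del_path(graph, v1, v2):
    # Delete the edges along some path, then repeat until no path remains.
    path = find_path(graph, v1, v2)
    while path is not None:
        for a, b in zip(path, path[1:]):
            graph[a].discard(b)   # remove the edge (a, b)
        path = find_path(graph, v1, v2)

g = {1: {2, 3}, 2: {4}, 3: {4}, 4: set()}
del_path(g, 1, 4)
print(g)  # 4 is no longer reachable from 1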
get_path(G, V1, V2) -> Vertices | false

G = graph()
V1 = V2 = vertex()
Vertices = [vertex()]

Tries to find a simple path from the vertex V1 to the vertex V2 of the graph G. Returns the path as a list [V1, ..., V2] of vertices, or false if no simple path from V1 to V2 of length one or more exists. The graph G is traversed in a depth-first manner, and the first path found is returned.

get_short_path(G, V1, V2) -> Vertices | false

G = graph()
V1 = V2 = vertex()
Vertices = [vertex()]

Tries to find an as short as possible simple path from the vertex V1 to the vertex V2 of the graph G. Returns the path as a list [V1, ..., V2] of vertices, or false if no simple path from V1 to V2 of length one or more exists. The graph G is traversed in a breadth-first manner, and the first path found is returned.

get_cycle(G, V) -> Vertices | false

G = graph()
V = vertex()
Vertices = [vertex()]

If there is a simple cycle of length two or more through the vertex V, then the cycle is returned as a list [V, ..., V] of vertices; otherwise, if there is a loop through V, then the loop is returned as a list [V]. If there are no cycles through V, then false is returned. get_path/3 is used for finding a simple cycle through V.

get_short_cycle(G, V) -> Vertices | false

G = graph()
V = vertex()
Vertices = [vertex()]

Tries to find an as short as possible simple cycle through the vertex V of the graph G. Returns the cycle as a list [V, ..., V] of vertices, or false if no simple cycle through V exists. Note that a loop through V is returned as the list [V, V]. get_short_path/3 is used for finding a simple cycle through V.

See Also

Tony Rogvall - support@erlang.ericsson.se

stdlib 1.9.4
Copyright © 1991-2001 Ericsson Utvecklings AB
{"url":"http://www.erlang.org/documentation/doc-5.0.2/lib/stdlib-1.9.4/doc/html/digraph.html","timestamp":"2014-04-20T15:59:10Z","content_type":null,"content_length":"21281","record_id":"<urn:uuid:dd0a9291-73cc-4002-9c30-1844dfe5da55>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00635-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions

Topic: [ap-calculus] larson pg 137 exercise 69
Replies: 1   Last Post: Oct 17, 2012 3:47 PM

RE: [ap-calculus] larson pg 137 exercise 69
Posted: Oct 17, 2012 3:47 PM

This ap-calculus EDG will be closing in the next few weeks. Please sign up for the new AP Calculus Teacher Community Forum at https://apcommunity.collegeboard.org/getting-started and post messages there.

Here is how I solved the problem that you asked.

Step 1: We need to find the point where the tangent line and the graph intersect, so we set the function equal to the tangent line: 5x - 4 = x^2 - kx
Step 2: We need to find the derivative, so we get f'(x) = 2x - k
Step 3: We know the value of the derivative at the intersection point is 5, so 5 = 2x - k, or k = 2x - 5
Step 4: Now let's plug k into Step 1:
5x - 4 = x^2 - x(2x - 5)
5x - 4 = x^2 - 2x^2 + 5x
-4 = -x^2
x = +/- 2
Step 5: Now let's plug our two x values into Step 3 to find our k values:
k = 2(-2) - 5 = -9
k = 2(2) - 5 = -1

Take care,
Douglas A. Dosky
(c) 614-260-4699
email: ddosky@worthington.k12.oh.us
Thomas Worthington High School
AP Calculus BC and AB Teacher

From: Dwayne Wellington [mrdw27@gmail.com]
Sent: Wednesday, October 17, 2012 9:29 AM
To: AP Calculus
Subject: [ap-calculus] larson pg 137 exercise 69

Good morning, I was working on exercise 69 from Larson Early Transcendentals and am missing the key to answering the question.

Find the value of k such that the line is tangent to the graph of the function.
(function) f(x) = x^2 - kx
(line) y = 5x - 4

The answers are k = -1 and k = -9, but I can't seem to get that when I work out the problem. Thank you for the assistance in advance.
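A quick way to confirm the two k values is to check that for each one the line meets the parabola in a double root, which is the tangency condition. A short sympy sketch (our own check, not part of the original thread):

from sympy import symbols, factor

x = symbols('x')
for k in (-1, -9):
    # set f(x) - (5x - 4) = 0 and factor; a squared factor means tangency
    print(k, factor(x**2 - k*x - (5*x - 4)))
# k = -1: x^2 - 4x + 4 = (x - 2)^2, a double root at x = 2
# k = -9: x^2 + 4x + 4 = (x + 2)^2, a double root at x = -2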
{"url":"http://mathforum.org/kb/thread.jspa?threadID=2409634&messageID=7907634","timestamp":"2014-04-17T18:37:01Z","content_type":null,"content_length":"17468","record_id":"<urn:uuid:2cfe2943-89fb-469d-8d58-ac12bbd756e3>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00265-ip-10-147-4-33.ec2.internal.warc.gz"}
Kids.Net.Au - Encyclopedia > Linear

In mathematics, a linear function f is one which satisfies the following two properties:
• Superposition: f(x + y) = f(x) + f(y)
• Homogeneity: f(αx) = αf(x) for all α

In this definition, x is not necessarily a real number, but can in general be a member of any vector space. In the case that the underlying field is the rational numbers or a finite field, superposition is enough to imply homogeneity. However, in the case of the complex numbers, both relations are needed.

We are often concerned with bounded linear functions, which are equivalent to continuous ones. Although it is possible for a function to be linear and unbounded, these functions are usually of little practical importance.

The concept of linearity can be extended to linear operators, which are linear if they satisfy the superposition and homogeneity relations. Examples of linear operators are del and the derivative function. When a differential equation can be expressed in linear form, it is particularly easy to solve by breaking the equation up into smaller pieces, solving each of those pieces, and adding the solutions up. Nonlinear equations and functions are of interest to physicists and mathematicians because they are hard to solve and give rise to interesting phenomena such as chaos.

In a slightly different usage to the above, a polynomial of degree 1 is said to be linear. Over the reals, such a linear function is of the form:

f(x) = mx + c

m is often called the slope or gradient; c the intercept, which gives the point of intersection between the graph of the function and the y-axis.

Note that this usage of the term linear is not the same as the above, because linear polynomials over the real numbers do not in general satisfy either superposition or homogeneity. In fact, they do so if and only if c = 0: f(x + y) = m(x + y) + c, while f(x) + f(y) = m(x + y) + 2c, and these agree only when c = 0.

All Wikipedia text is available under the terms of the GNU Free Documentation License
{"url":"http://encyclopedia.kids.net.au/page/li/Linear","timestamp":"2014-04-19T19:48:07Z","content_type":null,"content_length":"16075","record_id":"<urn:uuid:70b801d8-d2f2-4d27-9ccc-7aec97eccbb6>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00373-ip-10-147-4-33.ec2.internal.warc.gz"}
US Patent 7,072,709 - Method and apparatus for determining alternans data of an ECG signal

The present invention relates to cardiology, and more specifically to methods and apparatus for determining alternans data of an electrocardiogram ("ECG") signal.

Alternans are a subtle beat-to-beat change in the repeating pattern of an ECG signal. Several studies have demonstrated a high correlation between an individual's susceptibility to ventricular arrhythmia and sudden cardiac death and the presence of a T-wave alternans ("TWA") pattern of variation in the individual's ECG signal. While an ECG signal typically has an amplitude measured in millivolts, an alternans pattern of variation with an amplitude on the order of a microvolt may be clinically significant. Accordingly, an alternans pattern of variation is typically too small to be detected by visual inspection of the ECG signal in its typical recorded resolution. Instead, digital signal processing and quantification of the alternans pattern of variation is necessary. Such signal processing and quantification of the alternans pattern of variation is complicated by the presence of noise and by time shift of the alternans pattern of variation relative to the alignment points of each beat, which can be caused by limitations of alignment accuracy and/or physiological variations in the measured ECG signal. Current signal processing techniques utilized to detect TWA patterns of variation in an ECG signal include spectral domain methods and time domain methods.

In light of the above, a need exists for a technique for detecting TWA patterns of variation in an ECG signal that provides improved performance as a stand-alone technique and as an add-on to other techniques. Accordingly, one or more embodiments of the invention provide methods and apparatus for determining alternans data of an ECG signal. In some embodiments, the method can include determining at least one value representing at least one morphology feature of each beat of the ECG signal and generating a set of data points based on a total quantity of values and a total quantity of beats. The data points can each include a first value determined using a first mathematical function and a second value determined using a second mathematical function. The method can also include separating the data points into a first group of points and a second group of points and generating a feature map by plotting the first group of points and the second group of points in order to assess an alternans pattern of variation.

FIG. 1 is a schematic diagram illustrating a cardiac monitoring system according to the invention.
FIG. 2 illustrates an ECG signal.
FIG. 3 is a flow chart illustrating one embodiment of a method of the invention.
FIG. 4 illustrates a maximum morphology feature.
FIG. 5 illustrates a minimum morphology feature.
FIG. 6 illustrates an area morphology feature.
FIG. 7 illustrates another area morphology feature.
FIG. 8 illustrates a further area morphology feature.
FIG. 9 illustrates still another area morphology feature.
FIG. 10 illustrates a plurality of beats, each beat being divided into a plurality of portions.
FIG. 11 illustrates a window establishing a size of one of the plurality of portions of FIG. 10.
FIG. 12 illustrates a feature matrix.
FIG. 13 illustrates a decomposition of the feature matrix of FIG. 12 as generated by a principal component analysis.
FIG. 14 illustrates a plot of values of data corresponding to values representative of a morphology feature.
FIG. 15 illustrates a determination of difference features using the values plotted in FIG. 14.
FIG. 16 illustrates another determination of difference features using the values plotted in FIG. 14.
FIG. 17 illustrates a further determination of a difference feature using the values plotted in FIG. 14.
FIG. 18 illustrates a feature map of first and second groups of points generated using values of a vector of data.
FIG. 19 illustrates a feature map generated using values of a vector of data generated by performing a principal component analysis on a feature matrix including the vector of data utilized to generate the feature map of FIG. 18.
FIG. 20 illustrates a feature map of first and second groups of points generated using a first mathematical function and a second mathematical function.
FIG. 21 illustrates a feature map of third and fourth groups of points generated using a third mathematical function and a fourth mathematical function.
FIG. 22 illustrates a feature map of fifth and sixth groups of points generated using a fifth mathematical function and a sixth mathematical function.
FIG. 23 illustrates a distance between a first center point of a first group of points and a second center point of a second group of points, each plotted to form a feature map.
FIG. 24 illustrates a spectral graph generated using values of a vector of data.
FIG. 25 illustrates a spectral graph generated using values of a vector of data generated by performing a principal component analysis on a feature matrix including the vector of data utilized to generate the spectral graph of FIG. 24.

Before any embodiments of the invention are explained in detail, it is to be understood that the invention is not limited in its application to the details of construction and the arrangement of components set forth in the following description or illustrated in the following drawings. The invention is capable of other embodiments and of being practiced or of being carried out in various ways. Also, it is to be understood that the phraseology and terminology used herein is for the purpose of description and should not be regarded as limiting. The use of "including," "comprising" or "having" and variations thereof herein is meant to encompass the items listed thereafter and equivalents thereof as well as additional items. The terms "mounted," "connected" and "coupled" are used broadly and encompass both direct and indirect mounting, connecting and coupling. Further, "connected" and "coupled" are not restricted to physical or mechanical connections or couplings, and can include electrical connections or couplings, whether direct or indirect.

In addition, it should be understood that embodiments of the invention include both hardware and electronic components or modules that, for purposes of discussion, may be illustrated and described as if the majority of the components were implemented solely in hardware. However, one of ordinary skill in the art, and based on a reading of this detailed description, would recognize that, in at least one embodiment, the electronic based aspects of the invention may be implemented in software. As such, it should be noted that a plurality of hardware and software based devices, as well as a plurality of different structural components, may be utilized to implement the invention.
Furthermore, and as described in subsequent paragraphs, the specific mechanical configurations illustrated in the drawings are intended to exemplify embodiments of the invention, and other alternative mechanical configurations are possible.

FIG. 1 illustrates a cardiac monitoring system 10 according to some embodiments of the invention. The cardiac monitoring system 10 can acquire ECG data, can process the acquired ECG data to determine alternans data, and can output the alternans data to a suitable output device (e.g., a display, a printer, and the like). As used herein and in the appended claims, the term "alternans data" includes TWA data, or any other type of alternans data that is capable of being determined using one or more embodiments of the invention.

The cardiac monitoring system 10 can acquire ECG data using a data acquisition module. It should be understood that ECG data can be acquired from other sources (e.g., from storage in a memory device or a hospital information system). The data acquisition module can be coupled to a patient by an array of sensors or transducers which may include, for example, electrodes coupled to the patient for obtaining an ECG signal. In the illustrated embodiment, the electrodes can include a right arm electrode RA; a left arm electrode LA; chest electrodes V1, V2, V3, V4, V5 and V6; a right leg electrode RL; and a left leg electrode LL for acquiring a standard twelve-lead, ten-electrode ECG. In other embodiments, alternative configurations of sensors or transducers (e.g., fewer than ten electrodes) can be used to acquire a standard or non-standard ECG signal.

A representative ECG signal is schematically illustrated in FIG. 2. The ECG signal can include [G] beats, including beat-one B[1] through beat-[G] B[G], where [G] is a value greater than one. As used herein and in the appended claims, a capital letter in brackets represents a quantity, and a capital letter without brackets is a reference character (similar to a typical reference numeral).

The data acquisition module can include filtering and digitization components for producing digitized ECG data representing the ECG signal. In some embodiments, the ECG data can be filtered using low pass and baseline wander removal filters to remove high frequency noise and low frequency artifacts. The ECG data can, in some embodiments, be filtered by removing arrhythmic beats from the ECG data and by eliminating noisy beats from the ECG data.

The cardiac monitoring system 10 can include a processor and a memory associated with the processor. The processor can execute a software program stored in the memory to perform a method of the invention as illustrated in FIG. 3. FIG. 3 is a flow chart of a method of the invention used to determine and display alternans data of an ECG signal. Although the cardiac monitoring system 10 is described herein as including a single processor that executes a single software program, it should be understood that the system can include multiple processors, memories, and/or software programs. Further, the method of the invention illustrated in FIG. 3 can be performed manually or using other systems.

As shown in FIG. 3, the processor can receive (at 100) ECG data representing an ECG signal. The acquired ECG data can be received (e.g., from a patient in real-time via the data acquisition module or from storage in a memory device) and can be processed as necessary. The ECG data can represent continuous and/or non-continuous beats of the ECG signal.
In one embodiment, the ECG data, or a portion thereof, can be parsed into a plurality of data sets. Each data set can represent a portion of a respective beat B of the ECG signal (e.g., the T-wave portion of a respective beat B of the ECG signal), a portion of a respective odd or even median beat of the ECG signal, a portion of a respective odd or even mean beat of the ECG signal, and the like. The parsed data sets can be saved in an array (e.g., a waveform array). In other embodiments, the ECG data can be saved in a single data set, or alternatively, saved in multiple data sets.

The processor can determine (at 102) a quantity [C] of values W representing a quantity [D] of morphology features F of a beat B (e.g., beat-one B[1]) of the quantity [G] of beats, where [C] and [D] are each a quantity greater than or equal to one. In some embodiments, a single value W is determined for each morphology feature F (i.e., the quantity of [C] is equal to the quantity of [D]). However, in some embodiments, multiple values W are determined for a single morphology feature F and/or a single value W is determined for multiple morphology features F. Determining a quantity [C] of values W representing a quantity [D] of morphology features F can be repeated for a quantity [H−1] of beats of the quantity [G] of beats represented in the collected ECG data, where a quantity [H] is greater than or equal to one and less than or equal to the quantity [G].

In some embodiments, any morphology features F of the beats B can be determined. FIGS. 4–9 illustrate some examples of such morphology features F. FIG. 4 illustrates a maximum morphology feature (i.e., the maximum value of the data set representing the T-wave portion of a respective beat). FIG. 5 illustrates a minimum morphology feature (i.e., the minimum value of the data set representing the T-wave portion of a respective beat). FIG. 6 illustrates an area morphology feature (i.e., the area between a curve formed by the data set representing the T-wave portion of a respective beat and a baseline established by the minimum value of the data set). FIG. 7 illustrates another area morphology feature (i.e., the area between a curve formed by the data set representing the T-wave portion of a respective beat and a baseline established by the maximum value of the data set and a point of the data set representing the maximum up-slope of the curve). FIG. 8 illustrates still another area morphology feature (i.e., the area between a curve formed by the data set representing the T-wave portion of a respective beat and a baseline established by the minimum value of the data set and a point of the data set representing the maximum down-slope of the curve). FIG. 9 illustrates yet another area morphology feature (i.e., the area between a curve formed by the data set representing the T-wave portion of a respective beat and a baseline established by a point of the data set representing the maximum up-slope of the curve and a point of the data set representing the maximum down-slope of the curve). Other types of maximum, minimum, and area morphology features can also be used.

Other examples of morphology features that can be used include amplitude morphology features (e.g., an amplitude of a point representing the maximum down-slope of the curve formed by the data set representing the T-wave portion of a respective beat) and slope morphology features (e.g., a maximum positive slope of the curve formed by the data set representing the T-wave portion of a respective beat).
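As a concrete illustration of some of the morphology features just described, here is a minimal Python sketch. The sample values are invented stand-ins for one beat's parsed T-wave data set, and the simple rectangle-rule sum is just one plausible way to compute an area feature.

samples = [0.02, 0.08, 0.19, 0.31, 0.27, 0.15, 0.05, 0.01]

maximum = max(samples)                    # maximum morphology feature
minimum = min(samples)                    # minimum morphology feature
# area between the curve and a baseline at the minimum value (FIG. 6 style)
area = sum(v - minimum for v in samples)
# slope morphology feature: maximum positive slope between adjacent samples
max_up_slope = max(b - a for a, b in zip(samples, samples[1:]))

print(maximum, minimum, area, max_up_slope)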
Another example is mathematical model morphology features, obtained by determining values representing a mathematical model of the curve formed by the data set representing the T-wave portion of a respective beat using, for example, a Gaussian function model, a power-of-cosine function model, and/or a bell function model. A further example is time interval morphology features (e.g., a time interval between a maximum value and a minimum value of the data set representing a T-wave portion of a respective beat). Still another example is shape correlation morphology features, obtained by determining a value representing a shape correlation of the curve formed by the data set representing the T-wave portion of a respective beat using, for example, a cross-correlation method and/or an absolute difference correlation method. An additional example is ratio morphology features (e.g., an ST:T ratio). Any other suitable morphology feature can be used in other embodiments of the invention.

In some embodiments, as discussed above, the morphology features can be determined using values of the data set(s) of the ECG data. In other embodiments, the morphology features can be determined using values representing the values of the data set(s) of the ECG data (e.g., a morphology feature of the first derivative of the curve formed by a respective data set). Morphology features can be determined using an entire parsed data set, as illustrated in FIGS. 4–9, or alternatively, using a portion thereof, as illustrated in FIGS. 10 and 11. As shown in FIG. 10, each of the beats B can be divided up into a plurality of portions. The center of each portion can be defined by a vertical divider line. As shown in FIG. 11, a window can be established to define the size of the portion. The window can include a single value of the data set (e.g., a value representing the point where the divider line crosses the curve formed by the data set), or values of the data set representing any number of points adjacent the intersection of the curve and the divider line.

As shown in FIG. 3, the processor can generate (at 104) a feature matrix. As used herein and in the appended claims, the term "matrix" includes any table of values. The generated feature matrix can include a quantity [C] of values W representing each of the quantity [D] of morphology features F for each of the quantity [H] of beats B (i.e., the feature matrix includes a quantity [C]×[H] of values W). Each value W can directly represent the determined morphology feature F (e.g., the actual value of the determined area morphology feature), or can indirectly represent the determined morphology feature (e.g., a normalized value of the determined area morphology feature). A representative column-wise feature matrix A is illustrated in FIG. 12. The feature matrix A can include [C] columns and [H] rows. The feature matrix A can use the columns to represent the quantity [D] of morphology features F (i.e., each column includes a quantity [H] of values W of the same morphology feature as determined for each of the quantity [H] of beats B), and the rows to represent the beats B (i.e., each row includes a quantity [C] of values representing the quantity [D] of morphology features for each of the quantity [H] of beats). The values W of the morphology features F can be represented in the illustrated feature matrix A using the notation W[I]B[J] and F[I]B[J], where I is a value between one and [C] (the quantity of [C] being equal to the quantity of [D]) and J is a value between one and [H].
In other embodiments, the feature matrix A can be arranged in other suitable manners. In yet other embodiments, the values W representing the morphology features F can be saved for later processing.

As shown in FIG. 3, the processor can preprocess (at 106) the feature matrix A. In some embodiments, a principal component analysis (PCA) can be performed on the feature matrix A. PCA involves a multivariate mathematical procedure known as an eigen analysis, which rotates the data to maximize the explained variance of the feature matrix A. In other words, a set of correlated variables is transformed into a set of uncorrelated variables which are ordered by decreasing variability, the uncorrelated variables being linear combinations of the original variables. PCA is used to decompose the feature matrix A into three matrices, as illustrated in FIG. 13. The three matrices can include a matrix U, a matrix S, and a matrix V. The matrix U can include the principal component vectors (e.g., the first principal component vector u[1], the second principal component vector u[2], . . . , the pth principal component vector u[p]). The principal component vectors are also known as eigenvectors. The first principal component vector u[1] can represent the most dominant variance vector (i.e., the first principal component vector u[1] represents the largest beat-to-beat variance), the second principal component vector u[2] can represent the second most dominant variance vector, and so on. The matrix S can include the principal components (e.g., the first principal component S[1], the second principal component S[2], . . . , the pth principal component S[p]). The first principal component S[1] can account for as much of the variability in the data as possible, and each succeeding principal component S can account for as much of the remaining variability as possible. The first principal component S[1] can be used to determine alternans data (e.g., the square-root of the first principal component S[1] can provide an estimation of the amplitude of the most dominant alternans pattern of variation). In some embodiments, the second principal component S[2] and the third principal component S[3] can also provide useful alternans data. The matrix V is generally known as the parameter matrix; in the decomposition it appears transposed, which is denoted by raising the matrix V to the power T.

In other embodiments, the preprocessing of the feature matrix A can include other types of mathematical analyses. The robustness of the preprocessing of the feature matrix A can be enhanced by increasing the quantity of [H] as the quantity of [D] increases. In other words, an increase in the number of morphology features F represented in the feature matrix A generally requires a corresponding increase in the number of beats B for which the morphology features F are being determined. The correspondence between the quantities of [D] and [H] is often based on the dependency between each of the [D] morphology features F. In some embodiments, the quantity of [H] is greater than or equal to 32 and less than or equal to 128. In other embodiments, the quantity of [H] is less than 32 or greater than 128. In some embodiments, the value of [H] is adaptively changed in response to a corresponding change in the level of noise in the measured ECG signal.

As shown in FIG. 3, the processor can determine (at 108) [E] points L using data corresponding to at least some of the values W, [E] being a quantity greater than or equal to one.
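Stepping back to the preprocessing (at 106), a minimal numpy sketch of the decomposition described above might look as follows. The singular value decomposition yields the U, S, and V-transpose factors of FIG. 13; the random stand-in feature matrix ([H] = 8 beats by [C] = 3 features) and the column-centering step are assumptions for illustration, not details given by the patent.

import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(8, 3))    # stand-in feature matrix, beats x features
A = A - A.mean(axis=0)         # center each morphology-feature column

U, S, Vt = np.linalg.svd(A, full_matrices=False)
print(S)        # principal components, most dominant first
print(U[:, 0])  # first principal component vector u1 (beat-to-beat)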
The data corresponding to the values W can include at least one value W, at least one value of a principal component vector (e.g., the first principal component vector u[1]), and/or at least one value of any other data that corresponds to the values W. Each point L can include a first value (e.g., one of an X-value and a Y-value) determined using a first mathematical function Feature(beat+[N]), and a second value (e.g., the other of the X-value and the Y-value) determined using a second mathematical function Feature(beat), [N] being a quantity greater than or equal to one. Each of the first and second values of the points L represents a feature of the data corresponding to the values W. In the illustrated embodiment, the feature is a difference feature Q (i.e., the difference in amplitude between two values of the data corresponding to the values W, as specified by the respective mathematical function). In other embodiments, the first and second values of the points L can represent other difference features (e.g., an absolute difference feature, a normalized difference feature, a square-root difference feature, and the like), or any other mathematically-definable feature of the data corresponding to the values W. For example, the feature can include a value feature, where the feature is equal to a specified value of the data corresponding to the determined values W.

Equations 1 and 2 shown below define an example of the mathematical functions Feature(beat+[N]) and Feature(beat), respectively. The first values of the points L determined using the mathematical function Feature(beat+[N]) can represent a difference feature Q[K+[N]], and the second values of the points L determined using the mathematical function Feature(beat) can represent the difference feature Q[K], where K is a value equal to a beat (i.e., the beat for which the respective mathematical function is being used to determine either the first or second value of a point L).

Feature(beat+[N]) = W(beat+2[N]) − W(beat+[N]) = Q[K+[N]]   (Equation 1)
Feature(beat) = W(beat+[N]) − W(beat) = Q[K]   (Equation 2)

Tables 1–3 shown below represent the determination of points L using the mathematical functions Feature(beat+[N]) and Feature(beat) as defined in Equations 1 and 2 for [N] = 1, 2, and 3, respectively. Equations 3 and 4 shown below define the mathematical functions Feature(beat+[N]) and Feature(beat) for [N] = 1.

Feature(beat+1) = W(beat+2) − W(beat+1) = Q[K+1]   (Equation 3)
Feature(beat) = W(beat+1) − W(beat) = Q[K]   (Equation 4)

Equations 5 and 6 shown below define the mathematical functions Feature(beat+[N]) and Feature(beat) for [N] = 2.

Feature(beat+2) = W(beat+4) − W(beat+2) = Q[K+2]   (Equation 5)
Feature(beat) = W(beat+2) − W(beat) = Q[K]   (Equation 6)

Equations 7 and 8 shown below define the mathematical functions Feature(beat+[N]) and Feature(beat) for [N] = 3.

Feature(beat+3) = W(beat+6) − W(beat+3) = Q[K+3]   (Equation 7)
Feature(beat) = W(beat+3) − W(beat) = Q[K]   (Equation 8)

As shown by Equations 3–8, the offset between the difference feature Q[K+[N]] and the difference feature Q[K] is dependent on the value of [N]. For [N] = 1, the first value of the point L is determined by finding the difference between the value W of the second next beat B[1+2] and the value W of the next beat B[1+1], while the second value of the point L is determined by finding the difference between the value W of the next beat B[1+1] and the value W of the current beat B[1].
For [N] = 2, the first value of the point L is determined by finding the difference between the value W of the fourth next beat B[1+4] and the value W of the second next beat B[1+2], while the second value of the point L is determined by finding the difference between the value W of the second next beat B[1+2] and the value W of the current beat B[1]. For [N] = 3, the first value of the point L is determined by finding the difference between the value W of the sixth next beat B[1+6] and the value W of the third next beat B[1+3], while the second value of the point L is determined by finding the difference between the value W of the third next beat B[1+3] and the value W of the current beat B[1].

Accordingly, the first values of the points L determined using the first mathematical function Feature(beat+[N]) are offset relative to the second values of the points L determined using the second mathematical function Feature(beat) by a factor of [N]. For example, for [N] = 1, the first mathematical function Feature(beat+[N]) determines Feature(2) . . . Feature(Z+1) for beat-one B[1] through beat-(Z) B[Z], while the second mathematical function Feature(beat) determines Feature(1) . . . Feature(Z) for beat-one B[1] through beat-(Z) B[Z]; for [N] = 2, the first mathematical function Feature(beat+[N]) determines Feature(3) . . . Feature(Z+2) for beat-one B[1] through beat-(Z) B[Z], while the second mathematical function Feature(beat) determines Feature(1) . . . Feature(Z) for beat-one B[1] through beat-(Z) B[Z]; for [N] = 3, the first mathematical function Feature(beat+[N]) determines Feature(4) . . . Feature(Z+3) for beat-one B[1] through beat-(Z) B[Z], while the second mathematical function Feature(beat) determines Feature(1) . . . Feature(Z) for beat-one B[1] through beat-(Z) B[Z]. This offset relationship between the first values of the points L determined using the first mathematical function Feature(beat+[N]) and the second values of the points L determined using the second mathematical function Feature(beat) is further illustrated in Tables 1–3.

In Tables 1–3 shown below, the "Beat" column can represent respective beats B of the ECG signal and the "Feature Value" column can represent a value W of a morphology feature F of the corresponding respective beat B (e.g., an area morphology feature). As discussed above, the points L can be generated using values of other data corresponding to the determined values W. Also in Tables 1–3, an asterisk (*) represents an undetermined value of the point L (i.e., a value of the point L for which feature values W corresponding to beats B subsequent to the listed beats B[1]–B[12] are required to determine the value of the point L), "f(b+N)" represents the mathematical function Feature(beat+[N]), and "f(b)" represents the mathematical function Feature(beat). Each point L shown in Tables 1–3 includes an X-value determined using the first mathematical function Feature(beat+[N]) and a Y-value determined using the second mathematical function Feature(beat).
Table 1 ([N] = 1): f(b+1) = W(b+2) - W(b+1); f(b) = W(b+1) - W(b)

Beat | Feature Value | f(b+1)                    | f(b)                   | Feature Map Point | Group
1    | 2             | f(2) = 3 - 5 = -2         | f(1) = 5 - 2 = 3       | (-2, 3)           | A
2    | 5             | f(3) = 6 - 3 = 3          | f(2) = 3 - 5 = -2      | (3, -2)           | B
3    | 3             | f(4) = 2 - 6 = -4         | f(3) = 6 - 3 = 3       | (-4, 3)           | A
4    | 6             | f(5) = 4 - 2 = 2          | f(4) = 2 - 6 = -4      | (2, -4)           | B
5    | 2             | f(6) = 3 - 4 = -1         | f(5) = 4 - 2 = 2       | (-1, 2)           | A
6    | 4             | f(7) = 7 - 3 = 4          | f(6) = 3 - 4 = -1      | (4, -1)           | B
7    | 3             | f(8) = 3 - 7 = -4         | f(7) = 7 - 3 = 4       | (-4, 4)           | A
8    | 7             | f(9) = 5 - 3 = 2          | f(8) = 3 - 7 = -4      | (2, -4)           | B
9    | 3             | f(10) = 3 - 5 = -2        | f(9) = 5 - 3 = 2       | (-2, 2)           | A
10   | 5             | f(11) = 7 - 3 = 4         | f(10) = 3 - 5 = -2     | (4, -2)           | B
11   | 3             | f(12) = W[13] - 7 = *     | f(11) = 7 - 3 = 4      | (*, 4)            | A
12   | 7             | f(13) = W[14] - W[13] = * | f(12) = W[13] - 7 = *  | (*, *)            | B

Table 2 ([N] = 2): f(b+2) = W(b+4) - W(b+2); f(b) = W(b+2) - W(b)

Beat | Feature Value | f(b+2)                    | f(b)                   | Feature Map Point | Group
1    | 2             | f(3) = 2 - 3 = -1         | f(1) = 3 - 2 = 1       | (-1, 1)           | A
2    | 5             | f(4) = 4 - 6 = -2         | f(2) = 6 - 5 = 1       | (-2, 1)           | B
3    | 3             | f(5) = 3 - 2 = 1          | f(3) = 2 - 3 = -1      | (1, -1)           | A
4    | 6             | f(6) = 7 - 4 = 3          | f(4) = 4 - 6 = -2      | (3, -2)           | B
5    | 2             | f(7) = 3 - 3 = 0          | f(5) = 3 - 2 = 1       | (0, 1)            | A
6    | 4             | f(8) = 5 - 7 = -2         | f(6) = 7 - 4 = 3       | (-2, 3)           | B
7    | 3             | f(9) = 3 - 3 = 0          | f(7) = 3 - 3 = 0       | (0, 0)            | A
8    | 7             | f(10) = 7 - 5 = 2         | f(8) = 5 - 7 = -2      | (2, -2)           | B
9    | 3             | f(11) = W[13] - 3 = *     | f(9) = 3 - 3 = 0       | (*, *)            | A
10   | 5             | f(12) = W[14] - 7 = *     | f(10) = 7 - 5 = 2      | (*, *)            | B
11   | 3             | f(13) = W[15] - W[13] = * | f(11) = W[13] - 3 = *  | (*, *)            | A
12   | 7             | f(14) = W[16] - W[14] = * | f(12) = W[14] - 7 = *  | (*, *)            | B

Table 3 ([N] = 3): f(b+3) = W(b+6) - W(b+3); f(b) = W(b+3) - W(b)

Beat | Feature Value | f(b+3)                    | f(b)                   | Feature Map Point | Group
1    | 2             | f(4) = 3 - 6 = -3         | f(1) = 6 - 2 = 4       | (-3, 4)           | A
2    | 5             | f(5) = 7 - 2 = 5          | f(2) = 2 - 5 = -3      | (5, -3)           | B
3    | 3             | f(6) = 3 - 4 = -1         | f(3) = 4 - 3 = 1       | (-1, 1)           | A
4    | 6             | f(7) = 5 - 3 = 2          | f(4) = 3 - 6 = -3      | (2, -3)           | B
5    | 2             | f(8) = 3 - 7 = -4         | f(5) = 7 - 2 = 5       | (-4, 5)           | A
6    | 4             | f(9) = 7 - 3 = 4          | f(6) = 3 - 4 = -1      | (4, -1)           | B
7    | 3             | f(10) = W[13] - 5 = *     | f(7) = 5 - 3 = 2       | (*, *)            | A
8    | 7             | f(11) = W[14] - 3 = *     | f(8) = 3 - 7 = -4      | (*, *)            | B
9    | 3             | f(12) = W[15] - 7 = *     | f(9) = 7 - 3 = 4       | (*, *)            | A
10   | 5             | f(13) = W[16] - W[13] = * | f(10) = W[13] - 5 = *  | (*, *)            | B
11   | 3             | f(14) = W[17] - W[14] = * | f(11) = W[14] - 3 = *  | (*, *)            | A
12   | 7             | f(15) = W[18] - W[15] = * | f(12) = W[15] - 7 = *  | (*, *)            | B
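The Table 1 points can be reproduced programmatically. The following short Python script (an illustration, not part of the patent) applies Equations 1 and 2 for [N] = 1 to the "Feature Value" column above and regenerates the feature-map points and their A/B group assignments.

W = [2, 5, 3, 6, 2, 4, 3, 7, 3, 5, 3, 7]     # feature values for beats 1..12
N = 1

points = []
for beat in range(len(W) - 2 * N):           # beats with both values defined
    x = W[beat + 2 * N] - W[beat + N]        # Feature(beat + N), Equation 1
    y = W[beat + N] - W[beat]                # Feature(beat), Equation 2
    group = 'A' if beat % 2 == 0 else 'B'    # odd beats -> A, even beats -> B
    points.append((x, y, group))

print(points)
# [(-2, 3, 'A'), (3, -2, 'B'), (-4, 3, 'A'), (2, -4, 'B'), ...] as in Table 1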
FIG. 14 illustrates a plot of the feature values from Tables 1–3 for beat-one B[1] through beat-seven B[7], where each peak and each valley of the plot can represent a respective feature value W (e.g., value-one W[1] which represents beat-one B[1], value-two W[2] which represents beat-two B[2], . . . , value-seven W[7] which represents beat-seven B[7]).

FIG. 15 illustrates for [N] = 1 how the mathematical functions Feature(beat+[N]) and Feature(beat) determine the first and second values of the points L which represent the difference features Q[K] and Q[K+1]. For [N] = 1, the seven values (i.e., value-one W[1] through value-seven W[7]) generate six difference features (i.e., difference feature-one Q[1] through difference feature-six Q[6]). Referring to Table 1, the first mathematical function generates difference feature-two Q[2] through difference feature-six Q[6] for beat-one B[1] through beat-five B[5], respectively, using the seven values, and the second mathematical function generates difference feature-one Q[1] through difference feature-six Q[6] for beat-one B[1] through beat-six B[6], respectively, using the seven values. The difference feature Q is illustrated in FIG. 15 as dotted-line arrows extending between two specified values of the plot of FIG. 14. As an example, to determine difference feature-three Q[3] (i.e., the first value of the point L as determined by the first mathematical function Feature(beat+[N]) for beat-two B[2], the second value of the point L as determined by the second mathematical function Feature(beat) for beat-three B[3]), the difference can be found between value-four W[4] which represents beat-four B[4] and value-three W[3] which represents beat-three B[3]. Similarly, to determine difference feature-six Q[6] (i.e., the first value of the point L as determined by the first mathematical function Feature(beat+[N]) for beat-five B[5], the second value of the point L as determined by the second mathematical function Feature(beat) for beat-six B[6]), the difference can be found between value-seven W[7] which represents beat-seven B[7] and value-six W[6] which represents beat-six B[6].

FIG. 16 illustrates for [N] = 2 how the mathematical functions Feature(beat+[N]) and Feature(beat) determine the first and second values of the points L which represent the difference features Q[K] and Q[K+2]. For [N] = 2, the seven values (i.e., value-one W[1] through value-seven W[7]) generate five difference features (i.e., difference feature-one Q[1] through difference feature-five Q[5]). Referring to Table 2, the first mathematical function generates difference feature-three Q[3] through difference feature-five Q[5] for beat-one B[1] through beat-three B[3], respectively, using the seven values, and the second mathematical function generates difference feature-one Q[1] through difference feature-five Q[5] for beat-one B[1] through beat-five B[5], respectively, using the seven values. The difference feature Q is illustrated in FIG. 16 as dotted-line arrows extending between two specified values of the plot of FIG. 14. As an example, to determine difference feature-three Q[3] (i.e., the first value of the point L as determined by the first mathematical function Feature(beat+[N]) for beat-one B[1], the second value of the point L as determined by the second mathematical function Feature(beat) for beat-three B[3]), the difference can be found between value-five W[5] which represents beat-five B[5] and value-three W[3] which represents beat-three B[3]. Similarly, to determine difference feature-five Q[5] (i.e., the first value of the point L as determined by the first mathematical function Feature(beat+[N]) for beat-three B[3], the second value of the point L as determined by the second mathematical function Feature(beat) for beat-five B[5]), the difference can be found between value-seven W[7] which represents beat-seven B[7] and value-five W[5] which represents beat-five B[5].

FIG. 17 illustrates for [N] = 3 how the mathematical functions Feature(beat+[N]) and Feature(beat) determine the first and second values of the points L which represent the difference features Q[K] and Q[K+3]. For [N] = 3, the seven values (i.e., value-one W[1] through value-seven W[7]) generate four difference features (i.e., difference feature-one Q[1] through difference feature-four Q[4]). Referring to Table 3, the first mathematical function generates difference feature-four Q[4] for beat-one B[1] using the seven values, and the second mathematical function generates difference feature-one Q[1] through difference feature-four Q[4] for beat-one B[1] through beat-four B[4], respectively, using the seven values.
17 as dotted-line arrows extending between two specified values of the plot of FIG. 14. As an example, to determine difference feature-four Q[4] (i.e., the first value of the point L as determined by the first mathematical function Feature(beat+[N]) for beat-one B[1], the second value of the point L as determined by the second mathematical function Feature(beat) for beat-four B[4]), the difference can be found between value-seven W[7], which represents beat-seven B[7], and value-four W[4], which represents beat-four B[4].

As shown by the "Group" column of Tables 1–3, each point L can be assigned to a respective group (e.g., group A or group B). The points L representing each odd beat (e.g., beat-one B[1], beat-three B[3], . . . , beat-eleven B[11]) can be assigned to a first group (i.e., group A), and the points representing each even beat (e.g., beat-two B[2], beat-four B[4], . . . , beat-twelve B[12]) can be assigned to a second group (i.e., group B). The points L can be assigned to group A and group B in this manner to represent a proposed odd-even alternans pattern of variation (i.e., ABAB . . . ). In other embodiments, the points L can be alternatively assigned to groups to represent other proposed alternans patterns of variation (e.g., AABBAABB . . . , AABAAB . . . , and the like).

As shown in FIG. 3, the processor can plot (at 110) a feature map [e.g., a feature map of Feature(beat+[N]) versus Feature(beat)]. Both groups of points L (e.g., group A and group B) can be plotted on the same axes to generate the feature map. The polarity of the differences of the group A points is inverted relative to the polarity of the differences of the group B points. As a result, plotting the points L determined using the mathematical functions Feature(beat) and Feature(beat+[N]) as defined by Equations 1 and 2 can accentuate any difference between the values specified by the mathematical functions Feature(beat) and Feature(beat+[N]). The inverted polarity of the differences between the first and second groups is illustrated in FIGS. 15–17, where the direction of the dotted-line arrows that represent the difference features Q alternates between adjacent difference features Q. The feature map provides a visual indication of the divergence of the two groups of points, and thus the existence of a significant alternans pattern of variation. If there is a significant ABAB . . . alternans pattern of variation, the two groups of points will show separate clusters on the feature map (for example, as shown in FIGS. 20 and 22). If there is not a significant ABAB . . . alternans pattern of variation, the feature map will illustrate a more random pattern of points from the two groups (for example, as shown in FIG. 21).

FIGS. 18 and 19 illustrate two examples of feature maps. The [E] points plotted to generate the feature maps of FIGS. 18 and 19 were determined using ECG data representative of an ECG signal having a 5 microvolt TWA pattern of variation, 20 microvolts of noise, and 20 milliseconds of offset, where [H] is equal to 128. The first and second groups of points can be distinguished by the markers utilized to represent the points of each group (i.e., the first group of points, group A, can include asterisk-shaped markers, and the second group of points, group B, can include round markers). Lines can be used to connect sequential markers of each group (e.g., for group A, point-two P[2A] can be connected to each of point-one P[1A] and point-three P[3A] by lines). The feature map of FIG.
18 illustrates a plot of points determined using values directly from the feature matrix A (i.e., the feature matrix A was not preprocessed using a principal component analysis or other mathematical analysis). As illustrated in FIG. 18, the points of the first and second groups are intermixed (i.e., the feature map illustrates a random pattern of the points from the two groups). Accordingly, the feature map of FIG. 18 does not illustrate the presence of a significant divergence of the two groups of points, and thus does not indicate the existence of a significant alternans pattern of variation. The feature map of FIG. 19 illustrates a plot of points determined using values of a first principal vector u[1]. The first principal vector u[1] is a result of a principal component analysis performed on the same feature matrix A from which the values used to determine the points L plotted in FIG. 18 were obtained. As illustrated in FIG. 19, although the first and second groups of points are partially overlapped, the first group of points is primarily positioned in the upper-left quadrant of the feature map and the second group of points is primarily positioned in the lower-right quadrant of the feature map. Accordingly, the feature map of FIG. 19 appears to illustrate the presence of a significant divergence of the two groups of points, and thus a significant alternans pattern of variation may exist. Although FIGS. 18 and 19 illustrate the same ECG data, the feature map of FIG. 19 indicates the existence of an alternans pattern of variation, while the feature map of FIG. 18 does not. The effect of noise and time shift in the measured ECG signal on the determined alternans data is clearly indicated by the feature maps of FIGS. 18 and 19. Preprocessing the feature matrix A increases the robustness of the determination of alternans data by limiting the effect of noise and time shift in the measured ECG signal.

In some embodiments, multiple feature maps can be generated for various quantities of [N] using the same set of values (e.g., the feature maps for [N]=1, 2, and 3, respectively, can be generated using the points determined in Tables 1–3). The display of multiple feature maps can further verify the existence of a significant alternans pattern of variation for the proposed alternans pattern of variation (e.g., an ABAB . . . alternans pattern of variation). FIGS. 20–22 illustrate feature maps for [N]=1, 2, and 3, respectively, where the points plotted in each of the feature maps were determined using the same set of values. The divergence of the first and second groups of points in the feature maps of FIGS. 20 and 22, in combination with the lack of divergence of the first and second groups of points in the feature map of FIG. 21, provides visual evidence that the proposed ABAB . . . alternans pattern of variation is correct. The operator can change the proposed alternans pattern of variation (i.e., change the grouping of the points to a different alternans pattern of variation) if the feature maps for [N]=1, 2, and 3 do not illustrate this expected pattern, namely divergence for [N]=1 and [N]=3 but not for [N]=2. For example, if the two groups of points diverge in the feature maps for [N]=1 and 2, but not in the feature map for [N]=3, the ECG signal represented by the values used to determine the points for the feature maps does not represent the proposed ABAB . . . alternans pattern of variation. However, the ECG signal can include a different alternans pattern of variation.
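The construction tabulated in Tables 1–3 is mechanical enough to sketch in code. The following is a minimal illustration only, not the patent's implementation: the function name, the zero-indexed beat convention, and the decision simply to drop the undefined rows (the asterisks in the tables) are choices made here for clarity.

```python
import numpy as np

def feature_map_points(w, n):
    """Feature-map points (f(b+n), f(b)) for per-beat feature values w,
    where f(b) = w[b+n] - w[b]. The first coordinate uses the first
    mathematical function (Feature(beat+N)) and the second uses the second
    (Feature(beat)). Odd beats go to group A and even beats to group B,
    matching the proposed ABAB... alternans pattern in Tables 1-3."""
    w = np.asarray(w, dtype=float)
    f = lambda b: w[b + n] - w[b]           # difference feature at (0-indexed) beat b
    points = {"A": [], "B": []}
    for b in range(len(w) - 2 * n):         # rows needing w[b + 2n] beyond the data are dropped
        group = "A" if b % 2 == 0 else "B"  # beat b+1 is odd -> A, even -> B
        points[group].append((f(b + n), f(b)))
    return points

# The seven values W1..W7 plotted in FIG. 14
w = [2, 5, 3, 6, 2, 4, 3]
for n in (1, 2, 3):
    print(n, feature_map_points(w, n))
```

For [N]=1 this reproduces the fully defined points of Table 1 that can be formed from seven values, for example (−2, 3) for beat-one in group A.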
Reassignment of the [E] points to different groups can be used to test a different proposed alternans pattern of variation.

As shown in FIG. 3, the processor (at 112) can statistically analyze the data plotted in the feature map. Although the feature map provides a visual indication of the existence of a significant alternans pattern of variation, the feature map does not provide a quantitative measure of the confidence level of the alternans pattern of variation. Accordingly, the data plotted in the feature map, or similar types of data that are not plotted in a feature map, can be statistically analyzed to provide such quantitative measures of the confidence level of the alternans pattern of variation. In some embodiments, a paired T-test can be performed on the first and second groups of points. A paired T-test is a statistical test which is performed to determine if there is a statistically significant difference between two means. The paired T-test can provide a p-value (e.g., p=0.001). In one embodiment, the confidence level is increased (i.e., a significant alternans pattern of variation exists) when the p-value is less than 0.001. In other embodiments, other suitable threshold levels can be established.

In some embodiments, a cluster analysis (e.g., a fuzzy cluster analysis or a K-means cluster analysis) can be performed on the [E] points to determine a first cluster of points and a second cluster of points. The cluster analysis can also generate a first center point for the first cluster and a second center point for the second cluster. The first and second clusters of points can be compared with the first and second groups of points, respectively. A determination can be made of the number of clustered points that match the corresponding grouped points. For example, suppose point-one L[1] and point-two L[2] are clustered in the first cluster and point-three L[3] and point-four L[4] are clustered in the second cluster, while point-one L[1], point-two L[2], and point-three L[3] are grouped in the first group and point-four L[4] is grouped in the second group. Clustered point-three L[3] does not correspond to grouped point-three L[3], thereby resulting in a 75% confidence level. The confidence level can represent the percentage of clustered points that match the corresponding grouped points. In one embodiment, a confidence level above 90% can be a high confidence level, a confidence level between 60% and 90% can be a medium confidence level, and a confidence level below 60% can be a low confidence level. In other embodiments, the thresholds for the high, medium, and/or low confidence levels can be other suitable ranges of percentages or values.

As shown in FIG. 3, the processor can determine (at 114) an estimate of an amplitude of the alternans pattern of variation. As discussed above, in one embodiment, the square root of a principal component (e.g., the first principal component S[1]) can be used to provide an estimate of the amplitude. In other embodiments, a distance can be determined between a first center point of a first group of points and a second center point of a second group of points.
The center points can include the center points of the first and second groups of points A and B as determined using a mathematical analysis (e.g., by taking the mean or median of the values of the points for each respective group), the center points provided by the paired T-test, the center points provided by the cluster analysis, or any other determined center points that represent the ECG data. FIG. 23 illustrates a distance measurement between the first and second center points. The distance can be determined using Equation 9 shown below, where the first center point includes an X-value X[1] and a Y-value Y[1] and the second center point includes an X-value X[2] and a Y-value Y[2].

$$\mathrm{Amplitude_{ESTIMATE}} = \sqrt{(X_{1}-X_{2})^{2}+(Y_{1}-Y_{2})^{2}} \qquad \text{(Equation 9)}$$

The amplitude of the alternans pattern of variation often depends on the [D] morphology features used to determine the values W. Accordingly, the estimated amplitude is generally not an absolute value that can be compared against standardized charts. However, comparisons can be generated for estimated amplitudes of alternans patterns of variation based on the morphology features F that are determined and the processing step that is used.

As shown in FIG. 3, the processor can report (at 116) alternans data to a caregiver and/or the processor can store the alternans data. The alternans data (e.g., the feature maps, the estimated amplitudes of the alternans pattern of variation, the confidence level of the alternans pattern of variation, the uncertainty level of the alternans pattern of variation, the p-value of the alternans pattern of variation, and the like) can be reported using any suitable means (e.g., output to a suitable output device such as a display, a printer, and the like).

As shown in FIG. 3, in some embodiments, the processor can plot (at 118) a spectral graph using values resulting from preprocessing the feature matrix (e.g., the values of the first principal component vector u[1]). FIGS. 24 and 25 illustrate two examples of spectral graphs. The values used to generate the spectral graphs of both FIGS. 24 and 25 were determined using ECG data representative of an ECG signal having a 5 microvolt TWA pattern of variation, 20 microvolts of noise, and 20 milliseconds of offset, where [H] is equal to 128. FIG. 24 illustrates a spectral graph generated using values directly from the feature matrix A (i.e., the feature matrix A was not preprocessed using a principal component analysis or other mathematical analysis). As illustrated in FIG. 24, the spectral graph does not include a dominant frequency at half of the beat sample frequency, but instead includes a number of frequency spikes having varying amplitudes. Accordingly, the spectral graph of FIG. 24 does not indicate the existence of a significant alternans pattern of variation. FIG. 25 illustrates a spectral graph generated using values of a first principal vector u[1]. The first principal vector u[1] is a result of a principal component analysis performed on the same feature matrix A from which the values used to generate the spectral graph of FIG. 24 were obtained. FIG. 25 illustrates a single frequency spike at half of the beat sample frequency. Accordingly, unlike the spectral graph of FIG. 24, the spectral graph of FIG. 25 appears to illustrate the presence of a significant alternans pattern of variation.
The effect of noise and time shift in the measured ECG signal on the determined alternans data is indicated by the spectral graphs of FIGS. 24 and 25. Preprocessing the feature matrix A increases the robustness of the determination of alternans data when using spectral domain methods.
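The two quantitative measures described at steps 112 and 114 (the paired T-test p-value and the Equation 9 amplitude estimate) can be sketched in the same spirit. Again, this is an illustration rather than the patented method: the text does not specify exactly how paired samples are formed from the two groups of points, so pairing corresponding points and pooling their two coordinates is an assumption made here, and the group centers are taken as coordinate means (the text also allows medians or cluster centers).

```python
import numpy as np
from scipy import stats

def alternans_statistics(group_a, group_b):
    """Return (p_value, amplitude_estimate) for two groups of
    feature-map points, each an iterable of (x, y) pairs."""
    a = np.asarray(group_a, dtype=float)
    b = np.asarray(group_b, dtype=float)
    n = min(len(a), len(b))
    # Paired T-test on corresponding points, pooling both coordinates
    _, p_value = stats.ttest_rel(a[:n].ravel(), b[:n].ravel())
    # Equation 9: distance between the two group centers (here, the means)
    (x1, y1), (x2, y2) = a.mean(axis=0), b.mean(axis=0)
    amplitude = np.hypot(x1 - x2, y1 - y2)
    return p_value, amplitude

pts = feature_map_points([2, 5, 3, 6, 2, 4, 3], 1)  # from the earlier sketch
p, amp = alternans_statistics(pts["A"], pts["B"])
print(p < 0.001, amp)  # the text gives p < 0.001 as one possible threshold
```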
{"url":"http://www.google.fr/patents/US7072709","timestamp":"2014-04-18T23:32:33Z","content_type":null,"content_length":"197964","record_id":"<urn:uuid:a295d67c-bff4-4a94-831a-9983a76e6a5a>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00340-ip-10-147-4-33.ec2.internal.warc.gz"}
elastic deformation equation
Best results from Wikipedia, Yahoo Answers, and YouTube

From Wikipedia

Rubber band

A rubber band (in some regions known as a binder, an elastic or elastic band, a lackey band, laggy band, lacka band or gumband) is a short length of rubber and latex formed in the shape of a loop. Rubber bands are typically used to hold multiple objects together. The rubber band was patented in England on March 17, 1845 by Stephen Perry. Rubber bands are made by extruding the rubber into a long tube to provide its general shape, putting the tubes on mandrels and curing the rubber with heat, and then slicing it across the width of the tube into little bands. While other rubber products may use synthetic rubber, rubber bands are primarily manufactured using natural rubber because of its superior elasticity. Natural rubber originates from the sap of the rubber tree. Natural rubber is made from latex which is acquired by tapping into the bark layers of the rubber tree. Rubber trees belong to the spurge family (Euphorbiaceae) and live in warm, tropical areas. Once the latex has been "tapped" and is exposed to the air, it begins to harden and become elastic, or "rubbery." Rubber trees only survive in hot, humid climates near the equator, and so the majority of latex is produced in the Southeast Asian countries of Malaysia, Thailand and Indonesia.

Rubber Band Sizes

A rubber band has three basic dimensions: length, width, and thickness. (See picture.) A rubber band's length is half its circumference. Its thickness is the distance from the inner circle to the outer circle. If one imagines a rubber band in manufacture, that is, a long tube of rubber on a mandrel, before it is sliced into rubber bands, the band's width is how far apart the slices are cut.

Rubber Band Size Numbers

A rubber band is given a [quasi-]standard number based on its dimensions. Generally, rubber bands are numbered from smallest to largest, width first. Thus, rubber bands numbered 8-19 are all 1/16 inch wide, with length going from 7/8 inch to 3 1/2 inches. Rubber band numbers 30-34 are for width of 1/8 inch, going again from shorter to longer. For even longer bands, the numbering starts over for numbers above 100, again starting at width 1/16 inch. The origin of these size numbers is not clear and there appears to be some conflict in the "standard" numbers. For example, one distributor has a size 117 being 1/16 inch wide and a size 127 being 1/8 inch wide. However, an OfficeMax size 117 is 1/8 inch wide. A manufacturer has a size 117A (1/16 inch wide) and a 117B (1/8 inch wide). Another distributor calls them 7AA (1/16 inch wide) and 7A (1/8 inch wide) (but labels them as specialty bands).

Temperature affects the elasticity of a rubber band in an unusual way. Heating causes the rubber band to contract, and cooling causes expansion. An interesting effect of rubber bands in thermodynamics is that stretching a rubber band will produce heat (press it against your lips), while stretching it and then releasing it will lead it to absorb heat, causing its surroundings to become cooler. This phenomenon can be explained with Gibbs free energy. Rearranging ΔG = ΔH − TΔS, where G is the free energy, H is the enthalpy, and S is the entropy, we get TΔS = ΔH − ΔG. Stretching the band is nonspontaneous (it requires external work, so ΔG is positive) and it releases heat (ΔH is negative), so TΔS = ΔH − ΔG must be negative.
Since T is always positive (it can never reach absolute zero), ΔS must be negative, implying that the rubber under tension is more ordered (has fewer available microstates) than in its natural, entangled state. Thus, when the tension is removed, relaxation is spontaneous (ΔG is negative) and the entropy increases (ΔS is positive); the heat absorbed as the chains re-entangle is what makes the surroundings cooler.

Red rubber bands

In 2004 in the UK, following complaints from the public about postal carriers causing litter by discarding the rubber bands which they used to keep their mail together, the Royal Mail introduced red bands for their workers to use: it was hoped that, as the bands were easier to spot than the traditional brown ones and since only the Royal Mail used them, employees would see (and feel compelled to pick up) any red bands which they had inadvertently dropped. Currently, some 342 million red bands are used every year.

Model use

Rubber bands have long been one of the methods of powering small free-flight model aeroplanes, the rubber band being anchored at the rear of the fuselage and connected to the propeller at the front. To 'wind up' the 'engine' the propeller is repeatedly turned, twisting the rubber band. When the propeller has had enough turns, the propeller is released and the model launched, the rubber band then turning the propeller rapidly until it has unwound. One of the earliest to use this method was pioneer aerodynamicist George Cayley, who used them for powering his small experimental models. These 'rubber motors' have also been used for powering small model boats.

Geodesic

In mathematics, a geodesic is a generalization of the notion of a "straight line" to "curved spaces". In the presence of a metric, geodesics are defined to be (locally) the shortest path between points in the space. In the presence of an affine connection, geodesics are defined to be curves whose tangent vectors remain parallel if they are transported along it. The term "geodesic" comes from geodesy, the science of measuring the size and shape of Earth; in the original sense, a geodesic was the shortest route between two points on the Earth's surface, namely, a segment of a great circle. The term has been generalized to include measurements in much more general mathematical spaces; for example, in graph theory, one might consider a geodesic between two vertices/nodes of a graph. Geodesics are of particular importance in general relativity, as they describe the motion of inertial test particles.

The shortest path between two points in a curved space can be found by writing the equation for the length of a curve (a function f from an open interval of R to the manifold), and then minimizing this length using the calculus of variations. This has some minor technical problems, because there is an infinite dimensional space of different ways to parametrize the shortest path. It is simpler to demand not only that the curve locally minimize length but also that it is parametrized "with constant velocity", meaning that the distance from f(s) to f(t) along the geodesic is proportional to |s − t|. Equivalently, a different quantity may be defined, termed the energy of the curve; minimizing the energy leads to the same equations for a geodesic (here "constant velocity" is a consequence of minimisation).
Intuitively, one can understand this second formulation by noting that an elastic band stretched between two points will contract its length, and in so doing will minimize its energy; the resulting shape of the band is a geodesic. In Riemannian geometry geodesics are not the same as "shortest curves" between two points, though the two concepts are closely related. The difference is that geodesics are only locally the shortest distance between points, and are parametrized with "constant velocity". Going the "long way round" on a great circle between two points on a sphere is a geodesic but not the shortest path between the points. The map t → t^2 from the unit interval to itself gives the shortest path between 0 and 1, but is not a geodesic because the velocity of the corresponding motion of a point is not constant. Geodesics are commonly seen in the study of Riemannian geometry and more generally metric geometry. In relativistic physics, geodesics describe the motion of point particles under the influence of gravity alone. In particular, the path taken by a falling rock, an orbiting satellite, or the shape of a planetary orbit are all geodesics in curved space-time. More generally, the topic of sub-Riemannian geometry deals with the paths that objects may take when they are not free, and their movement is constrained in various ways. This article presents the mathematical formalism involved in defining, finding, and proving the existence of geodesics, in the case of Riemannian and pseudo-Riemannian manifolds. The article geodesic (general relativity) discusses the special case of general relativity in greater detail. The most familiar examples are the straight lines in Euclidean geometry. On a sphere, the images of geodesics are the great circles. The shortest path from point A to point B on a sphere is given by the shorter arc of the great circle passing through A and B. If A and B are antipodal points (like the North pole and the South pole), then there are infinitely many shortest paths between them.

Metric geometry

In metric geometry, a geodesic is a curve which is everywhere locally a distance minimizer. More precisely, a curve γ: I → M from an interval I of the reals to the metric space M is a geodesic if there is a constant v ≥ 0 such that for any t ∈ I there is a neighborhood J of t in I such that for any t[1], t[2] ∈ J we have d(γ(t[1]), γ(t[2])) = v |t[1] − t[2]|. This generalizes the notion of geodesic for Riemannian manifolds. However, in metric geometry the geodesic considered is often equipped with natural parametrization, i.e. in the above identity v = 1.

From Yahoo Answers

Question: I already know the equation for a non-deforming CV. Could you give me the equation for the deformable control volume?

Answers: There isn't one. A control volume, by definition, is a region of space which remains constant in volume and is used to track energy flow in to and out of the region. You will need to define your own version of the first law of thermodynamics if you wish for it to be for a deformable region of space. Remember: the first law of thermodynamics is nothing more than conservation of energy. How do forms of energy exist in your system of interest?

Question: I'm completing a comprehension question and I'm a bit confused. We're relating the elasticity of a bouncy ball polymer (assuming it follows Hooke's law) to its spring constant/force constant. Short question: Does a higher k mean more or less elastic than a small k?
Long version of the question: F = -k*x and ν (frequency) = (1/2π)·√(k/m). I'm awfully confused about what exactly "elasticity" means. Is it the ease with which an object stretches for a given force (and still returns to its original shape) or the ease with which it returns to its original shape for a given force? A higher k value means that it will return to its shape faster, and in my mind that seems like it's more elastic. Higher force constants in my elastomer bouncy ball result in a more elastic collision between the floor and the bouncy ball and a higher bounce (energy is more efficiently converted from elastic potential to kinetic energy, instead of propagating as heat throughout a distorting ball). A higher force constant also means that there will be a higher frequency of oscillation. Yet one line of my text read "Higher spring constant means the material is less elastic...higher frequency of oscillation." This doesn't make sense to me; doesn't a higher frequency mean it's capable of returning to its shape more readily for a given amount of force applied?

Answers: I do not like the adjective "elastic"...except when just stating that it relates to the force types of elasticity. It is very unclear whether "elastic" as a quantifying adjective means hard to deform or easy to deform. Often times, science uses it to mean hard to deform, but common usage uses it to mean easy to deform. SO CONFUSING. From now on, elasticity means nothing more than a classification of forces and a subject of study. It is like electricity: electricity is not "what you fill a battery with" or what you get from the socket...electricity is just a subject in physics. Same with elasticity...just a name for a subject in physics. Elastic is a good adjective for force...as in the force in the cord is an elastic force, or as in elastic forces hold a body's shape together when gravity is insignificant.

-------------------------

So for that reason, use the words stiff and flexible to indicate what you really mean. Stiff means hard to deform: more force or stress required per unit deformation distance or strain. Flexible means easy to deform: more deformation distance or strain results per unit force or stress. The spring constant (the k-value) indicates how stiff a spring is. You can call the k-value the spring stiffness if you want. The plural of stiffness in this context is "measures of stiffness". There are numerous examples of measures of stiffness...both at the structural member level (spring constant, torsional constant), and at the individual material fiber level (Young's modulus, shear modulus, bulk modulus). It IS TRUE that a higher k-value means a higher frequency. That you can always count upon. Mass (inertia) will make the vibration slower.

Question: A car moving at speed v undergoes a one-dimensional collision with an identical car initially at rest. The collision is neither elastic nor fully inelastic; 2/17 of the initial kinetic energy is lost. Find the velocities of the two cars after the collision. Express your answer in units of v. Cannot figure out how to get started. Any help is appreciated.

Answers: Let the equal mass of both cars = m. Initial velocity of 2nd car, u = 0. Let v' and u' be the final velocities of the 1st and 2nd car respectively. Momentum is always conserved. So, mv + 0 = mv' + mu' => v = v' + u' ... (1). 2/17 of initial K.E. is lost => 15/17 of initial K.E. = final K.E. => (15/17)(1/2)mv^2 = (1/2)mv'^2 + (1/2)mu'^2 => (15/17) v^2 = v'^2 + u'^2 ...
(2). From equations (1) and (2): (v' + u')^2 - (v'^2 + u'^2) = v^2 - (15/17)v^2 => 2v'u' = (2/17)v^2 => (v' - u')^2 = (v' + u')^2 - 4v'u' = v^2 - (4/17)v^2 = (13/17)v^2 => v' - u' = v·√(13/17) = 0.874v ... (3). Solving equations (1) and (3): v' = 0.937v and u' = 0.063v. You can refer to my free educational website www.schoolnotes4u.com and download study materials without requiring any form of

Question: What are the dimensions of Xnth in the standard equation Xnth = u + (a/2)(2n-1), where u is initial velocity, a is uniform acceleration and n is time? What are the 2 essential conditions for each isothermal and adiabatic process to take place? Thank you.

Answers: For a perfectly rigid body, the strain produced is zero, no matter how much the stress, so the Young's modulus of such a body is infinite. The dimensions should be that of distance, as this is the equation for the distance covered in the nth second. In an isothermal process, the temperature should remain constant. In an adiabatic process, there should be no exchange of heat between the system and the surroundings. The temperature of the system might increase, but the heat should not be transferred to the surroundings. So the enclosure of such a system should be a perfect insulator of heat.

From Youtube

Application Of Elasticity: Check us out at www.tutorvista.com Elasticity is the ratio of the percent change in one variable to the percent change in another variable. It is a tool for measuring the responsiveness of a function to changes in parameters in a unit-less way. Frequently used elasticities include price elasticity of demand, price elasticity of supply, income elasticity of demand, elasticity of substitution between factors of production and elasticity of intertemporal substitution. Elasticity is one of the most important concepts in economic theory. It is useful in understanding the incidence of indirect taxation, marginal concepts as they relate to the theory of the firm, and distribution of wealth and different types of goods as they relate to the theory of consumer choice. Elasticity is also crucially important in any discussion of welfare distribution, in particular consumer surplus, producer surplus, or government surplus. In empirical work an elasticity is the estimated coefficient in a linear regression equation where both the dependent variable and the independent variable are in natural logs. Elasticity is a popular tool among empiricists because it is independent of units and thus simplifies data analysis. Generally, an "elastic" variable is one which responds "a lot" to small changes in other parameters. Similarly, an "inelastic" variable describes one which does not change much in response to changes in other parameters. A major study of the price elasticity of supply and the ...

Simple Collision and Elastic Demo: A simple demo of collision with a ground plane and elastic deformation on blocks.
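As a quick check on the collision answer worked above, the two conservation conditions can be solved numerically. This is a small verification sketch (velocities expressed in units of v), not part of the original answer:

```python
import math

# Two-car collision: equal masses, car 2 initially at rest,
# 2/17 of the kinetic energy lost. In units of v:
#   momentum: v' + u' = 1      energy: v'^2 + u'^2 = 15/17
s = 1.0
q = 15 / 17
d = math.sqrt(2 * q - s ** 2)   # (v' - u')^2 = 2q - s^2 = 13/17
v_prime, u_prime = (s + d) / 2, (s - d) / 2
print(round(v_prime, 3), round(u_prime, 3))  # 0.937 0.063
```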
{"url":"http://www.edurite.com/kbase/elastic-deformation-equation","timestamp":"2014-04-19T14:33:49Z","content_type":null,"content_length":"89123","record_id":"<urn:uuid:e563ed95-54fa-4ccb-b0b4-04330d2b8255>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00522-ip-10-147-4-33.ec2.internal.warc.gz"}
Find a Mesa, AZ Math Tutor

...Is English a challenge? I shall be happy to tutor your child in the Language Arts as well. (I am not trained to teach English as a Second Language.) The materials I use will keep your child's interest, and your child will learn. I shall take your child from the level (s)he is at to where (s)he is supposed to be, or even higher, with patience and consistency.
7 Subjects: including algebra 1, prealgebra, English, reading

...As Assistant Professor for a graduate degree program: Taught 5 years of Global Security Affairs, 4 years as instructor for international student program, 5 years instructing international regional & cultural studies, 4 years directing culture and language program. B.A. in History, 1980. Courses in Government.
8 Subjects: including SPSS, elementary (k-6th), special needs, world history

...If so, I think I'd be your best choice to review all of the math topics that are included in the SAT. The GRE is an important exam, and much of it depends upon careful reading to determine the meaning of a sentence, and to choose the correct answer by reasoning and interpretation. Perhaps more ...
17 Subjects: including algebra 2, calculus, geometry, ACT Math

...I know that my experience and personality will enable me to help children achieve great things as students. I look forward to creating a positive and successful learning environment with you. Wrote timed essays for several AP classes which enabled me to earn college credits while still in high sc...
30 Subjects: including algebra 2, American history, study skills, special needs

...When I was 5, I tested to a high school level. After taking my entrance test for Mesa Community College, I tested out of reading, meaning I don't have to take any reading classes because my reading comprehension is at or above college level. I have tutored in reading, recently, with much success.
16 Subjects: including prealgebra, English, reading, writing
{"url":"http://www.purplemath.com/mesa_az_math_tutors.php","timestamp":"2014-04-18T01:06:14Z","content_type":null,"content_length":"23668","record_id":"<urn:uuid:321880b6-8a08-4a38-9708-f3d75f8af8ad>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00185-ip-10-147-4-33.ec2.internal.warc.gz"}
Cronbach's Alpha (α) using SPSS

Cronbach's alpha is the most common measure of internal consistency ("reliability"). It is most commonly used when you have multiple Likert questions in a survey/questionnaire that form a scale and you wish to determine if the scale is reliable. If you are concerned with inter-rater reliability, we also have a guide on using Cohen's kappa (κ) that you might find useful.

A researcher has devised a nine-question questionnaire to measure how safe people feel at work at an industrial complex. Each question was a 5-point Likert item from "strongly disagree" to "strongly agree". In order to understand whether the questions in this questionnaire all reliably measure the same latent variable (feeling of safety) (so a Likert scale could be constructed), a Cronbach's alpha was run on a sample size of 15 workers.

Setup in SPSS

In SPSS, the nine questions have been labelled Qu1 through to Qu9. To know how to correctly enter your data into SPSS in order to run a Cronbach's alpha test, see our Entering Data into SPSS tutorial. Alternatively, you can learn about our enhanced data setup content here.

Test Procedure in SPSS

The eight steps below show you how to check for internal consistency using Cronbach's alpha in SPSS. At the end of these eight steps, we show you how to interpret the results from your Cronbach's alpha analysis.

• Click Analyze > Scale > Reliability Analysis... on the top menu.
• You will be presented with the Reliability Analysis dialogue box.
• Transfer the variables Qu1 to Qu9 into the Items: box. You can do this by drag-and-dropping the variables into their respective boxes or by using the arrow button.
• Leave the Model: set as "Alpha", which represents Cronbach's alpha in SPSS. If you want to provide a name for the scale, enter it in the Scale label: box. Since this only prints the name you enter at the top of the SPSS output, it is certainly not essential that you do (in our example, we leave it blank).
• Click the Statistics... button. You will be presented with the Reliability Analysis: Statistics dialogue box.
• Select the Item, Scale and Scale if item deleted options in the –Descriptives for– area, and the Correlations option in the –Inter-Item– area.
• Click the Continue button. This will return you to the Reliability Analysis dialogue box.
• Click the OK button to generate the output.

SPSS Output for Cronbach's Alpha

SPSS produces many different tables. The first important table is the Reliability Statistics table, which provides the actual value for Cronbach's alpha. From our example, we can see that Cronbach's alpha is 0.805, which indicates a high level of internal consistency for our scale with this specific sample.

Item-Total Statistics

The Item-Total Statistics table presents the "Cronbach's Alpha if Item Deleted" in the final column. This column presents the value that Cronbach's alpha would be if that particular item was deleted from the scale. We can see that removal of any question, except question 8, would result in a lower Cronbach's alpha. Therefore, we would not want to remove these questions.
Removal of question 8 would lead to a small improvement in Cronbach's alpha, and we can also see that the "Corrected Item-Total Correlation" value was low (0.128) for this item. This might lead us to consider whether we should remove this item.

Cronbach's alpha simply provides you with an overall reliability coefficient for a set of variables (e.g., questions). If your questions reflect different underlying personal qualities (or other dimensions), for example, employee motivation and employee commitment, Cronbach's alpha will not be able to distinguish between these. In order to do this and then check their reliability (using Cronbach's alpha), you will first need to run a test such as a principal components analysis (PCA). You can learn how to carry out principal components analysis (PCA) using SPSS, as well as interpret and write up your results, in our enhanced content. It is also possible to run Cronbach's alpha in Minitab.
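The tutorial computes alpha through the SPSS menus, but the statistic itself is straightforward to compute directly from the standard formula α = k/(k − 1) · (1 − Σ s²_i / s²_T), where k is the number of items, s²_i is the variance of item i, and s²_T is the variance of the summed scale. The sketch below is not part of the original tutorial, and the randomly generated data merely stand in for the 15 workers by 9 questions described above:

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for a respondents-by-items matrix of scores."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                          # number of items (questions)
    item_vars = scores.var(axis=0, ddof=1)       # per-item sample variances
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
base = rng.integers(1, 6, size=(15, 1))                          # latent "feeling of safety"
items = np.clip(base + rng.integers(-1, 2, size=(15, 9)), 1, 5)  # 9 correlated Likert items
print(round(cronbach_alpha(items), 3))
```

Dropping one column before the call gives the corresponding "Cronbach's Alpha if Item Deleted" value discussed above.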
{"url":"https://statistics.laerd.com/spss-tutorials/cronbachs-alpha-using-spss-statistics.php","timestamp":"2014-04-20T09:04:58Z","content_type":null,"content_length":"15017","record_id":"<urn:uuid:5c554e68-1866-4c40-ae2f-7b5606595d73>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00660-ip-10-147-4-33.ec2.internal.warc.gz"}
Keeping Score: Assessment in Practice

Chapter 4: Assessment and Opportunity to Learn

In this chapter, the focus shifts away from task development and issues that impinge on students' opportunity to perform and toward assessment tasks as seen in the social milieu of the classroom because that is where students are (or are not) provided opportunity to learn. Providing such opportunity means creating access for students to the procedural, conceptual, and strategic knowledge to support a deep and robust understanding of mathematics and the know-how necessary to demonstrate this multifaceted knowledge. A number of key research reports have focused attention on the importance of collecting opportunity-to-learn data to inform the interpretation of assessment data (NCTM & NRC, 1997; NRC, 1989, 1997, 1998; Porter, Kirst, Osthoff, Smithson, & Schneider, 1993; Schmidt, McKnight, Valverde, Houang, & Wiley, 1997; Stigler & Hiebert, 1997). In particular, opportunity-to-learn issues emerge as an essential strategy in any effort to improve education and address equity issues (Massell, Kirst, & Hoppe, 1997; Black & Wiliam, 1998). The concept of opportunity to learn is linked to the concept of opportunity to perform as described in Chapter 3. If students lack opportunities to perform, they will not be able to show what they know and can do. If students lack opportunities to learn, they will not be able to avail themselves of opportunities to perform. In Chapter 3, issues concerning opportunity to perform led to a focus on the development of assessment tasks and the dimensions of a balanced assessment. Here, issues concerning opportunity to learn lead to a focus on teaching and learning.

The task development experience that underpins both the model for balanced assessment in Chapter 2 and the discussion of opportunity-to-perform issues in Chapter 3 was not conducted behind the closed doors of assessment designers' offices. Instead, much of this experience has been obtained in mathematics classrooms across the country, because each assessment task is put through several rounds of systematic classroom trials, including initial task trials implemented in two or three mathematics classrooms, and also large-scale field tests where the tasks are put through trials with a large stratified sample of students. Each classroom trial is observed by at least one of the following: a full-time assessment developer, a full-time mathematics teacher who is participating as a co-developer in the assessment development process, a classroom teacher who is responsible for providing written evaluations of the task in action, or a teacher who is participating in a professional development program focusing on assessment.
This chapter draws upon this extensive body of classroom-generated experience to present a series of recommendations and conclusions based on observations of assessment tasks in the social context of the classroom. Many of the sections of this chapter highlight barriers to opportunity to learn, such as tight sequencing of teaching and testing, inappropriate emphasis on skills acquisition activities, inappropriate task modification, the need to cover the curriculum, preconceptions of teachers, and gaps in the curriculum. Each of these discussions is framed by locating it within the context of relevant and recent research. Where appropriate, larger implications for the teaching and learning of mathematics are identified. Other sections discuss how complex tasks may be used to enhance learning opportunities in the classroom. While working through such a task, for example, misconceptions and mistakes may be viewed as opportunities to learn rather than as complications to be avoided. Furthermore, class work on complex tasks is necessary to develop problem-solving tenacity and the ability to communicate about mathematics. The purpose of this chapter is to identify how efforts to improve assessment might be used to improve mathematics instruction and learning. Therefore, the primary concern in this chapter is not just finding better ways to assess students but finding ways to enable students to perform better on worthwhile assessments. The chapter closes with a list of recommendations based upon these issues and informed by Black and Wiliam's (1998) contention that learning is driven by what teachers do in classrooms.

Tight sequencing of teaching and testing

A common obstacle to success on non-routine tasks is the tendency of students to attempt to apply the specific mathematics that they are currently studying to the task at hand, whatever that task might be. For example, when students were presented with either Shopping Carts or Paper Cups immediately after they had studied area or volume, many began their work by trying to find the area or the volume of the cart or cup. Similarly, when a large number of students tried to set up and solve a system of linear equations in response to one of these two tasks, it turned out that the class had just been studying systems of linear equations. When a disproportionate number of students in a class provided solutions involving y = mx + b to Broken Plate, a task investigating the relationship between percent decrease and percent increase, their teacher confirmed that his students were working on the slope-intercept form of the equation of a line. When considering what might have inspired such seemingly incongruous responses, it seemed that students were using whatever tool was most readily at hand rather than grappling with and making sense of the task. In fact, the premise on which most classroom assessment rests is to assess what has just been taught. The problem with this what we just studied phenomenon is that it usually does not work well in creating access to the task, and often makes the task less accessible for the student. In the words of one teacher commenting on her students' work: "Most students made the task harder than it was, when they tried to make it fit with what we were currently studying." When teachers discussed this at professional development workshops on assessment or at assessment co-developers meetings, they realized that they shared a common problem.
Teachers are frequently quite taken aback by this realization and hypothesize that it is their common practice of teaching a topic then testing it, teaching another topic and then testing, that might cultivate this behavior in their students. These teachers acknowledge that they rarely test the mathematics that their students have learned twelve, six, or even just two months previously. Therefore, it is also rare that their students are required to attempt challenging non-routine tasks. This what we just studied phenomenon is not reserved for those students who are under-prepared in mathematics or for those who find learning mathematics difficult, but can be observed even in honors classes where student participation is usually marked by a high level of success. Many highly successful students, well on their way to successful completion of Algebra II and Trigonometry courses, failed to solve a problem such as that posed by the unscaffolded version of Shopping Carts because, rather than trying to make sense of the task, they attempted to bring only their most recently learned but inappropriate mathematics to bear. It was as if these students had in their heads a directory of template problems that they had learned how to solve. Instead of thinking about the task at hand and making decisions about the mathematics that might be needed to solve it, these students simply forced aspects of one template problem after another onto Shopping Carts. When students of this caliber work on non-routine assessment tasks, it is evident that they know a lot of mathematics—far more than is actually needed to solve the task. But it also is evident that their mathematical understanding is fragile and inflexible (Lesh, Lamon, Lester, & Behr, 1992). Further, it is evident that these students have had little practice either making sense of mathematics or using mathematics in a practical fashion. These observations also reinforce an earlier conclusion stated by Schoenfeld: "'Knowing' a lot of mathematics may not do some students much good if their beliefs keep them from using it" (1987, p. 198). The most serious aspect of this what we just studied phenomenon is that when students attempt to make only their most recently learned mathematics relevant to the task at hand, they are providing evidence of a routine that may characterize all of their learning of mathematics. The students' habit is to do mathematics without having to think about the task. Unfortunately, these students are not only doing what they usually do, but also are doing what usually works for them. The coupling of teaching and testing in this way has consequences both for the mathematics that is learned and for students' perception of the learning of mathematics. On the one hand, it leaves students ill-equipped to tackle non-routine tasks where an important hurdle is the selection of relevant mathematics. On the other hand, it instills in students the notion that mathematics can be learned by applying recently learned mathematics without a great deal of thought. It also runs the risk of teaching students that making mathematical sense or using common sense are not appropriate behaviors for the mathematics classroom. Coupling teaching and testing in this way also has consequences for what a teacher can say about her students' learning of mathematics. How does a teacher know whether her students have really learned the mathematics?
How does the teacher know that her students will retain what they have been taught over the longer term? How does the teacher know what her students can do with the mathematics they have learned? The view of teaching and learning that is evidenced by such tight coupling of teaching and testing also has been criticized by Schoenfeld:

All too often we focus on a narrow collection of well-defined tasks and train students to execute those tasks in a routine, if not algorithmic fashion. Then we test the students on tasks that are very close to the ones that they have been taught. If they succeed on those problems, we and they congratulate each other on the fact that they have learned some powerful mathematical techniques. In fact, they may be able to use such techniques mechanically while lacking some rudimentary thinking skills. To allow them, and ourselves, to believe that they understand the mathematics is deceptive and fraudulent. (Schoenfeld, 1988, p. 30)

One of the more far-reaching effects of this practice, as shown by our own experience and addressed in the discussion by Schoenfeld, emerges most acutely when narrowly defined tests are used for state- or district-wide accountability purposes. In such cases, teachers report that they find themselves under increasing administrative pressure to spend greater and greater amounts of time preparing for the test (Romberg, Zarinnia, & Williams, 1990). This can breed an ever-expanding culture of test preparation and, at its most extreme, runs the risk that test preparation could completely replace instruction. Mathematics classes might then become characterized by students working repetitively on set after set of questions that mimic those that are on the test. When narrowly defined tests are used in this way to address accountability needs, the consequences for learning are inevitable. Students' opportunity to learn is replaced by the opportunity only to practice a narrow range of test questions. There is a well-grounded fear that this approach will fail to prepare students for higher level mathematics courses. Such an approach will do little to inculcate a mathematical disposition or to encourage students to invest in further study of mathematics. Finally, the costs of this kind of testing can become hidden—large amounts of teacher time and classroom resources are diverted away from teaching and learning and are used instead to prepare students for narrow tests that are at best loosely connected to a balanced curriculum.

Inappropriate emphasis on skills acquisition activities

Teachers often say that although some non-routine tasks are interesting, rich, and target worthwhile mathematics, they are not appropriate for their students. When we explore this perception further, we find that many of their students are considered by these teachers to be under-prepared in mathematics. In the view of their teachers, these students lack basic skills. Teachers described how, in an effort to rectify this situation, they restricted their students to sets of short, closed, procedural exercises. They have the perception that their students must acquire some basic level of achievement in rudimentary mathematics before they can be permitted to attempt challenging non-routine tasks.
In these teachers' views, the full range of tasks illustrated here would be far too challenging for their students, and so they believe it necessary to restrict students to skills-based tasks. One serious problem with this common approach in teaching mathematics to students who are under-prepared is that there is very little evidence that it works to do anything more than teach simple calculation procedures, terms, and definitions (Hiebert, 1999). Hiebert draws on the most recent National Assessment of Educational Progress (NAEP) to answer the question, "What are students learning from traditional instruction?" He reports:

In most classrooms, students have more opportunities to learn simple calculation procedures, terms, and definitions than to learn more complex procedures and why they work or to engage in mathematical processes other than calculation and memorization. (Hiebert, 1999, p. 12)

Another serious problem is that it is simply inequitable for large numbers of students to emerge from high school without ever having had the opportunity to engage in mathematics work that has been designed to develop conceptual and strategic capabilities. Clearly, the intention is not to deny students this opportunity. Teachers usually intend to shift to a more interesting gear after their students provide evidence that they have acquired the basic skills. Unfortunately, this frequently does not happen, and many students leave school without ever having been given the opportunity to learn mathematics in a broad and balanced way. The problem can be approached somewhat differently. There is increasing evidence that the memorization of decontextualized fragments of mathematics does not work well in helping students learn mathematics (Hiebert, 1999). But there also is increasing evidence that students can learn when instruction regularly emphasizes engagement with challenging tasks (Stein & Lane, 1996; Schoen & Ziebarth, 1998), or when teachers regularly use technological tools to develop mathematical ideas (Heid, 1988; Hiebert & Wearne, 1996). The Carnegie Learning Program makes extensive use of technology and is currently demonstrating great success in motivating reluctant learners in large urban districts (Hadley, personal communication, February, 1999).

Inappropriate task modification

As noted above, teachers will often argue that tasks of the type presented in Chapters 2 and 3 are more appropriate for students who are better prepared mathematically. Perhaps as a consequence, when teachers administer these tasks to their students, many of them massage the challenge of each task, in the hope
This evidence suggests that teachers' gap-closing processes can have far-reaching implications for students' opportunity to learn through challenging non-routine tasks, and that these strategies can restrict the actual range of tasks that their students will truly have the opportunity to tackle. Covering the curriculum Another factor that can inhibit the use of worthwhile assessment tasks in classrooms is teachers' perception of the length of time that it will take their students to do the tasks. Teachers sometimes fear that, if they were to invest the time necessary for administering rich assessment tasks, they might be unable to cover large portions of the material they are expected to cover. Choices about the allocation of precious classroom time are difficult. However, many involved in the reform of mathematics teaching and learning urge teachers to cover less but spend more time going deeper, thus creating a broader and more balanced system of instruction (NCTM, 1989, 1995; NRC 1993b; Schmidt, McKnight, & Raizen, 1997; Schmidt, McKnight, Valverde, Houang, & Wiley, 1997; Stigler & Hiebert, 1997). The implementation of worthwhile assessment tasks in classrooms is not the only innovation that is labeled as time-consuming. Indeed, most effective teaching strategies are time-consuming and therefore regarded as untenable by teachers who are faced with a large amount of material to cover. There is little doubt that if teachers are to be freed to provide opportunity to learn for all, they must be freed from the burden of covering large amounts of material. Preconceptions of teachers It is interesting to observe teachers as they consider assessment tasks with a view toward possibly embedding them in their instruction. Frequently, teachers will work through a task and OCR for page 55 Keeping Score: Assessment in Practice then draw extensively on this experience in their appraisal of the task's appropriateness. As a consequence, this process leads some teachers to reject certain tasks outright. One teacher said, This task would not be appropriate for my students. If it took me this long to complete the task, my students would never be able to stay at it. Another stated, This task is too abstract, I had to really think about this task. My students would never be able to start it. Clearly, teachers are very concerned about overwhelming their students and about selecting appropriately demanding tasks for them. This is not surprising given the large number of students who give up all too quickly when they are presented with an assignment that does not immediately resemble one that they have been taught how to do. These findings about teachers' perceptions of the appropriateness of such assessment tasks in their classrooms corroborates research that addresses teachers' perceptions of the appropriateness of instructional materials. For example, teachers' perceptions have been found to be affected by both their perceptions about their students' backgrounds and abilities and the mathematical knowledge of the teachers themselves (Floden, 1996). Some teachers do recognize that even tasks that challenge the teachers themselves sometimes can be appropriate for their students. It is difficult, however, to persuade other teachers that almost all of their students can learn to do challenging mathematics tasks and that students can learn mathematical skills at the same time that they are working on challenging tasks. 
This is in contrast to what seems to be a deep-seated belief that students' ability (or inability) to do mathematics is immutable, and not something that can be improved upon by creating new or enhanced opportunities to learn.

Through classroom observations and interviews about assessment tasks, some revealing aspects of student beliefs about learning mathematics have also been identified. Many students have clearly defined and somewhat narrow views of what counts as appropriate behavior for the mathematics classroom. For example, many students have great difficulty in formulating a workable approach to a non-routine task. When students evaluate such tasks and describe how the tasks might be improved, they almost invariably judge the tasks as not giving them a clear enough indication of what they are supposed to do. They give responses such as:

"Be more specific about what you want us to do on paper."

"Tell us more information on what we are actually supposed to figure out."

"You do not make it clear what you want us to do. It is better if you say: do this, then do this."

In these responses, the students reveal that they do not expect to have to formulate an approach to challenging tasks. They expect that their assignments will make clear not only what they are supposed to do but also the steps that they should take to do it. Many students simply do not perceive doing challenging mathematics as appropriate work for mathematics classrooms; many students just want to be told what to do by their teachers. By the same token, many teachers believe that, with so much content to cover, there is little time to do anything but tell their students as much as possible. Many students will also express a lack of confidence in teachers who wish to delve deeply into mathematics rather than rush through larger amounts of material at great speed (Borasi, 1996). Far from relishing the opportunity to see fundamental mathematics with new eyes, students are often concerned that they will never be able to cover the given curriculum, or that focusing on specific aspects of mathematics in greater depth will adversely affect their final grade.

Gaps in the curriculum

Developing assessment tasks sometimes highlights limitations in the ways in which curriculum content is determined. For example, the study of solids and their volume is a content area that is often de-emphasized in the current high school curriculum, and students invariably find our tasks involving solids and their volume difficult. As an illustration, Table 2 shows the distribution of scores for responses to Snark Soda (Figure 5, p. 19). To earn a score of 4, a student must fully accomplish the task. To do so, the student must model the entire bottle using two or more solids, consider the curvature of the top and bottom of the bottle, address accuracy, and communicate each step of the work. This level of success requires significant integration of mathematical skill, conceptual understanding, and problem solving, but it is reasonable to expect that students in the eleventh grade will have fully absorbed these specific skills and concepts. It is therefore disappointing that so few students are able to make use of these skills and concepts to fully accomplish the task.
Table 2. Scoring of 11th-grade responses to Snark Soda (N = 877)

           Off task   Score 1   Score 2   Score 3   Score 4
  Number          2       431       291       129        24
  Percent       0.3      49.1      33.2      14.7       2.7

To earn a score of 3, a student must prepare a response that, while not fully complete, can be characterized as ready for revision: it should be reasonable to infer that the student has the mathematical knowledge and ability to solve the task. The student can show this by modeling the entire bottle using two or more solids and addressing the curvature of either the top or the bottom of the bottle. The student might or might not address the accuracy of the volume, and might not fully communicate each step of the work. Even so, just one student in six was able to reach or exceed this level of achievement on the task. To earn a score of 2, a student must show partial success by modeling the bottle using more than one geometric solid (e.g., two cylinders). The student might not address either curvature and may use a combination of area and volume formulas. For most of the students who were able to achieve any significant success with this task, this level of achievement was as far as they got. To earn a score of 1, a student must engage with the task, but will have done so with little or no success. For example, the response might use only one cylinder to model the entire bottle. When a response contains only words or drawings that are unrelated to the task, it is scored as "off task." Notice that these last two categories together account for almost one-half of all of the eleventh-grade student responses.

What can be said about such disappointing performance? In this version of the task, students were advised to use a ruler, so the problem was not that students did not think to use a ruler to measure the bottle. One hypothesis is that the issue has less to do with specific task characteristics and more to do with students' experience with solids in their classrooms. Many teachers readily confide that they often have only a few days left at the end of the tenth grade to devote to volume. Others indicate that the large body of knowledge they are obliged to cover during the tenth grade sometimes makes it impossible to cover volume at all. Why would a teacher choose to leave out volume rather than some other topic? It appears that this decision often reflects teachers' perceptions of what is or is not necessary for the next mathematics course their students will take. For many students studying geometry, Algebra II is the next course in the sequence, and there seems to be a belief that a study of solids and their volume is not critical for success in Algebra II. As a consequence, the topic is often neglected. Unfortunately, even though the study of solids and their volume may not be a prerequisite for Algebra II as it is traditionally defined, a sound conceptual understanding of this subject area truly is a prerequisite for calculus. Clearly, if the high school curriculum is determined solely by a perception of what is required for the next course, then the longitudinal coherence of school mathematics is jeopardized. Another place where the study of solids and their volume is de-emphasized is in large-scale assessments.
In New York, for example, the Spring 1997 pilot questions that are used by many teachers to prepare their students for the Mathematics A Examination (which will soon replace Course I in the New York Regents sequence) suggest that the study of solids and their volume will be confined to finding the volume of a rectangular prism. Undoubtedly, this de-emphasis at the assessment level will bring about a de-emphasis on all other solids in the curriculum. The study of solids and their volume should not be added to the curriculum in a cursory way; the problems described here cannot be addressed without placing such study firmly in the mathematics curriculum.

The marginalization of solids within the curriculum has unfortunate consequences that extend beyond student preparation for the study of calculus. A study of solids and their volume provides an abundance of useful material for those seeking to enhance the learning of mathematics through an emphasis on connections, both within mathematics and with worthwhile and relevant contexts outside of mathematics. The Principles and Standards for School Mathematics: Discussion Draft (NCTM, 1998) goes a long way toward placing the study of solids and their volume firmly in the curriculum, and does so in a way that provides a coherent sequence across the Pre-K-12 curriculum.

Unfortunately, the problems identified by this assessment work are not confined to the study of solids and the tenth-grade curriculum. Schmidt and Cogan write:

Review of the TIMSS U.S. mathematics achievement and curriculum analysis results forms a rather compelling notion that the fundamental problem with our mathematics education system lies not with students or teachers but primarily with the way in which we think about and develop our mathematics curriculum. (Schmidt & Cogan, 1999, p. 7)

The TIMSS study, which characterized the U.S. curriculum as repetitive and lacking depth, indicates that the learning of mathematics can be improved effectively only after something is done to re-conceptualize the mathematics curriculum. Assessment development experience has also demonstrated that a fragmented and cluttered curriculum puts teachers under enormous pressure and restricts their opportunities to deepen the focus of their instruction.

Misconceptions and mistakes as opportunities to learn

Attempts to develop tasks designed to assess the robustness of students' understanding of mathematics have met with an interesting mix of teacher reactions. For some teachers, this approach has validated their own classroom practice, characterized by a constructive focus on the robustness of conceptual development. In part, such an approach requires putting a constructive focus on misconceptions that are brought into the open by presenting students with thought-provoking and sensitive tasks. According to these teachers, assessments that make misconceptions visible are an invaluable aid to long-term learning and retention. For many other teachers, however, conceptually oriented tasks that have the power to reveal student misconceptions are to be avoided, lest they confuse students who already find learning mathematics difficult.

In recent development work, New Standards staff prepared a formative assessment package to be used in a conceptual approach to the teaching and learning of slope (NCEE, 1998).
Students use this package to investigate slopes of ramps, slopes of stairs, and slopes of lines, and to work through a range of challenging and conceptually oriented assignments on slope. A final assignment invites students to imagine a world where slope is defined not as rise over run but as run over rise, and asks them to discuss the implications of this redefinition. This final task is designed to assess the robustness of students' conceptual understanding of slope. Many teachers have reacted vehemently, arguing that such a task runs the risk of confusing students, or even of leaving them with the erroneous view that slope is defined as run over rise. Yet to steer away from conceptually oriented tasks is to adopt a view of student learning characterized by the memorization of isolated fragments of knowledge, inherently fragile and unlikely to be retained. Indeed, teachers should use assessments that do bring student misconceptions into the open, not to confuse students, but as part of the process of developing mathematical understandings that are robust and can withstand both the test of time and the test of counter-argument. Tasks that ferret out student misconceptions provide insight into how the student has internalized the body of knowledge that the teacher is attempting to teach. It is only from a clear sense of what the student understands that subsequent instruction can be tailored to benefit the student. Seen in this way, tasks that illuminate student misconceptions are crucial to the process of benchmarking growth in student understanding (Borasi, 1996).
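As a purely illustrative rendering of this run-over-rise probe (the sketch below is not part of the New Standards package, and the sample points are invented), note that the two candidate definitions are reciprocals of each other:

def rise_over_run(p, q):
    """Standard slope of the line through points p and q."""
    (x1, y1), (x2, y2) = p, q
    return (y2 - y1) / (x2 - x1)   # breaks (division by zero) for vertical lines

def run_over_rise(p, q):
    """The task's hypothetical redefinition."""
    (x1, y1), (x2, y2) = p, q
    return (x2 - x1) / (y2 - y1)   # breaks instead for horizontal lines

p, q = (0, 0), (2, 6)              # a fairly steep line
print(rise_over_run(p, q))         # 3.0: a large value signals a steep line
print(run_over_rise(p, q))         # 0.33...: the same line now looks "shallow"

A student with a robust grasp of slope can articulate exactly this reciprocal relationship and its consequences; a student who has merely memorized "rise over run" typically cannot.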
Assessment practice makes perfect: developing tenacity

When teachers regularly administer quality non-routine tasks to their students and provide feedback to students about their progress, it becomes possible to distinguish those aspects of student performance that are more resistant to change from those that are less so. One aspect of student performance where there are real opportunities to foster improved learning behaviors is tenacity: students' readiness to stay with non-routine problems. If students can be led to recognize and accept that it often takes time and effort to know what to do when they look at a task, they are less likely to give up prematurely. Teachers can foster this recognition and acceptance by giving directions such as the following each time their students are asked to work on a non-routine assessment task:

This task is designed to assess how well you can solve non-routine problems. You will not have learned how to solve this problem in class, but you will have learned the mathematics that you need to solve it. Remember that when you look at this task for the first time you will probably not know what to do. This task is designed to see what you do when you don't know immediately what to do. So don't give up immediately: read the question again and again, and try to say in your own words what you are being asked to do.

Teachers report that some variation on this theme was important in focusing student attention on the task, reducing student frustration, and increasing student tenacity.

Assessment practice makes perfect: developing communication

Communication is one aspect of student performance that is frequently difficult to develop. It is common to see a group of students making substantial inroads into a challenging task, with classroom discourse characterized by focused mathematical discussion and quality thought. It is disappointing to read the students' responses later, only to find that little of their engaging work has been committed to paper. Even when the work is recorded, it is often difficult to see complete chains of thought. To encourage students to communicate more effectively, and so earn the credit their work suggests they deserve, we recommend that students be provided with the following opportunities:

- to score other students' responses to tasks, trying to follow the line of reasoning and providing feedback to the writer;
- to be given true mathematical statements and asked to explain why they are true;
- to represent ideas using more than one form of mathematical representation;
- to represent mathematical ideas using their own words, and to practice writing down the main tenets of these ideas.

When these strategies are used, students gain a better understanding of how to communicate their efforts. For example, when students scored other students' work, they were provided with a model of different levels of communication and were able to use this model to evaluate the effectiveness of responses. When students were asked to say why a given statement was true, they were relieved of the manipulative challenge of the task and could concentrate entirely on communicating their understanding.

What can be done: some recommendations

This section presents recommendations for those who are interested in using assessment to enhance instruction, including several advanced by Black and Wiliam (1998), who make the centrally important point that learning is driven by what teachers and students do in classrooms. Black and Wiliam draw upon a great number of research studies to argue persuasively that to enhance learning, specific attention must be paid to formative assessment, an aspect of teaching that they posit as indivisible from effective teaching, and indeed as its heart:

We use the general term assessment to refer to all those activities undertaken by teachers—and by their students in assessing themselves—that provide information to be used as feedback to modify the teaching and learning activities. Such assessment becomes formative assessment when the evidence is actually used to adapt the teaching to meet student needs [italics in original]. (Black & Wiliam, p. 140)

Nonetheless, for formative assessment to be effective, it cannot simply be bolted on to existing practice. Instead, there is a need for a radical re-conceptualization of what teachers and students do in the classroom, and teachers will need a great deal of support as they attempt to rethink and restructure their classroom practice. Here are some key recommendations for enhancing instruction and learning:

- Provide teachers with a rich and varied supply of worthwhile assessment tasks that can be embedded in classroom instruction and that will provide students with opportunities to perform.
- Encourage schools to move toward the use of high-quality, end-of-course assessments that are standardized across an entire school, district, or state. This will help teachers appreciate the importance of teaching to a set of publicly agreed upon and challenging standards.
- Encourage teachers, parents, and students to de-emphasize grades and emphasize feedback to students.
- Provide professional development that will help teachers give their students useful and constructive feedback that can improve learning, rather than compare or rank students.
- Structure professional development to enable teachers to recognize and appreciate student growth. All students can learn to complete challenging mathematics tasks; student work that demonstrates growth in tenacity, communication, and procedural, conceptual, and strategic knowledge should be generated and shared with teachers in the school, district, or state.
- Work with teachers to develop a view of the student as an active rather than a passive learner.
- Provide professional development that will enable teachers to incorporate student self-assessment as a useful tool for learning.
- Provide teachers with tasks for class and homework that are aligned with standards or learning expectations.
- Demonstrate that all students can learn mathematics (either by using videos or by using student work that shows growth). This is important in encouraging teachers to regard students as having potential to be tapped rather than innate inability.
- Encourage students and teachers to become willing participants in a diagnostic approach to learning, where errors and misconceptions are exposed and resolved rather than left as unacknowledged and invisible obstacles to learning.
- Create approaches to learning in which students are given time to communicate, to explore, to receive feedback on, and to re-orient their evolving understanding. In such classrooms, students can work for mathematical understanding rather than only for coverage.
- Develop approaches to curriculum adoption that are coherent within and across grades, and enable teachers to use a curriculum that encourages a more integrated and connected approach to learning mathematics. Many of the curricula developed with funding from the National Science Foundation are excellent resources.
- Encourage teachers to avoid textbooks that take a superficial approach to mathematical connections.
- Consider organizing teaching in a way that enables teachers to teach across an entire grade span. For example, a teacher might continue to teach the same cohort of sixth-grade students through grades seven and eight. This would provide continuity for students and enable teachers to develop a longitudinal view of the larger curriculum.
{"url":"http://www.nap.edu/openbook.php?record_id=9635&page=55","timestamp":"2014-04-16T05:06:17Z","content_type":null,"content_length":"75054","record_id":"<urn:uuid:c484fe8e-0e90-4b02-a846-3e5899d858fb>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00651-ip-10-147-4-33.ec2.internal.warc.gz"}
Princeton University Math Club

Geometry is a branch of mathematics that studies the properties of space. This includes the usual three-dimensional space of ordinary experience—suitably formalized, of course—but it also includes many more exotic spaces. You might have heard of the Moebius strip or the Klein bottle, for example. These are both examples of spaces with interesting geometric properties, and they are by no means the only ones. Going beyond these types of spaces, which resemble ordinary space on a small scale, geometry also studies a range of other types of spaces, varying from spaces that share the small-scale structure of the complex plane to spaces defined purely in algebraic terms. This variety of spaces can be roughly divided into those studied by differential geometry and those studied by algebraic geometry.

Differential geometry is a part of geometry that studies spaces, called "differential manifolds," where concepts like the derivative make sense. Differential manifolds locally resemble ordinary space, but their overall properties can be very different. Think of the surface of a donut: on a small scale, it looks like a slightly bent piece of a plane, but globally, it is nothing like a plane. Besides being bounded, it also has the unusual property that a string can be rolled up on it in a way that does not allow it to be unraveled. Differential geometry is a wide field that borrows techniques from analysis, topology, and algebra. It also has important connections to physics: Einstein's general theory of relativity is entirely built upon it, to name only one example.

Algebraic geometry is a complement to differential geometry. It's hard to convey in just a few words what the subject is all about. One way to think about it is as follows. A line, or a circle, or an ellipse, are all certainly examples of geometric structures. Now these can be thought of intrinsically, the way differential geometry might consider them, or they can be thought of as subsets of a larger space: the plane. Moreover, they are subsets with the very special property of being describable, using Cartesian coordinates, as the sets of solutions to collections of polynomial equations. Such sets are called "algebraic varieties," and they can be studied not only in the setting of real-valued coordinates, but with coordinates that are complex numbers or, really, take values in any field. This is the classical face of algebraic geometry, and it is very likely to be your first introduction to the area. If you go further in it, you will be brought over to the abstract, modern point of view, which gives a way to define the geometries of algebraic varieties without reference to any outside space, or any polynomial equations. The vehicle for doing so is the notorious and unjustly vilified "scheme." Algebraic geometry has connections just as far-ranging as those of its differential cousin. It's particularly important as a field in its own right and in algebraic number theory, but it has found uses in theoretical physics and even biology, as well.
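As a minimal illustration (the example is generic, not taken from the guide): the unit circle is the algebraic variety cut out of the plane by a single polynomial equation,

\[ V(x^2 + y^2 - 1) = \{ (x, y) \in k^2 : x^2 + y^2 - 1 = 0 \}, \]

and the same equation can be studied with coordinates in the reals, the complex numbers, or indeed any field k, exactly as described above.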
Courses

MAT 355: Introduction to Differential Geometry

This course is taught by Professor Yang, and its topics are known to vary from year to year, especially those covered toward the end of the semester. Prof. Yang covered, with some level of detail, the first four (out of five) chapters of do Carmo's Differential Geometry. In particular, there was a heavy emphasis on the Gauß map (involving discussion of the first and second fundamental forms) from chapter three, and on the intrinsic geometry of surfaces in R^3 covered in chapter four. Aside from do Carmo's book, there was reliance on other sources to cover material, like the discussion of minimal surfaces and the material of the last couple of weeks. The last two weeks had a heavy emphasis on the Laplacian on surfaces and the uniformization of surfaces.

MAT 416: Introduction to Algebraic Geometry (Kollar)

This is a course on varieties, which are sets of solutions to polynomial equations. Commutative algebra is a prerequisite, either in the form of MAT 447 or by reading Atiyah and MacDonald's classic text and doing lots of exercises to get comfortable with the tools used in algebraic geometry. The course follows Shafarevich's text and focuses on aspects of varieties, their local and global geometry, embeddings into projective space, and the specific case of curves, which is extremely well understood. The final third of the course consisted of student presentations about various special topics like elliptic curves, surfaces, resolutions of singularities, algebraic groups, and others. This course is fast-paced and challenging, but worth the effort. Homeworks tended to vary in length, frequency, and difficulty.

MAT 416: Introduction to Algebraic Geometry (Katz)

This is a course on sheaves, schemes, and the cohomology of coherent sheaves on projective varieties. It follows the well-known text by Hartshorne. Commutative algebra is an absolute prerequisite, and an introduction to varieties is highly recommended, since schemes and sheaves are very abstract objects and having a good stock of examples in hand is vital to understanding the material well. One unusual feature of this course, which adds to its difficulty, is that the bulk of the material on schemes and sheaves is relegated to the readings and homework while the instructor lectures on the cohomology of projective varieties. Thus in some sense it is two courses rolled into one, and one would be wise to treat it as such. This is arguably the most challenging course offered by the mathematics department, due to the constantly steep learning curve and the exceptionally heavy workload. A final expository project makes for a fun finish to what certainly will be a grueling semester.

MAT 455: Advanced Topics in Geometry – Lie Theory

The goal of this course is to study the structure theory of Lie groups and Lie algebras. These objects are ubiquitous in mathematics and are studied using a variety of algebraic, analytic, and geometric techniques. This course covers their geometry, structure theory, and classification, and touches upon their representation theory. Some background in differential geometry is essential, mostly material from the first few weeks of MAT 355. Alternatively, reading through the first few chapters of Spivak's book on differential geometry should suffice. A variety of textbooks are useful — in particular, Adams, Humphreys, and Bump.

Max Rabinovich '13 (mrabinov[at]princeton[dot]edu)
{"url":"http://blogs.princeton.edu/mathclub/guide/courses/geometry/","timestamp":"2014-04-20T03:42:24Z","content_type":null,"content_length":"31288","record_id":"<urn:uuid:7f7b83ab-c491-4004-ad91-64b14508f28a>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00629-ip-10-147-4-33.ec2.internal.warc.gz"}
ICHM-Sponsored Satellite Session

Report on the ICHM-Sponsored Satellite Session on the History of Mathematics of the Fourth International Congress on Representation Theory (ICRT IV), Held in Lhasa, Tibet, Thursday and Friday, 19-20 July, 2007

by Karen V. H. Parshall

This session of nine forty-five-minute talks was co-organized by Joseph W. Dauben (City University of New York), XU Yibao (Borough of Manhattan Community College), and Karen V. H. Parshall (University of Virginia) in conjunction with Da Luosang Langjie (Tibet University) and WANG Jian-pan (East China Normal University). Wang was the principal organizer of ICRT IV and originally suggested the possibility of an associated satellite session in the history of mathematics.

Photo: From left to right: Da Luosang Langjie, David Zitarelli, HAN Qi, Karen Parshall, Tom Archibald, Joe Dauben, Elena Ausejo, Jose Antonio Cervera, and XU Yibao.

Speakers, Titles, and Abstracts (in alphabetical order by speaker)

Chinese Mathematicians and the International Research Community: The Case of HU Mingfu and Integral Equations
Tom Archibald, Department of Mathematics, Simon Fraser University, Burnaby, BC V5A 1S6, Canada

The late nineteenth century saw a revival of interest in European mathematics in Chinese educational circles. The overall background to this development has been described by Dauben; Dauben and Zhang give additional information on the question of exchanges between the US and China in the early twentieth century. In this talk, I discuss the early career of Hu Mingfu (1891-1927), who studied at Cornell supported by the Boxer Indemnity. Hu went on to become the first Chinese national to complete a Ph.D. in mathematics, which he did at Harvard in 1917. His thesis, supervised by M. Bôcher, was on integral equations, a subject that was enjoying explosive growth at that time. If time permits, I will compare Hu's career to those of some other early Chinese students abroad.

Commercial Arithmetic in the Spanish Renaissance
Elena Ausejo, Facultad de Ciencias (Matemáticas), University of Zaragoza, E-50009 Zaragoza, Spain

Juan de Yciar (1522-1590), the most important calligrapher in the Spanish Renaissance, is also the author of works such as Libro Subtilissimo, por el qual se enseña a escreuir y contar perfectamente el qual lleua el mesmo orden que lleua vn maestro con su discipulo Hecho y experimentado por Iuan de Yciar Vizcayno (Book That Teaches How To Write and Count in the Same Order as a Teacher Does with His Student; Made and Tested by Iuan de Yciar Vizcayno), and Arte Breue y Prouechoso de cuenta Castellana y Arithmetica, donde se muestran las cinco reglas de guarismo por la cuenta castellana, y reglas de memoria. Y agora nueuame[n]te en esta postrera impression se han añadido vnas cuentas muy graciosas y prouechosas, sacadas del libro de Fray Iuan de Ortega: y mas al cabo va añadida vna cuenta abreuiada de marauedis (Brief and Useful Art of Castilian Counting and Arithmetic, Where the Five Rules of Figure on Castilian Reckoning Are Shown, and Memory Rules. And Now in This Last Edition Very Amusing and Useful Calculations, from Fray Iuan de Ortega's Book, Have Been Added, and at the End an Abridged Count of Maravedís). Both of these works, which had been printed in Zaragoza by the mid sixteenth century, testify to Juan de Yciar's interest in teaching counting together with writing.
One of his rarer books, entitled Libro intitulado aritmética práctica muy provechoso para toda persona que quisiere ejercitarse en aprender a contar (Book Entitled Practical Arithmetic Very Useful for Everybody Willing To Be Trained in Reckoning Learning) (1549), is purely mathematical. Until recently, only a single copy of the latter work had been thought to survive, and that in the British Library. This talk will examine the contents of a second copy of the book that has recently been found in Spain.

The Chou Suan by Giacomo Rho: An Example of Mathematical Adaptation in China
Jose A. Cervera, ITESM, Campus Monterrey, Dep. Estudios Humanisticos, 64849 Monterrey, Mexico

The Jesuits Giacomo Rho (Luo Yagu, 1592-1638) and Adam Schall von Bell (Tang Ruowang, 1592-1666) translated a huge number of European astronomical and mathematical treatises into Chinese between 1630 and 1635. The result (137 juan) was given the name Chongzhen Lishu (Calendar Compendium of the Chongzhen Era). It was reedited in 1645 with the new name Xiyang Xinfa Lishu (Calendar Compendium According to the Western New Methods). One of the mathematical treatises included in the Xiyang Xinfa Lishu was the Chou Suan (Calculus with Rods), written by Rho in 1628. The Chou Suan is the adaptation of John Napier's Rabdology (1617). There are several differences between the Rabdology and the Chou Suan. The latter gives more details on every use of the rods. Most important, the examples in the Chou Suan are practical ones. In this way, Rho shows his inculturation, following the patterns of traditional Chinese books on mathematics. In this talk, I will give a general survey of Giacomo Rho's Chou Suan, and I will compare this book with Napier's Rabdology as a typical example of the adaptation of European mathematics in China.

Zhu Shijie and the Jade Mirror of the Four Unknowns
Joseph W. Dauben, Herbert H. Lehman College and Ph.D. Program in History, The City University of New York, 504 West 110th St., FC, New York, NY 10025, USA (jdauben@gc.cuny.edu; jdauben@att.net)

The Yuan dynasty mathematician Zhu Shijie published his Jade Mirror of the Four Unknowns (also known as The Precious Mirror of the Four Elements) in Yangzhou in 1303. Widely regarded as the most significant work of traditional Chinese mathematics, the Jade Mirror offers a systematic method for solving simultaneous equations in as many as four unknowns. This work has been translated into French with a comprehensive commentary in the doctoral thesis of Jock Hoe (1977). Recently, an English translation made by Ch'en Tsai Hsin (1879-1945) in 1925, found in the library of the Institute for the History of Natural Science in Beijing, has been edited and published by Guo Shuchun and Guo Jinhai (2006). This talk will discuss the meaning of the title of the treatise by Zhu Shijie, and explain the significance of the generalization of the "celestial element" method that he used to provide a very powerful, general method for the solution of simultaneous equations in the context of traditional Chinese mathematics.

Antoine Thomas (1644-1709) and the First Introduction of Western Algebra into China
Qi Han, Institute for the History of Natural Science, Chinese Academy of Sciences, Beijing 100010, China

Antoine Thomas (1644-1709), a Belgian Jesuit from Namur, was a very important figure in the history of science in the seventeenth and eighteenth centuries.
After he arrived in Beijing in 1685, he soon served as vice-president of the Imperial Board of Astronomy and as an assistant of his confrère, the Belgian Jesuit F. Verbiest (1623-1688). After 1688, he served as an imperial tutor of science to the Kangxi Emperor along with T. Pereira (1645-1708) and the French Jesuits J. Bouvet (1656-1709) and J.-F. Gerbillon (1654-1707). The most important of his scientific work in China was his meridian-line work of 1702: through astronomical observations, he established the relation between the li and the degree of terrestrial latitude. Research on Antoine Thomas's scientific activities has so far used only European sources. In this talk, I would like to analyze his scientific activities on the basis of both Chinese and European sources, and to identify him as the author of a book on algebra, Jiegenfang Suanfa (Aspects of the Mathematical Method of Jiegenfang), and of its abridged edition, Jiegenfang Suanfa Jieyao (Essential Aspects of the Mathematical Method of Jiegenfang). I would also like to analyse Jiegenfang Bili, which was based on the former, and show its influence on Qing mathematicians in the eighteenth and nineteenth centuries.

Duchung Zurtsi: A Secular and Official Tibetan Mathematical Textbook
Da Luosang Langjie, Tibet University, Lhasa, Tibet 850000, China

Duchung Zurtsi is a mathematical textbook written by Duchungpa A-nanda, a great scholar and a controller in the period of the 5th Dalai Lama, Ngagwang Lobsang Gyatso (1617-1682). The book is the earliest extant mathematical textbook in Tibetan. It mainly deals with rod-arithmetic in conversions between different volume-measuring tools, which involve complicated operations with fractions. This book provides invaluable material for studying the development of mathematics and of mathematics education in Tibet. This talk will explore the contents and main features of the book, as well as the important methods contained therein. It will also analyze the impact of the book on the Tibetan economy in the seventeenth century and thereafter, and raise several issues for further study. This is joint work with Xuanji Hua (Fudan University, Shanghai, China).

Photo: XU Yibao interpreting for Da Luosang Langjie.

Photo: XU Yibao, Da Luosang Langjie, and the audience during Da Luosang's talk.

4000 Years of Algebra: An Historical Tour from BM 13901 to Moderne Algebra
Karen Hunger Parshall, Departments of History and Mathematics, University of Virginia, Charlottesville, VA 22904-4137, USA

How is it that the high school analysis of polynomial equations and the modern algebra of the research mathematician—so seemingly different in their objectives, in their tools, and in their philosophical outlook—are both called algebra? Are they even related? The fact is that they are. This talk will sketch the long and complicated story of how they are related via a 4000-year-long history that stretches from Mesopotamia around 1800 B.C.E.—when mathematicians recorded an algorithm for solving quadratic equations on clay tablets like BM 13901, held today in the British Museum—to the publication in 1930 of Bartel van der Waerden's classic text, Moderne Algebra.

Chinese Gougu Theory Versus Euclidean Geometry: Views of a Seventeenth-Century Chinese Mathematician
Yibao Xu, Borough of Manhattan Community College, The City University of New York, New York, NY 10007, USA

Mathematics in China now, as this Fourth International Congress on Representation Theory indicates, has become an active part of world mathematics.
About 100 years ago, however, Chinese mathematics remained distant from the mainstream of Western mathematics. The differential and integral calculus, algebra, and probability theory were only introduced to China in the second half of the nineteenth century, and the question of how traditional Chinese mathematics interacted with Western mathematics is a subject that remains to be explored in detail. This year, however, marks the four hundredth anniversary of the publication of the Chinese translation of the first six books of Euclid's Elements by Matteo Ricci, known as Li Madou in China, and Xu Guangqi. This talk will discuss the first actual interaction between Western and Chinese mathematics in the work of the great Chinese mathematician Mei Wending (1633-1721). How he merged traditional Chinese gougu theory with the newly introduced Greek geometry will be examined in the context of a manuscript copy of the Chinese translation of the Elements preserved in the David E. Smith Archives in Columbia University's Rare Books and Manuscripts Library. Whether this manuscript is the one actually used by Mei himself, and if so, how the marginal annotations in the manuscript may be related to his published works, will also be considered.

Miss Mullikin and the Internationalization of Topology
David E. Zitarelli, Department of Mathematics, Temple University, Philadelphia, PA 19122, USA

The necessity for a formal definition of a connected set arose at the beginning of the twentieth century, when three mathematicians—one each from the United States, Hungary, and Germany—arrived at the same expression. However, it was another pair, in a fourth country, Poland, who in 1921 presented the first investigation devoted entirely to connected sets. At the same time, Anna Mullikin (1893-1975) was conducting a research program on such sets entirely unaware of the others. In this talk, I introduce this high-school teacher and present her work in the context of the development of topology in the 1920s. I also describe how one of her theorems helped catalyze an era of international cooperation and competition between schools of topology in Poland and the U.S. As an added bonus, I will demonstrate how Mullikin's nautilus can be used to illustrate limits geometrically in the plane to a class in advanced calculus.

Photo: The audience listening to Joe Dauben's talk.
{"url":"http://www.unizar.es/ichm/reports/Lhasa2007.htm","timestamp":"2014-04-20T06:07:30Z","content_type":null,"content_length":"17346","record_id":"<urn:uuid:b70bb4f8-4a84-4c5b-96f0-7ab66a016394>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00225-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: st: problem using -clock- with military time

From: Steve Nakoneshny <scnakone@ucalgary.ca>
To: statalist@hsphsun2.harvard.edu
Subject: Re: st: problem using -clock- with military time
Date: Sat, 2 Jun 2012 13:28:59 -0600

Thanks for the insight, Nick. After my initial post, I started to explore -subinstr-, but then ran into a (temporary) roadblock with how to make it selectively replace only those 3-digit times and not all values. It's always nice to see multiple potential solutions, especially ones as well explained as this one.

Sent via carrier pigeon

On Jun 2, 2012, at 2:31 AM, "Nick Cox" <njcoxstata@gmail.com> wrote:

> There are many other ways of tackling this problem. Here are a few
> more comments. Others should be able to suggest yet more.
>
> The question was posed as one of inserting a "0" after the space
> whenever the second part of the date is too short, i.e. three digits
> not four. That means we should focus on identifying the space and
> inserting the "0", which in Stata just means changing " " to " 0", as
> there isn't an "insert in string" function. (There isn't a "delete
> from string" function, either: both can be just special cases of
> -subinstr()-.)
>
> The assumption is that there should be precisely one space.
>
> replace Arrive = trim(itrim(Arrive))
>
> does our best to make that so. -trim()- removes any leading or
> trailing spaces, while -itrim()- reduces all multiple internal spaces
> to single spaces. That -itrim()- didn't appear in the previous
> posting. I feel comfortable with making any such changes as they
> can't affect the meaning of a date string. Those concerned with
> absolute data integrity should work with a copy of the original
> variable.
>
> We should check that there is precisely one space. After what we have
> just done, and in any case, that would mean that there are precisely
> two words. In Stata, words are whatever are separated by spaces
> (except that " " and `" "' bind tighter than spaces separate), so
> "frog toad" are two words, and so are "123 456" and "2011/04/06 1630".
> Stata has a -wordcount()- function, so we can go
>
> assert wordcount(Arrived) == 2
>
> which asserts that that is so, and you will get an error message if
> it isn't. (The principle is, very much, "No news is good news", but
> if there is bad news, there are fixes needed.) Many Stata beginners
> would do here something like this
>
> gen nwords = wordcount(Arrived)
> tab nwords
>
> but for problems like this you don't need a new variable and you can
> insist that Stata does the checking. (Conversely, there are more
> open-ended problems in which looking at the patterns shown by the
> table is exactly the right thing to do.) As there can be only two
> words,
>
> replace Arrived = subinstr(Arrived, " ", " 0", 1) if length(word(Arrived, 2)) == 3
>
> is an alternative to what was posted previously.
>
> Another way to think about it is that it appears that there are two
> kinds of date, long and short, so we could work with
> -length(Arrived)-, which should be 15 or 14. For problems like this,
> I tend to copy and paste examples and feed them to -display-, as in
>
> . di length("2011/04/06 1630")
> 15
>
> because Stata is better at counting than I am.
> So -if length(Arrived) == 14- identifies short dates that need
> fixing.
>
> Nick
>
> On Sat, Jun 2, 2012 at 12:00 AM, Nick Cox <njcoxstata@gmail.com> wrote:
>
>> clear
>> input str15 ArrivedOnPCU
>> "2011/04/06 1630"
>> "2010/07/18 700"
>> "2011/09/06 400"
>> "2011/06/23 130"
>> end
>> replace Arrived = trim(Arrived)
>> replace Arrived = subinstr(Arrived, " ", " 0", 1) if length(word(Arrived, -1)) == 3
>> list
>>
>> This example boosts my prejudice that few parts of Stata are so
>> unfairly overlooked as the basic string functions. See also
>>
>> Cox, N.J. 2011. Speaking Stata: Fun and fluency with functions. The
>> Stata Journal 11(3): 460-471.
>>
>> Abstract. Functions are the unsung heroes of Stata. This column is a
>> tour of functions that might easily be missed or underestimated,
>> with a potpourri of tips, tricks, and examples for a wide range of
>> basic problems.
>>
>> On Fri, Jun 1, 2012 at 11:39 PM, Steve Nakoneshny <scnakone@ucalgary.ca> wrote:
>>
>>> I have been provided with a dataset containing date and time
>>> variables in string format. I wish to convert these to SIF type
>>> using the -clock- function; however, I have run into a small
>>> problem given that the times are formatted as military time (sadly
>>> without the leading zero). The code -gen double pcutime =
>>> clock(ArrivedOnPCU, "YMDhm")- executes imperfectly. After
>>> formatting pcutime to %tc, I can see that some of the times
>>> translate incorrectly:
>>>
>>> ArrivedOnPCU       pcutime
>>> 2011/04/06 1630    06apr2011 16:30:00
>>> 2010/07/18 700     .
>>> 2011/09/06 400     .
>>> 2011/06/23 130     23jun2011 13:00:00
>>>
>>> If I manually edit the second obs to read as "2010/07/18 0700" and
>>> -replace pcutime = clock(ArrivedOnPCU, "YMDhm")-, pcutime displays
>>> 18jul2010 07:00:00. It is pretty obvious to me that I'm choosing
>>> the wrong mask in the clock function, failing to account both for
>>> the missing values in pcutime and for the incorrect times (i.e.
>>> 0130 translating to 13:00). I've tried various permutations of
>>> hm/HM/HHMM/hhmm to try to adjust, but to no avail. Can anybody
>>> suggest a better mask for me to use? Or perhaps some relatively
>>> simple means of inserting a leading "0" into the time portion of
>>> the string prior to using -clock-?
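For readers outside Stata, the same fix can be sketched in Python (a hypothetical snippet, not part of the thread): left-pad the time word to four digits, then parse with an explicit mask.

from datetime import datetime

# Sample strings mirroring the ones posted in the thread.
raw = ["2011/04/06 1630", "2010/07/18 700", "2011/09/06 400", "2011/06/23 130"]

for s in raw:
    date_part, time_part = s.split()             # assumes exactly one space
    fixed = f"{date_part} {time_part.zfill(4)}"  # insert the leading zero(s)
    print(datetime.strptime(fixed, "%Y/%m/%d %H%M"))
    # "130" becomes "0130" and parses as 01:30, not 13:00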
{"url":"http://www.stata.com/statalist/archive/2012-06/msg00091.html","timestamp":"2014-04-20T19:16:52Z","content_type":null,"content_length":"14363","record_id":"<urn:uuid:29b83b41-c68f-4066-a9a0-67f36099ff54>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00151-ip-10-147-4-33.ec2.internal.warc.gz"}
San Francisco Algebra Tutor

Find a San Francisco Algebra Tutor

...I have successfully tutored many students in prealgebra throughout my years of tutoring. Prealgebra is a very foundational subject, so I think it is important to help my students gain a thorough understanding of the logic behind the math. I have completed many years of math education, through calculus.
13 Subjects: including algebra 1, chemistry, biology, anatomy

...Work in the social services sector led me to pursue interests in other areas, in particular health and wellness and how a child's personal surroundings impact his or her performance at school. I became so fascinated with these subjects that I began taking night courses in various science discipli...
57 Subjects: including algebra 2, algebra 1, chemistry, English

I have been an academic tutor through AmeriCorps for 1 year, serving students of all levels. I have worked with students who are what the schools classify as Far Below Basic and Below Basic proficiency in Math & English, and helped raise their grades up one level. In some cases I have raised the grades of students from D's in math to B's.
6 Subjects: including algebra 1, elementary (k-6th), study skills, proofreading

...Then I became a scientific programmer developing algorithms and software tools for the analysis of genomic data. Currently I'm working in a biotech company providing blood tests for heart-transplant patients. The above short description of my background and experience shows that I'm very competent in the acquisition of knowledge.
22 Subjects: including algebra 1, chemistry, calculus, French

...I've been focused on environmental science for a number of years now. It is one of the most exciting disciplines and will almost certainly continue to become more so in the coming years. If you need brushing up on basic concepts in science, you can count on me to lead the way with you.
19 Subjects: including algebra 1, reading, writing, English
{"url":"http://www.purplemath.com/San_Francisco_Algebra_tutors.php","timestamp":"2014-04-16T04:30:56Z","content_type":null,"content_length":"24227","record_id":"<urn:uuid:2f2fe7d2-90e9-43f8-ace1-44a9c90ff41a>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00100-ip-10-147-4-33.ec2.internal.warc.gz"}
Here's the question you clicked on:

In the figure below, a charge of 4 uC flows from the 6 V battery to plate P of the 1 uF capacitor.
a) What charge flows from Q to R?
b) What charge flows from S to the battery?
c) What is the p.d. across C1 and C2?
d) What is the capacitance of C2?
e) What single capacitor is equivalent to C1 and C2 in series, and what charge would it store?

Best response (Algebraic!):

a. 4 uC
b. 4 uC
c. 6 V
d. \[Q = C_{eq}V\], with \[\frac{1}{C_{eq}} = \frac{1}{C_{1}} + \frac{1}{C_{2}}\], so \[Q = \left(\frac{1}{C_{1}} + \frac{1}{C_{2}}\right)^{-1} V\]

make sense?

use conservation of charge

Asker: @Algebraic! please can you explain how you derived a, b, and c?

Algebraic!: If +4 uC is on the plate at P, then -4 uC will be on the plate at Q, meaning +4 uC flowed to the plate at R, meaning that there's -4 uC on the plate at S, meaning +4 uC flowed to the terminal of the battery. For c) I did the total voltage drop, which must be 6 V; you could also do the drop per capacitor and then find the capacitance of C2 from that. I didn't do it that way; I combined the solution to d and e into one part instead.
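A quick numeric check of the answers above (assuming, since the figure itself is not shown, that C1 = 1 uF and C2 sit in series across the 6 V battery):

# Series capacitors carry the same charge, and their voltage drops sum
# to the battery voltage; the rest follows from Q = CV.
Q  = 4e-6    # C, the 4 uC that flowed onto plate P
V  = 6.0     # V, the battery
C1 = 1e-6    # F, the stated 1 uF capacitor

V1  = Q / C1   # 4.0 V across C1
V2  = V - V1   # 2.0 V across C2
C2  = Q / V2   # 2e-06 F, i.e. 2 uF
Ceq = Q / V    # ~6.67e-07 F, i.e. 2/3 uF
print(V1, V2, C2, Ceq)

Consistent with part (e), the single equivalent capacitor is 2/3 uF, and it stores the same 4 uC as each capacitor in the series chain.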
{"url":"http://openstudy.com/updates/5069fb67e4b0e78f215d7126","timestamp":"2014-04-19T15:23:19Z","content_type":null,"content_length":"53076","record_id":"<urn:uuid:d55d71b9-1fa6-4ea8-b231-cf1bba1d90e7>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00015-ip-10-147-4-33.ec2.internal.warc.gz"}
Queens, NY Algebra Tutor

Find a Queens, NY Algebra Tutor

...I prefer to talk about Physics, Chemistry, and Calculus as they are thoroughly interrelated; however, I often must strengthen the student's Algebra, Geometry, Trigonometry, or Precalculus in order to get the most out of Physics or Chemistry. I am more of an academic coach than a tutor. I am always available by phone or email for any questions.
9 Subjects: including algebra 2, chemistry, physics, calculus

...As long as you're ready to study smart with my guidance, the sky is the limit. With prior experience teaching introductory and upper-level basic science courses, I firmly believe you will be adequately prepared for your science classes. At Cornell, I was a teaching assistant for Comparative Physiology, where we were trained in learning strategies and principles of pedagogy.
17 Subjects: including algebra 2, algebra 1, chemistry, MCAT

...If you are interested in one-on-one sessions following this intensive curriculum, please contact me for pricing details. I am also happy to send out a syllabus if requested. Contact me today for more information! When it comes to SAT math, I know that it is important that each student learn an approach that will suit his or her learning style.
37 Subjects: including algebra 2, algebra 1, reading, English

...For the last ten years I have worked at one of Long Island's distinguished high schools. I have instructed students in all content-area subjects with an emphasis on mathematics. I have prepared students for the Regents exams and prepared them to be successful.
14 Subjects: including algebra 1, algebra 2, Spanish, reading

...Although I'd like to focus professionally on languages, I graduated in Mechanical Engineering from the Public University of Navarre (Spain), and upon that background I've built my experience as a Math and Physics tutor too. Getting back to languages, I offer a wide range of methods, adapting the l...
10 Subjects: including algebra 1, algebra 2, French, Spanish
{"url":"http://www.purplemath.com/Queens_NY_Algebra_tutors.php","timestamp":"2014-04-20T06:29:37Z","content_type":null,"content_length":"24089","record_id":"<urn:uuid:4176edf9-782d-40cf-b4d7-e012ee62f7c4>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00476-ip-10-147-4-33.ec2.internal.warc.gz"}
Choptuik collapse and Barak Kol

Barak Kol from Jerusalem - who is visiting Cambridge in August - was explaining to me various things related to his work in recent years. For example, he and E. Sorkin have shown that the behavior of black strings and black holes - and especially the smoothness of the transition between them - seems to depend on the dimension in such a way that "D=10" or "D=11" is one of the "critical" spacetime dimensionalities, namely one above which the transition becomes smooth. Let me not go into details - but this "D=10" or "D=11" bound is a property of purely classical GR. No doubt, one of the obvious questions is whether there is any rational explanation why this "D=10" or "D=11" is simultaneously the critical dimension of string theory or M-theory; it can be just an accident, of course. Especially those of us who consider M-theory in "D=11" to be equally fundamental as the 10-dimensional supersymmetric vacua may ask "Why did we not get the other number from the set {10,11}, for example?"

Barak's current work is related to self-similar solutions in gravity. Let us focus on the Choptuik collapse. Imagine a spherically symmetric distribution of mass - for example, a spherical wave of a scalar field - that is going to produce a black hole. Well, it will only end up as a black hole if the initial parameters are properly chosen. Let's choose a line in the space of initial conditions parameterized by a number we will call "P". If "P" is smaller than a certain number, which we will normalize to one, no black hole is formed. If "P" is larger than one, a black hole is inevitably the final state of the collapse.

It is not surprising that some interesting behavior occurs near "P=1". If "P" is slightly below one, the mass will bounce many times, but it will avoid the collapse into a black hole. At the critical value "P=1", the corresponding classical solution of GR will locally have the character of a self-similar fractal; the proper times of the individual bounces will form a geometric sequence. The self-similarity may be interpreted as a symmetry under a discrete subgroup of the Weyl symmetry of rescalings - and one may think about this picture in terms of a spontaneously broken conformal symmetry.

Let's define the total mass "M" of the black hole we create as a function of "P". For "P" smaller than one, by definition, "M(P)" must vanish. However, it starts to grow above "P=1", namely as

• "M" goes like "(P-1)^{gamma}"

where "gamma" is an exponent, perhaps something like "0.301308" (the right value is different but the last four digits of my random value may be inspiring for Quantoken). Its value is known numerically and it is universal for a fixed spacetime dimensionality and for all spherically symmetric collapses of a scalar field. It is certainly a number that deserves an analytical derivation. There are other "critical exponents" in this game; the coefficient "exp(Delta)" determining the decrease of the proper distances in the fractal solution is another example. It is hard to hide that the behavior near the critical point "P=1" shares many features with the theory of phase transitions.
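To see how such an exponent is extracted in practice, here is a toy fit in Python (the data below are synthetic and purely illustrative; the commonly quoted value of "gamma" for a massless scalar field is about 0.37):

import numpy as np

# Fake supercritical data obeying M ~ (P-1)^gamma, then recover gamma
# from the slope of log M against log (P-1), as numerical studies do.
gamma_true = 0.37
P = 1 + np.logspace(-6, -1, 40)      # parameters just above criticality
M = 2.5 * (P - 1) ** gamma_true      # pretend output of a collapse code

gamma_fit, _ = np.polyfit(np.log(P - 1), np.log(M), 1)
print(gamma_fit)                     # ~0.37, the critical exponent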
Nevertheless, you know that there have been speculations that gravity could have an ultraviolet fixed point, a scale invariant theory valid at the sub-Planckian distances whose scale invariance is spontaneously broken at the Planck scale. I, for one, don't believe these things, especially because they seem to be inconsistent with everything we know about string theory, but we can't definitively rule out their existence at this moment, I think. Of course, the more one studies some fractal classical solutions of GR, the more one would like these super-short-distance features of the solutions to be physical. ;-) But one should resist the temptation: self-similar geometries don't seem to be a part of the "real physics" so far. It seems to me that discretely self-similar solutions may be more physical if they occur within a conformal field theory rather than gravity. But we will see whether Barak finds something interesting about the self-similar gravitational solutions. Good luck to him.

snail feedback (2):

reader gordon said... My mail was stopped short, so I post it again in a little more detail. The four-dimensional GR equations require a CY 6-fold (12 real dimensions) in order to be solved. This makes 12 dimensions special, as 4 dimensions are also special in topology. Apart from supersymmetry in d = 11, this is a mathematical relation that makes d = 12 special (and specifically relates to the speciality of d = 4 topology, which I find interesting). Some of this, although you do not want to read perhaps, is in my CY differential eqn work. Likewise, the proof of the Poincaré conjecture in 3 dimensions requires the topology and cohomology of an associated 3-fold (6 real dimensions), and might be formulated in terms of the latter.

reader gordon said... I realized today and yesterday, that the cohomology of the Einstein equations when written in terms of the Calabi-Yau metrics is probably enough to prove the Poincaré conjecture, and also its analogs in higher dimensions. The metrics are formulated in terms of geodesic flows in a Calabi-Yau space, and the deformations of the flows correspond to global deformations of manifolds in d>3 dimensions. The cohomology of the CY manifold corresponding to the Einstein eqns with w=\pm 1,0 is all that is required to find the obstructions to homotopies of manifolds. (Dimension 12 is probably special from the four-dimensional point of view.)
{"url":"http://motls.blogspot.com/2005/08/choptuik-collapse-and-barak-kol.html?m=0","timestamp":"2014-04-16T15:59:36Z","content_type":null,"content_length":"193584","record_id":"<urn:uuid:4b9cfd3f-1d07-45bb-bed4-791e6c989057>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00577-ip-10-147-4-33.ec2.internal.warc.gz"}
From Encyclopedia of Mathematics

The branch of mathematics in which one studies algebraic operations (cf. Algebraic operation).

Historical survey. The simplest algebraic operations — arithmetic operations on positive integers and positive rational numbers — can be encountered in the oldest mathematical texts, which indicates that the principal properties of these operations were known even in early antiquity. In particular, the Arithmetic of Diophantus (3rd century A.D.) had a major influence on the development of algebraic ideas and symbols. The term "Algebra" originates from the work Al jabr al-muqabala by Mohammed Al-Khwarizmi (9th century A.D.), which describes general methods for solving problems which can be reduced to algebraic equations of the first and second degree. Towards the end of the 15th century the cumbersome verbal descriptions of mathematical operations which had previously prevailed began to be replaced by the contemporary symbols "+" and "-", and subsequently symbols for powers, roots, and parentheses appeared. F. Viète, at the end of the 16th century, was the first to use the letters of the alphabet to denote the constants and the variables in a problem. Most of the present-day symbols of algebra were known as early as the mid-17th century, which marks the end of the "prehistory" of algebra. The development of algebra proper took place during the next three centuries, during which views as to the proper subject matter of this discipline kept radically changing.

In the 17th and 18th centuries "Algebra" was understood to mean the science of computations carried out on algebraic symbols — "identity" transformations of formulas consisting of letters, solving algebraic equations (cf. Algebraic equation), etc. — as distinct from arithmetic, which dealt with calculations performed on explicit numbers. It was assumed, however, that the symbols stood for actual numbers: integers or fractions. A brief table of the contents of one of the best textbooks of that time, L. Euler's Introduction to algebra, includes integers, ordinary and decimal fractions, roots, logarithms, algebraic equations of degrees one to four, progressions, additions, Newton's binomial and Diophantine equations. Thus, by the mid-18th century, algebra corresponded, more or less, to the "elementary" algebra of our own days.

The principal subjects dealt with by the algebra of the 18th and 19th centuries were polynomials. Historically, the first problem was the solution of algebraic equations in one unknown, i.e. equations of the type

$$a_0 x^n + a_1 x^{n-1} + \dots + a_{n-1} x + a_n = 0.$$

The purpose was to derive formulas expressing the roots of the equation in terms of its coefficients, by means of addition, multiplication, subtraction, division and extraction of roots ("solution by radicals"). Mathematicians were able to solve first- and second-degree equations even in the earliest times. Substantial advances were made in the 16th century by Italian mathematicians: a formula was found for solving third-degree equations (cf. Cardano formula) and fourth-degree equations (cf. Ferrari method). During the following three centuries fruitless efforts were made to find similar formulas for solving equations of higher degrees; in this connection, the problem of finding at least a "formula-free" proof of the existence of a complex root of an arbitrary algebraic equation with complex coefficients became of major interest. This theorem was first stated in the 17th century by A. Girard, but was rigorously proved by C.F.
Gauss only towards the end of the 18th century (cf. Algebra, fundamental theorem of). Finally, it was established by N.H. Abel in 1824 that equations of degree higher than four cannot, in general, be solved by radicals, and E. Galois in 1830 stated a general criterion for the solvability of algebraic equations by radicals (cf. Galois theory). Other problems were neglected at that time, and algebra was understood to mean the "analysis of equations", as noted by J. Serret in his course of higher algebra (1849). Studies on algebraic equations in one unknown were accompanied by studies on algebraic equations in several unknowns, in particular of systems of linear equations. The study of linear equations resulted in the introduction of the concepts of a matrix and a determinant. Matrices subsequently became the subject of an independent theory, the algebra of matrices, and their scope of application was extended beyond the solution of systems of linear equations.

From the mid-19th century onwards studies in algebra gradually moved away from the theory of equations towards the study of arbitrary algebraic operations. The first attempts at an axiomatic study of algebraic operations date back to the "theory of relations" of Euclid, but no progress was made in this direction, since a geometrical interpretation of even the simplest arithmetic operations — ratios of lengths or of areas — is impossible. Further progress only became possible as a result of gradual generalization and intensive study of the concept of a number, and of the appearance of arithmetic operations performed on objects entirely unlike any number. The first such examples were Gauss' "composition of binary quadratic forms", and P. Ruffini's and A.L. Cauchy's multiplication of permutations. The abstract concept of an algebraic operation appeared in the mid-19th century in the context of studies on complex numbers (cf. Complex number). There appeared G. Boole's algebra of logic, H. Grassmann's exterior algebra, W. Hamilton's quaternions (cf. Quaternion) and A. Cayley's matrix calculus, while C. Jordan published a major treatise on permutation groups. These studies prepared the way for the transition of algebra at the turn of the 19th century into its modern stage of development, which is characterized by the combination of previously separate algebraic ideas on a common axiomatic basis and by a considerable extension of the scope of its applications. The modern view of algebra, and of the general theory of algebraic operations, crystallized at the beginning of the 20th century under the influence of D. Hilbert, E. Steinitz, E. Artin and E. Noether, and was fully established by 1930 with the appearance of B.L. van der Waerden's Modern algebra.

The subject matter of algebra, its principal branches and its connection with other branches of mathematics. The subject matter of modern algebra consists of sets and the algebraic operations on these sets (i.e. algebras or universal algebras, cf. the terminology in Algebra; Universal algebra), considered up to an isomorphism. This means that, from the point of view of algebra, the sets themselves and the sets as carriers of algebraic operations are indistinguishable, and in this sense the proper subjects of study are the algebraic operations themselves. For a long time the studies that were actually carried out concerned only a few basic types of universal algebras which naturally appeared in the development of mathematics and its applications.
One of the most important and most thoroughly studied types of algebras is the group, i.e. an algebra with one associative binary operation, containing a unit element and, for each element, an inverse element. The concept of a group was, historically, the first example of a universal algebra and in fact served, in many respects, as a model for the construction of algebra and of mathematics in general at the turn of the 19th century. Independent studies on generalizations of groups such as semi-groups, quasi-groups and loops began only much later (cf. Loop; Quasi-group; Semi-group).

Rings and fields are very important types of algebras with two binary operations. The operations in rings and fields are usually called addition and multiplication. A ring is defined by Abelian group axioms for the addition, and by distributive laws for multiplication with respect to addition (cf. Rings and algebras). Originally, only rings with associative multiplication were studied, and this requirement of associativity occasionally forms part of the definition of a ring (cf. Associative rings and algebras). The study of non-associative rings (cf. Non-associative rings and algebras) is today a fully recognized independent discipline. A skew-field is an associative ring in which the set of all non-zero elements is a multiplicative group. A field is a skew-field in which multiplication is commutative. Number fields, i.e. sets of numbers closed under addition, multiplication, subtraction and division by non-zero numbers, were implicitly included in the very first studies of algebraic equations. Associative-commutative rings and fields are the main objects studied in commutative algebra and in the closely related field of algebraic geometry.

Another important type of algebra with two binary operations is the lattice. Typical examples of lattices include: the system of subsets of a given set with the operations of set-theoretic union and intersection, and the set of positive integers with the operations of taking the least common multiple and the greatest common divisor.

Linear (or vector) spaces over a field may be treated as universal algebras with one binary operation (addition) and with a family of unary operations (multiplication by the scalars of the ground field). Linear spaces over skew-fields have also been studied. If a ring is considered instead of a set of scalars, the more general concept of a module is obtained. An important part of algebra, linear algebra, studies linear spaces, modules and their linear transformations, as well as problems related to them. A part of it, the theory of linear equations and the theory of matrices, was formulated as early as the 19th century. A closely related subject is that of multilinear algebra.

Initial studies on the general theory of arbitrary universal algebras (this theory is sometimes called "universal algebra") date back to the 1930s and were carried out by G. Birkhoff. At the same time A.I. Mal'tsev and A. Tarski laid the foundations for the theory of models (cf. Model (in logic)), i.e. sets with marked relations on them. Subsequently, the theory of universal algebras and the theory of models became so closely linked that they gave rise to a new discipline, intermediate between algebra and mathematical logic, called the theory of algebraic systems (cf. Algebraic system), the subject of which are sets with algebraic operations and relations defined on them.
A number of disciplines intermediate between algebra and other fields of mathematics have been created by the introduction into universal algebras of complementary structures compatible with the algebraic operations. These include topological algebra (including the theory of topological groups and Lie groups, cf. Lie group; Topological group), the theory of normed rings (cf. Normed ring), differential algebra, and theories of various ordered algebraic formations. Homological algebra, which originates both from algebra and from topology, arose in the 1950s as a discipline in its own right.

The role of algebra in modern mathematics is extremely important, and there is a trend towards further "algebraization" of mathematics. A typical way of studying many mathematical objects that are sometimes far removed from algebra is to construct algebraic systems which adequately represent the behaviour of these objects. Thus, the study of Lie groups can be largely reduced to the study of their algebraic counterparts: Lie algebras (cf. Lie algebra). A similar method is used in topology: to each topological space is assigned, in some standard manner, an infinite series of homology groups (cf. Homology group), and these algebraic "reflections" of the spaces make it possible to evaluate, very accurately, the properties of the spaces themselves. The recent major discoveries in topology were made using algebra as a tool (cf. Algebraic topology). It would appear at first sight that the translation of problems into the language of algebra, solving them in this language and translating them back is merely a superfluous complication. In fact, such a method turns out to be highly convenient, and occasionally the only possible one. This is because by algebraization one solves problems not only by purely verbal considerations, but also by using the powerful apparatus of formal algebraic calculations, so that one may occasionally overcome highly involved complications. This role of algebra in mathematics may be compared with the role of modern computers in the solution of practical problems.

Algebraic concepts and methods are widely employed in number theory (cf. Algebraic number theory), in functional analysis, in the theory of differential equations, in geometry (cf. Invariants, theory of; Projective geometry; Tensor algebra), and in other mathematical disciplines. Besides its fundamental role in mathematics, algebra is very important from the point of view of its applications; examples are applications within physics (the representation theory of finite groups in quantum mechanics; discrete groups in crystallography), cybernetics (cf. Automata, theory of) and mathematical economics (linear inequalities, cf. Linear inequality). For references, see also the articles on individual algebraic disciplines.

[1] The history of mathematics from Antiquity to the beginning of the XIX-th century, 1–3, Moscow (1970–1972) (In Russian)
[2] A.I. Mal'tsev, "On the history of algebra in the USSR during her first twenty-five years", Algebra and Logic 10:1 (1971) pp. 68–75; Algebra i Logika 10:1 (1971) pp. 103–118. Zbl 0234.01004
[3] Mathematics, its content, methods and meaning, 1–3, Moscow (1956) (In Russian) Zbl 1049.00004 Zbl 0232.00001 Zbl 0111.00103
[4] A.G. Kurosh, "Higher algebra", MIR (1972) (Translated from Russian) MR0945393 MR0926059 MR0778202 MR0759341 MR0628003 MR0384363 Zbl 0237.13001
[5] N. Bourbaki, "Elements of mathematics. Algebra: Algebraic structures. Linear algebra", 1, Addison-Wesley (1974) pp.
Chapt. 1–2 (Translated from French) MR0354207
[6] B.L. van der Waerden, "Algebra", 1–2, Springer (1967–1971) (Translated from German) MR1541390 Zbl 1032.00002 Zbl 1032.00001 Zbl 0903.01009 Zbl 0781.12003 Zbl 0781.12002 Zbl 0724.12002 Zbl 0724.12001 Zbl 0569.01001 Zbl 0534.01001 Zbl 0997.00502 Zbl 0997.00501 Zbl 0316.22001 Zbl 0297.01014 Zbl 0221.12001 Zbl 0192.33002 Zbl 0137.25403 Zbl 0136.24505 Zbl 0087.25903 Zbl 0192.33001
[7] S. Lang, "Algebra", Addison-Wesley (1974) MR0783636 Zbl 0712.00001
[8] A.I. Mal'tsev, "Algebraic systems", Springer (1973) (Translated from Russian) Zbl 0266.08001
[a1] R. Lidl, G. Pilz, "Applied abstract algebra", Springer (1984) MR0765220 Zbl 0572.00001
[a2] N. Jacobson, "Lectures in abstract algebra", 1–3, v. Nostrand (1951–1964) MR0392906 MR0392227 MR0369381 MR0172871 MR0053905 MR1570588 MR0041102 Zbl 0455.12001 Zbl 0326.00001 Zbl 0322.12001 Zbl 0314.15001 Zbl 0124.27002 Zbl 0053.21204

How to Cite This Entry: Algebra(2). Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=Algebra(2)&oldid=23744
This article was adapted from an original article by Yu.I. Merzlyakov, A.I. Shirshov (originators), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article
{"url":"http://www.encyclopediaofmath.org/index.php/Algebra(2)","timestamp":"2014-04-19T04:21:19Z","content_type":null,"content_length":"41251","record_id":"<urn:uuid:4231a891-488e-4b1b-b034-0a171131d012>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00644-ip-10-147-4-33.ec2.internal.warc.gz"}
whole prime number

Sure, you can count using the counting system. I just never said you could interpret each position as a number in the sense of what can be done once you define a number base system.

Yes, indeed you can count with the counting system. My definition of "count" will have to become something like: "take another algorithmic step". In the systems I give, you may suppose modus ponens is the only underlying logic. Consider the system:

Axiom 1. A

You can take as many algorithmic steps as you like with this system:

1. A (1)
2. A (1)
3. A (1)
4. A (1)
...

Thus it lets you count in your terminology. Perhaps you mean taking steps that are essentially different from those before? Consider this system:

Axiom 1. A
Axiom 2. A --> B
Axiom 3. B --> A

We can take as many algorithmic steps as you like:

1. A (1)
2. A --> B (2)
3. B (MP)
4. B --> A (3)
5. A (MP)
...

Or this one:

Axiom 1. A
Axiom 2. For all x, x --> x.

1. A (1)
2. A --> A (2)
3. (A --> A) --> (A --> A) (2)
4. A --> A (MP, from 2 and 3)
...

Plenty of algorithmic steps, but there's no real way to count with this one. For a more concrete system, consider forming sets:

{{}, {{}}}
{{}, {{}}, {{{}}}}
{{}, {{{}}}}
{{{}, {{{}}}}, {{}}}

Sets that are subsets of others can be said to be smaller, but some sets are incomparable -- neither is smaller. This doesn't make a "number line" so much as a web.

I am interested in a system that lets me move forward in the "line", and I don't care at this point about whether or not you can label each position. I know this system is going to be almost useless for most people. But if you remember that the Peano system is basically able to simulate this counting system, then you cannot deny that lots of stuff in mathematics is related to a counting system... It might just be harder to recognize that fact, since there are so many other things you can do with Peano, like the fancy multiplication or addition.

I don't think the Peano axioms simulate arithmetic; I think they define how something has to act to be arithmetic.

I see set theory as the basis for mathematics more than counting, but I'm sure a counting system could be used as an alternate basis. My field (number theory) would find that particularly natural.
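Since the discussion is about closing a set of axioms under modus ponens step by step, here is a minimal Python sketch of that procedure. The encoding of formulas as strings and tuples is my own illustrative choice, not anything from the thread.

```python
def mp_closure(axioms, max_rounds=10):
    """Repeatedly apply modus ponens: from X and ("->", X, Y), derive Y."""
    derived = set(axioms)
    for _ in range(max_rounds):
        new = {f[2] for f in derived
               if isinstance(f, tuple) and f[0] == "->" and f[1] in derived}
        if new <= derived:          # nothing genuinely new: stop
            return derived
        derived |= new
    return derived

# The second system above: Axiom 1. A, Axiom 2. A --> B, Axiom 3. B --> A
system = {"A", ("->", "A", "B"), ("->", "B", "A")}
print(mp_closure(system))
# Only finitely many distinct formulas ever appear, which is the point:
# you can keep taking steps forever, but the steps stop being "new".
```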
{"url":"http://www.physicsforums.com/showthread.php?p=1389752","timestamp":"2014-04-19T07:35:05Z","content_type":null,"content_length":"90565","record_id":"<urn:uuid:c312e4ec-1d05-40b9-8cba-c22809f899d3>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00601-ip-10-147-4-33.ec2.internal.warc.gz"}
Can we decide if an abelian variety is simple by knowing its zeta function?

Let $A$ be an abelian variety defined over the finite field with $q$ elements. Let $P_i(T)$ be the characteristic polynomial of the action of the Frobenius on the $i^{th}$ étale cohomology group. Is the following assertion true: "The abelian variety $A$ is simple over the finite field with $q$ elements if and only if $P_1(T)$ is irreducible over $\mathbb Q$"? One implication is obvious; what about the other one?

This is true if the abelian variety is ordinary since in this case Frobenius generates the endomorphism algebra. – ulrich Feb 15 '12 at 14:43

To Ulrich: what does "ordinary" mean? – Xavier Roulleau Feb 15 '12 at 14:48

One way to define ordinary is as follows: if $A$ is $g$-dimensional then $A$ is ordinary if it has $p^g$ points of order $p$ over the algebraic closure of the base field $k$, where $p$ is $\operatorname{char}(k)$. – ulrich Feb 15 '12 at 15:07

2 Answers

The following result follows from Tate-Honda theory:

Let $A$ be an abelian variety over a finite field $k$, and let $f_A$ be the characteristic polynomial of $A$. Then $A$ is isogenous to a power of a simple abelian variety if and only if $f_A$ is a power of an irreducible polynomial.

I can't find a set of online notes which contains this statement. Kirsten Eisenträger's notes are generally very good, but they get this result wrong on the first page -- Theorem 1.1 claims that, if $f_A$ is a power of an irreducible polynomial, then $A$ is simple, ignoring the possibility that $A$ is a power of a simple variety.

Let $A$ be isogenous to $\bigoplus A_i^{n_i}$, where the $A_i$ are simple and mutually non-isogenous. Every abelian variety has such a decomposition. Then $f_A = \prod f_{A_i}^{n_i}$. Suppose that $f_A$ is a power of an irreducible polynomial. Then all of the $f_{A_i}$ must also be powers of that polynomial. In particular, for any $i$ and $j$, either $f_{A_i}$ divides $f_{A_j}$ or vice versa; without loss of generality, suppose $f_{A_i} | f_{A_j}$. By a result of Tate, this means that $A_i$ is isogenous to a subvariety of $A_j$. Since $A_i$ and $A_j$ are simple, this means that $A_i$ and $A_j$ are isogenous. Since we assumed that the $A_i$ were mutually non-isogenous, there must in fact be only one summand in our decomposition of $A$, and $A$ is isogenous to $A_1^{n_1}$ for some simple $A_1$ and some $n_1$.

Suppose now that $A$ is isogenous to $B^{n}$ for $B$ simple. Then $f_A = f_B^n$. So our goal is to show that $f_B$ is a power of an irreducible polynomial. If not, write $f_B = gh$ where $g$ and $h$ are relatively prime of positive degree. By a result of Honda, there exist abelian varieties $C$ and $D$ with characteristic polynomials $g$ and $h$. By the result of Tate cited above, $C$ and $D$ are isogenous to subvarieties of $B$, contradicting that $B$ is simple. $\square$

The answer to the question in your title is "yes", we can decide whether $A$ is simple by knowing its $\zeta$ function. Fix a prime power $q$. Let $k$ be the field with $q$ elements. Let $W(q)$ be the set of irreducible monic polynomials over $\mathbb{Q}$ all of whose roots have norm $q^{1/2}$. The main result of Honda-Tate theory (Theorem 4.1 in Kirsten's notes) is that there is a bijection between isogeny classes of $k$-simple abelian varieties over $k$ and $W(q)$.
For each polynomial $g$ in $W(q)$, there is some positive integer $n(g,q)$ such that the characteristic polynomial of the corresponding simple abelian variety is $g^{n(g,q)}$. The tricky point is that $n(g,q)$ is not always $1$. For example, in Denis's answer, what is going on is that $n(x-p, p^2)=2$.

So it is true that $A$ is $k$-simple if and only if $f_A$ is of the form $g^{n(g,q)}$; you just need to know how to compute that $n$ function. I think you should be able to extract this from sections 4 and 5 of Kirsten's notes, but I don't know the details.

UPDATE: Brian Conrad e-mails to spell out the recipe (hope I copied this correctly). Let $f$ be irreducible of the required form. Let $\pi$ be a root of $f$ and let $F$ be the field $\mathbb{Q}(\pi)$. For every $p$-adic place $v$ of $F$, let $d_v$ be the denominator of $v(\pi) [F_v:\mathbb{Q}_p]/v(q)$ when written in lowest terms. Let $d = \operatorname{LCM}(d_v)$, where the LCM ranges over all possible $v$'s. Then $f^d$ is the characteristic polynomial of the simple abelian variety.

If I'm not mistaken, this condition can be stated in an elegant geometric way. For any polynomial $g$ over $\mathbb{Q}_p$, let $N(g)$ be the $p$-adic Newton polytope of $g$. We will subdivide the path $N$ as follows: Recall that, if $h$ is irreducible over $\mathbb{Q}_p$, then $N(h)$ is a line segment, and that, if $g$ factors as $\prod h_i^{r_i}$, then $N(g)$ is the concatenation of $r_i$ copies of each $N(h_i)$, ordered with increasing slope. We will decompose $N(g)$ into one piece for each distinct irreducible factor, with that piece being $r_i$ times $N(h_i)$.

For example, $x^2-p^2$, $x^2+p^2$ and $x^2-2xp+p^2$ all have Newton polytope a line segment from $(2,0)$ to $(0,2)$. In the first case, we would subdivide this line segment into two line segments, touching at $(1,1)$, because the two factors $x+p$ and $x-p$ are distinct. In the second case, we would subdivide if $x^2+p^2$ factored in $\mathbb{Q}_p$ (i.e. if $p$ is $1 \mod 4$) but not if it remained irreducible (if $p$ is $3 \mod 4$). In the third case, we would not subdivide, because the factor $(x-p)$ is repeated.

Then I believe the condition is that $f$ is the characteristic polynomial of an abelian variety if and only if all the vertices of $N(f)$, subdivided as above, have heights that are integer multiples of $v_p(q)$.

I will study that. Thanks a lot, David. – Xavier Roulleau Feb 15 '12 at 16:12

No, an elliptic curve over a field with $p^2$ elements may have zeta function $(X-p)^2=X^2-2pX+p^2$. (See, e.g., section 4 of Waterhouse's paper "Abelian varieties over finite fields", though this probably goes back to Deuring.)

Nice example, thank you. The implication I was thinking about was not so obvious. – Xavier Roulleau Feb 15 '12 at 16:21
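To make the Newton-polygon bookkeeping in the answer above concrete, here is a small Python sketch that computes the Newton polygon (the lower convex hull of the points $(i, v_p(a_i))$) of an integer polynomial. It only draws the hull; deciding where to subdivide requires actually factoring over $\mathbb{Q}_p$, which this sketch does not attempt, and the function names are my own.

```python
def val_p(n, p):
    """p-adic valuation of a nonzero integer."""
    v = 0
    while n % p == 0:
        n //= p
        v += 1
    return v

def newton_polygon(coeffs, p):
    """Vertices of the lower convex hull of (i, v_p(a_i)),
    for f = sum a_i x^i given as coeffs = [a_0, a_1, ...]."""
    pts = [(i, val_p(a, p)) for i, a in enumerate(coeffs) if a != 0]
    hull = []
    for x3, y3 in pts:
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            # drop the middle point if it lies on or above the chord
            if (y2 - y1) * (x3 - x1) >= (y3 - y1) * (x2 - x1):
                hull.pop()
            else:
                break
        hull.append((x3, y3))
    return hull

# x^2 - 2*p*x + p^2 at p = 5: a single segment from (0, 2) to (2, 0)
print(newton_polygon([25, -10, 1], 5))   # [(0, 2), (2, 0)]
```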
{"url":"http://mathoverflow.net/questions/88520/can-we-decide-if-an-abelian-variety-is-simple-by-knowing-its-zeta-function?sort=oldest","timestamp":"2014-04-17T04:20:00Z","content_type":null,"content_length":"63635","record_id":"<urn:uuid:072b4f84-4cc2-4491-b438-44de31dd9c26>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00053-ip-10-147-4-33.ec2.internal.warc.gz"}
Various stuff, mostly about sets

I want to prove various closely related things about a language, so if I need help later on, I'll just add it here.

The primitive symbols of formal language L fall into two disjoint sets: a countably infinite set of propositional symbols and a non-empty finite set of connective symbols. The set S of primitive symbols is then countably infinite. A string of length n is a map from {1, 2, 3, ..., n} to S. The empty string has length 0.

Conjecture: The set G of strings is countably infinite.

I got this idea from seeing Cantor's diagonal process. Let G[n] denote the set of strings of length n. Then G is the union of the G[n] over n = 0, 1, 2, ..., so G is countable if each G[n] is at most countable (a countable union of at-most-countable sets is at most countable). G[0] has one member. G[1] = S, so it is countable. For G[2], consider the infinite array, where p[n] is a primitive symbol:

p[1]p[1], p[1]p[2], p[1]p[3], ...
p[2]p[1], p[2]p[2], p[2]p[3], ...
p[3]p[1], p[3]p[2], p[3]p[3], ...

Then the members of G[2] can be arranged in the sequence: (p[1]p[1]; p[2]p[1], p[1]p[2]; p[3]p[1], p[2]p[2], p[1]p[3]; ...). So G[2] is countable.

I can't think of a way to visualize the process for higher G[n]s, but here's the idea, with G[3] as an example. Arrange the members of G[3] into a sequence by groups. Let group 1 contain all arrangements of p[1]: p[1]p[1]p[1]. Let group 2 contain all arrangements of p[1] and p[2] not in group 1 (I'll just write the subscripts): 112, 121, 122, 211, 212, 221, 222. Let group M contain all arrangements of p[1], p[2], ..., p[M] not in previous groups. Put the members of each group into the sequence. Each group is finite, so the resulting sequence contains all members of G[3], and G[3] is countable. Do the same for each remaining G[n]. Then every G[n] is at most countable.

Does that work? Make sense?
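The grouping argument above is effectively an enumeration algorithm, and it can be written down directly. Here is a minimal Python sketch of it, assuming the countable alphabet is given by an indexing function; the names are illustrative, not from the post.

```python
from itertools import islice, product, count

def all_strings(symbol):
    """Enumerate every finite string over a countable alphabet symbol(1),
    symbol(2), ...  Stage M emits the strings of length <= M over the first
    M symbols that earlier stages have not already produced."""
    seen = set()
    for M in count(1):
        alphabet = [symbol(i) for i in range(1, M + 1)]
        for n in range(M + 1):
            for s in product(alphabet, repeat=n):
                if s not in seen:
                    seen.add(s)
                    yield "".join(s)

# Every string of length n using symbols up to p[k] appears by stage max(n, k),
# so this really does list all of G, each string exactly once.
print(list(islice(all_strings(lambda i: f"p{i}"), 8)))
```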
{"url":"http://www.physicsforums.com/showthread.php?t=72460","timestamp":"2014-04-19T15:04:02Z","content_type":null,"content_length":"44625","record_id":"<urn:uuid:7b69ec1a-c3c0-42be-91db-4bb8057865c3>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00623-ip-10-147-4-33.ec2.internal.warc.gz"}
Safe Cracking A La Feynman

While as yet unproven, a promising theorem in particle physics states that physicists are people, too. (If you prick them—the theorem goes—they are likely to bleed, etc.) So far, the strongest support for this idea is the anecdotal evidence of Richard Feynman, a Nobel-Prize-winning physicist who was almost certainly a person. Feynman's reputation for humanizing buffoonery included his ability to open supposedly secure safes—a skill he honed while working on the atom bomb at Los Alamos Lab during the Second World War.

First, Feynman noticed that safe dials were not as precise as they might be—while a combination might include the number 42, Feynman found the adjoining numbers 40, 41, 43 and 44 also worked. This narrowed the total possibilities from 1,000,000 (100 cubed) to only 8,000 (20 cubed). With practice, Feynman found he could try 400 combinations in thirty minutes, so even in the unlikely case of opening the safe on the last possible combination it could take a maximum of only ten hours. Still, who has ten hours to spare when also racing Nazi Germany into the atomic age?

If Feynman could determine one of a combination's three numbers, then opening the lock could only take him a maximum of half an hour (20 squared combinations). To do this, when in a colleague's office with the safe open, Feynman would pretend to idly play with the lock. In fact, he found that a lock only resets itself after spinning past the first number in its combination. So Feynman would turn the combination lock, going one number further each time until the lock clicked shut, at which point he would know he had found the combination's first number. Voila—half an hour, tops. In fact, it usually took much less time, as Feynman first tried psychologically likely numbers—the factory preset, birthdays, phone numbers, or—most commonly at Los Alamos—a snippet of the number pi.

Join me every Monday morning for grandtastic goodies from The Geeks' Guide to World Domination. Or if you like your geekery delivered fresh, consider subscribing to my rss feed or joining my Facebook Fan Page.

I voted for Tesla. He looks like he really knows what he's doing and will be adventurous. Good article; yet another thing I did not know about my favorite nutty kooky prof. By the by, how'd you get the poll feature to work? I tried a couple days ago and no joy. Science advances as much by mistakes as by plans.
Hontas Farmer | 10/19/09 | 14:23 PM

Thanks, Hontas. Yep—the poll function deleted my info once, but the second time was a charm. Personally, I have a massive man-crush on Sir Isaac Newton, "bad boy of 18th century alchemy." Although, I think much of his work was derivative...
Garth Sundem, TED speaker, Wipeout loser and author of Brain Trust
Garth Sundem | 10/19/09 | 14:47 PM

Hank Campbell | 10/19/09 | 16:04 PM

But his work's "derivative"...eh? Eh? Eh? I've been waiting YEARS to make that joke.
Garth Sundem, TED speaker, Wipeout loser and author of Brain Trust
Garth Sundem | 10/19/09 | 19:20 PM

Hontas Farmer | 10/19/09 | 20:45 PM

But for a different take on "Integral", I suggest the novel We, by Yevgeny Zamyatin.
Robert H. Olley / Quondam Physics Department / University of Reading / England
Robert H Olley | 10/20/09 | 07:09 AM
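Feynman's arithmetic is easy to replay. A toy calculation, with the tolerance and trial rate taken straight from the article:

```python
positions = 100          # numbers on the dial
tolerance = 5            # each guess also covers two neighbors on either side
per_number = positions // tolerance      # 20 effective values per number
trials_per_hour = 400 * 2                # 400 combinations per half hour

print(per_number ** 3)                           # 8000 combinations in total
print(per_number ** 3 / trials_per_hour, "h")    # 10.0 hours, worst case
print(per_number ** 2 / trials_per_hour, "h")    # 0.5 hours if one number is known
```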
{"url":"http://www.science20.com/geeks039_guide_world_domination/safe_cracking_la_feynman","timestamp":"2014-04-20T06:25:28Z","content_type":null,"content_length":"46336","record_id":"<urn:uuid:225e4d4f-4828-452a-a8ca-cba1aca6d2b5>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00500-ip-10-147-4-33.ec2.internal.warc.gz"}
Converting between Moles and Atoms - Boundless Open Textbook

As introduced in the previous atom, the mole can be used to relate masses of substances to the quantity of atoms therein. The benefit thereof is an easy way of characterizing chemical reactions and determining how much of one substance can react with a given amount of another substance. From moles of a substance, one can also find the number of atoms in a sample and vice versa. The bridge between atoms and moles is Avogadro's number, $6.022 \times 10^{23}$.

Avogadro's number is dimensionless, but when it defines the mole, it can be expressed as $6.022 \times 10^{23}\ \text{mol}^{-1}$; this form of the number is known as Avogadro's constant. This form shows the role of Avogadro's number as a conversion factor between the number of entities and the number of moles. Therefore, given the relationship $1\ \text{mol} = 6.022 \times 10^{23}$ atoms, converting between moles and atoms of a substance becomes a simple dimensional analysis problem.

Converting Moles to Atoms

Given a known number of moles (x), one can find the number of atoms in this molar quantity by multiplying it by Avogadro's number:

$x \ \text{moles} \cdot 6.022\cdot 10^{23} \frac{\text{atoms}}{1 \ \text{mole}} = y \ \text{atoms}$

For example, if we want to know how many atoms are in six moles of sodium, we could solve:

$6\ \text{moles} \cdot 6.022\cdot 10^{23} \frac{\text{atoms}}{1\ \text{mole}} = 3.61\cdot 10^{24}\ \text{atoms}$

Note that the solution is independent of whether the element is sodium or otherwise.

Converting Atoms to Moles

Reversing the calculation above, we can convert a number of atoms to a molar quantity by dividing it by Avogadro's number:

$\frac{x \ \text{atoms}}{6.022\cdot 10^{23} \frac{\text{atoms}}{1 \ \text{mole}}} = y \ \text{moles}$

This can be written without a fraction in the denominator by multiplying the number of atoms by the reciprocal of Avogadro's number:

$x \ \text{atoms} \cdot \frac{1 \ \text{mole}}{6.022\cdot 10^{23} \ \text{atoms}} = y \ \text{moles}$

For example, if we know there are $3.5\cdot 10^{24}$ atoms in a sample, we can calculate the number of moles this quantity represents:

$3.5\cdot 10^{24}\ \text{atoms} \cdot \frac{1\ \text{mole}}{6.022\cdot 10^{23}\ \text{atoms}} = 5.81\ \text{moles}$
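The same dimensional analysis, as a tiny Python helper (the constant and function names are my own):

```python
AVOGADRO = 6.022e23  # entities per mole

def moles_to_atoms(moles):
    return moles * AVOGADRO

def atoms_to_moles(atoms):
    return atoms / AVOGADRO

print(f"{moles_to_atoms(6):.3e} atoms")     # 3.613e+24
print(f"{atoms_to_moles(3.5e24):.2f} mol")  # 5.81
```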
{"url":"https://www.boundless.com/chemistry/mass-relationships-and-chemical-equations/molar-mass/converting-between-moles-and-atoms/","timestamp":"2014-04-20T20:56:45Z","content_type":null,"content_length":"65899","record_id":"<urn:uuid:0766a848-19e7-4a2f-9eed-3649306d45b7>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00211-ip-10-147-4-33.ec2.internal.warc.gz"}
Energy loss due to friction

So the frictional force is: Ff = μk · N = 0.266 × (74.9 sin 17.6° + 15 × 9.8) = 45.1 N

You are not handling the plus/minus signs correctly when calculating the normal force. The normal force and the vertical component of the applied force act up, and the weight force acts down. The algebraic sum of these 3 forces adds up to 0, per application of Newton's first law in the y direction.

But how would I calculate the work done? The block doesn't move in the y direction, so I would think no work could be done, since W = Fd.

In the y direction, yes, there is no work done. But there is work done in the x direction. Find the work done in the x direction by the friction force. That is the energy lost due to friction. Then use energy methods to calculate the kinetic energy change.
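Reading off the numbers quoted in the thread (74.9 N applied at 17.6°, a 15 kg block, μk = 0.266), a quick sketch of the corrected bookkeeping might look like this; the 5 m displacement is a made-up value just to show the work step:

```python
import math

F, theta = 74.9, math.radians(17.6)   # applied force and its angle above horizontal
m, g, mu = 15.0, 9.8, 0.266
d = 5.0                               # hypothetical displacement along x

# Newton's first law in y: N + F*sin(theta) - m*g = 0, so the upward pull
# *reduces* the normal force (the original attempt added the terms instead).
N = m * g - F * math.sin(theta)
f_k = mu * N
print(f"N = {N:.1f} N, friction = {f_k:.1f} N")   # ~124.4 N, ~33.1 N

# Energy lost to friction over the displacement:
print(f"energy lost = {f_k * d:.0f} J")
```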
{"url":"http://www.physicsforums.com/showthread.php?t=475256","timestamp":"2014-04-19T07:42:40Z","content_type":null,"content_length":"49850","record_id":"<urn:uuid:792e8e8d-603d-47d3-80d5-d884a1070a1a>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00109-ip-10-147-4-33.ec2.internal.warc.gz"}
determinant of minor

If given an n×n matrix whose rows and columns all sum to 0, how do I argue that all of its (n−1)×(n−1) minors have the same determinant up to a sign?

Since all rows and columns sum to 0, I know that any column is a linear combination of all the others, so the determinant of the n×n matrix must be zero. Then, since the determinant can be expanded along any row or column in terms of minors, it seems to imply that all (n−1)×(n−1) minors must have the same determinant up to a sign. But how do I rigorously prove that?
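A quick numerical sanity check of the claim (an experiment, not a proof). It builds a random matrix with zero row and column sums and compares all of its cofactors:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
A = rng.normal(size=(n, n))
A -= A.mean(axis=1, keepdims=True)   # make every row sum to 0
A -= A.mean(axis=0, keepdims=True)   # make every column sum to 0 (rows stay 0)

minors = np.array([[np.linalg.det(np.delete(np.delete(A, i, 0), j, 1))
                    for j in range(n)] for i in range(n)])
signs = (-1.0) ** np.add.outer(np.arange(n), np.arange(n))
cofactors = signs * minors

# All cofactors agree, so the minors agree up to the sign (-1)^(i+j).
print(np.allclose(cofactors, cofactors[0, 0]))   # True
```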
{"url":"http://www.physicsforums.com/showthread.php?t=469227","timestamp":"2014-04-17T09:56:02Z","content_type":null,"content_length":"22022","record_id":"<urn:uuid:3145f048-d7db-4bba-851e-d6105a65890e>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00428-ip-10-147-4-33.ec2.internal.warc.gz"}
If light is quantized, why are the EM spectrum and the blackbody spectrum continuous?

Meir Achuz: In the same way that a liquid composed of individual molecules is continuous. N is so large and ΔE so small that you don't notice the quantization.

Maybe, but the interesting question is whether the spectrum has discreteness at ANY fundamental level. In thermal radiation from real materials you have molecular radiation, which adds all sorts of "random" (continuous) noise, especially in the solid state, which may have unbound electrons drifting near the surface. I highlighted some specific processes where an arbitrary continuous variable might be assumed for the photon frequencies, and these will only be discrete if time is discrete in nature, a hypothesis which has no experimental support.
{"url":"http://www.physicsforums.com/showthread.php?p=3191853","timestamp":"2014-04-20T18:30:56Z","content_type":null,"content_length":"70222","record_id":"<urn:uuid:0f441833-03e8-49f4-bb97-0a747094b723>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00416-ip-10-147-4-33.ec2.internal.warc.gz"}
Daniel E. López-Fogliani: Publications (16), total impact 42.49

ABSTRACT: The extended field content of the "$\mu$ from $\nu$" supersymmetric standard model ($\mu\nu$SSM) can accommodate very light scalars, pseudoscalars and neutralinos in certain regions of the parameter space with leading right-handed sneutrino and neutrino composition, respectively. Direct production of these states at colliders is suppressed due to their gauge-singlet nature. Nevertheless, production of these states is possible in the decay cascades of heavier ones. In this letter we emphasize how these light states can lead to unusual signals from $Z$ or $W^\pm$ boson decays at the LHC, with prompt or displaced multi-leptons/jets in the final states. These new modes would give distinct evidence of new physics even when direct searches remain unsuccessful. We address possibilities to probe these non-standard signatures with ongoing or upcoming collider experiments.

ABSTRACT: The "$\mu$ from $\nu$" supersymmetric standard model ($\mu\nu$SSM) cures the $\mu$-problem and concurrently reproduces measured neutrino data by using a set of usual right-handed neutrino superfields. Recently, the LHC has revealed the first scalar boson, which naturally makes it tempting to test the $\mu\nu$SSM in the light of this new discovery. We show that this new scalar, while decaying to a pair of unstable long-lived neutralinos, can lead to a distinct signal with non-prompt multileptons. With concomitant collider analysis we show that this signal provides an unmistakable signature of the model, pronounced with light neutralinos. Evidence of this signal is well envisaged with sophisticated displaced-vertex analysis, which deserves experimental attention.
Physical Review D: Particles and Fields 11/2012; 88(1).

ABSTRACT: The $\mu\nu$SSM is a supersymmetric standard model that accounts for light neutrino masses and solves the $\mu$ problem of the MSSM by simply using right-handed neutrino superfields. Since this mechanism breaks R-parity, a peculiar structure for the mass matrices is generated. The neutral Higgses are mixed with the right- and left-handed sneutrinos, producing $8\times 8$ neutral scalar mass matrices. We analyse the Higgs sector of the $\mu\nu$SSM in detail, with special emphasis on possible signals at colliders. After studying in general the decays of the Higgses, we focus on those processes that are genuine to the $\mu\nu$SSM, and could serve to distinguish it from other supersymmetric models. In particular, we present viable benchmark points for LHC searches. For example, we find decays of an MSSM-like Higgs into two lightest neutralinos, with the latter decaying inside the detector leading to displaced vertices, and producing final states with 4 and 8 $b$-jets plus missing energy. Final states with leptons and missing energy are also found.
Journal of High Energy Physics 07/2011; 10.

ABSTRACT: Motivated by the recent re-confirmation by CoGeNT of the low-energy excess of events observed last year, and the recent improved limits from the XENON-100 experiment that are in contention with the CoGeNT data, we re-examine the low-mass neutralino region of the Minimal Supersymmetric Standard Model and of the Next-to-Minimal Supersymmetric Standard Model, both without assuming gaugino mass unification. We make several focused scans for each model, determining conservative constraints on input parameters.
We then determine how these constraints are made increasingly stringent as we re-invoke our experimental constraints involving the dark matter relic abundance, collider constraints from LEP and the Tevatron, and then from flavour physics, as a series of successive $2\sigma$ hard cuts. We find that for both models, when all relevant constraints are applied in this fashion, we do not generate neutralino LSPs that possess a spin-independent scattering cross section in excess of $10^{-5}$ pb and a mass $7\ \text{GeV} \lesssim m_\chi \lesssim 9\ \text{GeV}$, which is necessary in order to explain the CoGeNT observations.

ABSTRACT: We examine the extent to which it is possible to realize the NMSSM "ideal Higgs" models espoused in several papers by Gunion et al. in the context of partially universal GUT-scale boundary conditions. To this end we use the powerful methodology of nested sampling. We pay particular attention to whether ideal-Higgs-like points not only pass LEP constraints but are also acceptable in terms of the numerous constraints now available, including those from the Tevatron and $B$-factory data, $(g-2)_\mu$ and the relic density $\Omega h^2$. In general, for this particular methodology and range of parameters chosen, very few points corresponding to said previous studies were found, and those that were found were at best $2\sigma$ away from the preferred relic density value. Instead, there exists a class of points, which combine a mostly singlet-like Higgs with a mostly singlino-like neutralino coannihilating with the lightest stau, that are able to effectively pass all implemented constraints in the region $80<m_h<100$ GeV. It seems that the spin-independent direct detection cross section acts as a key discriminator between ideal-Higgs points and the hard-to-detect singlino-like points.
Physical Review D: Particles and Fields 05/2011; 84.

ABSTRACT: The $\mu\nu$SSM proposes to use right-handed neutrino superfields in order to generate the $\mu$ term and neutrino masses simultaneously. We discuss neutrino physics and the associated electroweak seesaw mechanism in this model. We show how to obtain, from the neutralino-neutrino mass matrix of the $\mu\nu$SSM, the effective neutrino mass matrix. In particular we discuss certain limits of this matrix that clarify the neutrino-sector behavior of the model. We also show that current data on neutrino masses and mixing angles can easily be reproduced. These constraints can be fulfilled even with a diagonal neutrino Yukawa matrix, since this seesaw does not involve only the right-handed neutrinos but also the MSSM neutralinos. To obtain the correct neutrino angles turns out to be easy due to the following characteristics of this seesaw: R-parity is broken and the relevant scale is the electroweak one.
Comment: 7 pages, 1 figure. Talk given at "BUE, CTP International Conference on Neutrino Physics in the LHC Era", Luxor, Egypt, 15-19 Nov. 2009.

ABSTRACT: The $\mu\nu$SSM provides a solution to the $\mu$ problem of the MSSM and explains the origin of neutrino masses by simply using right-handed neutrino superfields. Given that R-parity is broken in this model, the gravitino is a natural candidate for dark matter since its lifetime becomes much longer than the age of the Universe. We consider the implications of gravitino dark matter in the $\mu\nu$SSM, analyzing in particular the prospects for detecting gamma rays from decaying gravitinos.
If the gravitino explains the whole dark matter component, a gravitino mass larger than 20 GeV is disfavored by the isotropic diffuse photon background measurements. On the other hand, a gravitino with a mass in the range 0.1–20 GeV gives rise to a signal that might be observed by the Fermi satellite. In this way important regions of the parameter space of the $\mu\nu$SSM can be checked.
Journal of Cosmology and Astroparticle Physics 01/2010; 3(03):028-028. · 6.04 Impact Factor

ABSTRACT: We perform a first global exploration of the constrained next-to-minimal supersymmetric standard model using Bayesian statistics. We derive several global features of the model and find that, in some contrast to initial expectations, they closely resemble those of the constrained minimal supersymmetric standard model. This remains true even away from the decoupling limit, which is nevertheless strongly preferred. We present ensuing implications for several key observables, including collider signatures and predictions for direct detection of dark matter.
Physical Review D: Particles and Fields 11/2009; 80(9).

ABSTRACT: The $\mu\nu$SSM provides a solution to the $\mu$ problem of the MSSM and explains the origin of neutrino masses by simply using right-handed neutrino superfields. We have completed the analysis of the vacua in this model, studying the possibility of spontaneous CP violation through complex Higgs and sneutrino vacuum expectation values. As a consequence of this process, a complex MNS matrix can be present. Besides, we have discussed the neutrino physics and the associated electroweak seesaw mechanism in the $\mu\nu$SSM, including also phases. Current data on neutrino masses and mixing angles can easily be reproduced.
Journal of High Energy Physics 08/2009; 2009(08):105. · 5.62 Impact Factor

ABSTRACT: We perform a first global exploration of the Constrained Next-to-Minimal Supersymmetric Standard Model using Bayesian statistics. We derive several global features of the model and find that, in some contrast to initial expectations, they closely resemble the Constrained MSSM. This remains true even away from the decoupling limit, which is nevertheless strongly preferred. We present ensuing implications for several key observables, including collider signatures and predictions for direct detection of dark matter.

ABSTRACT: The $\mu\nu$SSM is a supersymmetric standard model that solves the $\mu$ problem of the MSSM using the R-parity-breaking couplings between the right-handed neutrino superfields and the Higgses in the superpotential, $\lambda_i \hat{\nu}^c_i \hat{H}_d \hat{H}_u$. The $\mu$ term is generated spontaneously through sneutrino vacuum expectation values, $\mu = \lambda_i \langle\tilde{\nu}^c_i\rangle$, once the electroweak symmetry is broken. In addition, the couplings $\kappa_{ijk} \hat{\nu}^c_i \hat{\nu}^c_j \hat{\nu}^c_k$ forbid a global U(1) symmetry, avoiding the existence of a Goldstone boson, and also contribute to spontaneously generate Majorana masses for neutrinos at the electroweak scale. Following this proposal, we have analysed in detail the parameter space of the $\mu\nu$SSM. In particular, we have studied viable regions avoiding false minima and tachyons, as well as fulfilling the Landau pole constraint. We have also computed the associated spectrum, paying special attention to the mass of the lightest Higgs. The presence of right- and left-handed sneutrino vacuum expectation values leads to a peculiar structure for the mass matrices.
The most important consequence is that neutralinos are mixed with neutrinos, and neutral Higgses with sneutrinos.
Journal of High Energy Physics 12/2008; 2008(12):099. · 5.62 Impact Factor

ABSTRACT: The viability of the lightest neutralino as a dark matter candidate in the next-to-minimal supersymmetric standard model is analysed. We carry out a thorough analysis of the parameter space, taking into account accelerator constraints as well as bounds on low-energy observables, such as the muon anomalous magnetic moment and rare $K$ and $B$ meson decays. The neutralino relic density is also evaluated and consistency with present bounds imposed. Finally, the neutralino direct detection cross section is calculated in the allowed regions of the parameter space and compared to the sensitivities of present and projected dark matter experiments. Regions of the parameter space are found where experimental constraints are fulfilled, the lightest neutralino has the correct relic abundance and its detection cross section is within the reach of dark matter detectors. This is possible in the presence of very light singlet-like Higgses and when the neutralino is either light enough so that some annihilation channels are kinematically forbidden, or has a large singlino component.
Journal of Cosmology and Astroparticle Physics 03/2007; 2007(06). · 6.04 Impact Factor

ABSTRACT: The fact that neutrinos are massive suggests that the minimal supersymmetric standard model (MSSM) might be extended in order to include three gauge-singlet neutrino superfields with Yukawa couplings of the type $H_2 L \nu^c$. We propose to use these superfields to solve the $\mu$ problem of the MSSM without having to introduce an extra singlet superfield as in the case of the next-to-MSSM (NMSSM). In particular, terms of the type $\nu^c H_1 H_2$ in the superpotential may carry out this task spontaneously through neutrino vacuum expectation values. In addition, terms of the type $(\nu^c)^3$ avoid the presence of axions and generate effective Majorana masses for neutrinos at the electroweak scale. On the other hand, these terms break lepton number and R-parity explicitly. For Dirac masses of the neutrinos of order $10^{-4}$ GeV, eigenvalues reproducing the correct scale of neutrino masses are obtained.
Physical Review Letters 08/2006; 97(4):041801. · 7.94 Impact Factor

ABSTRACT: We analyse the direct detection of neutralino dark matter in the framework of the Next-to-Minimal Supersymmetric Standard Model. After performing a detailed analysis of the parameter space, taking into account all the available constraints from LEPII, we compute the neutralino-nucleon cross section, and compare the results with the sensitivity of detectors. We find that sizable values for the detection cross section, within the reach of dark matter detectors, are attainable in this framework. For example, neutralino-proton cross sections compatible with the sensitivity of present experiments can be obtained due to the exchange of very light Higgses with $m_{h_1^0}\lesssim 70$ GeV. Such Higgses have a significant singlet composition, thus escaping detection and being in agreement with accelerator data. The lightest neutralino in these cases exhibits a large singlino-Higgsino composition, and a mass in the range $50\lesssim m_{\tilde\chi_1^0}\lesssim 100$ GeV.
Journal of High Energy Physics 09/2004; · 5.62 Impact Factor

ABSTRACT: The $\mu\nu$SSM is a supersymmetric standard model that accounts for light neutrino masses and solves the $\mu$ problem of the MSSM by simply using right-handed neutrino superfields. Since this mechanism breaks R-parity, a peculiar structure for the mass matrices is generated. The neutral Higgses are mixed with the right- and left-handed sneutrinos, producing $8\times 8$ neutral scalar mass matrices. We analyse the Higgs sector of the $\mu\nu$SSM in detail, with special emphasis on possible signals at colliders. After studying in general the decays of the Higgses, we focus on those processes that are genuine to the $\mu\nu$SSM, and could serve to distinguish it from other supersymmetric models. In particular, we present viable benchmark points for LHC searches. For example, we find decays of an MSSM-like Higgs into two lightest neutralinos, with the latter decaying inside the detector leading to displaced vertices, and producing final states with 4 and 8 $b$-jets plus missing energy. Final states with leptons and missing energy are also found.
Journal of High Energy Physics 2011(10). · 5.62 Impact Factor

ABSTRACT: The $\mu\nu$SSM is a supersymmetric standard model that solves the $\mu$ problem of the MSSM using the R-parity-breaking couplings between the right-handed neutrino superfields and the Higgses in the superpotential, $\lambda_i \hat{\nu}^c_i \hat{H}_d \hat{H}_u$. The $\mu$ term is generated spontaneously through sneutrino vacuum expectation values.

Affiliations:
• 2012: University of Buenos Aires, Department of Physics (FI), Buenos Aires, Argentina
• 2011: Université Paris-Sud 11, Laboratoire de Physique Théorique d'Orsay, Orsay, France
• 2004–2010: Universidad Autónoma de Madrid, Department of Theoretical Physics, Madrid, Spain
• 2008–2009: The University of Sheffield, Department of Physics and Astronomy, Sheffield, United Kingdom
{"url":"http://www.researchgate.net/researcher/29101068_Daniel_E_Lopez-Fogliani","timestamp":"2014-04-17T13:44:11Z","content_type":null,"content_length":"302714","record_id":"<urn:uuid:bdb91969-26cd-48f3-ad68-03335a56aab9>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00307-ip-10-147-4-33.ec2.internal.warc.gz"}
Ordinary Differential Equations/Substitution 2

Substitution methods are really applicable anywhere you can find a differential equation. However, there are very few instances where one particular substitution is always the right choice. You generally pick one and plug it in as needed. So I'll give situations where you could use a substitution method, although you may later learn better methods.

Parametric equations

One time where you may need it is when solving parametric equations. Let's say we're given functions for velocity in two dimensions: $v_x(t)$ and $v_y(t)$. If we want to solve for $y(x)$, you have to divide $\frac{v_y}{v_x}$. This works out to be
$$\frac{v_y}{v_x}=\frac{dy/dt}{dx/dt}=\frac{dy}{dx}.$$
When you do this, you will frequently (although not always) get a chance to use the substitution $v=\frac{y}{x}$.

Constant velocity

Let's say we're swimming across a river with constant velocity $v_0$. The river has no current. We start swimming at an angle of $\theta$ with respect to the shore. Solve for $y(x)$.

The first thing we need to do is break the velocity into x and y components. This is fairly simple; using simple trig:
$$v_x = v_0\cos\theta, \qquad v_y = v_0\sin\theta.$$
Now we divide the two to find $\frac{dy}{dx}$:
$$\frac{dy}{dx} = \frac{v_y}{v_x} = \tan\theta.$$
Now this is simple to solve separably:
$$y = x\tan\theta + C.$$
It could also be solved via substitution. This is a trivial example, but it can be made more complicated.

Motion against a current

Imagine the same swimmer. Now there is a current with speed r going straight up the river (positive y direction). How does this change our example? The x component is still the same, and in the y direction we also have a term due to the current. You can get $\frac{dy}{dx}$ by dividing the two equations:
$$\frac{dy}{dx} = \frac{y}{x} + \frac{r}{v_0}\,\frac{\sqrt{x^2+y^2}}{x}.$$
We can move the x into the root to simplify the equation a bit:
$$\frac{dy}{dx} = \frac{y}{x} + \frac{r}{v_0}\sqrt{1+\left(\frac{y}{x}\right)^2}.$$
Well, this complicated equation looks like a case for $\frac{y}{x}$ substitution. Setting $v=\frac{y}{x}$, so that $\frac{dy}{dx}=v+x\frac{dv}{dx}$, gives
$$x\frac{dv}{dx} = \frac{r}{v_0}\sqrt{1+v^2}.$$
That looks like a nice, easily solved separable equation. Let's solve it:
$$\int \frac{dv}{\sqrt{1+v^2}}=\int \frac{r}{x v_0}\,dx.$$
The left end is an ugly integral. Just trust me on it:
$$\ln\left(v+\sqrt{1+v^2}\right) = \frac{r}{v_0}\ln x + c, \qquad\text{so}\qquad v+\sqrt{1+v^2} = C\,x^{r/v_0}.$$
Let's try to get rid of that root. Isolate it, and square both sides:
$$1+v^2 = \left(C x^{r/v_0} - v\right)^2 = C^2 x^{2r/v_0} - 2C x^{r/v_0}\,v + v^2,$$
$$v = \frac{1}{2}\left(C x^{r/v_0} - \frac{1}{C}\,x^{-r/v_0}\right).$$
Plugging in for v, we get
$$\frac{y}{x} = \frac{1}{2}\left(C x^{r/v_0} - \frac{1}{C}\,x^{-r/v_0}\right).$$
We can solve for y by multiplying through by x:
$$y = \frac{x}{2}\left(C x^{r/v_0} - \frac{1}{C}\,x^{-r/v_0}\right).$$
This complicated equation does make sense: the bigger the current, the further you go in the y direction as a portion of the x. If you ever find an equation this evil in real life, do yourself a favor and buy a computer program to solve it.
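In that spirit, here is a small Python check (using numpy and scipy, with made-up values for $v_0$ and $r$) that integrates the slope equation numerically and compares it against the closed form derived above:

```python
import numpy as np
from scipy.integrate import solve_ivp

v0, r = 1.0, 0.4              # swimmer speed and current speed (illustrative)
k = r / v0

def rhs(x, y):
    v = y / x
    return v + k * np.sqrt(1.0 + v * v)   # dy/dx = y/x + (r/v0)*sqrt(1+(y/x)^2)

sol = solve_ivp(rhs, (1.0, 3.0), [0.0], dense_output=True, rtol=1e-10, atol=1e-12)

# Closed form with y(1) = 0 corresponds to C = 1: y = x*sinh(k*ln x)
xs = np.linspace(1.0, 3.0, 7)
exact = xs * np.sinh(k * np.log(xs))
print(np.max(np.abs(sol.sol(xs)[0] - exact)))   # ~1e-9: the formula checks out
```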
{"url":"http://en.m.wikibooks.org/wiki/Ordinary_Differential_Equations/Substitution_2","timestamp":"2014-04-18T13:18:49Z","content_type":null,"content_length":"22136","record_id":"<urn:uuid:2e4f08ec-3b54-45b5-96f8-21f61331d707>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00578-ip-10-147-4-33.ec2.internal.warc.gz"}
Prove that:
a) $Hom_{\mathbb{Z}}(\mathbb{Z},\mathbb{Z}_n) \neq 0$
b) $Hom_{\mathbb{Z}}(\mathbb{Z}_5,\mathbb{Z}_7) = 0$
c) $Hom_{\mathbb{Z}}(\mathbb{Q},\mathbb{Z}) = 0$
thanks!!! $Hom_{R}(A,B)$ are all R-module homomorphisms from A to B. thanks!
[reply] the natural $\mathbb{Z}$-homomorphism $f: \mathbb{Z} \longrightarrow \mathbb{Z}_n$ defined by $f(k)=[k]_n$ is obviously nonzero.
Quote: b) $Hom_{\mathbb{Z}}(\mathbb{Z}_5,\mathbb{Z}_7) = 0$
let $f \in \text{Hom}_{\mathbb{Z}}(\mathbb{Z}_5, \mathbb{Z}_7).$ then: $[0]_7=f([0]_5)=f([5]_5)=5f([1]_5).$ thus: $f([1]_5)=[0]_7.$ why? hence: $f=0.$
Quote: c) $Hom_{\mathbb{Z}}(\mathbb{Q},\mathbb{Z}) = 0$
let $f \in \text{Hom}_{\mathbb{Z}}(\mathbb{Q},\mathbb{Z}).$ then for any integer n > 0: $nf(1/n)=f(1).$ so all (positive) integers are divisors of the integer f(1), which is possible only if f(1) = 0. therefore: $f=0.$ why?
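The two "why?" annotations have standard answers, added here for completeness (they are not part of the original thread). For (b): gcd(5,7)=1, so 5 is invertible in $\mathbb{Z}_7$ (indeed $3\cdot 5=15\equiv 1 \pmod 7$); multiplying $5f([1]_5)=[0]_7$ by $[3]_7$ gives $f([1]_5)=[0]_7$, and since $[1]_5$ generates $\mathbb{Z}_5$, it follows that $f=0$. For (c): once $f(1)=0$, then for any $m/n\in\mathbb{Q}$ with $n>0$ we have $n\,f(m/n)=f(m)=m\,f(1)=0$ in $\mathbb{Z}$; since $\mathbb{Z}$ is torsion-free, $f(m/n)=0$, hence $f=0$.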
{"url":"http://mathhelpforum.com/advanced-algebra/58540-hom-print.html","timestamp":"2014-04-20T16:31:14Z","content_type":null,"content_length":"10138","record_id":"<urn:uuid:bec4a780-6e03-484e-85da-3b33e32c1324>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00311-ip-10-147-4-33.ec2.internal.warc.gz"}
Physics Forums - View Single Post - Polynomials help~~
Heh, so I posted this thread in the wrong category, so I'm reposting it! =) Hello. So here was this problem I came across: If x^4 - x^3 + x^2 - x^1 + x^0 = 0, what is the numerical value of x^40 - x^30 + x^20 - x^10 + x^0? I did try many things (symmetry and factoring), but I don't think any of those steps helped. Enlighten the youngster, gracias.
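A standard route to the answer, added as a reader's note (it is not part of the quoted post): multiply the given relation by $x+1$ to get $(x+1)(x^4-x^3+x^2-x+1)=x^5+1=0$, so any root satisfies $x^5=-1$ (and $x\neq -1$, since $x=-1$ gives $5\neq 0$ in the original equation). Hence $x^{10}=(x^5)^2=1$, so $x^{40}=x^{30}=x^{20}=x^{10}=1$, and $x^{40}-x^{30}+x^{20}-x^{10}+x^0=1-1+1-1+1=1$.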
{"url":"http://www.physicsforums.com/showpost.php?p=3776767&postcount=1","timestamp":"2014-04-19T04:37:32Z","content_type":null,"content_length":"8622","record_id":"<urn:uuid:c630654c-f005-41f5-a878-02ca55b027aa>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00528-ip-10-147-4-33.ec2.internal.warc.gz"}
When do reflexive coequalizers preserve weak equivalences?
In my work I've run into the following situation. In a model category, I have two reflexive coequalizers $A_i \rightrightarrows B_i \to C_i$ and a map of diagrams which is levelwise a weak equivalence (i.e. $A_1\to A_2$ and $B_1\to B_2$ are weak equivalences). I need to conclude that $C_1\to C_2$ is a weak equivalence. I'd love it if this were true all the time, but it probably isn't. Are there any standard axioms on a model category which let me conclude this is true? For example, left properness? I can't assume any of the objects are either cofibrant or fibrant, and the levelwise weak equivalences won't be fibrations or cofibrations. Note that reflexive coequalizers often come up when one studies model categories because they are important for building objects of interest in categories of algebras over a monad (and for proving such categories are cocomplete). I've googled around quite a bit and can't find anything saying reflexive coequalizers preserve weak equivalences, so I'm pretty sure it's not true in general (probably one can find a counterexample already in $Ch(R)$ via $A\otimes_R B$ or something similar), but I would really love to hear expert opinions on extra hypotheses which guarantee this.
Answer:
Disclaimer: I am not an expert on model categories. $\newcommand{\MM}{\mathcal{M}} \newcommand{\pair}{\mathsf{P}} \newcommand{\dom}{\operatorname{dom}} \newcommand{\codom}{\operatorname{codom}} \newcommand{\colim}{\operatorname*{colim}} \newcommand{\hocolim}{\operatorname*{hocolim}} \newcommand{\id}{\mathrm{id}} \newcommand{\op}{\mathrm{op}} \newcommand{\Map}{\operatorname{Map}}$
Assume $\MM$ is a model category which verifies the condition required by the question, namely that reflexive coequalizers preserve weak equivalences. Then $\MM$ must be homotopically discrete in the sense that, for any two objects $X$ and $Y$ of $\MM$, the derived space of maps from $X$ to $Y$ is homotopically discrete. Here is a summary of the steps in a proof of that claim:
1. Any colimit can be written as a reflexive coequalizer.
2. The condition required in the question then entails that all colimits of objectwise cofibrant diagrams in $\MM$ are homotopy invariant.
3. Therefore, under good technical conditions, colimits must actually be homotopy colimits. This holds in particular for coequalizers of cofibrant objects in $\MM$.
4. Applying this to coequalizers of constant diagrams, we then show that $S^1\otimes X\to X$ (say, if $\MM$ is a simplicial model category) is a weak equivalence when $X$ is cofibrant.
5. Finally, it follows that for any cofibrant $X$ and fibrant $Y$ in $\MM$, the space of morphisms $X\to Y$ is homotopically discrete.
For precision, let us first fix some terminology and notation: the category $\pair=(1\rightrightarrows 0)$ has two objects and is generated by two parallel arrows from one to the other. A reflexive pair in a category $C$ is a diagram $\pair\to C$ such that the corresponding two parallel arrows in $C$ admit a common section. A reflexive coequalizer is then simply the coequalizer of a reflexive pair.
The assumption on the model category $\MM$ described in the question asserts the homotopy invariance of reflexive coequalizers: if $F,G:\pair\to\MM$ are two reflexive pairs in $\MM$ and $\alpha:F\to G$ is a natural transformation which is objectwise a weak equivalence, then the induced map on the coequalizers $\colim F\to\colim G$ is also a weak equivalence. In this answer, we will assume this hypothesis holds for the model category $\MM$.
Step 1
Importantly, observe that any colimit in a cocomplete category $C$ can be functorially written as a reflexive coequalizer in the usual manner. If $F:I\to C$ is a small diagram in $C$ then its colimit is exactly the coequalizer of the two parallel arrows
$$ D_F \; : \qquad \coprod_{f\in I_1}F(\dom f)\rightrightarrows\coprod_{x\in I_0}F(x) $$
where $I_0$ and $I_1$ are the sets of objects and morphisms of $I$, respectively. Observe that this coequalizer is reflexive, i.e. there exists a common section to the two parallel arrows above. In fact, for any functor $F:I\to C$ one can construct a full simplicial object in $C$ such that the reflexive pair $D_F$ above is recovered as the image of the two maps $[0]\rightrightarrows[1]$ in $\Delta$.
Step 2
Note that the above reflexive pair $D_F$ is functorial in the diagram $F$. Therefore, if $\MM$ is a model category verifying the desired condition of homotopy invariance of reflexive coequalizers, then it verifies a more stringent property. Namely, small colimits of objectwise cofibrant diagrams are homotopy invariant in $\MM$: if $F,G: I\to\MM$ are two small diagrams in $\MM$ with cofibrant values, and $\alpha:F\to G$ is a natural transformation which is objectwise a weak equivalence, then $\alpha$ induces a weak equivalence on colimits $\colim F\to\colim G$.
Remark: We are assuming that $F$ and $G$ take cofibrant values in $\MM$ so that any coproduct of the maps $\alpha_x$ for $x\in I$ is also a weak equivalence. For a general model category, we can only say that a coproduct of weak equivalences between cofibrant objects is a weak equivalence (see propositions 13.1.2 and 17.9.1 in Hirschhorn's book). Thus, the map induced by $\alpha$ on the reflexive pairs $D_F\to D_G$ is again a weak equivalence if $F$ and $G$ are objectwise cofibrant. Obviously, we can drop this restriction if coproducts in $\MM$ preserve weak equivalences.
Step 3
In summary, the requirement in the question entails that colimits of objectwise cofibrant diagrams in $\MM$ are homotopy invariant. Consequently, as long as the projective model structure exists on the category of functors $I\to\MM$, the colimit of any objectwise cofibrant diagram $F:I\to\MM$ is weakly equivalent to its homotopy colimit via the canonical map $\hocolim F\to\colim F$. This holds because the homotopy colimit of $F$ is actually the colimit of a cofibrant replacement of $F$ in the projective model structure on the category of functors $I\to\MM$. It is well known that the projective model structure exists on the category of functors $I\to\MM$, for any small category $I$, whenever $\MM$ is cofibrantly generated. Regardless, the projective model structure always exists for the indexing category $I=\pair$, for any model category $\MM$. This is described in section 10 of Dwyer–Spalinski's Homotopy theories and model categories: specifically, see subsection 10.13 of that article.
Remark: Alternatively, even in the absence of the projective model structure, we can apply the formalism of the book of Dwyer–Hirschhorn–Kan–Smith titled Homotopy limit functors on model categories and homotopical categories. For any small indexing category $I$, given that colimits of objectwise cofibrant diagrams $I\to\MM$ are homotopy invariant, they must actually be homotopy colimits in the sense of that book. The formalism in that book is somewhat overkill, and we can probably instead use other similar frameworks for derived functors which encompass homotopy colimits in any model category: an example is this manuscript by Chacholski and Scherer.
Edit: Alternative to steps 2 and 3
The preceding step 3 has the advantage of being fairly conceptual, not depending directly on reflexive coequalizers (only on the conclusion from step 2). Notwithstanding, our focus on reflexive coequalizers allows for a simple approach which bypasses steps 2 and 3 above, and does not require the existence of the projective model structure. For completeness, I explain it here. In case $\MM$ is a simplicial model category, the homotopy colimit of $F:I\to\MM$ can be defined via a bar construction, giving $\hocolim F$ as the coequalizer of a reflexive pair $B_F:\pair\to\MM$ (see chapter 18 of Hirschhorn's book, for example):
$$ B_F \; : \qquad \coprod_{f\in I_1} B((\codom f) \downarrow I)^\op \otimes F(\dom f) \rightrightarrows \coprod_{x\in I_0} B(x\downarrow I)^\op \otimes F(x) $$
This diagram $B_F$ admits a natural projection to the previous reflexive pair $D_F$, whose coequalizer is $\colim F$. If $F$ takes cofibrant values, this projection is objectwise a weak equivalence, so we conclude that the natural map
$$ \hocolim F=\colim_{\pair} B_F \overset{\sim}{\longrightarrow} \colim_{\pair} D_F=\colim F $$
is a weak equivalence. When $\MM$ is an arbitrary model category (not simplicial), one should be able to give a similar argument by applying the definition of homotopy colimits in terms of framings of model categories, as described at the end of Hirschhorn's book.
Step 4
For simplicity, I will assume $\MM$ is a simplicial model category for the remaining steps. Nevertheless, we can get away with any model category $\MM$ (with no extra conditions) by using homotopy function complexes and framings in $\MM$ (as in Hirschhorn's book), or by working in the quasi-category associated with $\MM$. Consider the coequalizer of the constant diagram $c_X:\pair\to\MM$ equal to a cofibrant object $X$ of $\MM$. Then $\colim c_X = X$. On the other hand, in a simplicial model category, the homotopy colimit of the constant functor $c_X$ is weakly equivalent to the tensor product of the classifying space of the index category with the (cofibrant) object $X$:
$$ \hocolim_{\pair} c_X = (\hocolim_{\pair} 1)\otimes X\simeq B\pair\otimes X\simeq S^1\otimes X $$
where the first identity is a consequence of the definition of homotopy colimits in a simplicial model category as bar constructions (see the expression for $B_F$ above). Moreover, it also follows readily from this description that the map from the homotopy colimit to the actual colimit is induced by the unique map $!:S^1\to 1$:
$$ \hocolim_{\pair} X\simeq S^1\otimes X \overset{!\otimes\id_X}{\longrightarrow} 1\otimes X = X = \colim_{\pair} X $$
As described in step 3, this map must be a weak equivalence under the hypothesis from the question.
Step 5
Now we easily conclude that $\MM$ is homotopically discrete. Let $X$ be a cofibrant object of $\MM$ and $Y$ a fibrant object of $\MM$. Since the above map $!\otimes\id_X:S^1\otimes X\to 1\otimes X=X$ is a weak equivalence, the induced map
$$ !^\ast: \Map(1,\MM(X,Y)) = \MM(X,Y) \overset{(!\otimes\id_X)^\ast}{\longrightarrow} \MM(S^1\otimes X,Y) = \Map(S^1,\MM(X,Y)) $$
is also a weak equivalence. Here $\MM(X,Y)$ denotes the simplicial set of morphisms from $X$ to $Y$ coming from the simplicial enrichment of $\MM$. Note that the preceding map $!^\ast$ has a left inverse:
$$ i^\ast:\Map(S^1,\MM(X,Y)) \longrightarrow \Map(1,\MM(X,Y)) = \MM(X,Y) $$
for any choice of basepoint $i:1\to S^1$. Then $i^\ast$ is also a weak equivalence. The axioms for a simplicial model category imply that $\MM(X,Y)$ is a Kan complex, hence:
• $i^\ast$ is a Kan fibration, and
• the geometric realization of the fibre of the map $i^\ast$ at any $f\in\MM(X,Y)_0$ is equivalent to the space of based loops $\Omega_f\lvert\MM(X,Y)\rvert$.
Since $i^\ast$ is a weak equivalence and a fibration, its fibres must be weakly contractible; that is, $\Omega_f\lvert\MM(X,Y)\rvert$ is contractible for any $f\in\MM(X,Y)_0$. In conclusion, $\MM(X,Y)$ has all its components weakly contractible, i.e. it is homotopically discrete.
Concluding remarks
A priori, I would not expect to find any practical conditions ensuring that some useful class of reflexive coequalizers (or other colimits) in a model category is homotopy invariant, short of assuming some cofibrancy conditions on the diagrams. This belief is bolstered by the arguments above, and more broadly by the general ideology of model categories. I would certainly be very interested in learning of results or examples invalidating that view. Moreover, in the specific case of algebras for monads, it may be a good idea to think about simplicial objects and geometric realizations instead of reflexive coequalizers. In particular, it is important to note that the usual reflexive coequalizer constructed from an algebra for a monad actually extends to an augmented simplicial object.
Comments:
Hi Ricardo. Thanks for your answer. I need some time to process this, and unfortunately things are crazy right now in my personal life. So I'll read it when I can and hopefully be able to give meaningful feedback. For now it seems I have to prove what I want another way, but I already have an idea for that. So thanks! – David White Mar 4 '13 at 23:21
@David: You are most welcome! Please let me know if you find any issues with my answer. – Ricardo Andrade Mar 5 '13 at 3:21
{"url":"http://mathoverflow.net/questions/123451/when-do-reflexive-coequalizers-preserve-weak-equivalences","timestamp":"2014-04-17T07:10:48Z","content_type":null,"content_length":"67307","record_id":"<urn:uuid:e09b2bd5-6150-4364-a29c-b8995a10c1a2>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00225-ip-10-147-4-33.ec2.internal.warc.gz"}
Lie derivatives
October 23rd 2007, 08:20 AM
Say you have a scalar field $\Phi(x)$ defined on a manifold M. Then imagine evaluating the field at a point an infinitesimal distance away from x, at $x'=x+\epsilon X$, where X is a Killing vector of the manifold. Then $\Phi(x')=\Phi(x)+\epsilon X^{\mu}\partial_{\mu}\Phi(x)$ to first order. Now the second term on the rhs is the Lie derivative of a scalar, so $\Phi(x')=\Phi(x)+\epsilon L_{X}\Phi(x)$. Say now that $\Phi$ is not a scalar, but maybe a vector, spinor, or tensor. Does the same thing follow, that $\Phi^{\nu}(x')=\Phi^{\nu}(x)+\epsilon L_{X}\Phi^{\nu}(x)$, where now $L_{X}$ is the appropriate Lie derivative of whatever object $\Phi$ is?
October 23rd 2007, 08:34 AM
Yes, though you might wind up with something uglier like $A_{\nu}(x')=A_{\nu}(x)+\epsilon g_{\nu \lambda}X^{\mu}\partial_{\mu}A^{\lambda}(x)$ for your Lie derivative. (That would be from General Relativity, where $[g_{\mu\nu}]$ is a metric. I'm not sure of the general structure if the embedding in the manifold isn't a space-time.)
October 23rd 2007, 09:37 AM
Thanks very much for your quick reply. The manifold is a spacetime, I forgot to mention.
Quote: Yes, though you might wind up with something uglier like ... for your Lie derivative.
Maybe I didn't explain very well. That expression is exactly the same as the expression I wrote, because that derivative should be a covariant derivative when operating on a vector, and so the metric commutes through it and lowers the index on $A^{\lambda}$. Is that right? That expression won't work for, say, a vector, because we are comparing vectors in two different tangent spaces. I meant: would it still be the same expression for, say, a covariant vector, $A_{\nu}(x')=A_{\nu}(x)+\epsilon L_{X} A_{\nu}(x)$, where now the expression for the Lie derivative on a covariant vector is $L_{X}A_{\nu}=X^{\mu}\nabla_{\mu}A_{\nu}+A_{\mu}\nabla_{\nu}X^{\mu}$, if I got the indices right? I mean, roughly, can the Lie derivative be thought of as a generalisation of the directional derivative in vector calculus to more general settings?
October 24th 2007, 04:15 AM
Well, it's been (regrettably) about 10 months since I even looked at my QFT books, and I came up with that expression on the fly, so I'm not particularly surprised that it wasn't quite right for your needs. :o
Quote: I meant: would it still be the same expression ... if I got the indices right?
Yes, as far as I know.
October 24th 2007, 04:35 AM
Quote: Well, it's been (regrettably) about 10 months since I even looked at my QFT books, and I came up with that expression on the fly, so I'm not particularly surprised that it wasn't quite right for your needs. :o
lol, I know. In the office last week we were discussing the CMB and whether it violates relativity by defining a preferred reference frame. We decided it didn't, but later we were discussing the area of a parallelogram and the determinant of a two by two matrix, and also how to factorise cubics. We were totally lost. So on the same day we saved general relativity but failed to remember some A-level algebra lol. Thanks for your help. I'll just go with it.
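For reference, the standard coordinate formulas the thread is circling around (textbook facts, not quotes from the posters): for a scalar, $L_X\Phi=X^{\mu}\partial_{\mu}\Phi$; for a contravariant vector, $(L_X V)^{\nu}=X^{\mu}\partial_{\mu}V^{\nu}-V^{\mu}\partial_{\mu}X^{\nu}$; for a covariant vector, $(L_X A)_{\nu}=X^{\mu}\partial_{\mu}A_{\nu}+A_{\mu}\partial_{\nu}X^{\mu}$. With a torsion-free connection the partial derivatives may be replaced by covariant derivatives, since the Christoffel terms cancel in pairs, which is why the posters can switch freely between $\partial$ and $\nabla$. Unlike the naive directional derivative, $L_X$ compares tensors along the flow of X, which is exactly what resolves the "two different tangent spaces" worry raised above. Spinors need extra care: a Lie derivative of spinors is usually only defined along Killing vectors (or via the Kosmann lift), so the naive formula applies to them only with additional structure.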
{"url":"http://mathhelpforum.com/differential-geometry/21133-lie-derivatives-print.html","timestamp":"2014-04-21T15:50:30Z","content_type":null,"content_length":"14637","record_id":"<urn:uuid:0ce6cb1c-5c39-4b8d-85fb-2aedf260e45f>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00285-ip-10-147-4-33.ec2.internal.warc.gz"}
OK, the question is: a charge Q1 = 2.00*10^-6 C (mass = 1.22*10^-19 kg) is at the origin. Charge Q2 = 4.00*10^-6 C (mass = 2.50*10^-19 kg) is held at rest at the location x = 2.00 m.
a. Find the magnitude and direction of the electric field at the point x = 1.5 m.
b. If Q2 is let go, what will be its speed infinitely far away?
For part a, do you use E1 = kq/r^2 and E2 = kq/r^2 and then add the two, and is the direction in the negative x direction?
And for part b, how do you solve it? What formula do you use?
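A sketch of the physics, assuming both charges are positive, Q1 stays fixed at the origin, and distances are in meters (units are not stated in the transcribed question, so those are assumptions): (a) fields from point charges add as vectors, not magnitudes, so at x = 1.5 m, taking +x as positive, $E = \frac{kQ_1}{(1.5)^2} - \frac{kQ_2}{(0.5)^2}$; the Q2 term dominates, so the net field points in the negative x direction, matching the poster's guess. (b) Use energy conservation rather than forces: the initial electric potential energy turns entirely into kinetic energy at infinity, $\frac{1}{2}m_2 v^2 = \frac{kQ_1Q_2}{d}$ with d = 2.00 m, giving $v=\sqrt{\frac{2kQ_1Q_2}{m_2\, d}}$.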
{"url":"http://www.chegg.com/homework-help/questions-and-answers/ok-question-charge-q1-200-10-6c-mass-122-10-19-origin-charge-q2-400-10-6-mass-250-10-19-he-q116951","timestamp":"2014-04-18T01:47:25Z","content_type":null,"content_length":"21319","record_id":"<urn:uuid:9047a048-8049-4588-8e39-c5e1b9cc3cd2>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00438-ip-10-147-4-33.ec2.internal.warc.gz"}
Tangent Vector practice problems
Please check the following. Vector r(t) = (e^(3t), sin(2t), t^2). What is r'(t)? I am getting (3e^(3t), 2cos(2t), 2t).
Compute the unit tangent vector T(t) and do not simplify the answer. I get T(t) = (3e^(3t), 2cos(2t), 2t) divided by the square root of (9e^(6t) + 4cos^2(2t) + 4t^2).
Thanks in advance.
I have discovered a truly marvellous signature, which this margin is too narrow to contain. -Fermat
Give me a lever long enough and a fulcrum on which to place it, and I shall move the world. -Archimedes
Young man, in mathematics you don't understand things. You just get used to them. - Neumann
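Both of the poster's answers check out: componentwise differentiation gives $r'(t)=(3e^{3t},\,2\cos 2t,\,2t)$, and the unit tangent is $T(t)=\frac{r'(t)}{\lVert r'(t)\rVert}$ with $\lVert r'(t)\rVert=\sqrt{9e^{6t}+4\cos^2 2t+4t^2}$, exactly as written.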
{"url":"http://www.mathisfunforum.com/viewtopic.php?id=19678","timestamp":"2014-04-18T08:11:03Z","content_type":null,"content_length":"26630","record_id":"<urn:uuid:ff273a41-57c1-452d-a3f4-6d3b729f3444>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00520-ip-10-147-4-33.ec2.internal.warc.gz"}
Electron. J. Diff. Eqns., Vol. 1999(1999), No. 07, pp. 1-37.
Distributional asymptotic expansions of spectral functions and of the associated Green kernels
R. Estrada & S. A. Fulling
Abstract: Asymptotic expansions of Green functions and spectral densities associated with partial differential operators are widely applied in quantum field theory and elsewhere. The mathematical properties of these expansions can be clarified and more precisely determined by means of tools from distribution theory and summability theory. (These are the same, insofar as recently the classic Cesaro-Riesz theory of summability of series and integrals has been given a distributional interpretation.) When applied to the spectral analysis of Green functions (which are then to be expanded as series in a parameter, usually the time), these methods show: (1) The ``local'' or ``global'' dependence of the expansion coefficients on the background geometry, etc., is determined by the regularity of the asymptotic expansion of the integrand at the origin (in ``frequency space''); this marks the difference between a heat kernel and a Wightman two-point function, for instance. (2) The behavior of the integrand at infinity determines whether the expansion of the Green function is genuinely asymptotic in the literal, pointwise sense, or is merely valid in a distributional (Cesaro-averaged) sense; this is the difference between the heat kernel and the Schrodinger kernel. (3) The high-frequency expansion of the spectral density itself is local in a distributional sense (but not pointwise). These observations make rigorous sense out of calculations in the physics literature that are sometimes dismissed as merely formal.
Submitted April 29, 1998. Published March 1, 1999.
Math Subject Classification: 35P20, 40G05, 81Q10.
Key Words: Riesz means, spectral asymptotics, heat kernel, distributions.
An addendum was attached on July 22, 2005. See last page of this article.
Show me the PDF file (286K), TEX file, and other files for this article.
Ricardo Estrada, P. O. Box 276, Tres Rios, Costa Rica. e-mail: restrada@cariari.ucr.ac.cr
S. A. Fulling, Department of Mathematics, Texas A&M University, College Station, Texas 77843-3368 USA. e-mail: fulling@math.tamu.edu
Return to the EJDE web page
{"url":"http://ejde.math.txstate.edu/Volumes/1999/07/abstr.html","timestamp":"2014-04-19T04:30:00Z","content_type":null,"content_length":"3080","record_id":"<urn:uuid:ab37256d-5202-4aeb-95c4-b1eadc43eeae>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00329-ip-10-147-4-33.ec2.internal.warc.gz"}
algebra (mathematics) :: Fields
A main question pursued by Dedekind was the precise identification of those subsets of the complex numbers for which some generalized version of the theorem made sense. The first step toward answering this question was the concept of a field, defined as any subset of the complex numbers that was closed under the four basic arithmetic operations (except division by zero). The largest of these fields was the whole system of complex numbers, whereas the smallest field was the rational numbers. Using the concept of field and some other derivative ideas, Dedekind identified the precise subset of the complex numbers for which the theorem could be extended. He named that subset the algebraic integers. Finally, Dedekind introduced the concept of an ideal. A main methodological trait of Dedekind's innovative approach to algebra was to translate ordinary arithmetic properties into properties of sets of numbers. In this case, he focused on the set I of multiples of any given integer and pointed out two of its main properties:
1. If n and m are two numbers in I, then their difference is also in I.
2. If n is a number in I and a is any integer, then their product is also in I.
As he did in many other contexts, Dedekind took these properties and turned them into definitions. He defined a collection of algebraic integers that satisfied these properties as an ideal in the complex numbers. This was the concept that allowed him to generalize the prime factorization theorem in distinctly set-theoretical terms. In ordinary arithmetic, the ideal generated by the product of two relatively prime numbers equals the intersection of the ideals generated by each of them. For instance, the set of multiples of 6 (the ideal generated by 6) is the intersection of the ideal generated by 2 and the ideal generated by 3. Dedekind's generalized versions of the theorem were phrased precisely in these terms for general fields of complex numbers and their related ideals. He distinguished among different types of ideals and different types of decompositions, but the generalizations were all-inclusive and precise. More important, he reformulated what were originally results on numbers, their factors, and their products as far more general and abstract results on special domains, special subsets of numbers, and their intersections.
Dedekind's results were important not only for a deeper understanding of factorization. He also introduced the set-theoretical approach into algebraic research, and he defined some of the most basic concepts of modern algebra that became the main focus of algebraic research throughout the 20th century. Moreover, Dedekind's ideal-theoretical approach was soon successfully applied to the factorization of polynomials as well, thus connecting itself once again to the main focus of classical algebra.
Systems of equations
In spite of the many novel algebraic ideas that arose in the 19th century, solving equations and studying properties of polynomial forms continued to be the main focus of algebra. The study of systems of equations led to the notion of a determinant and to matrix theory. Given a system of n linear equations in n unknowns, its determinant was defined as the result of a certain combination of multiplication and addition of the coefficients of the equations that allowed the values of the unknowns to be calculated directly.
For example, given the system
a[1]x + b[1]y = c[1]
a[2]x + b[2]y = c[2]
the determinant Δ of the system is the number Δ = a[1]b[2] − a[2]b[1], and the values of the unknowns are given by
x = (c[1]b[2] − c[2]b[1])/Δ
y = (a[1]c[2] − a[2]c[1])/Δ.
Historians agree that the 17th-century Japanese mathematician Seki Kōwa was the earliest to use methods of this kind systematically. In Europe, credit is usually given to his contemporary, the German coinventor of calculus, Gottfried Wilhelm Leibniz. In 1815 Cauchy published the first truly systematic and comprehensive study of determinants, and he was the one who coined the name. He introduced the notation (a[l, n]) for the system of coefficients and demonstrated a general method for calculating the determinant.
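The passage is describing what is now called Cramer's rule for a 2×2 system. A minimal sketch in Python (the function name and the worked example are mine, not Britannica's):

def solve_2x2(a1, b1, c1, a2, b2, c2):
    """Solve a1*x + b1*y = c1 and a2*x + b2*y = c2 by Cramer's rule."""
    # the determinant combines the coefficients exactly as in the text
    det = a1 * b2 - a2 * b1
    if det == 0:
        raise ValueError("determinant is zero: no unique solution")
    x = (c1 * b2 - c2 * b1) / det
    y = (a1 * c2 - a2 * c1) / det
    return x, y

# example: x + 2y = 5 and 3x - y = 1 have the solution x = 1, y = 2
print(solve_2x2(1, 2, 5, 3, -1, 1))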
{"url":"http://www.britannica.com/EBchecked/topic/14885/algebra/231082/Fields","timestamp":"2014-04-19T17:32:45Z","content_type":null,"content_length":"96844","record_id":"<urn:uuid:91dfd239-0ef6-493f-b008-f6bd11c6ae49>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00114-ip-10-147-4-33.ec2.internal.warc.gz"}
3.5 Linear Dependence and Independence
A linear dependency among vectors v(1) to v(k) is an equation of the form c(1)v(1) + c(2)v(2) + ... + c(k)v(k) = 0 in which the coefficients c(j) are not all zero. The vectors are linearly independent if there is no linear dependence among them, and linearly dependent if there is one or more linear dependence.
Example: suppose v(1) = i + j; v(2) = 2i; v(3) = 3j. Then v(1), v(2) and v(3) are linearly dependent because there is the relation 6v(1) = 3v(2) + 2v(3), or 6v(1) - 3v(2) - 2v(3) = 0.
Exercise 3.11 Prove: any k + 1 k-vectors are linearly dependent. (You can do it by using mathematical induction.) (If you are not familiar with mathematical induction, read the solution and become familiar with it!)
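The linked solution is not reproduced on this page; here is one standard induction argument of the kind the exercise asks for (a reader's sketch, not the course's own text). For $k=1$, two numbers $v(1),v(2)$ satisfy $v(2)\,v(1)-v(1)\,v(2)=0$, and the coefficients $v(2)$ and $-v(1)$ are not both zero unless both vectors are zero, in which case $1\cdot v(1)+1\cdot v(2)=0$ works. For the inductive step, take $v(1),\dots,v(k+1)$ in k-space. If every $v(i)$ has last coordinate 0, they effectively live in (k-1)-space, and already $v(1),\dots,v(k)$ are dependent by the induction hypothesis. Otherwise pick $v(j)$ whose last coordinate $c$ is nonzero and set $w(i)=v(i)-\frac{v(i)_k}{c}\,v(j)$ for the k vectors with $i\neq j$; each $w(i)$ has last coordinate 0, so these k vectors in (k-1)-space are dependent by the induction hypothesis, and expanding that relation back in terms of the $v(i)$ gives a dependence among the original vectors whose coefficients are not all zero.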
{"url":"http://ocw.mit.edu/ans7870/18/18.013a/textbook/HTML/chapter03/section05.html","timestamp":"2014-04-16T10:15:56Z","content_type":null,"content_length":"2631","record_id":"<urn:uuid:e1a5a53a-6474-4e13-9377-fa3d60d509de>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00520-ip-10-147-4-33.ec2.internal.warc.gz"}
Victory Gardens, NJ Math Tutor
Find a Victory Gardens, NJ Math Tutor
...I earned my Bachelor of Arts in Sociology from The College of New Jersey in May 2013. Since graduation, I have worked at a Non Profit Organization in Central Jersey, but am now looking to pursue my passion for education. I have completed numerous hours of volunteer work which has included tutoring students in a variety of subjects and working with elementary aged children.
49 Subjects: including algebra 2, elementary (k-6th), phonics, prealgebra
...I teach for "reasonableness" and love to take complicated things and make them easy. I believe that reading is the bedrock of all other education. Reading comprehension includes things like identifying the main idea and supporting details.
43 Subjects: including prealgebra, ACT Math, GED, elementary (k-6th)
...I taught precalculus as a high school math teacher and am extremely comfortable with all the material. Even though the material might look hard initially, with the proper instruction and right practice, you will master the concepts and even enjoy solving problems! Trigonometry might seem like a...
26 Subjects: including trigonometry, algebra 1, algebra 2, calculus
...When giving oral presentations students may need adequate training in pronunciation of English words to reach their intended audience. After a student has successfully executed an exemplary English writing project, additional mastery can be supported by reading and writing journals. Literature can be a literally amazing tool to reinforce your reading, English and grammar skills.
21 Subjects: including prealgebra, reading, English, writing
...I have 5 years of lab research studying air pollution. I have retired after 30 years of programming both scientific and commercial. When tutoring I stress methodology, drill and constant...
6 Subjects: including algebra 2, algebra 1, trigonometry, chemistry
{"url":"http://www.purplemath.com/Victory_Gardens_NJ_Math_tutors.php","timestamp":"2014-04-17T04:56:04Z","content_type":null,"content_length":"24222","record_id":"<urn:uuid:9ea3b4e9-030b-4e79-a0bf-6132d63978f1>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00305-ip-10-147-4-33.ec2.internal.warc.gz"}
Photography Wiki
The shutter speed rule
The simple description
Use the inverse of the focal length as the minimum shutter speed to avoid blurred pictures (i.e. 50mm -> max 1/50 sec).
"Executive summary"
There is a well known rule of thumb in the photographers' world that you shouldn't use a shutter speed slower than the inverse of the focal length. This rule comes from the 35mm film world. With such a camera, it is indeed recommended to shoot at, for instance, at least 1/200 sec or faster when shooting at 200mm, to avoid any motion blur due to your own (lack of) stability. This is of course valid for hand-held shooting only. The usual translation in the digital world for APS-C sensors is that you should take the crop factor (1.5) into account. This means that a 200mm lens will give you a field of view equivalent to a 300mm on a 35mm camera and hence will need a shutter speed of 1/300 sec or faster. This is more or less known to several generations of photographers, but it is more than a rule of thumb... you can show why it is true with some basic mathematics and optics... But there is more to it. Sensor resolution also plays a role, and if you are a pixel peeper, looking at your pictures at pixel level and printing huge posters, you'll see that the better the camera (the higher the resolution), the higher the needed shutter speed.
The boring explanation
Here follow some technical explanations. First of all, my deepest apologies to those not familiar with mathematics or physics… but the conclusions are very interesting (see at the end). I did some serious thinking to understand the rule of thumb of using a shutter speed equal to the inverse of the focal length. Here are the results; first some (half-serious) maths.
Let's start with the basic definitions and assumptions. To simplify the calculations, we assume we can model the lens with only one convex element with a focal distance f. We also assume that we focus on infinity, so the picture forms on the sensor at a distance equal to the focal length. Let's call
h = the height of the sensor (16mm for a Nikon D70, for instance, a 1.5-crop sensor camera)
v = the vertical resolution of the sensor (2000 for a Nikon D70, for instance, a 6MP camera)
s = shutter open time in seconds
The angle of view can easily be computed like this: $a = 2 \arctan \frac{h}{2f}$
50mm lens on a D70: 2 arctan(16/100) ≈ 18° vertical angle of view
Usual definition (taking the diagonal size into account):
50mm lens on a D70: 2 arctan(28.8/100) ≈ 32° diagonal angle
50mm lens on an F100: 2 arctan(43.2/100) ≈ 46.7° diagonal angle (full frame or film format)
We assume that the blur due to your own movements depends on the angular velocity of your movement (you usually use the left hand to hold the lens at a certain distance from the sensor, and the longer the lens, the larger the movement). The maximum angle variation during the shot should correspond to at most one pixel on the sensor:
d(a) = arctan(h/(vf)) ≈ h/(vf) for small angles
If you induce a movement resulting in an angular velocity of w, you need a shutter time s such that w·s ≤ d(a) (in other words, you have to shoot fast enough so that the angular movement you generate is less than the angle variation corresponding to a pixel on the sensor).
The conclusion: Combining the formulas I get s (shutter time in seconds) = d(a)/w = (h/v) · (1/w) · (1/f), or in other words s = constant · (1/f), which is the well known rule of thumb. HOWEVER, that constant factor depends on three things:
- the lower your own angular velocity (shake), the longer the shutter time you can allow (evident)
- the allowed shutter time is proportional to the height of the sensor (h); this is also known: DX-sized (16mm) vs FF-35mm (24mm) means the safe shutter speed should be multiplied by the crop factor
- the shutter time should also be adapted based on the RESOLUTION of the sensor (v)
If linear resolution is increased by a factor 2, the safe speed should be as well: to be clear, if 1/100 sec is safe on a D1, you NEED 1/200 sec on a D2X to get a sharp picture at pixel level.
Disclaimer: there are some approximations in the reasoning, but the qualitative conclusions are valid. I will try to rephrase the conclusions using normal words:
The concept of a "safe shutter speed" based on focal length is scientifically correct (to compensate for motion blur due to the photographer):
100mm means 1/100 sec on a FF or 35mm camera
400mm means 1/400 sec, etc...
but the following elements also have to be taken into account:
- your own "shaking" ability - evident
- the crop factor:
100mm means 1/100 sec safe on an F100, BUT
100mm means 1/150 sec safe on a D70 or D200 or D2X...
- the sensor resolution:
1/100 sec just safe (for you) on a D1 means 1/200 sec (also for you) will be needed on a D200, or if 1/100 is safe on a D70 you'll need 1/125 on a D200; otherwise you'll get the impression that your pictures are not too sharp.
Hope this makes the point a bit more understandable.
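To make the corrected formula concrete, here is a small calculator; the function and the default shake rate are illustrative assumptions, not numbers from the article:

def max_shutter_time(focal_mm, sensor_height_mm=16.0, v_pixels=2000,
                     shake_rad_per_s=0.005):
    """Longest hand-held exposure in seconds before shake exceeds one pixel.

    Implements s = h / (v * w * f): h = sensor height, v = vertical pixel
    count, w = angular shake velocity in rad/s, f = focal length (same
    length unit as h).
    """
    return sensor_height_mm / (v_pixels * shake_rad_per_s * focal_mm)

# 50 mm lens on a D70-like body: print the required "1/x sec" speed
s = max_shutter_time(50)
print("use 1/%d sec or faster" % round(1 / s))

Doubling v_pixels halves the returned time, and doubling sensor_height_mm doubles it, which is exactly the resolution and crop-factor behavior argued above.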
{"url":"http://www.techniphoto.com/wiki/index.php?title=The_shutter_speed_rule","timestamp":"2014-04-17T21:24:21Z","content_type":null,"content_length":"18171","record_id":"<urn:uuid:50e5f593-7849-4479-a552-5636cd1407b9>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00248-ip-10-147-4-33.ec2.internal.warc.gz"}
Multiple Axes Scales with Old Faithful Data
R users find it notoriously difficult to plot two series on the same graph but with different y scales. Plotly has just introduced a feature that makes it really, really easy. Read on to find out how.
The above graph of the classic R Old Faithful dataset is a great example of when you might want two different scales on a single graph. The histogram shows the bimodal nature of the time between eruptions. Awesome. However, one of the coolest features of the Old Faithful data is that the bimodal eruption lag times correlate to a separate bimodal distribution of eruption lengths. A straight scatterplot doesn't show this relationship very well, and if we try to graph both a scatterplot and a histogram on the same axes, we get an extremely unhelpful graph (with icky mixed units on the y axis).
Multiple y axes to the rescue! Our histogram scale of Probability is independent of our Eruption Time scale in minutes. If our histogram changes (maybe we add more points, or change from a probability distribution to a frequency distribution), the graph is still clear and compact. Plus, the scatterplot overlaid on the histogram has the added bonus of reinforcing the nature of a histogram: it represents how many points fall within each bin on the x axis (you can read more about the nature of histograms here).
Plotly multi-axis graphs are also fun to interact with. If you mouse over an axis (so that the cursor becomes a double arrow), you can click and drag to move just the trace that axis represents up or down (or left/right if you click on the x axis). If you move the mouse to the top or bottom of the axis (so that the cursor becomes a single arrow), click, and drag, then you can zoom in or out on a single trace. So our first graph is just a few clicks and drags away from this one:
If you're an R user, you're probably chomping at the bit to know how to make this graph. We've made it pretty easy for you. You can find all the code which created the graph in this gist, but the bulk of it (the first 112 lines) has to do with styling the layout, look, and feel of the graph. You can read more about building and styling your graphs in R in our API documentation. Here, I'll highlight the key elements which build your multi-axis graph. We store data and layout information in named lists. Axes are lists within these lists. Here, we create three axis lists: xaxis, yaxis, and yaxis2, placing one of them on the right and one on the left (the default). You can read about many more options for multiple axes, inset axes, or separate subplots in our documentation.
Next, we create two graph objects: the histogram and the scatterplot. The key line here is yaxis = 'y2'. 'y2' points this trace to the yaxis2 object (while 'y3' would point to a yaxis3 object, if we had created one). If you're already comfortable with R code in Plotly, you see that you're really adding just a few new lines of code. If this is your first time checking out Plotly and R, we hope the whole process seems simple and clean to you. If you're not comfortable or don't feel like coding, Plotly also makes it easy to add multiple axes in our GUI: once you've created a graph with multiple traces, you can add a new axis under the Style menu.
We'll leave you with an alternative way to view this Old Faithful data, still built using Plotly's multi-axis feature. The R code for this graph can be found here (again, the first 118 lines largely define the style and layout).
The histograms for each variable are plotted on sub axes, the width of which is determined by the “domain” element. While this type of graph might be something you’re already comfortable making in R, we hope you see how easy it is to make in Plotly, interact with dynamically, customize, save, and share.
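The post's gists are in R; since the same layout options exist across Plotly's APIs, the two-axis idea translates almost word for word to Python. The following is a reader's sketch against Plotly's Python API, and the data values are placeholders standing in for the Old Faithful columns, not the real dataset:

import plotly.graph_objects as go

waiting = [79, 54, 74, 62, 85, 55, 88, 85, 51, 85]        # minutes between eruptions
eruption_len = [3.6, 1.8, 3.3, 2.3, 4.5, 1.9, 4.7, 3.6, 1.6, 4.4]

fig = go.Figure()
# histogram drawn against the default (left) y axis
fig.add_trace(go.Histogram(x=waiting, histnorm="probability", name="waiting"))
# the scatter trace is pointed at the second axis, just like yaxis = 'y2' in R
fig.add_trace(go.Scatter(x=waiting, y=eruption_len, mode="markers",
                         name="eruption time", yaxis="y2"))
fig.update_layout(
    xaxis=dict(title="Time between eruptions (min)"),
    yaxis=dict(title="Probability"),
    yaxis2=dict(title="Eruption time (min)", overlaying="y", side="right"),
)
fig.show()

The overlaying="y" and side="right" options in yaxis2 mirror the right-hand placement the post describes for the R layout list.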
{"url":"http://blog.plot.ly/post/69647810381/multiple-axes-scales-with-old-faithful-data","timestamp":"2014-04-16T19:12:23Z","content_type":null,"content_length":"124955","record_id":"<urn:uuid:cac2d26f-56ae-4230-ac27-79482473c99f>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00661-ip-10-147-4-33.ec2.internal.warc.gz"}
Showing that a function is not integrable
August 16th 2011, 11:12 AM #1
I've got what looks like a simple problem but I just can't get it right. Here it goes: Show that $f(x)=\frac{d}{dx} \left(x^2\sin x^{-3}\right)$, $x>0$, is not in $L^1(0,1)$. My first thought was to simply compute the integral $\int_{0}^{1}|f(x)|dx$ and show that it diverges; however, I had no luck doing that. Then I thought that I might simply estimate it from below with something that I know is infinite, but no luck there either. All I got was: $\int_{0}^{1}|2x\sin x^{-3}-3x^{-2}\cos x^{-3}|dx \geq \int_{0}^{1}(|2x\sin x^{-3}|-|3x^{-2}\cos x^{-3}|)dx$
August 16th 2011, 12:49 PM #2
Re: Showing that a function is not integrable
You don't need to compute the integral. Just notice that $|f(x)|\geq 3x^{-2}|\cos (x^{-3})|-2x$ and the integral $\int_0^1 x^{-2}|\cos(x^{-3})|\,dx$ is divergent.
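Spelling the hint out, in a reader's wording rather than the replier's: from the reverse triangle inequality $|a-b|\ge |b|-|a|$, we get $|f(x)|\ge 3x^{-2}|\cos(x^{-3})|-2x$, and since $\int_0^1 2x\,dx$ is finite it suffices that $\int_0^1 x^{-2}|\cos(x^{-3})|\,dx$ diverges. Substituting $u=x^{-3}$ (so $x^{-2}\,dx=\tfrac{1}{3}u^{-2/3}\,du$ after flipping the limits) turns it into $\tfrac{1}{3}\int_1^{\infty}\frac{|\cos u|}{u^{2/3}}\,du$, which diverges because $|\cos u|$ has mean $2/\pi$ over each period while $\sum_n n^{-2/3}$ diverges. Hence $f\notin L^1(0,1)$.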
{"url":"http://mathhelpforum.com/differential-geometry/186241-showing-finction-not-integrable.html","timestamp":"2014-04-17T11:48:18Z","content_type":null,"content_length":"33865","record_id":"<urn:uuid:195a67e7-2366-4f67-979d-ed4db1d2358c>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00320-ip-10-147-4-33.ec2.internal.warc.gz"}
FOM: Question: Normal form
G Barmpalias georgeb at amsta.leeds.ac.uk
Thu Aug 16 15:16:36 EDT 2001
A question: An improvement of the Normal Form Theorem for partial recursive functions has been obtained, such that the (universal) predicate T_n and the function U belong to the smallest Grzegorczyk class E^0 (for every partial recursive function f, there is an index e such that f(x)= U(\mu~y[T_n(e,x,y)]) ). Does any member of the list know a specific reference for this result?
PS Odifreddi mentions the result in his book Classical Recursion Theory, Vol. II, page 306.
More information about the FOM mailing list
{"url":"http://www.cs.nyu.edu/pipermail/fom/2001-August/005009.html","timestamp":"2014-04-21T15:19:50Z","content_type":null,"content_length":"2672","record_id":"<urn:uuid:96de6feb-7bd2-48c1-a9a4-93c9bf52c648>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00237-ip-10-147-4-33.ec2.internal.warc.gz"}
Pattern independent maximum current estimation in power and ground buses of CMOS VLSI circuits: algorithms, signal correlations, and their resolution
Results 1 - 10 of 30
- IEEE, IEEE Computer Society, 1995
- in VLSI circuits," in Proc. Int. Conf. Comput.-Aided Des. · Cited by 20 (0 self)
In this paper, we present simulation techniques to estimate the worst-case voltage variation using an RC model for the power distribution network. Pattern independent maximum envelope currents are used as a periodic input for performing the frequency-domain steady-state simulation of the linear RC circuit to evaluate the worst-case instantaneous voltage drop for RC power distribution networks. The proposed technique, unlike existing techniques, is guaranteed to give the maximum voltage drop at nodes in the RC power distribution network. We present experimental results to compare the frequency-domain and time-domain simulation techniques for estimating the maximum instantaneous voltage drop. We also present frequency domain sensitivity analysis based decoupling capacitance placement for reducing the voltage variation in the power distribution network. Experimental results on circuits extracted from layout are presented to validate the simulation and optimization techniques.
- Proceedings of IEEE 1997 Custom Integrated Circuits Conference, 1997 · Cited by 20 (8 self)
We present a genetic-algorithm-based approach for estimating the maximum power dissipation and instantaneous current through supply lines for CMOS circuits. Our approach can handle large combinational and sequential circuits with arbitrary but known delays. To obtain accurate results we extract the timing and current information from transistor-level and general-delay gate-level simulation. Our experimental results show that the patterns generated by our approach produce on the average a lower bound on the maximum power which is 41% tighter than the one obtained by weighted random patterns for estimating the maximum power. Also, our lower bound for the maximum instantaneous current is 21% tighter as compared to the weighted random patterns. 1. Introduction With increasing demands for high reliability in modern VLSI designs, accurate estimation of the maximum power dissipation and maximum instantaneous current during the design process is becoming essential. Excessive power dissipatio...
- 1995
· Cited by 18 (0 self)
With the advent of portable and high-density microelectronic devices, the power dissipation of very large scale integrated (VLSI) circuits is becoming a critical concern. Accurate and efficient power estimation during the design phase is required in order to meet the power specifications without a costly redesign process. Recently, a variety of power estimation techniques have been proposed, most of which are based on: (1) the use of simplified delay models, and (2) modeling the long-term behavior of logic signals with probabilities. The array of available techniques differ in subtle ways in the assumptions that they make, the accuracy that they provide, and the kinds of circuits that they apply to. In this tutorial, I will survey the many power estimation techniques that have been recently proposed and, in an attempt to make sense of all the variety, I will try to explain the different assumptions on which these techniques are based, and the impact of these assumptions on their accuracy and speed.
- IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 2005
We show that the run time is linear with the length of the current waveforms allowing for extensive vectors, up to millions of cycles, to be analyzed. We implemented the approach on a number of grids, including a grid from an industrial microprocessor and demonstrate its accuracy and efficiency. The proposed statistical analysis can be use to determine which portions of the grid are most likely to fail as well as to provide information for other analyses, such as statistical timing analysis. - in Proc. of ISLPED , 1998 "... We propose a new technique for generating a small set of patterns to estimate the maximum power supply noise of deep sub-micron designs. We first build the charge/discharge current and output voltage waveform libraries for each cell, taking power and ground pin characteristics, the power net RC and ..." Cited by 13 (7 self) Add to MetaCart We propose a new technique for generating a small set of patterns to estimate the maximum power supply noise of deep sub-micron designs. We first build the charge/discharge current and output voltage waveform libraries for each cell, taking power and ground pin characteristics, the power net RC and other input characteristics as parameters. Based on the cells ’ current and voltage libraries, the power supply noise of a 2-vector sequence can be estimated efficiently by a cell-level waveform simulator. We then apply the Genetic Algorithm based on the efficient waveform simulator to generate a small set of patterns producing high power supply noise. Finally, the results are validated by simulating the obtained patterns using a transistor level simulator. Our experimental results show that the patterns generated by our approach produce a tight lower bound on the maximum power supply noise. 1. - in VLSI Circuits”, Proceedings of Design Automation Conference , 1997 "... This paper proposes to use quantile points of the cumulative distribution function for powerconsumption to provide detailed information about the powerdistribution in a circuit. Thepaper also presents two techniquesbasedon population pruning and stratification to improve the efficiency of estimation ..." Cited by 11 (4 self) Add to MetaCart This paper proposes to use quantile points of the cumulative distribution function for powerconsumption to provide detailed information about the powerdistribution in a circuit. Thepaper also presents two techniquesbasedon population pruning and stratification to improve the efficiency of estimation. Both population pruningand stratification are basedon a low cost predictor, such as zero-delay power estimate. Experimental results show the effectiveness of the proposed techniques in providing detailed power distribution information. 1 Introduction In the past, average and peak power dissipations have been the primary focus of power estimation techniques and tools. It has however become important to estimate the power distribution of the circuit over a large number of clock cycles. This information is especially useful for determining the circuit reliability, performing dc/ac noise analysis, andchoosingappropriate packagingand cooling techniques for IC's. The power consumption per cl... , 2000 "... We present new techniques for estimating the maximum instantaneous current through the power supply lines for CMOS circuits. We investigate four different approaches: (1) timed-ATPG based approach, (2) probability based approach, (3) genetic algorithm based approach and (4) integer linear programmin ..." 
Cited by 9 (3 self) Add to MetaCart We present new techniques for estimating the maximum instantaneous current through the power supply lines for CMOS circuits. We investigate four different approaches: (1) timed-ATPG based approach, (2) probability based approach, (3) genetic algorithm based approach and (4) integer linear programming (ILP) approach. The first three approaches produce a tight lower bound on the maximum current. The ILP based approach produces the exact solutions for small circuits, and tight upper bounds of the solutions for large circuits. Our experimental results show that the upper bounds produced by the ILP approach combined with the lower bounds produced by the other three approaches confine the exact solution for the maximum instantaneous current to a small range. Index Terms -- Maximum instantaneous current, power supply, timed-ATPG based, probability based, genetic algorithm based, integer linear programming based. 1. INTRODUCTION Continuous shrinking of the device feature sizes introduces an... - IEEE TRANSACTIONS ON VLSI SYSTEMS , 1997 "... In this paper, we present a new gate-level approach to power and current simulation. We propose a symbolic model of complementary metal–oxide–semiconductor (CMOS) gates to capture the dependence of power consumption and current flows on input patterns and fan-in/fan-out conditions. Library elements ..." Cited by 9 (0 self) Add to MetaCart In this paper, we present a new gate-level approach to power and current simulation. We propose a symbolic model of complementary metal–oxide–semiconductor (CMOS) gates to capture the dependence of power consumption and current flows on input patterns and fan-in/fan-out conditions. Library elements are characterized once for all and their models are used during event-driven logic simulation to provide power information and construct time-domain current waveforms. We provide both global and local pattern-dependent estimates of power consumption and current peaks (with accuracy of 6 and 10 % from SPICE, respectively), while keeping performance comparable with traditional gate-level simulation with unit delay. We use VERILOG-XL as simulation engine to grant compatibility with design tools based on Verilog HDL. A Web-based user interface allows our simulator (PPP) to be accessed through the Internet using a standard web browser.
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=337641","timestamp":"2014-04-20T18:06:56Z","content_type":null,"content_length":"40681","record_id":"<urn:uuid:a2099ee3-cb40-4619-89fa-93e4e517b5dc>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00395-ip-10-147-4-33.ec2.internal.warc.gz"}
School of Arts & Sciences
Email: tbeckett@endicott.edu Phone: (978) 232-2183 Office Location: Judge Science Center, Office Number: 211 Office Hours: 1 - 3 M/W/F
Courses: MTH 126 - Applied Statistics; MTH 115 - Perspectives of Geometry; MTH 326 - Advanced Statistics; ED 301 - Math Methods
· Ed.D. - Math Education, University of Massachusetts - 1999. Dissertation Title: Development of Conceptual Understanding of Statistics for Concrete Thinkers in a Constructivist Learning Environment
· M.S. in Mathematics; Salem State College in Salem, MA. 1986
· B.S. in Systems Analysis; Miami University in Oxford, Ohio. 1971
RELATED ACTIVITIES:
PALMS professor for the Northeast region · attend workshops for the purpose of experimenting with new methodology for using inquiry-based instruction in mathematics
STEMTEC professor · attend a week-long workshop of inquiry-based, constructivist methodology for math and science teachers
2001 grant from Massachusetts Department of Education Summer Content Institute · Develop, teach, and create assessment tools for 20 middle school teachers in a graduate-level, 45-hour workshop entitled "Middle School Mathematics: Data Analysis, Statistics and Probability".
2002 grant from Massachusetts Department of Education · Develop technology-based workshops and act as a mentor to 12 middle school mathematics teachers in the Danvers Middle School
2002 grant from Massachusetts Department of Education Summer Content Institute · Develop, teach and create assessment tools for 25 middle school teachers in a graduate-level, 60-content-hour workshop entitled "Technology and Tools for Teaching Algebra"
RESEARCH ACTIVITIES:
NCTM (National Council of Teachers of Mathematics) convention, Anaheim, CA, April 6-9, 2005. · Present a workshop entitled "Picture It! A Visual Approach to Problem Solving"
NCTM convention in San Antonio, April 9, 2003. · Present a workshop on inquiry-based instruction for middle school teachers.
NCTM convention in Chicago, April 17, 2000. · Presented a workshop entitled "Bringing Life to Statistics" for middle school teachers
ICTCM (International Conference for Technology and Computers in Mathematics), Atlanta, Nov. 2000. · Poster presentation entitled "Jump: Using the CBL to Teach Introductory Statistics"
Beckett, T. (2004). Probability & statistics: Beyond the formula. Boston, MA: Pearson Custom.
Campus Involvement: President, ECFA (Endicott College Faculty Association) 2005 - 2007; Chair, Curriculum Committee 2001 - 2003
{"url":"http://www.endicott.edu/php/faculty/index.php?var=196","timestamp":"2014-04-21T14:47:10Z","content_type":null,"content_length":"7409","record_id":"<urn:uuid:6ac30115-6772-410f-8a02-91e39672400f>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00454-ip-10-147-4-33.ec2.internal.warc.gz"}
DCM Algorithm vs Direct Integration
Hi Everyone. I am new to this forum... long time reader, first time poster. I hope to contribute to the community. My question is regarding the DCM algorithm: I have simulated Bill's DCM algorithm in MATLAB/Simulink, and it seems that direct integration always meets or exceeds the DCM integration/multiplication. See results below: It seems that by simply integrating the said non-linear differentials, I get better accuracy and MUCH LESS processing power!! So why even use the DCM orientation integration? What are the advantages? Please note: In this simulation, I have intentionally not yet taken into account the PI drift correction (because, no matter what pre-algorithm you use, in the long run, post-PI correction will always give you the correct orientation).
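For readers comparing the two schemes: below is a minimal Python sketch of generic textbook kinematics for both updates, assuming body-rate gyro input (p, q, r); this is not the exact algorithm from the post. The usual argument for the DCM form is that it has no singularity and the renormalization step keeps the matrix orthonormal, whereas direct Euler-angle integration blows up near +/-90 degrees of pitch:

import numpy as np

def euler_rate_update(angles, omega, dt):
    # Direct integration of Euler-angle kinematics (roll, pitch, yaw).
    # Singular as pitch approaches +/-90 deg (cos(theta) -> 0).
    phi, theta, _ = angles
    p, q, r = omega
    phi_dot   = p + np.tan(theta) * (q * np.sin(phi) + r * np.cos(phi))
    theta_dot = q * np.cos(phi) - r * np.sin(phi)
    psi_dot   = (q * np.sin(phi) + r * np.cos(phi)) / np.cos(theta)
    return np.asarray(angles) + dt * np.array([phi_dot, theta_dot, psi_dot])

def dcm_update(R, omega, dt):
    # DCM update: R <- R (I + [omega]_x dt), then re-orthonormalize.
    p, q, r = omega
    wx = np.array([[0.0, -r,  q],
                   [r,  0.0, -p],
                   [-q,  p, 0.0]])
    R = R @ (np.eye(3) + wx * dt)
    # Renormalization removes the drift from orthonormality that
    # accumulates with first-order integration error.
    u, _, vt = np.linalg.svd(R)
    return u @ vt

The extra multiplies buy robustness: without a correction step, both methods drift, but only the Euler form can hit a genuine singularity.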
{"url":"http://diydrones.com/forum/topics/dcm-algorithm-vs-direct?id=705844%3ATopic%3A184295&page=2","timestamp":"2014-04-16T10:13:59Z","content_type":null,"content_length":"84533","record_id":"<urn:uuid:335f4ef7-ce5c-400f-9fdc-335dd3bedf8b>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00438-ip-10-147-4-33.ec2.internal.warc.gz"}
[plt-scheme] creating nested lists in DrScheme
From: Aniket Karmarkar (akarmarkar at mail.bradley.edu)
Date: Fri May 7 23:55:43 EDT 2010
That's what this function does:
(define (equal L1 L2)
  (cond ((eq? (caar L1) (car L2)) (car L1))
        (else (equal (cdr L1) L2))))
This function finds the sublist that matches. I had it before but I forgot to include it. So it returns (b c d). I tried the map function but you need both arguments to have the same number of elements. :(
On Fri, May 7, 2010 at 10:39 PM, Stephen Bloch <sbloch at adelphi.edu> wrote:
> On May 7, 2010, at 10:42 PM, Aniket Karmarkar wrote:
> This is what I am trying to do.
> Write the function partialpaths, of two arguments. The first argument is a list of lists, representing a graph (adjacency list representation of a graph). The second argument is a partial path in reverse order; for example, the path "from a to b to c" is represented by the list (c b a). The function should return a list of sublists containing all possible expansions of this partial path by one node. For example: (partialpaths '( (a b c) (b c d) (c d a) (d)) '(b a)) should return: ( (c b a) (d b a))
> I get the (b c d) part; now I need to attach the cdr of this to b a multiple times, so I try putting it in a loop.
> Here's what I have:
> (define nod ())
> (define ma '('()))
> (define (partialpaths L1 L2)
>   (set! nod (equal L1 L2))
>   (do ()
>     ;exit test
>     ((null? (cdr nod)))
>     (list ma (attach (cdr nod) L2))
>     (set! nod (cdr nod)))
>   ma)
> Yigg. Are the loop, the mutation, and the global variables really necessary? With all due respect, this doesn't look like Scheme code to me :-)
> (define (attach List1 List2)
>   (list (car List1) List2))
> This almost certainly doesn't do what you think it does.
> Seems to me what you want to do is extract the first (i.e. last) element of the path, find the row of the table that starts with it, and cons each of the other elements of that row onto the path.
> So off the top of my head, I would define a variable equal to (car path), then another equal to the row (if any) in the adjacency matrix whose car matches it (this may require a helper function), then map over the cdr of the row a function which conses its argument onto the path. I did this in about six lines (not counting test cases) with no loops, no mutation, no global variables, and only one recursion (in the helper function).
> Stephen Bloch
> sbloch at adelphi.edu
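Bloch's recipe translates almost word for word. Here is a sketch in Python rather than Scheme (the shape is the same; all names here are made up, not from the thread):

def partial_paths(graph, path):
    # graph: list of rows, each row = [node, neighbor1, neighbor2, ...]
    # path: partial path in reverse order, e.g. ['b', 'a'] for a -> b.
    head = path[0]                                  # most recent node
    row = next(r for r in graph if r[0] == head)    # its adjacency row
    # Extend the path by each neighbor of the head node.
    return [[n] + path for n in row[1:]]

# partial_paths([['a','b','c'], ['b','c','d'], ['c','d','a'], ['d']], ['b','a'])
# -> [['c','b','a'], ['d','b','a']]

No loops, no mutation, no globals: find the row, then map a cons (here, list prepend) over its tail.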
{"url":"http://lists.racket-lang.org/users/archive/2010-May/039409.html","timestamp":"2014-04-19T02:57:37Z","content_type":null,"content_length":"8693","record_id":"<urn:uuid:500d8612-cfb0-4210-89bd-ab3c8a749be2>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00226-ip-10-147-4-33.ec2.internal.warc.gz"}
Chester, PA
Find a Chester, PA Precalculus Tutor
...I hold degrees in economics and business and an MBA. I have been in upper management since 2004 and have had the opportunity to teach classes in international business, strategic management, and operations management at a local university. In the past 10 years, I have taught various freshman and sophomore level classes in mathematics which have included modules in linear algebra.
13 Subjects: including precalculus, calculus, algebra 1, geometry
What makes me most happy about tutoring is the emotional reward: seeing someone I help feel and perform better with their subject is what keeps me going and wanting to help more people. My name is Michael, and I am an experienced young professional conducting independent research at UPenn. I studi...
9 Subjects: including precalculus, calculus, physics, geometry
...Another of my elementary school students was having difficulty with fractions and division. It was a challenge, but by breaking down certain terms and providing shortcuts that worked for him, I was able to assist him. Again, I work with each student to determine the best approach for them.
21 Subjects: including precalculus, English, reading, algebra 1
...I enjoy working with teens, and I generally get along pretty well with them. Although, adults are okay too (they tend to ask better questions). So if you are someone in need of tutoring in math and/or physics (and there is no shame in asking for extra help), then please feel free to contact me. ...
16 Subjects: including precalculus, English, calculus, physics
...Hard work pays off! My background and credentials related to Study Skills include: As an Instructional Aide in the Lower Merion School District I work daily under the supervision of Special Educational and other content certified teachers to help students: * review goals * review current progre...
35 Subjects: including precalculus, chemistry, English, geometry
{"url":"http://www.purplemath.com/chester_pa_precalculus_tutors.php","timestamp":"2014-04-21T07:28:23Z","content_type":null,"content_length":"24333","record_id":"<urn:uuid:31c686a1-cbbd-41a9-a5ca-41624df9df22>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00062-ip-10-147-4-33.ec2.internal.warc.gz"}
A Guide to the Max Beberman Film Collection, ca. 1950-1960
Beberman was a professor of math education at the University of Illinois and a leader in the "new math" movement. The collection consists of 47 film reels of the University of Illinois Committee on School Mathematics (UICSM) series "Teaching High School Mathematics," featuring mathematician Max Beberman from the University of Illinois. DVDs of each title are available for viewing. Forms part of the Archives of American Mathematics.
Unrestricted access. Viewing equipment for 16 mm reels not currently available. DVDs are available for each title.
Max Beberman Film Collection, ca. 1950-1960, Archives of American Mathematics, Dolph Briscoe Center for American History, University of Texas at Austin.
Teaching High School Mathematics, first course: 2R771 DVDs of all titles
FILM2/F20 Teaching High School Mathematics: Numbers and Numerals, Part 1
FILM2/F21 Numbers and Numerals, Part 2; 81648-1
FILM2/F22 Real Numbers: Developing the Concept; 81649-1
FILM2/F23 Adding Real Numbers; 90053-1
FILM2/F24 Advent of Awareness; 81650-1
FILM2/F25 Isomorphism: Developing the Concept; 81651-1-A
FILM2/F26 Isomorphism: Developing the Concept; 81651-1-B
FILM2/F27 Punctuation and Conventions in Mathematics, Pt. 1; 81652-1
FILM2/F28 Punctuation and Conventions in Mathematics, Pt. 2; 81653-1
FILM2/F29 Operations: Binary, Singulary; 81654-1
FILM2/F30 Operation Machines; 81655-1
FILM2/F31 Inverses of Operations; 81656-1
FILM2/F32 Functions: Foreshadowing the Concept; 81657-1
FILM2/F33 Subtracting Real Numbers; 81658-1
FILM2/F34 Dividing Real Numbers; 81659-1
FILM2/F35 Distributive Principles for Numbers -- Arithmetic, Pt. 2; 90056-1
FILM2/F36 Basic Principles for Real Numbers, Part 3; 81660-1
FILM2/F37 Basic Principles for Real Numbers, Part 4: Discovery and Patterns; 90057-1
FILM2/F38 Comparing Real Numbers: The Number Line; 81661-1
FILM2/F39 Prerequisite to Communication; 81662-1
FILM2/F40 Numerical Variables: Developing the Concept, Pt. 1; 81664-1
FILM2/F41 Numerical Variables: Developing the Concept, Pt. 2; 81664-2
FILM2/F42 Bound Variables: Matching Language with Awareness; 90058-1
FILM2/F43 Verbalizing Generalizations in the Classroom; 81665-1
FILM2/F44 Prelude to Deduction; 81666-1
FILM2/F46 Substitution for the Linking Rule; 90059-1
FILM2/F47 Prelude to Proofmaking; 81666-1
FILM2/F48 Proving Generalizations, Part 1, Test Patterns Principle; 81667-1
FILM2/F49 Proving Generalizations, Part 2, Classroom Examples; 81668-1
FILM2/F50 Organizing Knowledge by Deduction; 90061-1
FILM2/F51 Principles and Discovery in Algebraic Manipulation, Part 2: Simplification; 81670-1
FILM2/F52 Principles and Discovery in Algebraic Manipulation, Part 4: Some Other Common Cases; 81671-1
FILM2/F53 Sentences and Solution Sets; 81672-1
FILM2/F54 Naming Sets: The Set Abstractor; 81673-1
FILM2/F55 Number Line Graphs of Solution Sets; 81674-1
FILM2/F56 Solving Equations, Informal Approach; 81675-1
FILM2/F57 Logical Basis for Equation Transformation Basis Principles, Pt. 1; 81676-1
FILM2/F58 Logical Basis for Equation Transformation Basis Principles, Pt. 2; 81677-1
FILM2/F59 Logical Basis for Equation Transformation Basis Principles, Pt. 3; 81678-1
FILM2/F60 Subset of a Set: Developing the Concept; 81679-1
FILM2/F61 Equivalent Equations: Developing the Concept; 81679-1
FILM2/F62 Equivalent Equations and Transformation Principles; 81680-1
FILM2/F63 Equation Transformation Principles in Practice, Part 1; 81681-1
FILM2/F64 Equation Transformation Principles in Practice, Part 2; 81682-1
FILM2/F65 Transformation Principles for Equations, Part 1; 81683-1
FILM2/F66 Transformation Principles for Equations, Part 2; 81684-1
FILM2/F67 Solving Worded Problems; 81685-1
{"url":"http://www.lib.utexas.edu/taro/utcah/00184/00184-P.html","timestamp":"2014-04-16T05:44:47Z","content_type":null,"content_length":"16778","record_id":"<urn:uuid:aab48a4e-9fa0-47ef-9eb1-c93416a965fa>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00106-ip-10-147-4-33.ec2.internal.warc.gz"}
John Frank Adams
Born: 5 November 1930 in Woolwich, London, England
Died: 7 January 1989 near Brampton, Huntingdonshire, England
Frank Adams's mother was Jean Mary Baines, a biologist, and his father was William Frank Adams, a civil engineer. He was the eldest of his parents' two children, having one younger brother. The family was evacuated from London during World War II, which somewhat disrupted his early education. As a consequence he attended school in a number of places, but his education was mainly at Bedford School. By the time his schooling was completed World War II had ended, but Britain still had national service and all young men were required to serve for two years. Adams served in the Royal Engineers during 1948 and 1949 before beginning his university education. Adams entered Trinity College, Cambridge, in 1949 to study mathematics. He took Part II of the Mathematical Tripos in 1951, and Part III in the following year. After taking his first degree he started graduate work at Cambridge with Besicovitch on geometric measure theory. He married Grace Rhoda Carty in 1953. James writes in [1]:- Soon after their marriage she became a minister in the Congregational church. They had a son and three daughters (one adopted). Family life was extremely important to Adams, though he preferred to keep it separate from his professional life. The family used to do many things together, especially fell-walking in the Lake District. He changed supervisors and began working on algebraic topology with Shaun Wylie. However, he was most strongly influenced by Henry Whitehead, who led the foremost British school of algebraic topology. This happened during the year 1954, which Adams spent as a junior lecturer at the University of Oxford. He won a Fellowship at Trinity College, Cambridge, with his doctoral thesis on spectral sequences, On Special Sequences of Self-Obstruction Invariants, which he submitted in 1955. He returned to Cambridge in 1956 to take up the Fellowship, and during this period he developed the spectral sequence which today is called the "Adams spectral sequence". Adams won a Commonwealth Scholarship which enabled him to visit Chicago as a research associate in 1957-58. While in the United States he also visited Princeton. Adams said:- ... I regard the progress of my researches in America as most successful. ... By good luck, moreover, my new methods were sufficiently powerful to answer one of the classical problems of my subject, that proposed by H Hopf in 1935. The conjecture that Adams solved was the famous conjecture about the existence of H-structures on spheres. On his return from the United States he became a College Lecturer at Trinity Hall, Cambridge. His work turned towards K-theory, the generalised cohomology theory on vector bundles. Using this theory he solved another important conjecture, this one being about vector fields on spheres. After spending further time in Princeton, Adams took up a post at Manchester as a Reader in 1962, being appointed to Newman's chair when he retired in 1964. At this time he became Fielden professor. He continued to produce work of outstanding depth and originality, and during his first few years at Manchester he wrote a series of papers On the groups J(X) which were highly influential in homotopy theory. In 1964 Adams was elected a Fellow of the Royal Society.
In [3] James says:- It was in 1965, however, that he suffered the first attack of a psychiatric illness, as a result of which he was on sick leave for some months. It was apparently brought on by the worry caused by his responsibilities as head of department ... In 1970 Adams succeeded Hodge as Lowndean Professor of Astronomy and Geometry at Cambridge, and at this time he returned to Trinity College. His research continued to be of fundamental importance in the homotopy theory of classifying spaces of topological groups, finite H-spaces and equivariant homotopy theory. Around this time, in addition to his research papers, he began to publish expository work, some resulting from lecture courses. These books are of major importance, and include Stable homotopy theory (1964), Lectures on Lie groups (1969), Algebraic topology: a student's guide (1972), Stable homotopy theory and generalized homology (1974), Localisation and completion (1975), and Infinite loop spaces (1978). Let us say a little about these works. Stable homotopy theory (1964) is a short 74 page book which is based on six lectures Adams gave at the University of California at Berkeley in 1961. Lectures on Lie groups (1969) is described by N R Wallach as follows:- This book covers in a concise manner the fine structure and representation theory of compact Lie groups, with emphasis on the classical groups. The exposition of the book is aimed at the reader who has some understanding of algebraic topology and would like to understand the aspects of the theory of compact Lie groups that are relevant to algebraic topology. The book fulfils its aims admirably and should be a useful reference for any mathematician who would like to learn the basic results on compact Lie groups. Algebraic topology: a student's guide (1972) is rather unusual. It is in two parts; the first contains a description of the topics that Adams thought essential for any young mathematician interested in algebraic topology. It links to a wide variety of textbooks, with Adams indicating the one which treats the topic in the way he considers best. The second part contains excerpts from some famous papers on algebraic topology together with surveys of generalized cohomology theories and complex cobordism written by Adams. Stable homotopy theory and generalized homology (1974) comprises three lecture courses: one on the algebra of stable operations in complex cobordism delivered in 1967, the second on complex cobordism theory delivered in 1970, and the third on stable homotopy and generalized homology theories delivered in 1971. Stewart B Priddy, reviewing Infinite loop spaces (1978), writes:- Over the past few years, various topologists have been heard to complain about the lengthy and technical nature of infinite loop space theory. Even if one suspected that some of these detractors had not come to grips with the problems involved, there was still an undeniable need for a compact and moderately elementary introduction to the subject and to the current literature. Adams' book fills this need nicely and it can be recommended to anyone seeking a substantial overview of the main topics. As is evident from the lecture courses which Adams published, his lectures were well prepared but usually hard. He once received a letter from a second year undergraduate class saying:- The class wishes to inform Professor Adams that it has been left behind. He replied:- At any rate I have done exterior algebra, even if the second year haven't.
In [1] James describes his attitude to research students and to research:- Adams was an awe-inspiring teacher who expected a great deal of his research students and whose criticism of work which did not impress him could be withering. For those who were stimulated rather than intimidated by this treatment, he was generous with his help. The competitive instinct in Adams was highly developed, for example in his attitude to research. Priority of discovery mattered a great deal to him and he was known to argue such questions not just as to the day but as to the time of day. In a subject where 'show and tell' is customary he was extraordinarily secretive about research in progress. Adams received many awards for his work. Among these was the Sylvester Medal of the Royal Society of London, which was awarded to him in 1982:- ... in recognition of his solution of several outstanding problems of algebraic topology and of the methods he invented for this purpose which have proved of prime importance in the theory of that subject. The London Mathematical Society awarded him their junior Berwick Prize in 1963, and their senior Whitehead Prize in 1974. He was elected to the National Academy of Sciences (United States) in 1985 and the Royal Danish Academy of Sciences in 1988. His health continued to cause him problems, with another psychiatric illness in 1986. Perhaps his health contributed to his death, since he decided to go to London, despite feeling unwell, to a celebration for the retirement of a friend. He was killed in a car crash only a few miles from his home on the return journey. He had apparently always had a reputation as a car driver. According to [1]:- He drove cars with remarkable skill but in a style that left a lasting impression on his passengers. Finally we note that seven years after Adams died another book was published based on his lecture courses. This is Lectures on exceptional Lie groups, published in 1996. The book is based on lectures which Adams gave at Cambridge, which he considered to be a sequel to his book Lectures on Lie groups (1969).
Article by: J J O'Connor and E F Robertson
Honours awarded to Frank Adams: LMS Berwick Prize winner 1963; Fellow of the Royal Society 1964; Speaker at International Congress 1966; Lowndean chair 1970-1989; LMS Senior Whitehead Prize 1974; BMC morning speaker 1960, 1969, 1979; Royal Society Sylvester Medal 1982.
JOC/EFR © February 2005, School of Mathematics and Statistics, University of St Andrews, Scotland
{"url":"http://www-history.mcs.st-and.ac.uk/Biographies/Adams_Frank.html","timestamp":"2014-04-19T14:30:32Z","content_type":null,"content_length":"21160","record_id":"<urn:uuid:0bffd2f1-f8fb-4241-bc0a-172aacd9f04d>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00014-ip-10-147-4-33.ec2.internal.warc.gz"}
'Student's' t Test (For Paired Samples)
Use this test to compare two small sets of quantitative data when data in each sample set are related in a special way.
• The number of points in each data set must be the same, and they must be organized in pairs, in which there is a definite relationship between each pair of data points
• If the data were taken as random samples, you must use the independent test even if the number of data points in each set is the same
• Even if data are related in pairs, sometimes the paired t is still inappropriate
• Here's a simple rule to determine if the paired t must not be used - if a given data point in group one could be paired with any data point in group two, you cannot use a paired t test
The paired t test is generally used when measurements are taken from the same subject before and after some manipulation such as injection of a drug. For example, you can use a paired t test to determine the significance of a difference in blood pressure before and after administration of an experimental pressor substance. You can also use a paired t test to compare samples that are subjected to different conditions, provided the samples in each pair are identical otherwise. For example, you might test the effectiveness of a water additive in reducing bacterial numbers by sampling water from different sources and comparing bacterial counts in the treated versus untreated water sample. Each different water source would give a different pair of data points.
The value of the paired t test is best demonstrated in an example. Suppose patient 1 responds to a drug with a 5 mm Hg rise in mean blood pressure, from 100 to 105. Patient 2 has a 30 mm Hg rise, from 90 to 120. Likewise for several other subjects. The response to the drug varied widely, but all patients had one thing in common - there was always a rise in blood pressure. Some of that experimental error is avoided by the paired t test, which likely will pick up a significant difference. The independent test, which would be improperly applied in this case, would not be able to reject the null hypothesis.
Be certain that use of the paired t test is valid before applying it to real data. An applied statistics course or supervision of a qualified mentor may provide the experience you need. Some spreadsheet programs include the paired t test as a built-in option. Even without a built-in option, it is so easy to set up a spreadsheet to do a paired t test that it may not be worth the expense and effort to buy and learn a dedicated statistics software program, unless more complicated statistics are needed.
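The computation is just as easy outside a spreadsheet. A minimal Python sketch with made-up blood-pressure data; the manual formula is t = mean(d) / (sd(d) / sqrt(n)) on the pairwise differences, and scipy's ttest_rel computes the same statistic:

import numpy as np
from scipy import stats

# Paired measurements: blood pressure before/after a pressor substance.
# Values are illustrative, not real data.
before = np.array([100, 90, 95, 102, 88], dtype=float)
after  = np.array([105, 120, 101, 110, 96], dtype=float)

# Manual computation on the differences.
d = after - before
t_manual = d.mean() / (d.std(ddof=1) / np.sqrt(len(d)))

# Same test via SciPy; the p-value uses n - 1 degrees of freedom.
t_scipy, p = stats.ttest_rel(after, before)
print(t_manual, t_scipy, p)

Note how the test operates only on the differences, which is exactly why the wide between-patient variation in the example above does not swamp the consistent rise.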
{"url":"http://www.ruf.rice.edu/~bioslabs/tools/stats/pairedttest.html","timestamp":"2014-04-21T04:39:07Z","content_type":null,"content_length":"12493","record_id":"<urn:uuid:77a2f572-77f3-4ff0-94b8-ce2a066ef5ba>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00107-ip-10-147-4-33.ec2.internal.warc.gz"}
siva program
Documentation for the siva program is below, with links to related programs in the "see also" section.
{version = 1.96; (* of siva.p 1999 Dec 13}
(* begin module describe.siva *)
siva: site information variance
siva(sorted: in, sivap: in, incu: out, curves: out, list: out, output: out)
sorted: the output of the sites program that contains a sorted list of sites for each experiment performed.
sivap: parameters to control the program. First line: two integers, the from and to coordinates over which to do the calculations. Second line: repeats, the number of times to take passes through the data removing subsets. This improves the statistics.
incu: the xyin input to xyplo, output of this program. Two columns: the first column is the number of sites used to find the information; the second column is the amount of information in bits. The curves loop around along the axis, so they remain connected.
curves: another xyin file, for graphing the wiggling info curves. The first column is the position across the site; the second column is the information. The curves loop around along the axis, so they remain connected.
list: statistical picture of the result. Three columns: the first column is the number of sites used to find the information; the second column is the average amount of information (corresponds to the second column of incu, but is the average); the third column is the variance of the information (corresponds to what your eye picks out as the thickness of the incu curves).
output: messages to the user
Siva calculates the variance of the information in a set of randomized sites by eliminating each site in turn and keeping track of the increase in the information content. The information content must increase, since with fewer samples there must be less variation (this is the small sample bias effect). The program allows one to graph the information content versus the number of sites removed (incu). When this is done repeatedly, with different orders of removing the sites, a thick band of curves is created. The thickest part of this band shows the greatest possible amount of variation that could be in the total set of sequences. To be even-handed, the program removes the first sequence, then randomly removes the others. This creates the first curve. Then the program removes the second sequence and randomly removes the others for the second curve. If there are n sequences, then n removal curves will be generated. This is one complete repeat of the process. If you want, you can do this a number of times to get better statistics, using the repeats parameter in sivap.
The largest variation in the information content is surely greater than the variation of the information content in all the sets of removals of sites. For several experiments, the statistics are joined into one set. With several experiments, surely the variation of the combined experiments would be less than the variations found for the individuals. So if one experiment gives a greater variation, that will increase the variation siva reports in list, so the highest value in list is an upper limit on the variation.
author = "T. D. Schneider and G. D. Stormo", title = "Excess Information at Bacteriophage {T7} Genomic Promoters Detected by a Random Cloning Technique", year = "1989", journal = "Nucl. Acids Res.", volume = "17", pages = "659-674"
see also
author: Thomas Dana Schneider
bugs: none known
(* end module describe.siva *)
{This manual page was created by makman 1.44}
{created by htmlink 1.55}
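A sketch of the removal procedure the man page describes, in Python rather than Pascal. The info_bits measure here is the uncorrected per-position information of aligned DNA sites (2 - H summed over columns); the real program applies a small-sample correction, so treat this as an illustration of the curve-generation loop only:

import math, random

def info_bits(sites):
    # Uncorrected information of aligned DNA sites: sum over columns of 2 - H.
    total = 0.0
    for j in range(len(sites[0])):
        col = [s[j] for s in sites]
        h = 0.0
        for base in "ACGT":
            p = col.count(base) / len(col)
            if p > 0:
                h -= p * math.log2(p)
        total += 2.0 - h
    return total

def removal_curves(sites, repeats=1):
    # One curve per site per repeat: remove that site first, then the rest
    # in random order, recording (sites remaining, information) each step.
    curves = []
    for _ in range(repeats):
        for first in range(len(sites)):
            rest = [i for i in range(len(sites)) if i != first]
            random.shuffle(rest)
            remaining = list(sites)
            curve = [(len(remaining), info_bits(remaining))]
            for i in [first] + rest[:-2]:      # keep at least two sites
                remaining.remove(sites[i])
                curve.append((len(remaining), info_bits(remaining)))
            curves.append(curve)
    return curves

Plotting all curves together gives the thick band described above; the information climbs as sites are removed, and the band's width at each abscissa is what the list output summarizes as a variance.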
{"url":"http://schneider.ncifcrf.gov/delila/siva.html","timestamp":"2014-04-20T14:10:40Z","content_type":null,"content_length":"5978","record_id":"<urn:uuid:22dbf9df-37c8-4ce6-83dc-01a1af8dc834>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00630-ip-10-147-4-33.ec2.internal.warc.gz"}
First order circuit with DC sources
I learnt that when current reaches a crossroads, the current likes to choose the path with less resistance. Since the wire is practically at zero resistance, wouldn't all current flow through the short circuit instead?
That is not quite correct. The current divides according to the relative conductances of the paths (conductance being the inverse of resistance). By "path" we mean the entire path back to the source, or to where all the paths to choose from join up again, not just to the next node in line for an individual path! Some current flows through all paths unless one of the paths happens to be zero resistance (infinite conductance!) and bypasses all of the other paths to the current's final destination. In the attached figure, i2 and i3 will be nonzero; not all of the current flows through the lowest value of resistance!
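A quick numeric illustration of that conductance rule, with made-up values:

# Current divider: each parallel branch carries a share proportional to
# its conductance G = 1/R. Values below are illustrative only.
I_total = 2.0                 # amps entering the node
R = [10.0, 20.0, 40.0]        # three parallel branch resistances (ohms)

G = [1.0 / r for r in R]
G_sum = sum(G)
branch_currents = [I_total * g / G_sum for g in G]
# A zero-resistance branch (infinite G) would take all the current,
# which is the short-circuit case the question asks about.
print(branch_currents)        # [~1.143, ~0.571, ~0.286], summing to 2.0

Every branch carries some current; only a true zero-resistance short makes the others' shares go to zero.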
{"url":"http://www.physicsforums.com/showthread.php?t=506512","timestamp":"2014-04-18T03:12:25Z","content_type":null,"content_length":"41918","record_id":"<urn:uuid:c9958553-3f83-4536-ae05-548149960ba5>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00303-ip-10-147-4-33.ec2.internal.warc.gz"}
What is the shortest air column, closed at one end, that will resonate at a frequency of 440 Hz when the speed of sound is 344 m/s? I tried using the wavelength = v/f formula, but I keep getting the wrong answer...
Well, you can calculate the wavelength. But how does that wavelength relate to the column of air?
I still don't really know what to do :( I don't know how the length of the column relates to anything.
So because the wavelength is .8, and in a closed cylinder the length is 1/4 the wavelength, the answer is .2?
The fundamental frequency is v/(4L) for a closed tube. Solve that for L to find the length.
The wave doesn't oscillate at the closed end; it can't. That's why there's a node at that end. At the open end, that's the natural place for an anti-node. So if you drew out one entire wave form, 0 to max to 0 to min to 0, you can see that that first anti-node is at the max, and that is 1/4 of the total wavelength. So, given f and v, you have solved for wavelength, lambda. Now the length of that tube is L = lambda/4.
Thank you :)
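Worked out explicitly, confirming the 0.2 m answer quoted in the thread:

\lambda = \frac{v}{f} = \frac{344\ \mathrm{m/s}}{440\ \mathrm{Hz}} \approx 0.782\ \mathrm{m}, \qquad L = \frac{\lambda}{4} \approx 0.195\ \mathrm{m} \approx 0.2\ \mathrm{m}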
{"url":"http://openstudy.com/updates/51156dd8e4b09e16c5c80b99","timestamp":"2014-04-18T00:17:57Z","content_type":null,"content_length":"42792","record_id":"<urn:uuid:20472fe7-0722-43d4-b32f-65770334a3a9>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00316-ip-10-147-4-33.ec2.internal.warc.gz"}
Elwood, IL Precalculus Tutor
Find an Elwood, IL Precalculus Tutor
...She got an "A" in her last course, and she was extremely happy with the results. Since then, I have continued reviewing pre-calculus material and I expect to do so for a long time - it is a subject I enjoy a great deal. I am now working on learning how to use the graphing calculator (TI-84) for pre-calculus problems.
13 Subjects: including precalculus, statistics, geometry, accounting
...I will work with you to set goals for your learning and create a plan for you to achieve those goals. I completed the class discrete mathematics for computer science while in college. The topics covered were logic, proofs, mathematical induction, sets, relations, graph theory etc.
31 Subjects: including precalculus, chemistry, calculus, statistics
...I've been tutoring test prep for 15 years, and I have a lot of experience helping students get the score they need on the GRE. I've helped students push past their goal scores in both the Quant and Verbal. I took the revised version of the GRE the first day it was offered and I scored a 170 on the Quant and a 168 on Verbal.
24 Subjects: including precalculus, calculus, physics, geometry
...I have my last two classes left and will be officially done! I have listed courses that I have passed with a C or better which is required by the Engineering Department. I am also one math class away from a minor in math as well!
26 Subjects: including precalculus, reading, chemistry, calculus
...Whether it is math abilities, general reasoning, or test taking abilities that need improvements, I can help you progress substantially. I work with systems of linear equations and matrices almost every day. My PhD in physics and long experience as a researcher in theoretical physics make me well qualified for teaching linear algebra.
23 Subjects: including precalculus, calculus, geometry, physics
{"url":"http://www.purplemath.com/elwood_il_precalculus_tutors.php","timestamp":"2014-04-18T01:11:57Z","content_type":null,"content_length":"24160","record_id":"<urn:uuid:ceae843a-ecbd-4c10-a306-59c08c9635c4>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00402-ip-10-147-4-33.ec2.internal.warc.gz"}
Comparing maps of reduced schemes
Nice fact: Suppose f:X→Y is a map of schemes and Z⊆Y is a subscheme (locally closed immersion) containing the set-image of X. If X and Z are reduced, then it follows that f factors through Z. This is nice because it makes "factoring through" purely a consideration of the underlying topological spaces.
So now I'm wondering, to what extent does "reduced" allow us to think only in terms of topological spaces? Suppose we weaken the assumption that Z→Y is an inclusion. When can we say f factors through Z? More precisely:
Suppose X,Z are reduced schemes, f:X→Y and g:Z→Y are scheme morphisms such that f factors through g in Top. When does f factor through g in Sch?
I know the answer is "not always", for example if Y is a field and X,Z are incomparable field extensions of Y (in Ring^op). But does anyone know any positive results we can state here?
Given that no-one has bitten yet, can I explicitly ask whether people think it might hold if X and K are reduced varieties over a field? Maybe one needs X to be smooth? I'm not sure. I'm a bit concerned about the resolution of the singularity of a cuspidal cubic being a homeomorphism on the top spaces (and the map only existing in one direction in alg geo) but can't make an explicit counterexample. – Kevin Buzzard Nov 4 '09 at 23:25
@KMB: Let f be the identity on the cuspidal cubic X=Y=Spec k[x,y]/(y^2-x^3), and let g be the normalization from A^1. Then there is no factorization, since k[x] doesn't map to k[x^2,x^3] in a way that commutes with the reverse inclusion. – S. Carnahan♦ Nov 5 '09 at 2:01
Ah, can you use Z or something instead of K? When I see K, I can only think "field". I suppose this is not very mathematicianly of me. It's kind of physicistly, maybe -- well, it's "field theory" after all ;-) – Kevin H. Lin Nov 5 '09 at 2:42
@Kevin: replaced K by Z. – Andrew Critch Nov 5 '09 at 4:47
What you are asking will (in my opinion) always require a scheme theoretic condition. For instance, your example Z⊆Y being a subscheme is already a scheme theoretic condition; you require something about the rings involved in the structure sheaf. – Jose Capco Nov 5 '09 at 10:17
1 Answer: Here's an example that is not completely silly. I think you get scheme-theoretic factorization if g is etale, and X is simply connected.
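Unwinding Carnahan's comment into a full counterexample: take Y = Spec k[t^2, t^3] (the cuspidal cubic), let g : A^1 = Spec k[t] → Y be the normalization, induced by the inclusion k[t^2,t^3] ⊆ k[t], and let f = id_Y. On underlying spaces g is a homeomorphism, so f factors through g in Top. A factorization in Sch would be a section h : Y → A^1 with g∘h = id_Y, that is, a k-algebra map φ : k[t] → k[t^2,t^3] restricting to the identity on k[t^2,t^3]. But then φ(t)^2 = φ(t^2) = t^2 forces φ(t) = ±t, which does not lie in k[t^2,t^3], a contradiction. Both schemes here are reduced (and A^1 is even smooth), so reducedness alone, and even smoothness of the source of g, cannot suffice.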
{"url":"http://mathoverflow.net/questions/4120/comparing-maps-of-reduced-schemes","timestamp":"2014-04-19T20:14:09Z","content_type":null,"content_length":"56751","record_id":"<urn:uuid:ea69b51d-5eaa-40ef-b3b7-88f573f6c7e9>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00407-ip-10-147-4-33.ec2.internal.warc.gz"}